Historical Characteristics of Clothing, Textiles, Fabrics, and Motifs in the Ottoman Period

The Anatolian region, renowned for its abundant artistic heritage spanning diverse civilizations, boasts a history entwined with the Ottoman Empire, founded in the thirteenth century. Within world history, the Ottoman Empire holds a pivotal role, particularly in the realms of art, craftsmanship, and science across its dominion. Though artistic hubs initially thrived in Bursa and Edirne, Istanbul emerged as the epicenter in the late 15th century. The intricate textiles, fabrics, and motifs originating here inspired numerous production centers. The 16th century, the Ottoman Empire's golden age, witnessed a zenith in handicrafts and textiles, notably silk weaving, which constituted a significant portion of Ottoman exports from the 16th to 18th centuries. Bursa's silk weaving adhered to state-enacted laws and quality standards, a pioneering instance in textile history, reflecting the Empire's commitment to top-notch production. This article delves into the historical panorama of Ottoman-era attire, textiles, fabrics, and motifs, underscoring the Empire's dedication to arts, sciences, and crafts, a testament to its robust structural framework and forward-looking vision.

Introduction

The Ottoman period, spanning over six centuries from the 13th to the early 20th century, holds a significant place in history. Beyond its political and military accomplishments, the Ottoman Empire also left a rich cultural and artistic legacy (Acar, 2016, p. 21). One remarkable aspect of this cultural heritage is the clothing, textiles, fabrics, and motifs that flourished during this era. The historical characteristics of Ottoman clothing and textiles provide valuable insights into the social, artistic, and cultural developments of the time (Belge, 2008, p. 33).

During the Ottoman period, clothing and textiles played a multifaceted role. They served as symbols of status, expressions of identity, and reflections of artistic taste (Delibaş and Tezcan, 1986, p. 21). The fashion trends and craftsmanship of the era are evident in the intricately designed garments, the exquisite fabrics, and the skillful use of motifs (Gezer, 2012, p. 31-32). From the lavish attire of the sultans and nobility to the everyday clothing of the common people, Ottoman textiles encompassed a wide range of styles and techniques (Goodwin, 2003, p. 17).

This article aims to explore the historical characteristics of clothing, textiles, fabrics, and motifs in the Ottoman period. It will delve into the fashion trends, weaving techniques, and the significance of patterns and motifs that defined Ottoman textiles (Gümüşer, 2011, p. 27-28). Furthermore, it will examine the social and cultural implications of clothing and textiles in Ottoman society, shedding light on the diverse influences and artistic expressions of the era (Gürsu, 1999, p. 7).

By understanding the historical context and characteristics of Ottoman clothing and textiles, we gain a deeper appreciation for the cultural heritage that has shaped our present. Exploring the intricacies of these textiles allows us to delve into the craftsmanship, symbolism, and artistic traditions that have stood the test of time (Hinton, 1995, p. 23). Through this comprehensive exploration of the historical aspects of clothing, textiles, fabrics, and motifs in the Ottoman period, we can unravel the fascinating stories woven into the very fabric of this remarkable era (İnalcık, 2008, p. 41).
Methodology

To explore the historical characteristics of clothing, textiles, fabrics, and motifs in the Ottoman period, a multi-faceted approach will be employed. This study will rely on a combination of primary and secondary sources, including historical documents, archival materials, scholarly research, and visual representations such as paintings, illustrations, and photographs (Bilgin, 2019, p. 11). In this context, to reveal the importance attached to textiles in the period, documents from the Ottoman Archives of the Presidential State Archives of the Republic of Türkiye (BOA) were also examined (BOA, TS.MA.d, 9448). The primary sources will provide valuable insights into the fashion trends, textile production techniques, and cultural significance of clothing in Ottoman society (Tezcan, 1997, p. 195). Secondary sources, including academic articles, books, and scholarly analyses, will contribute to a comprehensive understanding of the subject matter. Additionally, visual representations will be utilized to examine and interpret the intricate details, patterns, and motifs of Ottoman textiles (Tezcan, 2000, p. 33). The analysis will be conducted through a comparative and contextual approach, considering the socio-cultural, political, and artistic influences that shaped clothing and textile production during the Ottoman period (Tez, 2021, p. 37). Overall, this methodology aims to provide a thorough and well-rounded exploration of the historical characteristics of Ottoman clothing, textiles, fabrics, and motifs.

Historical Characteristics of Ottoman Textile and Clothing

The Ottoman Empire, which spanned seven hundred years and encompassed the Anatolian lands, rich with artistic heritage from numerous civilizations, placed great importance on the development of arts and crafts. The epicenter of artistic skill within the empire has always been the palace, initially in Bursa and Edirne, and since the late 15th century, the New Palace / Topkapı in Istanbul (Karal and Uzunçarşılı, 1997, p. 63).

In the Ottoman Empire, supreme authority rested with the sultan, who had control over the economic, political, social, cultural, and artistic aspects of the subjects' lives. The palace governed social interactions, trade, and craftsmanship, as well as the affairs of various ethnic and religious minorities (Kretschmar and others, 1979, p. 23-24). Consequently, clothing during the Ottoman period served as a means to highlight social distinctions between the courtiers and the general public. People were not free to dress according to their preferences either inside or outside the palace; rather, they adhered to a dress code established by the court (Öz, 1979, p. 27). Moreover, certain clothing restrictions were imposed to differentiate between Muslims and non-Muslims. For instance, a law enacted by the judge of Istanbul in 1568 prohibited Jewish men and women from wearing colorful robes made of fine wool and silk, adorning their heads with vibrant turbans, and donning shalwar trousers made of colored silk satin or silk-cotton blend fabrics (Küçükerman, 1996, p. 33). These prohibitions aimed to prevent Jews from undermining their social status by wearing such garments.

It is known that the palace imposed restrictions on women's attire, meticulously regulating every detail, from collar depth to fabrics and colors (İpek, 2012, p. 3).
Although there is a lack of sufficient documentation regarding women's clothing during the early Ottoman period, visual sources such as miniature manuscripts, written records like kadı registers, and illustrated travel books offer some insights into the subject. Initially, women's clothing included garments such as dresses, shalwars, belts, and headscarves, with the addition of the ferace (Renda, 1993, p. 9). Following the conquest of Istanbul, Muslim women were socially secluded to distinguish them from non-Muslims, and this limited their participation in urban life (Gürsoy, 2004, p. 62).

The relegation of women to the background in nearly all aspects of social life also influenced their clothing, resulting in a relatively stable women's fashion until the 18th century (İpek, 2012, p. 5). From the 15th to the 18th century, significant changes in women's clothing were scarce; the tradition of covering the head, inherited from the Anatolian Seljuks, persisted, but the face remained uncovered. The shalwar, inner robe, shirt, ferace, and caftan retained their traditional forms. Due to the privacy surrounding women and the absence of a tradition of preserving women's clothing in the palace, very few examples of Ottoman women's fashion have survived to the present day (Koçu, 1967, p. 27; Renda, 1993, p. 33).

The wearing of clothing in the palace itself followed a meticulous procedure. For instance, only after the Sultan had put on a fur caftan at the change of seasons could other members of the palace wear one. In their daily lives, the sultans would wear shalwar trousers, a shirt robe, and either a short caftan or a long caftan in jacket form (Mahir, 2017, p. 41). On official occasions, they would wear a long-sleeved robe with buttons from the elbow to the wrist. Occasionally, they would also wear a sleeved caftan. The only distinction between the military attire of the sultans and their civilian clothing was the armored lining in their war garments. Surviving examples include caftans with floral patterns on the outside and armor on the inside, or satin on the outside with armored shirts on the inside (Sevin, 1990, p. 53). Although various sources state that sultans had the freedom to make their own clothing choices, until the 19th century this freedom was limited to variations in fabric, pattern, color, accessories, or basic garments, owing to adherence to tradition (İnalcık, 2008, p. 27).

The 16th century, often referred to as the golden age of the Ottoman Empire, also saw the flourishing of art, craftsmanship, and textiles. Silk fabrics played a significant role in Ottoman exports between the 16th and 18th centuries (Tezcan, 2000, p. 33). Workshops in Bursa and Istanbul catered to both the general public and the palace, while also fulfilling orders from abroad. The regulations governing silk weaving in Bursa were established through state-enacted laws in the 16th century, which also set standards for the quality of materials and fabrics used. Silk, woven through intricate and time-consuming processes, was highly valued for its luxury and expense. These laws aimed to prevent fraudulent weaving, maintain consistent quality, and adjust prices based on prevailing conditions. A law enacted in 1502 for the production of silk textiles in Bursa exemplifies the importance placed on the weaving sector by the state and the rigorous control exercised by the palace.[1]
The law aimed to improve the standards of weaving, which had declined in the preceding period. It specified the permissible amounts and additives for lac usage. The law mentions over a hundred looms producing substandard fabrics in Bursa and calls for an equal number of master weavers to be summoned to Bursa to address this issue, with the promise of punishment for those who fail to meet the required standards.[2] This law highlights the significance of Bursa silk fabrics during that era.

Ottoman textiles were produced in three main branches: cotton-linen, woolen, and silk. Although quality cotton yarn was produced in Denizli before the Ottoman Empire and in most of Anatolia during the Ottoman period, cotton had to be imported from the Far East, especially India, due to insufficient local supply. During the Ottoman period, cotton was grown in most parts of Anatolia; however, this cotton was hard and its fibers were short, so India's soft, long-fiber cotton was always preferred. These thin and soft cottons were used especially in men's turbans and women's scarves. The same applied to some woolen fabrics, which needed to be imported from Europe (Sevin, 1990, p. 54). However, it is known that extensive woolen production took place in Ankara and its surrounding areas, where Angora goats were raised in Central Anatolia.

The center of silk weaving was Bursa, the initial capital of the Ottoman Empire. Since silkworm breeding was not well established during the early periods, silk weaving in Bursa relied on raw silk imported from Iran and Azerbaijan. The development of silkworm breeding in Bursa coincided with the reign of Yavuz Sultan Selim. When the Sultan embarked on the Eastern Expedition, he prohibited silk trade from the East to Bursa and expelled Eastern merchants from the city, thus creating a shortage of raw materials for the weavers for the next ten years (Karal and Uzunçarşılı, 1997, p. 83). This ban was lifted only during the reign of Suleiman the Magnificent. The earliest document on silkworm breeding in Bursa dates back to 1587 and pertains to the rental of a mulberry orchard for leaf usage (Atasoy and others, 2001, p. 72). After this date, silk production in Bursa experienced significant development and reached its peak in the 18th century.

By the end of the 17th century, the Ottoman weaving industry was thriving. Cotton, silk, and wool products were not only meeting domestic demand but also being exported to foreign countries (Baker, 1995, p. 37). During this period, Bursa, Ankara, Edirne, Erzincan, Diyarbakır, Trabzon, and İzmir became the prominent weaving centers. Ottoman weavings were not only exported but also sent abroad as diplomatic gifts (Küçükerman, 1996, p. 43). Ottoman velvets, renowned for their superior quality in design, materials, and craftsmanship, as well as kemha fabrics, known abroad as brocade, were highly sought after in Europe and Asia (Tezcan, 2002, p. 405).
Furthermore, even in distant provinces of the Empire, governors appointed by the sultan would dress in Ottoman fashion and have Ottoman silks brought to them. For instance, a statue of a governor in a square in Bucharest is depicted wearing a caftan and a turban. The value of Ottoman silk fabrics, which were already highly priced, further increased when exported, leading to the development of the weaving industry abroad as foreign weavers sought to imitate Turkish fabrics (Aydın, 2000, p. 57).

As diplomatic relations with the West improved, there was an increasing interest in Western fabrics within the palace. To meet the growing demand, it became necessary to import materials from Europe. During that time, the import and export relations between the Ottoman Empire and Europe were balanced, but the fabrics from India posed a challenge, as the Empire had no exports to that country (Baker, 1995, p. 39).

In the 18th century, during the Tulip Era, Ottoman women began to participate in social life, especially in palace circles, through activities such as picnics and Bosphorus cruises. This participation was reflected primarily in clothing. The close relationships between palace women and the wives of Western ambassadors, as well as the accounts of Ottoman ambassadors in the West, fueled an increased interest in Western fashion among palace circles (Sevin, 1990, p. 56). Vibrant colors like pinks, greens, and blues replaced the plain colors used in the abaya, and small navy collars adorned with lace replaced crew necks. In the 18th century, headdresses and their decorations continued to change alongside clothing; towards the end of the century, the skirt and top were separated, and European-style fashion was adopted (Gümüşer, 2011, p. 73).

These shifts towards Westernization within palace circles had a detrimental impact on domestic textile production. At the beginning of the century, measures were taken to make domestic production more appealing, such as reducing the proportion of silk in fabrics and using lightweight silk instead of the heavy silk velvets that had been popular in the past (Gümüşer, 2011, p. 74). However, these measures resulted in a decline in weaving quality. In 1715, Sultan Ahmet III ordered the Istanbul judge to limit the use of gold and silver in fabrics (Tezcan, 1997, p. 195).
However, based on the examples found in palace collections, it is evident that these prohibitions were not very effective. Despite warnings from the palace to reduce the use of brightly colored fabric, neither the weavers, the tailors, nor the merchants who sold it, and especially not the palace women, took these warnings into consideration (Tezcan, 1996, p. 29).

From the second half of the 18th century onwards, the weaving industry witnessed a remarkably rapid decline, with a substantial decrease in the number of looms throughout the empire. This decline can be attributed primarily to the invasion of the Ottoman market by cheap and high-quality European textile products during the Industrial Revolution, rendering domestic goods unable to compete. The impact of this invasion was particularly felt in the early 19th century, with France supplying broadcloth, satin, and cotton fabrics, England contributing velvet fabrics, and India providing silk fabrics (Tezcan and Baker, 1996, p. 31).

The close relationship between the Ottoman Empire and France was facilitated by Nakşidil Sultan, the French wife of Sultan Abdülhamid I, who was a cousin of Napoleon's wife Josephine. This connection continued during the reign of Sultan Selim III, who succeeded Abdülhamid I. Sultan Selim III, a sultan who embraced innovation and Western influence and desired reform and modernization in his country, established new weaving workshops near the mosque he had built in Üsküdar (İpek, 2012, p. 5). Skilled weavers from France were brought to these workshops, where Western-style fabrics, later known as Selimiye, were produced. The same workshops also manufactured pillow covers and upholstery velvet. Unfortunately, these workshops were burned down during the Janissary revolt in 1814, although other workshops in Üsküdar continued production for a period of time (Gümüşer, 2011, p. 78). Sultan Selim III recognized the detrimental impact of foreign goods on the country's industry and prohibited state dignitaries from purchasing imported fabrics. He even advised his viziers to wear only fabrics woven in Istanbul and encouraged the use of local goods (Nutku, 1984, p. 33).

Despite limited descriptions in historical documents and the scarcity of Ottoman ceremonial accounts, it is known that members of the dynasty predominantly wore velvet and colorful fabrics woven with precious metal threads, known as has. While sultans typically favored plainer materials for their daily attire within the palace, they adorned themselves with intricately patterned fabrics woven with gold and silver alloys for grand public ceremonies (Sevin, 1990, p. 59). Miniatures suggest that the clergy and ulema, on the other hand, typically wore plain fabrics (Mahir, 2017, p. 41). The sultans donned different garments for religious holidays, the succession of a deceased sultan, the enthronement ceremony, and other special occasions such as the reception of ambassadors (Artan, 1992, p. 112-113).

For instance, the new sultan who assumed power following the passing of a predecessor would wear solely black, dark blue, and purple caftans during the five-day mourning period observed throughout the Ottoman Empire (Sevin, 1990, p. 61). Afterward, lighter-colored caftans would signify the transition out of mourning. On the same day, separate ceremonies were held for the funeral of the deceased sultan and the coronation of the heir to the throne (Dean, 1994, p. 23).
The coexistence of the new sultan's enthronement and the funeral necessitated the choice of dark colors. Additionally, whenever the sultan appeared before the public, he would be dressed in splendid garments designed to leave a lasting impression (Belge, 2008, p. 49).

The process of Westernization in the attire of Ottoman sultans began during the reign of Sultan Mahmud II. The changes that emerged during the Tanzimat period transformed into a comprehensive reform movement aimed at embracing Western practices, albeit often characterized by imitation and superficiality rather than genuine transformation. This movement, closely associated with the personality of the Sultan and initiated by the Sultan himself, sought to legalize and institutionalize the Ottoman Empire's opening to the West (Kretschmar and others, 1979, p. 82).

Sultan Mahmud II's decision to disband the Janissary Corps in 1826 and establish the Asakir-i Mansure-i Muhammediye, a Western-style army with its distinctive uniforms, discipline, and professionalism, led to the Westernization of military clothing (Kretschmar and others, 1979, p. 83-84). Consequently, Ottoman sultans began to dress more like Western commanders. The traditional shalwar gradually evolved into trousers, tapering down towards the bottom. Uniforms in black or navy blue emerged, featuring collars, chest brooches, and striped trousers. Palace members and officials wore a long jacket known as a "setre", made of dark broadcloth, over their trousers (Küçükerman, 1996, p. 48). This jacket was accompanied by a wide collar, a small necktie, a soft white cloak worn underneath, a front-closed vest over the shirt, and booties, which grew in popularity as footwear. Both traditional and Western-style garments were worn by the sultans. Following Mahmud II, all subsequent sultans donned modern uniforms adorned with gold and silver brocades on the sleeves, collar, and chest (Korkmaz, 2005, p. 37).

Picture 5. Sultan II. Abdülmecit's jacket and set (Delibaş and Tezcan, 1986, p. 80).

During the same era, as Western-style clothing gained acceptance, efforts were made to mitigate the economic damage it caused within and around the palace. The impacts of the Industrial Revolution, coupled with the opulence and extravagance of the Tulip Era that characterized the 18th century, prompted the state to implement certain economic measures in the 19th century (Keskiner, 1995, p. 27). Under the reign of Sultan Mahmud II, similar to what was done during Selim's reign, the use of certain fabrics by civil servants was prohibited. Simultaneously, measures were taken to enhance the availability of domestic materials (Sipahioğlu, 1992, p. 57).

However, despite attempts to make domestically produced fabrics more affordable compared to imports, their costliness persisted. To address this issue, fabric factories were established under state auspices. The most advanced integrated fabric and apparel enterprise of the period, known as Feshane, was founded in the Eyüp district of Istanbul in 1834. Recognizing that the existing clothing system was aligned with the limited capacities of small-scale handicraft workshops, it was deemed necessary to elevate the new uniforms to higher standards (Walter and others, 2001, p. 28). Thus, substantial investments were made in new product design and production, specifically to accommodate changes in the military system and clothing. Feshane not only served as a weaving and apparel school but also laid the groundwork for the contemporary textile and clothing industry (Sipahioğlu, 1992, p. 59).
During the reign of Sultan Abdülmecit, there was a need to develop the existing state-owned factories in response to the increasing influx of European goods. In 1844, the administration of Feshane was entrusted to the Belgians, and the factory was equipped with machinery imported from Belgium. Additionally, the Izmit Aba factory was established to fulfill the army's demand for clothing fabrics through domestic production (Gürsu, 1999, p. 11). Despite the existence of capitulations, which granted certain privileges to foreign traders, the abundance of raw materials allowed for the establishment of privately owned factories as well.

These factories, both state-run and privately initiated, adopted European machinery, employed skilled artisans and workers, and offered a temporary glimmer of hope, despite facing difficulties in competing with imported products (Alpat, 2010, p. 93). Over time, however, the importation of finished goods increased in tandem with the exportation of raw materials (Züber, 1972, p. 41).

During the reign of Sultan Abdulaziz, hand looms began to disappear throughout the Ottoman Empire, and the raw materials that were exported abroad returned to the Ottoman market as imported products. Significant efforts to develop the weaving industry could not be undertaken during this period (William, 1932, p. 23). Nevertheless, attempts were made to preserve the existing factories and the production capacities established earlier. Among them were Feshane, which produced fezzes for the army and was rebuilt after a fire in 1868, and the Hereke factory, which produced furnishings for the palace. While the Hereke factory is known for its prestigious products such as silk and wool carpets, its role in producing military undergarments remains relatively unknown (Sevin, 1990, p. 63).

Despite the sultans' attempts to address the economic challenges created by Westernization in the textile sector, clothing, fabrics, and decorations in the palace and its surroundings were predominantly imported (Aslanapa, 1999, p. 73). As a result, during the Tanzimat and Constitutional Monarchy periods, the general population continued to maintain their traditional clothing style, creating a stark contrast with palace members and wealthy individuals, who wore jackets, waistcoats, neckties, and high-heeled shoes (Yatman, 1945, p. 21). The emergence of fashion in women's clothing began primarily in major urban centers such as Istanbul and Izmir, where women became more integrated into social life due to Westernization movements.

Pera became a fashion hub, and tailors of Greek and Armenian origin began to follow Parisian fashion trends. The introduction of colorful abayas and lightweight yashmaks in the second half of the 19th century elicited a response from conservative circles, leading Sultan Abdulhamid II to replace the abaya with the chador, which was deemed more suitable for Islamic veiling (Alvarado, 1993, p. 33). These garments were made from silk, wool, and cotton fabrics in predominantly dark blue, purple, navy blue, turquoise, cyan, redbud, beige, and black colors (Akbil, 1970, p. 43). However, as urban ladies' curiosity about fashion evolved into a necessity, an innovative solution was found to turn veiling into an ornament with a new fashion trend as soon as the sultan's prohibition came into effect (Tezcan, 2002, p. 406).
At the beginning of the 20th century, it is believed that the political and economic difficulties the state had faced for over a century naturally led to a reduction in the budget allocated to textiles, but this did not limit the dressing preferences of the palace. Casa Botter, or Maison Botter, a fashion house opened in 1902 and catering to the court, witnessed numerous fashion shows over the years and continues to exist on Istiklal Street in Beyoğlu to this day (Züber, 1972, p. 43).

Picture 6. The Woman from the Palace, a watercolor painting made around 1810; the figure, with her deep-cut gown, exemplifies the palace fashion of the late 18th century (Renda, 1993, p. 272).

Picture 7. Women on a Walk, an oil painting dated 1887 and signed by Osman Hamdi Bey. The changes in society are documented through women dressed in the Western-style fashion of the period (Renda, 1993, p. 179).

Results

The Ottoman Empire had a rich and diverse textile and clothing tradition, influenced by various cultural and historical factors. Ottoman clothing reflected the social status, rank, and occupation of individuals, as well as their religious and cultural affiliations (Aktepe, 2009, p. 39). Fabrics used in Ottoman clothing included silk, velvet, kemha, brocade, woolen sof, and cotton, among others. These fabrics were often adorned with intricate patterns, motifs, and embroidery (Yatman, 1945, p. 35). Motifs commonly found in Ottoman textiles and fabrics included geometric shapes, floral patterns, calligraphic elements, and stylized animal figures. These motifs were often inspired by nature, Islamic art, and Persian, Central Asian, and European influences. Textiles and fabrics were used not only for clothing but also for decorative purposes in architecture, interiors, and religious artifacts (Nutku, 1984, p. 35).

Westernization and its Impact on Ottoman Clothing and Textiles: The adoption of Western fashion elements, such as vibrant colors, tailored garments, and European-style designs, had a significant influence on Ottoman clothing during the period under study (Bilgin, 2019, p. 14). The importation of European textiles, particularly from France, England, and India, created tough competition for domestic textile production (William, 1932, p. 27). Traditional Ottoman fabrics, including broadcloth, satin, and cotton, were gradually replaced by imported fabrics due to their cheaper prices and higher quality (Sipahioğlu, 1992, p. 33).

Economic Challenges and Preservation Efforts: The weaving industry in the Ottoman Empire faced a rapid decline, particularly in the second half of the 18th century and throughout the 19th century, due to competition from European textile products (Keskiner, 1995, p. 46). The closure of traditional hand looms and the decline in the number of looms operating in the empire resulted in the loss of domestic textile production capacity (Koçu, 1967, p. 41). Efforts were made by the Ottoman government to preserve existing fabric factories, such as Feshane and Hereke, which produced military uniforms and furnishings for the palace, respectively (Atasoy and others, 2001, p. 39). Fabric factories were established to increase domestic production, but challenges such as high production costs and the inability to compete with imported goods persisted (Kretschmar and others, 1979, p. 91).

Cultural Significance of Clothing and Textiles: Clothing choices and styles within the Ottoman court were symbols of power, prestige, and adherence to cultural norms (Nutku, 1984, p. 37). Specific garments and colors were associated with different ceremonies and occasions, reflecting the hierarchy and symbolism embedded in Ottoman society (Tezcan and Rogers, 1986, p. 71).
The motifs and designs used in Ottoman textiles showcased a fusion of Islamic, Persian, and European artistic influences, reflecting the cultural diversity of the empire (İnalcık, 2008, p. 47). Ottoman textiles were renowned for their intricate patterns and craftsmanship, serving as a means of cultural expression and identity (Hinton, 1995, p. 29).

Implications for Future Research: Further research is needed to explore the specific techniques, materials, and motifs used in Ottoman textiles and their historical significance. The economic, social, and cultural impacts of Westernization on Ottoman clothing and textiles warrant further investigation. Comparative studies between different regions within the Ottoman Empire can provide insights into regional variations in clothing, fabrics, and motifs. These findings contribute to our understanding of the historical characteristics of clothing, textiles, fabrics, and motifs in the Ottoman period. The impact of Westernization, the economic challenges faced by the weaving industry, the cultural significance of clothing, and the artistic elements embedded in Ottoman textiles are key aspects that shape our understanding of this historical era. Further research in this field can deepen our knowledge and provide a more comprehensive understanding of the subject matter.

Discussion

The article "Historical Characteristics of Clothing, Textiles, Fabrics, and Motifs in the Ottoman Period" sheds light on the rich and diverse aspects of fashion and textiles during the Ottoman Empire. The examination of historical clothing, textiles, fabrics, and motifs provides valuable insights into the cultural, social, and economic contexts of the era (İnalcık, 2008, p. 51).

One notable aspect highlighted in the article is the influence of Westernization on Ottoman clothing and textiles. As the empire established closer relations with the West, there was an increasing demand for Western fabrics within palace circles. This resulted in the importation of materials from Europe, which had a significant impact on domestic textile production. The adoption of Western fashion elements, such as vibrant colors and European-style clothing, not only changed the aesthetics but also had implications for the local weaving industry (Bilgin, 2019, p. 21).

The economic consequences of Westernization on the Ottoman textile sector are also discussed in the article. With the Industrial Revolution and the influx of cheap and high-quality European textile products, local goods struggled to compete in the market. This led to a decline in the weaving industry and the loss of traditional hand looms. The Ottoman government attempted to address this situation by implementing measures to promote domestic production, establishing fabric factories, and preserving existing production capacities. However, the challenges of cost, quality, and competition with imported products persisted (Tezcan and Rogers, 1986, p. 84).

Furthermore, the article touches upon the significance of clothing and textiles within the Ottoman court and society. The clothing choices of the sultans and members of the palace played a role in showcasing power, prestige, and adherence to cultural norms. Specific garments and colors were associated with different ceremonies and occasions, reflecting the hierarchy and symbolism embedded in Ottoman culture (Sipahioğlu, 1992, p. 37).
The study of motifs and designs in Ottoman textiles also reveals the artistic and cultural influences of the period. Ottoman textiles were renowned for their intricate patterns, blending elements from various sources, including Islamic, Persian, and European traditions. These motifs not only showcased the craftsmanship of the weavers but also served as a means of cultural expression and identity (Sevin, 1990, p. 71).

Conclusion

"Historical Characteristics of Clothing, Textiles, Fabrics, and Motifs in the Ottoman Period" provides a comprehensive exploration of the multifaceted aspects of fashion, textiles, and motifs during the Ottoman Empire. The article sheds light on the impact of Westernization, the economic challenges faced by the weaving industry, the significance of clothing within the Ottoman court, and the artistic elements embedded in Ottoman textiles.

Picture 1. Woman from the Palace Holding a Rose in Her Hand, an oil painting from the 17th century.
Picture 2. Detail from a miniature of the funeral of Suleiman the Magnificent; dark-colored caftans were worn (Atasoy and others, 2001, p. 24).
Picture 3. A miniature depicting II. Beyazıt's enthronement ceremony (Sözen, 1992, p. 323).
Picture 4. A miniature of Prince Mehmet's arrival at the İbrahim Pasha Palace in Atmeydanı (Atasoy and others, 2001, p. 25).

[1] BOA, TS.MA.d, 1947, H-29-12-995, dated 29 Zilhicce 995 (30 November 1587): "Endowment Ledger: Comprising the revenues and expenses of the sacred endowments of the Holy City of Mecca, under the supervision of the custodian of the noble cloth, Osman Çavuş, a member of the divan guard. This ledger details the funds collected from the treasury and other locations during various months of the year 995, as well as the available resources. It includes receipts from raw silk, red silk for gold embroidery, gold, silver, linen, raw fabric, and black satin sales, along with expenditures for goldsmiths, gold embroiderers, weavers, and other craftsmen, including wages paid, and various other related expenses. The document also covers expenses for simkeş, gold embroiderers, kemhacı, and other artisans, as well as other miscellaneous costs. The ledger provides comprehensive insights into financial transactions and outlays. Further reference is made to the same."

[2] BOA, TS.MA.d, 8616, H-21-08-957, dated 21 Şaban 957 (4 September 1550): "Document Summary: Ledger endorsed with the seal and signature of Sheikh Sinan of the Imaret of Sultan Murad II: Concerning the account records, detailing the revenues derived from the proceeds of silks sold through Babacan in Bursa, along with expenses incurred for ship freight, camel charges, and intermediary fees for dellals."
GANSynth: Adversarial Neural Audio Synthesis

Efficient audio synthesis is an inherently difficult machine learning task, as human perception is sensitive to both global structure and fine-scale waveform coherence. Autoregressive models, such as WaveNet, model local structure at the expense of global latent structure and slow iterative sampling, while Generative Adversarial Networks (GANs) have global latent conditioning and efficient parallel sampling, but struggle to generate locally-coherent audio waveforms. Herein, we demonstrate that GANs can in fact generate high-fidelity and locally-coherent audio by modeling log magnitudes and instantaneous frequencies with sufficient frequency resolution in the spectral domain. Through extensive empirical investigations on the NSynth dataset, we demonstrate that GANs are able to outperform strong WaveNet baselines on automated and human evaluation metrics, and efficiently generate audio several orders of magnitude faster than their autoregressive counterparts.

INTRODUCTION

Neural audio synthesis, training generative models to efficiently produce audio with both high fidelity and global structure, is a challenging open problem, as it requires modeling temporal scales over at least five orders of magnitude (∼0.1 ms to ∼100 s). Large advances in the state of the art have been pioneered almost exclusively by autoregressive models, such as WaveNet, which solve the scale problem by focusing on the finest scale possible (a single audio sample) and rely upon external conditioning signals for global structure (van den Oord et al., 2016). This comes at the cost of slow sampling speed, since they rely on inefficient ancestral sampling to generate waveforms one audio sample at a time. Due to their high quality, a lot of research has gone into speeding up generation, but the methods introduce significant overhead such as training a secondary student network or writing highly customized low-level kernels (van den Oord et al., 2018; Paine et al., 2016). Furthermore, since these large models operate at a fine timescale, their autoencoder variants are restricted to only modeling local latent structure due to memory constraints (Engel et al., 2017).

On the other end of the spectrum, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have seen great recent success at generating high-resolution images (Berthelot et al., 2017; Kodali et al., 2017; Karras et al., 2018a; Miyato et al., 2018). Typical GANs achieve both efficient parallel sampling and global latent control by conditioning a stack of transposed convolutions on a latent vector. The potential for audio GANs extends further, as adversarial costs have unlocked intriguing domain transformations for images that could possibly have analogues in audio (Wolf et al., 2017; Jin et al., 2017). However, attempts to adapt image GAN architectures to generate waveforms in a straightforward manner fail to reach the same level of perceptual fidelity as their image counterparts.

Figure 1: Frame-based estimation of audio waveforms. Much of sound is made up of locally coherent waves with a local periodicity, pictured as the red-yellow sinusoid with black dots at the start of each cycle. Frame-based techniques, whether they be transposed convolutions or STFTs, have a given frame size and stride, here depicted as equal with boundaries at the dotted lines.
The alignment between the two (phase, indicated by the solid black line and yellow boxes) precesses in time, since the periodicity of the audio and the output stride are not exactly the same. Transposed convolutional filters thus have the difficult task of covering all the necessary frequencies and all possible phase alignments to preserve phase coherence. For an STFT, we can unwrap the phase over the 2π boundary (orange boxes) and take its derivative to get the instantaneous radial frequency (red boxes), which expresses the constant relationship between audio frequency and frame frequency. The spectra are shown for an example trumpet note from the NSynth dataset.

GENERATING INSTRUMENT TIMBRES

GAN researchers have made rapid progress in image modeling by evaluating models on focused datasets with limited degrees of freedom, and gradually stepping up to less constrained domains. For example, the popular CelebA dataset (Liu et al., 2015) is restricted to faces that have been centered and cropped, removing variance in posture and pose, and providing a common reference for qualitative improvements (Karras et al., 2018a) in generating realistic texture and fine-scale features. Later models then built on that foundation to generalize to broader domains (Karras et al., 2018b; Brock et al., 2019).

The NSynth dataset (Engel et al., 2017) was introduced with similar motivation for audio. Rather than containing all types of audio, NSynth consists solely of individual notes from musical instruments across a range of pitches, timbres, and volumes. Similar to CelebA, all the data is aligned and cropped to reduce variance and focus on fine-scale details, which in audio corresponds to timbre and fidelity. Further, each note is also accompanied by an array of attribute labels to enable exploring conditional generation. The original NSynth paper introduced both autoregressive WaveNet autoencoders and bottleneck spectrogram autoencoders, but without the ability to unconditionally sample from a prior. Follow-up work has explored diverse approaches including frame-based regression models (Defossez et al., 2018), inverse scattering networks (Andreux & Mallat, 2018), VAEs with perceptual priors (Esling et al., 2018), and adversarial regularization for domain transfer (Mor et al., 2019). This work builds on these efforts by introducing adversarial training and exploring effective representations for the noncausal convolutional generation typically found in GANs.

EFFECTIVE AUDIO REPRESENTATIONS FOR GANS

Unlike images, most audio waveforms, such as speech and music, are highly periodic. Convolutional filters trained for different tasks on this data commonly learn to form logarithmically-scaled, frequency-selective filter banks spanning the range of human hearing (Dieleman & Schrauwen, 2014; Zhu et al., 2016). Human perception is also highly sensitive to discontinuities and irregularities in periodic waveforms, so maintaining the regularity of periodic signals over short to intermediate timescales (1 ms to 100 ms) is crucial. Figure 1 shows that when the stride of the frames does not exactly equal a waveform's periodicity, the alignment (phase) of the two precesses over time. This condition is essentially guaranteed, as at any time there are typically many different frequencies in a given signal. This is a challenge for a synthesis network, as it must learn all the appropriate frequency and phase combinations and activate them in just the right combination to produce a coherent waveform.
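To make the precession and unwrapping described above concrete, the following minimal NumPy sketch (our illustration, not the paper's code) analyzes a pure tone with a strided STFT and checks that the finite difference of the unwrapped phase at the tone's bin is approximately constant; the tone frequency, window, frame size, and stride are illustrative choices rather than the paper's exact setup.

```python
import numpy as np

# Toy check of phase precession and instantaneous frequency (IF) for a pure tone.
sr, f0 = 16000, 440.0        # sample rate (Hz) and tone frequency (Hz); illustrative values
frame, stride = 1024, 256    # STFT frame size and stride (75% overlap)

t = np.arange(sr) / sr                        # one second of audio
x = np.sin(2 * np.pi * f0 * t)

# Strided, Hann-windowed frames -> complex spectrogram of shape (num_frames, frame//2 + 1).
num_frames = (len(x) - frame) // stride + 1
idx = np.arange(frame)[None, :] + stride * np.arange(num_frames)[:, None]
spec = np.fft.rfft(x[idx] * np.hanning(frame), axis=-1)

k = int(round(f0 * frame / sr))               # frequency bin nearest the tone
phase = np.angle(spec[:, k])                  # wrapped phase: precesses from frame to frame
unwrapped = np.unwrap(phase)                  # add 2*pi at discontinuities -> grows linearly
inst_freq = np.diff(unwrapped)                # radians per frame: approximately constant

# Expected phase advance per stride, wrapped to (-pi, pi].
expected = (2 * np.pi * f0 * stride / sr + np.pi) % (2 * np.pi) - np.pi
print("measured IF (rad/frame):", inst_freq.mean(), "expected:", expected)
```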
This phase precession is exactly the same phenomenon observed with a short-time Fourier transform (STFT), which is composed of strided filterbanks just like convolutional networks. Phase precession also occurs in situations where filterbanks overlap (window or kernel size > stride).

In the middle of Figure 1, we diagram another approach to generating coherent waveforms, loosely inspired by the phase vocoder (Dolson, 1986). A pure tone produces a phase that precesses. Unwrapping the phase, by adding 2π whenever it crosses a phase discontinuity, causes the precessing phase to grow linearly. We then observe that the derivative of the unwrapped phase with respect to time remains constant and is equal to the angular difference between the frame stride and signal periodicity. This is commonly referred to as the instantaneous angular frequency, and it is a time-varying measure of the true signal oscillation. With a slight abuse of terminology we will simply refer to it as the instantaneous frequency (IF) (Boashash, 1992). Note that for the spectra at the bottom of Figure 1, the pure harmonic frequencies of a trumpet cause the wrapped phase spectra to oscillate at different rates, while the unwrapped phase smoothly diverges and the IF spectra form solid bands where the harmonic frequencies are present.

CONTRIBUTIONS

In this paper, we investigate the interplay of architecture and representation in synthesizing coherent audio with GANs. Our key findings include:
• Generating log-magnitude spectrograms and phases directly with GANs can produce more coherent waveforms than directly generating waveforms with strided convolutions.
• Estimating IF spectra leads to more coherent audio still than estimating phase.
• It is important to keep harmonics from overlapping. Both increasing the STFT frame size and switching to a mel frequency scale improve performance by creating more separation between the lower harmonic frequencies. Harmonic frequencies are multiples of the fundamental, so low pitches have tightly-spaced harmonics, which can cause blurring and overlap.
• On the NSynth dataset, GANs can outperform a strong WaveNet baseline in automatic and human evaluations, and generate examples ∼54,000 times faster.
• Global conditioning on latent and pitch vectors allows GANs to generate perceptually smooth interpolation in timbre, and consistent timbral identity across pitch.

DATASET

We focus our study on the NSynth dataset, which contains 300,000 musical notes from 1,000 different instruments, aligned and recorded in isolation. NSynth is a difficult dataset composed of highly diverse timbres and pitches, but it is also highly structured, with labels for pitch, velocity, instrument, and acoustic qualities (Liu et al., 2015; Engel et al., 2017). Each sample is four seconds long and sampled at 16 kHz, giving 64,000 dimensions. As we wanted to include human evaluations of audio quality, we restricted ourselves to training on the subset of acoustic instruments and fundamental pitches ranging from MIDI 24-84 (∼32-1000 Hz), as those timbres are most likely to sound natural to an average listener. This left us with 70,379 examples from instruments that are mostly strings, brass, woodwinds, and mallets. We created a new 80/20 train/test split from shuffled data, as the original split was divided along instrument type, which is not desirable for this task.

ARCHITECTURE AND REPRESENTATIONS

Taking inspiration from successes in image generation, we adapt the progressive training methods of Karras et al. (2018a) to instead generate audio spectra.
While we search over a variety of hyperparameter configurations and learning rates, we direct readers to the original paper for an in-depth analysis (Karras et al., 2018a), and to the appendix for complete details. Briefly, the model samples a random vector z from a spherical Gaussian and runs it through a stack of transposed convolutions to upsample and generate output data x = G(z), which is fed into a discriminator network of downsampling convolutions (whose architecture mirrors the generator's) to estimate a divergence measure between the real and generated distributions. As in Karras et al. (2018a), we use a gradient penalty to promote Lipschitz continuity, and pixel normalization at each layer. We also try training both progressive and nonprogressive variants, and see comparable quality in both. While it is not essential for success, we do see slightly better convergence time and sample diversity for progressive training, so for the remainder of the paper, all models are compared with progressive training.

Unlike Progressive GAN, our method involves conditioning on an additional source of information. Specifically, we append a one-hot representation of musical pitch to the latent vector, with the musically desirable goal of achieving independent control of pitch and timbre. To encourage the generator to use the pitch information, we also add an auxiliary classification (Odena et al., 2017) loss to the discriminator that tries to predict the pitch label.

For spectral representations, we compute STFT magnitudes and phase angles using TensorFlow's built-in implementation. We use an STFT with a stride of 256 and a frame size of 1024, resulting in 75% frame overlap and 513 frequency bins. We trim the Nyquist frequency and pad in time to get an "image" of size (256, 512, 2). The two channels correspond to magnitude and phase. We take the log of the magnitude to better constrain the range, and then scale the magnitudes to be between -1 and 1 to match the tanh output nonlinearity of the generator network. The phase angle is also scaled to between -1 and 1, and we refer to these variants as "phase" models. We optionally unwrap the phase angle and take the finite difference, as in Figure 1; we call the resulting models "instantaneous frequency" ("IF") models. We also find performance is sensitive to having sufficient frequency resolution at the lower frequency range. Maintaining 75% overlap, we are able to double the STFT frame size and stride, resulting in spectral images of size (128, 1024, 2), which we refer to as high frequency resolution, "+H", variants. Lastly, to provide even more separation of the lower frequencies, we transform both the log magnitudes and instantaneous frequencies to a mel frequency scale without dimensional compression (1024 bins), which we refer to as "IF-Mel" variants. To convert back to linear STFTs, we just use the approximate inverse linear transformation, which, perhaps surprisingly, does not harm audio quality significantly.

It is important for us to compare against strong baselines, so we adapt WaveGAN, the current state of the art in waveform generation with GANs, to accept pitch conditioning, and retrain it on our subset of the NSynth dataset. We also independently train our own waveform-generating GANs off the progressive codebase, and our best models achieve similar performance to WaveGAN without progressive training, so we opt to primarily show numbers from WaveGAN instead (see appendix Table 2 for more details).
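As a rough, self-contained illustration of the log-magnitude / instantaneous-frequency "image" described above, the sketch below mirrors the pipeline in NumPy rather than TensorFlow; the window choice, epsilon, and the exact affine scaling into [-1, 1] are assumptions made for illustration, not the authors' constants.

```python
import numpy as np

def logmag_if_image(audio, frame=1024, stride=256, eps=1e-6):
    """Sketch of the (log magnitude, IF) representation: STFT, drop the Nyquist
    bin, squash log magnitudes into [-1, 1], and take the time difference of
    the unwrapped phase scaled by 1/pi. Constants here are illustrative."""
    num_frames = (len(audio) - frame) // stride + 1
    idx = np.arange(frame)[None, :] + stride * np.arange(num_frames)[:, None]
    spec = np.fft.rfft(audio[idx] * np.hanning(frame), axis=-1)[:, :-1]  # 512 bins

    # Log magnitude, squashed into [-1, 1] to match a tanh generator output.
    log_mag = np.log(np.abs(spec) + eps)
    log_mag = np.clip(log_mag / -np.log(eps), -1.0, 1.0)

    # Unwrap phase along time, take the finite difference, scale radians by 1/pi.
    phase = np.unwrap(np.angle(spec), axis=0)
    inst_freq = np.diff(phase, axis=0, prepend=phase[:1]) / np.pi  # values in [-1, 1]

    return np.stack([log_mag, inst_freq], axis=-1)  # (time, 512, 2) "image"

# A 4-second, 16 kHz note gives 247 frames here; the paper pads in time to 256.
note = np.sin(2 * np.pi * 440.0 * np.arange(64000) / 16000)
print(logmag_if_image(note).shape)  # (247, 512, 2)
```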
Beyond GANs, WaveNet (van den Oord et al., 2016) is currently the state of the art in generative modeling of audio. Prior work on the NSynth dataset used a WaveNet autoencoder to interpolate between sounds (Engel et al., 2017), but it is not an unconditional generative model, as it requires conditioning on the original audio. Thus, we create strong WaveNet baselines by adapting the architecture to accept the same one-hot pitch conditioning signal as the GANs. We train variants using both a categorical 8-bit mu-law and a 16-bit mixture of logistics for the output distributions, but find that the 8-bit model is more stable and outperforms the 16-bit model (see appendix Table 2 for more details).

METRICS

Evaluating generative models is itself a difficult problem: because our goals (perceptually realistic audio generation) are hard to formalize, the most common evaluation metrics tend to be heuristic and have "blind spots" (Theis et al., 2016). To mitigate this, we evaluate all of our models against a diverse set of metrics, each of which captures a distinct aspect of model performance. Our evaluation metrics are as follows:

• Human Evaluation. We use human evaluators as our gold standard of audio quality because it is notoriously hard to measure in an automated manner. In the end, we are interested in training networks to synthesize coherent waveforms, specifically because human perception is extremely sensitive to phase irregularities and these irregularities are disruptive to a listener. We used Amazon Mechanical Turk to perform a comparison test on examples from all models presented in Table 1 (this includes the hold-out dataset). The participants were presented with two 4-second examples corresponding to the same pitch. On a five-level Likert scale, the participants evaluated the statement "Sample A has better audio quality / has less audio distortions than Sample B". For the study, we collected 3600 ratings, and each model is involved in 800 comparisons.

• Inception Score (IS). Salimans et al. (2016) propose a metric for evaluating GANs which has become a de facto standard in the GAN literature (Miyato et al., 2018; Karras et al., 2018a). Generated examples are run through a pretrained Inception classifier, and the Inception Score is defined as the mean KL divergence between the image-conditional output class probabilities and the marginal distribution of the same. IS penalizes models whose examples aren't each easily classified into a single class, as well as models whose examples collectively belong to only a few of the possible classes. Though we still call our metric "IS" for consistency, we replace the Inception features with features from a pitch classifier trained on spectrograms of our acoustic NSynth dataset.

• Pitch Accuracy (PA) and Pitch Entropy (PE). Because the Inception Score can conflate models which don't produce distinct pitches and models which produce only a few pitches, we also separately measure the accuracy of the same pretrained pitch classifier on generated examples (PA) and the entropy of its output distribution (PE).

• Fréchet Inception Distance (FID). Heusel et al. (2017) propose a metric for evaluating GANs based on the 2-Wasserstein (or Fréchet) distance between multivariate Gaussians fit to features extracted from a pretrained Inception classifier, and show that this metric correlates with perceptual quality and diversity on synthetic distributions. As with the Inception Score, we use pitch-classifier features instead of Inception features.
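For reference, the Fréchet distance underlying this FID-style score can be computed from two sets of classifier features as sketched below; this is a generic sketch with illustrative names, and the pitch-classifier feature extractor itself is not shown.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Frechet (2-Wasserstein) distance between Gaussians fit to two feature
    sets of shape (num_examples, dim), as used for FID-style evaluation."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)

    # ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real            # drop tiny imaginary parts from sqrtm
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

# Example with random stand-in features; real usage would pass pitch-classifier
# activations for real and generated audio.
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(500, 64)), rng.normal(size=(500, 64))))
```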
RESULTS

Table 1 presents a summary of our results on all model and representation variants. Our most discerning measure of audio quality, human evaluation, shows a clear trend, summarized in Figure 2. Quality decreases as output representations move from IF-Mel, IF, and Phase, to Waveform. The highest quality model, IF-Mel, was judged comparable but slightly inferior to real data. The WaveNet baseline produces high-fidelity sounds, but occasionally breaks down into feedback and self-oscillation, resulting in a score that is comparable to the IF GANs.

While there is no a priori reason that sample diversity should correlate with audio quality, we indeed find that NDB (the number of statistically-different bins, a measure of sample diversity) follows the same trend as the human evaluation. Additionally, high frequency resolution improves the NDB score across model types. The WaveNet baseline receives the worst NDB score. Even though the generative model assigns high likelihood to all the training data, the autoregressive sampling itself has a tendency to gravitate to the same type of oscillation for each given pitch conditioning, leading to an extreme lack of diversity. Histograms of the sample distributions, showing peaky distributions for the different models, can be found in the appendix.

FID provides a similar story to the first two metrics, with significantly lower scores for IF models with high frequency resolution. Comparatively, mel scaling has much less of an effect on the FID than it does in the listener study. Phase models have high FID, even at high frequency resolution, reflecting their poor sample quality.

Many of the models do quite well on the classifier metrics of IS, Pitch Accuracy, and Pitch Entropy, because they have explicit conditioning telling them what pitches to generate. All of the high-resolution models actually generate examples classified with similar accuracy to the real data. As this accuracy and entropy can be a strong function of the distribution of generated examples, which most certainly does not match the training distribution due to mode collapse and other issues, there is little discriminative information to gain about sample quality from differences among such high scores. The metrics do provide a rough measure of which models are less reliably generating classifiable pitches, which seems to be the low-frequency-resolution models to some extent, and the baselines.

QUALITATIVE ANALYSIS

While we do our best to visualize qualitative audio concepts, we highly recommend the reader listen to the accompanying audio examples provided at https://goo.gl/magenta/gansynth-examples. Notice that the real data completely overlaps itself, as the waveform is extremely periodic. The WaveGAN and PhaseGAN, however, have many phase irregularities, creating a blurry web of lines. The IFGAN is much more coherent, having only small variations from cycle to cycle. In the Rainbowgrams below, the real data and IF models have coherent waveforms that result in strong, consistent colors for each harmonic, while the PhaseGAN has many speckles due to phase discontinuities, and the WaveGAN model is quite irregular.

Figure 3 visualizes the phase coherence of examples from different GAN variants. It is clear from the waveforms at the top, which are wrapped at the fundamental frequency, that the real data and IF models produce waveforms that are consistent from cycle to cycle. The PhaseGAN has some phase discontinuities, while the WaveGAN is quite irregular.
Below the waveforms we use Rainbowgrams (Engel et al., 2017) to depict the log magnitude of the frequencies as brightness and the IF as the color on a rainbow color map. This visualization shows the clear phase coherence of the harmonics in the real data and the IFGAN through strong, consistent colors for each harmonic. In contrast, the PhaseGAN discontinuities appear as speckled noise, and the WaveGAN appears largely incoherent.

INTERPOLATION

As discussed in the introduction, GANs allow conditioning the entire sequence on the same latent vector, as opposed to only short subsequences for memory-intensive autoregressive models like WaveNet. WaveNet autoencoders, such as the ones in (Engel et al., 2017), learn local latent codes that control generation on the scale of milliseconds but have limited scope, and these codes have a structure of their own that must be modelled and does not fit a compact prior. In Figure 4 we take a pretrained WaveNet autoencoder and compare interpolating between examples in the raw waveform (top), the distributed latent code of a WaveNet autoencoder, and the global code of an IF-Mel GAN. Interpolating the waveform is perceptually equivalent to mixing between the amplitudes of two distinct sounds. WaveNet improves upon this for the two notes by mixing in the space of timbre, but the linear interpolation does not correspond to the complex prior on latents, and the intermediate sounds have a tendency to fall apart, oscillate and whistle, which are the natural failure modes for a WaveNet model. The GAN model, however, has a spherical Gaussian prior which is decoded to produce the entire sound, and spherical interpolation stays well aligned with the prior (see the sketch below). Thus, the perceptual change during interpolation is smooth and all intermediate latent vectors are decoded to produce sounds without additional artifacts. As a more musical example, in the audio examples we interpolate the timbre between 15 random points in latent space while using the pitches from the prelude to Bach's Suite No. 1 in G major. As seen in appendix Figure 7, the timbre of the sounds morphs smoothly between many instruments while the pitches consistently follow the composed piece.

CONSISTENT TIMBRE ACROSS PITCH

While timbre varies slightly for a natural instrument across its register, on the whole it remains consistent, giving the instrument its unique character. In the audio examples, we fix the latent conditioning variable and generate examples by varying the pitch conditioning over five octaves. It is clear that the timbral identity of the GAN remains largely intact, creating a unique instrument identity for the given point in latent space. As seen in appendix Figure 7, the Bach prelude rendered with a single latent vector has a consistent harmonic structure across a range of pitches.

FAST GENERATION

One of the advantages of GANs with upsampling convolutions over autoregressive models is that both training and generation can be processed in parallel for the entire audio sample. This is quite amenable to modern GPU hardware, which is often I/O bound with iterative autoregressive algorithms. This can be seen when we synthesize a single four-second audio sample on a TitanX GPU: the latency to completion drops from 1077.53 seconds for the WaveNet baseline to 20 milliseconds for the IF-Mel GAN, making it around 53,880 times faster. Previous applications of WaveNet autoencoders trained on the NSynth dataset for music performance relied on pre-rendering all possible sounds for playback due to the long synthesis latency. This work opens up the intriguing possibility of real-time neural network audio synthesis on device, allowing users to explore a much broader palette of expressive sounds.
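To make the spherical interpolation discussed above concrete, the following sketch interpolates between two latent vectors along the great circle connecting them; each intermediate vector would then be decoded by the generator. This is an illustrative reconstruction, not the authors' code; the stubbed `generator` callable and the latent dimensionality of 256 are assumptions.

```python
"""Spherical interpolation (slerp) between two GAN latent vectors.

Illustrative sketch: the generator is stubbed out, and the latent size of 256
is an assumption, not a value taken from the paper's configuration.
"""
import numpy as np


def slerp(z0, z1, t, eps=1e-8):
    """Interpolate along the great circle between z0 and z1 at fraction t."""
    z0_n = z0 / (np.linalg.norm(z0) + eps)
    z1_n = z1 / (np.linalg.norm(z1) + eps)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))
    if omega < eps:                        # nearly parallel: fall back to lerp
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z_a, z_b = rng.normal(size=256), rng.normal(size=256)

    def generator(z, pitch):
        """Stand-in for a trained IF-Mel generator (hypothetical signature)."""
        return np.zeros(16000)             # placeholder where decoded audio would go

    steps = [generator(slerp(z_a, z_b, t), pitch=60) for t in np.linspace(0, 1, 9)]
    print(f"decoded {len(steps)} interpolation steps")
```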
RELATED WORK

Much of the work on deep generative models for audio tends to focus on speech synthesis (van den Oord et al., 2018; Wang et al., 2017). These datasets require handling variable-length conditioning (phonemes/text) and audio, and often rely on recurrent and/or autoregressive models for variable-length inputs and outputs. It would be interesting to compare adversarial audio synthesis to these methods, but we leave this to future work, as adapting GANs to use variable-length conditioning or recurrent generators is a non-trivial extension of the current work.

In comparison to speech, audio generation for music is relatively under-explored. van den Oord et al. (2016) and subsequent work propose autoregressive models and demonstrate their ability to synthesize musical instrument sounds, but these suffer from the aforementioned slow generation. GANs were first applied to audio generation with coherent results, but fell short of the audio fidelity of autoregressive likelihood models.

Our work also builds on multiple recent advances in the GAN literature. Prior work proposes a modification to the loss function of GANs and demonstrates improved training stability and architectural robustness. Karras et al. (2018a) further introduce progressive training, in which successive layers of the generator and discriminator are learned in a curriculum, leading to improved generation quality given a limited training time. They also propose a number of architectural tricks to further improve quality, which we employ in our best models.

The NSynth dataset was first introduced as a "CelebA of audio" (Liu et al., 2015; Engel et al., 2017) and used with WaveNet autoencoders to interpolate between timbres of musical instruments, but with very slow sampling speeds. Mor et al. (2019) expanded on this work by incorporating an adversarial domain confusion loss to achieve timbre transformations between a wide range of audio sources. Defossez et al. (2018) achieve significant sampling speedups (∼2,500x) over WaveNet autoencoders by training a frame-based regression model to map from pitch and instrument labels to raw waveforms. They consider a unimodal likelihood regression loss in log spectrograms and backpropagate through the STFT, which yields good frequency estimation but provides no incentive to learn phase coherency or handle multimodal distributions. Their architecture also requires a large number of channels, slowing down sample generation and training.

CONCLUSION

By carefully controlling the audio representation used for generative modeling, we have demonstrated high-quality audio generation with GANs on the NSynth dataset, exceeding the fidelity of a strong WaveNet baseline while generating samples tens of thousands of times faster. While this is a major advance for audio generation with GANs, this study focused on a specific controlled dataset, and further work is needed to validate and expand it to a broader class of signals including speech and other types of natural sound. This work also opens up possible avenues for domain transfer and other exciting applications of adversarial losses to audio. Issues of mode collapse and diversity common to GANs exist for audio as well, and we leave it to further work to consider combining adversarial losses with encoders or more straightforward regression losses to better capture the full data distribution.
A MEASURING DIVERSITY ACROSS GENERATED EXAMPLES

Network architectures are summarized in Table 3, including the addition of a pitch classifier to the end of the discriminator, as in AC-GAN. All models were trained with the ADAM optimizer (Kingma & Ba, 2014). We sweep over learning rates (2e-4, 4e-4, 8e-4) and weights of the auxiliary classifier loss (0.1, 1.0, 10), and find that for all variants (spectral representation, progressive/no progressive, frequency resolution) a learning rate of 8e-4 and a classifier loss weight of 10 perform best. As in the original progressive GAN paper, both networks use box upscaling/downscaling and the generators use pixel normalization, in which each activation x at a given batch, height and width position (n, h, w) is normalized by the root mean square of the activations over the C channels at that position (see the sketch at the end of this appendix). The discriminator also appends the standard deviation of the minibatch activations as a scalar channel near the end of the convolutional stack, as seen in Table 3. Since we find it helpful to use a Tanh output nonlinearity for the generator, we normalize real data before passing it to the discriminator. We measure the maximum range over 100 examples and independently shift and scale the log-magnitudes and phases to [-0.8, 0.8] to allow for outliers and to use more of the linear regime of the Tanh nonlinearity.

We train each GAN variant for 4.5 days on a single V100 GPU, with a batch size of 8. For non-progressive models, this equates to training on ∼5M examples. For progressive models, we train on 1.6M examples per stage (7 stages), 800k during alpha blending and 800k after blending. At the last stage we continue training until the 4.5 days are complete. Because the earlier stages train faster, the progressive models train on ∼11M examples.

For the WaveNet baseline, we also adapt the open-source TensorFlow implementation. The decoder is composed of 30 layers of dilated convolution, each with 512 channels and a receptive field of 3, and each with a 1x1 convolution skip connection to the output. The layers are divided into 3 stacks of 10, with dilation in each stack increasing from 2^0 to 2^9 and then repeating. We replace the audio encoder stack with a conditioning stack operating on a one-hot pitch conditioning signal distributed in time (3 seconds on, 1 second off). The conditioning stack is 5 layers of dilated convolution, with dilation increasing to 2^5, followed by 3 layers of regular convolution, all with 512 channels. This conditioning signal is then passed through a 1x1 convolution for each layer of the decoder and added to the output of each layer, as in other implementations of WaveNet conditioning. For the 8-bit model we use mu-law encoding of the audio and a categorical loss, while for the 16-bit model we use a quantized mixture of 10 logistics (Salimans et al., 2017). WaveNets converged by 150k iterations in 2 days on 32 V100 GPUs, trained with synchronous SGD with a batch size of 1 per GPU, for a total batch size of 32.
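The pixel normalization and minibatch standard-deviation tricks referenced above can be written compactly; the numpy sketch below is an illustrative reconstruction of those operations in a simplified form (the epsilon value, the NHWC layout and the use of a single group for the minibatch statistic are assumptions, not values from the paper).

```python
"""Illustrative numpy sketches of pixel normalization and the minibatch
standard-deviation feature used in progressive GANs. NHWC layout and the
epsilon constant are typical assumptions, not quoted from the paper."""
import numpy as np


def pixel_norm(x, eps=1e-8):
    """Normalize each (n, h, w) position by the RMS over its C channels."""
    rms = np.sqrt(np.mean(np.square(x), axis=-1, keepdims=True) + eps)
    return x / rms


def minibatch_stddev_channel(x):
    """Append one scalar feature map: the mean (over h, w, c) of the
    per-position standard deviation computed across the batch."""
    std_per_position = np.std(x, axis=0)              # (H, W, C)
    scalar = np.mean(std_per_position)                # single number
    n, h, w, _ = x.shape
    extra = np.full((n, h, w, 1), scalar, dtype=x.dtype)
    return np.concatenate([x, extra], axis=-1)


if __name__ == "__main__":
    acts = np.random.default_rng(0).normal(size=(8, 4, 4, 16)).astype(np.float32)
    print(pixel_norm(acts).shape)                     # (8, 4, 4, 16)
    print(minibatch_stddev_channel(acts).shape)       # (8, 4, 4, 17)
```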
Modeling and analysis of loaded multilayered magneto-electro-elastic structures: Applications to composite materials

The aim of this study is to develop a two-scale tool allowing the detailed analysis of the behavior of fiber-reinforced magneto-electro-elastic composite plates. The work is divided into two major sections. The first one deals with the homogenization of the properties of each layer based on the Mori-Tanaka mean-field approach, where all needed effective coefficients of each layer are determined. In the second one, and in order to perform the analysis of the behavior of the obtained magneto-electro-elastic multilayered plate, the Stroh formalism is used. It allows prediction of the effective behavior of such plates and of the spatial distribution of the local fields along the layers.

Introduction

The behavior of active materials often exhibits multiphysical coupling effects. Moreover, composite materials are increasingly used to combine the different advantages of each constituent. Magneto-electro-elastic composites represent a new class of materials with several potential applications in modern nanoscience and nanotechnology. The interaction between electric polarization and magnetization offers new possibilities for functional materials such as sensors and actuators. In addition, and in order to analyze materials with magneto-electric coupling, it is important to be able to determine the distribution of the physical fields within these heterogeneous structures [1].

Many analytical and mathematical models have been developed to predict the behavior of new heterogeneous magneto-electro-elastic composite materials. Li [2] studied the average magneto-electro-elastic field in multi-inclusions or inhomogeneities embedded in an infinite matrix. Feng et al. [3] investigated the effective properties of a composite consisting of piezomagnetic inhomogeneities embedded in a non-piezomagnetic matrix by using a unified energy method and the Mori-Tanaka and Dilute approaches. Zhang and Soh [4] extended the micromechanical Self-Consistent, Mori-Tanaka and Dilute schemes to study coupled magneto-electro-elastic composite materials. The effective properties of multiphase and coated magneto-electro-elastic heterogeneous materials have been investigated by Bakkali et al. [5] based on various micromechanical models. Some approaches have been proposed to deal with fully coupled magneto-electro-elastic laminates. Several explicit expressions have been found by Kim [6] to calculate the magnetic, electric, elastic, piezoelectric, magneto-elastic and magneto-electric effective properties. Similar results have been obtained in [7, 8]. More recently, L. M. Sixto-Camacho et al. [9] used asymptotic homogenization to derive the local problems and the corresponding homogenized coefficients of periodic thermo-magneto-electro-elastic heterogeneous media. The theory is applied to obtain analytical expressions for all effective properties of an important class of periodic multilaminated composites.

The Mori-Tanaka model presented in this paper is used to predict the effective magneto-electro-elastic coefficients. This model takes into account the effect of the number of phases and their concentrations, the shape of the inclusions, as well as their poling orientation. Results for a two-phase composite material (piezo-electric/piezo-magnetic) with a fibrous microstructure are presented.
The macroscale equilibrium equations are solved analytically using the Stroh formalism [10, 11] associated with the propagation matrix. It should be noted that a similar analysis has been proposed by [15] to deal with piezoelectric fiber actuators. However, in that multiscale analysis the behavior of each layer was obtained by periodic homogenization, which requires a more intricate numerical procedure.

This formalism provides solutions for a multifunctional multilayered plate, predicting the mechanical, electrical and magnetic behaviors near or across the interfaces of the material layers.

The coupled multiscale analysis procedure is illustrated through two model problems. The first model problem presents the behavior of a sandwich plate made of three heterogeneous magneto-electro-elastic layers under a surface mechanical load. The second problem describes the evolution of some physical properties of a graded material composed of six heterogeneous magneto-electro-elastic layers under a surface mechanical load.

These numerical results should be of interest for the design of magneto-electro-elastic composite laminates.

Constitutive laws and equilibrium equations of magneto-electro-elastic material

The constitutive equations for the magneto-electro-elastic medium relate the stress σij, electric displacement Di and magnetic induction Bi to the strain εkl, electric field Ek and magnetic field Hk, exhibiting linear coupling between the magnetic, electric and elastic fields:

σij = Cijkl εkl − ekij Ek − hkij Hk,
Di = eikl εkl + κik Ek + αik Hk,
Bi = hikl εkl + αik Ek + μik Hk,

where Cijkl, ekij, hkij, αik, κik and μik are the elastic, piezo-electric, piezo-magnetic, magneto-electric, dielectric and magnetic permeability constants, respectively. The following gradient expressions are used:

εkl = (uk,l + ul,k)/2,  Ek = −φ,k,  Hk = −ψ,k,

where ui is the elastic displacement, φ the electric potential and ψ the magnetic potential.

In order to ease the manipulation of these equations, particular notations are used. These notations are identical to the conventional subscripts except that lower-case subscripts assume the range 1-3, while capital subscripts take the range 1-5, and repeated capital subscripts are summed over 1-5. With these notations, the magneto-electro-elastic constants EiJMn can be represented as follows [5]:

EiJMn = Cijmn for J, M = 1, 2, 3;
EiJMn = enij for J = 1, 2, 3 and M = 4;  EiJMn = eimn for J = 4 and M = 1, 2, 3;
EiJMn = hnij for J = 1, 2, 3 and M = 5;  EiJMn = himn for J = 5 and M = 1, 2, 3;
EiJMn = −κin for J = M = 4;  EiJMn = −αin for J = 4, M = 5 or J = 5, M = 4;  EiJMn = −μin for J = M = 5.

The generalized strain field, denoted ZMn, can be expressed as

ZMn = εmn for M = 1, 2, 3;  ZMn = −En for M = 4;  ZMn = −Hn for M = 5.

Similarly, the generalized stress field ΣiJ is given by

ΣiJ = σij for J = 1, 2, 3;  ΣiJ = Di for J = 4;  ΣiJ = Bi for J = 5.

The equations of equilibrium, in the absence of body forces and free charge and current, can be written as

ΣiJ,i = 0.   (6)

In what follows we study the response of a multilayer subjected to uniaxial loading, while considering each layer as a magneto-electro-elastic material composed of either a piezo-magnetic matrix with different volume fractions of piezo-electric inclusions, or a piezo-electric matrix with different volume fractions of piezo-magnetic inclusions.
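To make the extended-index notation above concrete, the sketch below assembles the generalized modulus array EiJMn from the individual tensors. It is an illustrative sketch assuming the sign convention written above (negative κ, α and μ blocks); the random input tensors are placeholders, not material data, and real applications would use the measured constants of BaTiO3 and CoFe2O4.

```python
"""Assemble the generalized magneto-electro-elastic moduli E[i, J, M, n]
from C (elastic), e (piezoelectric), h (piezomagnetic), kappa (dielectric),
alpha (magnetoelectric) and mu (magnetic permeability).

Illustrative sketch following the sign convention in the text above; the
random inputs below stand in for real material constants.
"""
import numpy as np


def build_generalized_moduli(C, e, h, kappa, alpha, mu):
    """C: (3,3,3,3); e, h: (3,3,3) indexed as e[n, i, j]; kappa/alpha/mu: (3,3).

    Returns E with shape (3, 5, 5, 3), indexed as E[i, J, M, n]
    (0-based here: J, M = 0-2 mechanical, 3 electric, 4 magnetic).
    """
    E = np.zeros((3, 5, 5, 3))
    E[:, 0:3, 0:3, :] = C                                   # elastic block
    for i in range(3):
        for j in range(3):
            for n in range(3):
                E[i, j, 3, n] = e[n, i, j]                  # J mechanical, M electric
                E[i, 3, j, n] = e[i, j, n]                  # J electric, M mechanical
                E[i, j, 4, n] = h[n, i, j]                  # J mechanical, M magnetic
                E[i, 4, j, n] = h[i, j, n]                  # J magnetic, M mechanical
    for i in range(3):
        for n in range(3):
            E[i, 3, 3, n] = -kappa[i, n]
            E[i, 3, 4, n] = -alpha[i, n]
            E[i, 4, 3, n] = -alpha[i, n]
            E[i, 4, 4, n] = -mu[i, n]
    return E


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    E = build_generalized_moduli(rng.normal(size=(3, 3, 3, 3)),
                                 rng.normal(size=(3, 3, 3)),
                                 rng.normal(size=(3, 3, 3)),
                                 np.eye(3), 0.1 * np.eye(3), np.eye(3))
    print(E.shape)   # (3, 5, 5, 3)
```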
Micromechanics modelling

In this section, the effective properties of two kinds of magneto-electro-elastic composites are computed based on the Mori-Tanaka micromechanical mean-field approach. The first is constituted of a piezo-magnetic matrix (CoFe2O4) reinforced by aligned fibrous piezo-electric inclusions (BaTiO3), and the second is constituted of a piezo-electric matrix (BaTiO3) reinforced by aligned fibrous piezo-magnetic inclusions (CoFe2O4). The micromechanics modeling is divided into two steps: the localization step, which relates the local fields to the global ones, and the homogenization step, which is based on averaging techniques. A representative volume element V of the composite is considered. The macroscopic fields are related to the local ones by the mean (volume-average) operator over V. For an N-phase composite, the macroscopic fields are reformulated as

Z̄ = Σi fi Z_i   (7)   and   Σ̄ = Σi fi Σ_i,   (8)

where i points to the i-th phase, fi is the associated volume fraction, and Z_i and Σ_i represent the local uniform fields.

Moreover, the overall constitutive equations that represent the effective behavior of the composite and of each of its constituents (phase p) are given respectively by

Σ̄_iJ = E^eff_iJMn Z̄_Mn   and   Σ^p_iJ = E^p_iJMn Z^p_Mn.

In order to make the scale transition between the local uniform fields (phases) and the macroscopic fields (composite), localization tensors are introduced. One can write the localization equations as follows [5]:

Z^i_Mn = A^i_MnKl Z̄_Kl.   (10)

Based on the averaging techniques (Eqs. 7 and 8) and using the localization equations (Eq. 10), the expression of the effective properties is obtained:

E^eff_iJMn = Σi fi E^i_iJKl A^i_KlMn.

For the case of the two-phase magneto-electro-elastic composite considered in this article, the expression of the effective properties is given by

E^eff_iJMn = E^M_iJMn + fI (E^I_iJKl − E^M_iJKl) A^I_KlMn,   (12)

where E^I_iJMn and fI represent the properties of the inclusions and their associated volume fraction, and E^M_iJMn represents the properties of the matrix. The localization tensor is a function of the phases' properties as well as of the shape of the inclusions, and it can be estimated based on different micromechanical models. In this paper the Mori-Tanaka model, known to be accurate and easy to implement, is considered. Its expression is given by [5]

A^MT_I = A^dil_I [fM I + fI A^dil_I]^(-1),   with   A^dil_I = [I + T^II (E^I − E^M)]^(-1),

in which I is the identity tensor and T^II is the magneto-electro-elastic interaction tensor, which is a function of the properties of the matrix and of the shape of the inclusion. Details about the computation of the interaction tensor are given in [5]. Some numerical results are presented below for the considered magneto-electro-elastic composites [5]. The properties used are listed in Table 1. The electro-magnetic moduli are presented in Figure 1, which shows the evolution of these coefficients versus the volume fraction of the piezo-electric inclusions. Numerical data for the effective coefficients at different volume fractions of inclusions are also presented in Table 2.

Figure 1: The effective magneto-electric moduli α11 and α33 for a fibrous magneto-electro-elastic composite constituted of piezo-electric inclusions embedded in a piezo-magnetic matrix, plotted versus the volume fraction of the piezo-electric inclusions.
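As an illustration of the Mori-Tanaka estimate written above, the sketch below evaluates the dilute and Mori-Tanaka concentration tensors and the resulting effective moduli for a two-phase composite, with the tensors flattened to square matrices. The toy 2x2 moduli and the interaction tensor T are placeholders; a real calculation would use the full generalized moduli and the interaction tensor for cylindrical fibers described in [5].

```python
"""Two-phase Mori-Tanaka estimate in a generic matrix form.

E_M, E_I : matrix and inclusion generalized moduli (square matrices)
T        : interaction tensor of the matrix for the fiber shape (placeholder)
f_I      : inclusion volume fraction
All numerical inputs below are illustrative stand-ins, not the paper's data.
"""
import numpy as np


def mori_tanaka_effective(E_M, E_I, T, f_I):
    n = E_M.shape[0]
    I = np.eye(n)
    A_dil = np.linalg.inv(I + T @ (E_I - E_M))          # dilute concentration tensor
    A_MT = A_dil @ np.linalg.inv((1.0 - f_I) * I + f_I * A_dil)
    E_eff = E_M + f_I * (E_I - E_M) @ A_MT              # two-phase effective moduli
    return E_eff, A_MT


if __name__ == "__main__":
    E_M = np.array([[10.0, 1.0], [1.0, 5.0]])           # toy matrix moduli
    E_I = np.array([[50.0, 2.0], [2.0, 20.0]])          # toy inclusion moduli
    T = 0.05 * np.eye(2)                                # placeholder interaction tensor
    for f in (0.1, 0.3, 0.5):
        E_eff, _ = mori_tanaka_effective(E_M, E_I, T, f)
        print(f"f_I = {f:.1f} ->\n{E_eff}")
```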
Stroh formalism solution for the macroscopic fields

In order to analyze and design materials and devices with magneto-electric coupling, it is important to determine the distribution of the physical fields within these heterogeneous structures. We use the Stroh formalism, described by Ting [11], to obtain a general solution of Eq. (6). The state variables satisfy the simply supported boundary conditions. Solutions for the extended displacement vector and the traction vector are assumed in a separable form, sinusoidal in the in-plane coordinates and exponential through the thickness (Eq. 15), with p = n/Lx and q = m/Ly, where n and m are two positive integers.

Based on the Stroh formalism [10, 11], the solution is written in terms of an extended vector a that collects the amplitudes of the elastic displacements and of the electric and magnetic potentials. Requiring the stresses to satisfy the equations of equilibrium yields, in terms of the vector a, an eigenrelation, and the corresponding linear eigensystem is solved for the admissible through-thickness exponents and their eigenvectors.

In order to obtain the extended displacement and traction vectors at any depth, say in layer k, we propagate the solution from the bottom surface to the z-level using the propagation matrix of the layer, which depends on the thickness of layer j. The propagating relation can be used repeatedly, so that one can propagate the physical quantities from the bottom surface z = 0 to the top surface z = H of the layered plate. Various combinations of mechanical and electrical loads may be considered at the top (z = H) and at the bottom (z = 0) of the plate. The Eshelby-Stroh solution for the macroscale analysis of laminated piezo-electric composite structures has been used in earlier studies [12, 13, 14].
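The layer-by-layer propagation just described can be illustrated with a generic transfer-matrix sketch: a state vector of extended displacements and tractions is advanced through each layer by a matrix exponential, and the per-layer propagators are chained from the bottom to the top surface. This is only a schematic analogue of the Stroh-based propagator, which is built from the layer eigensolution; the 2x2 system matrices and layer data below are placeholders.

```python
"""Generic transfer-matrix propagation through a stack of layers.

Schematic analogue of the propagator-matrix approach described in the text:
within layer k the state vector eta = [extended displacements; tractions]
satisfies d(eta)/dz = A_k eta, so eta(top of k) = expm(A_k * h_k) eta(bottom of k).
The 2x2 A_k matrices and thicknesses below are placeholders, not the paper's data.
"""
import numpy as np
from scipy.linalg import expm


def propagate_through_stack(eta_bottom, layer_matrices, thicknesses):
    """Chain the per-layer propagators from z = 0 to z = H."""
    eta = eta_bottom.copy()
    for A_k, h_k in zip(layer_matrices, thicknesses):
        eta = expm(A_k * h_k) @ eta        # propagate across one layer
    return eta


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    layers = [rng.normal(scale=0.5, size=(2, 2)) for _ in range(3)]   # toy system matrices
    h = [0.1, 0.1, 0.1]                                               # layer thicknesses (m)
    eta_top = propagate_through_stack(np.array([0.0, 1.0]), layers, h)
    print("state at the top surface:", eta_top)
```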
Two-scale method results

The multiscale framework is used to analyze two model problems. In the first problem, we consider a simply supported laminate consisting of a sandwich multilayer composed of three magneto-electro-elastic layers of equal thickness h = 0.1 m, with piezo-electric fiber volume fractions of 0.1, 0.5 and 0.1, respectively (Figure 2). A z-direction traction with amplitude σ0 = 1 N/m² is applied on the top surface z = 0.3 m. Responses are calculated for fixed horizontal coordinates (x, y) = (0.75Lx, 0.25Ly). This case is compared to the one where the matrix is made of a piezo-electric material and the inclusions of a piezo-magnetic material (Figure 3).

Figure 2: Sandwich multilayer made of a piezo-magnetic matrix with piezo-electric fiber volume fractions (VF) of 0.1, 0.5 and 0.1, respectively.

Table 2: Effective properties of fibrous magneto-electro-elastic composites constituted of a piezo-magnetic (CoFe2O4) matrix reinforced by piezo-electric (BaTiO3) inclusions (Cij in GPa; κij in 10⁻⁹ C²/Nm²; μij in Ns²/C²; eij in C/m²; hij in N/Am and αij in 10

Figures 4 and 5 present the evolution of the electric and magnetic potentials along the thickness direction of these different sandwich multilayers. It is obvious that the potential variations for the two cases are completely different. Figure 4 shows that the two composites behave in a different manner except at the intermediate layer, where the volume fraction of both inclusions is the same.

Figure 4: Variation of the electric potential along the thickness direction in the sandwich plate caused by the surface load σ0 = 1 N/m² applied on the top surface z = 0.3 m; responses are calculated for fixed horizontal coordinates (x, y) = (0.75Lx, 0.25Ly).

Figure 5: Variation of the magnetic potential along the thickness direction in the sandwich plate caused by a surface load on the top surface.

The second model concerns the graded material shown in Figure 6. In this example, we consider a simply supported laminate consisting of a graded material composed of six magneto-electro-elastic layers of equal thickness h = 0.05 m. The first three layers are made of a piezo-magnetic (CoFe2O4) matrix with piezo-electric (BaTiO3) inclusions whose volume fraction ranges from 0.15 to 0.5. The last three layers are each made of a piezo-electric (BaTiO3) matrix with piezo-magnetic (CoFe2O4) inclusions whose volume fraction varies from 0.5 to 0.15 (Figure 6). A z-direction traction is applied on the top surface. The behavior of this multilayer is compared with that of a composite consisting of layers made of a piezo-magnetic (CoFe2O4) matrix with a 0.5 volume fraction of piezo-electric (BaTiO3) inclusions, and with that of a composite of layers made of a piezo-electric matrix with a 0.5 volume fraction of piezo-magnetic inclusions. Figures 7 and 8 show the evolution of the electric and magnetic potentials for these different multilayers. We observe that the electric potential in the first three layers of the graded material varies in the same direction as in the case of the composite made of a piezo-magnetic matrix with a 0.5 volume fraction of piezo-electric inclusions. For the last three layers, however, the electric potential varies in the same direction as in the case of the composite made of a piezo-electric matrix with a 0.5 volume fraction of piezo-magnetic inclusions.

Figure 7: Variation of the electric potential along the thickness direction in the graded material caused by a surface load on the top surface.

Figure 8: Variation of the magnetic potential along the thickness direction in the graded material caused by a surface load on the top surface.

Conclusion

In this paper, the micro-macro problem of obtaining the homogenized effective coefficients of magneto-electro-elastic heterogeneous media was solved based on the Mori-Tanaka method. This homogenization model is used to obtain the effective elastic, piezo-electric, piezo-magnetic, dielectric, magnetic and magneto-electric coefficients. The Stroh formalism is then used to predict the macroscopic fields in a multilayered plate. This multiscale framework is applied to two problems, a sandwich multilayered plate and a graded material, which allows the behavior of these multilayered rectangular plates to be predicted under surface loads for a given inclusion orientation. Apart from extending the multiscale method proposed by [15] to take the magnetic effect into account, the present method is based on a simpler procedure (Mori-Tanaka) to describe the behavior of each layer.

A genetic algorithm could be developed to optimize the distribution of the volume fraction of inclusion fibers under defined physical constraints.
Tracing Trade Routes: Examining the Cargo of the 15th-Century Skaftö Wreck

ABSTRACT

The Skaftö wreck of c.1440, situated north of Gothenburg, Sweden, was investigated between the years 2005 and 2009. Investigations revealed a variety of cargoes, such as copper and speiss ingots, barrels with lime and tar, bricks and roof tiles, and oak timber in the form of planks and boards. In order to identify the different cargo types found on the wreck, and, possibly, establish their geographical origin, a variety of analytical methods have been utilized. The present study accounts for the archaeological investigations of the cargo and for the analyses that have been conducted to date. Results are compared to and discussed in relation to other contemporaneous source material, both historical and archaeological. Based on this examination, it is concluded that the vessel was heading from the southeastern corner of the Baltic Sea, most likely Danzig (Gdańsk), aiming for the Western European market, possibly Bruges.

Introduction

Nearly 600 years ago, a heavily loaded merchant ship foundered off the island of Skaftö, situated approximately 70 km north of present-day Gothenburg on the now-Swedish west coast, which at that time was part of Norway. The vessel came to rest in less than 10 m of water, close to the shore, in a relatively sheltered strait (Figure 1). It remained hidden until the summer of 2003, when it was accidentally discovered by a local skin diver. Later that year, maritime archaeologists from Bohusläns Museum in Uddevalla conducted a diving inspection of the wreck site. Among the most notable features observed were a large number of copper ingots in the form of round and oval slabs. Other cargoes noted during the inspection were barrels containing what was thought to be lime and tar. A suggested medieval date for the vessel was confirmed in 2004 by means of dendrochronology. Analysis concluded that the ship was likely built in the late 1430s of timber originating in present-day Poland (Linderson, 2004). Following a minor test excavation in 2005, a bigger research project was initiated in 2006 by the corresponding author. Further field campaigns were carried out in the following years and, finally, in 2009 (von Arbin, 2010). Investigations revealed that the ship lies flat on the sea floor, resting on its starboard side, with substantial parts of the stem, sternpost, keel and rudder intact. Of the starboard side, approximately 70% survives. The port side has almost entirely vanished, with the exception of a smaller portion in the lower after section of the vessel (Figure 2). The reason the starboard side is so well preserved is the massive load of cargo, which has effectively prevented access by various wood-decaying organisms. Despite the fact that only about 20 m² of the site, or less than 15%, has been subject to excavation, investigations have yielded a wealth of information regarding the technical features and general design of the vessel. With an estimated overall length of c.25 m, a height of around 6 m, a beam of 8 m or more, and, possibly, two full decks, the ship must have been fairly big. In a number of previous articles, the corresponding author has suggested that it may be a representative of the so-called 'hulk', a ship type whose features have long been disputed by maritime archaeologists and historians (see discussions in von Arbin, 2012, 2013, 2014).
Moreover, parallels have been drawn to the wrecks of other big clinker-built vessels of similar construction, size, age and origin, in particular the Skjernøysund 3 (Auer & Maarleveld, 2013), Bøle (Daly & Nymoen, 2008) and Avaldsnes (Alopaeus & Elvestad, 2004) shipwrecks in Norway, and the so-called Copper Ship, also known as W-5, in Poland (Ossowski, 2014a). The present study, however, does not primarily target constructional issues. Instead, the focus will be on the different cargoes, their likely origins and handling, with the ultimate aim of trying to model the intended original sailing route of the vessel.

The medieval shipwrecks that have been found and investigated in Northern Europe to this day often constitute discarded and partially dismantled vessels. Typically, they have been stripped of cargo, equipment and personal belongings (e.g. Åkerlund, 1951; Bill, 1997, pp. 113-116; Hansson, 1960; Varenius, 1982). This is obviously not the case with the Skaftö wreck. Since the vessel sank fully loaded en route between two ports, it offers a rare opportunity to obtain detailed information on the cargo by means of various analytical methods. While the study relies largely on the results of scientific analyses, written sources and contemporaneous archaeological evidence are equally important. We hope that this cross-disciplinary approach will contribute to a wider understanding of the trading of bulk goods in Northern Europe during the late medieval period.

Cargo Types Identified on the Wreck

A number of different cargoes have been identified on the wreck, namely two different types of metal ingots, lime, tar, timber, bricks and roof tiles (Figures 2 and 3). To a diver, the many round and oval copper ingots, revealed by their bright green colour, probably constitute the most distinguishing feature of the wreck site (Figure 3a). Within the hull structure, ingots seem to be distributed in two major assemblages: a larger one deep in the hold of the stern area, consisting of approximately 70 fully or partially exposed ingots, and a smaller one amidships, containing at least 30 ingots that must have been stowed either on, or, perhaps more likely, immediately beneath the main deck. A further ingot, possibly deriving from the smaller assemblage, was found isolated on the seabed, just outside the hull. In the following account, the above ingots will be termed Type 1-ingots.

The ingots vary significantly in size. The largest round ingot retrieved during fieldwork measured 45 cm in diameter. It had a thickness of between 1.5 and 4 cm and weighed just over 11 kg. The largest oval specimen measured 69×41×6.5 cm and weighed 56.6 kg. Based on visible ingots and recovered specimens, their total weight has been estimated to be somewhere between 1.5 and 3.5 tons, most likely around 3 tons. This rough estimation should be treated with caution, though, since more ingots may be buried deeper down in the sediment.

Figure 3. The different cargoes of the Skaftö wreck. A: The smaller assemblage of copper ingots. B: One of the assemblages of speiss ingots. C: One of the many lime barrels where the barrel has degraded while the content has remained more or less intact. D: Three exposed tar barrels. Behind them is a lime barrel, and in the foreground on the right-hand side is a copper ingot. E: A portion of the timber cargo in test Trench 1. On the right-hand side are two lime barrels. F: Brick and roof tile assemblage (A, C, D, E and F: Jens Lindström, Bohusläns Museum; B: Staffan von Arbin, Bohusläns Museum).
Apparently, ingots of approximately the same size and shape have been stacked together. The larger assemblage contains six or possibly seven discernible stacks, while the smaller one contains three. The number of visible ingots in each stack varies between approximately three and 12. No traces of containers of any sort have been observed, nor is there any evidence of rope or wicker being used to keep the stacks together. It is possible, though, that pine planks observed in the immediate vicinity of the largest assemblage were used as dunnage.

In addition to copper, there is a smaller number of irregularly shaped ingots, which were initially believed to consist of iron (Figure 3b). A simple check with a regular magnet, however, showed that they are only slightly magnetic. In the following, these ingots will be referred to as Type 2-ingots. Like the copper ingots, they are distributed in more or less well-defined assemblages. One such assemblage is located in the bottom of the hold, just before the large assemblage of Type 1-ingots. Another one is located amidships. Here, ingots appear to have been stored either on, or more probably, just beneath the main deck. A third assemblage was discovered in the foremost trench (Trench 1, Squares R1-R2), close to the bow. These ingots range in size from c.10×10×5 cm up to c.45×30×5 cm, but there are also much smaller, pebble-sized lumps. Barrel parts found in conjunction with the ingots indicate that they were originally packed in barrels that subsequently disintegrated.

In terms of volume, the main cargo of the vessel was most certainly lime, as evidenced by the significant number of lime-filled barrels. Such barrels have been recorded in most of the hold (Figure 3c). Since many of them have partially disintegrated, scattered lumps of lime are distributed over an even larger area. There seem to be several different barrel sizes, but due to degradation, measurements are generally very uncertain. However, recovered barrel staves and barrel heads from Trench 2, located in the after part of the vessel, suggest that the largest and most common type of barrel may have held just over 90 l. When filled, each of these barrels must have weighed more than 300 kg.

Tar also likely constituted a significant part of the cargo. Tar barrels seem to be mainly distributed in the after section of the vessel (Figure 3d). They are estimated to have contained approximately 80 l each. As can be seen on the site plan (Figure 2), barrels were mainly placed horizontally, with their ends facing towards the ends of the ship.

Oak timber, in the form of planks and boards, was mainly carried in the bottom of the hold, stacked lengthwise on top of the mast-step buttresses. Spaces between buttresses were filled with shorter boards, placed crosswise. The preserved timber stock measures approximately 1 m in height. In addition, there is what appears to be a smaller concentration of crosswise stacked boards higher up in the hold, just before amidships. These boards are placed on top of the lime barrels, in close conjunction with one of the cross-beams. Altogether, it appears as if the volume of timber carried on board must have been quite substantial (Figure 3e). Partially exposed timbers are heavily degraded due to wood borer attack. Still, two distinct groups of timber can be identified in this material. Group 1 consists of boards with a presumed original length of around 85 cm. Widths vary from c.15 to c.17 cm and maximum thickness is 2 cm.
Group 2 consists of planks with presumed lengths exceeding 1.37 m. Here, widths vary between 23 and c.30 cm, and thicknesses between 3 and 6 cm.

Bricks and roof tiles are assembled in the northwestern part of the wreck, that is, abaft the centre of the vessel (Figure 3f). The total number of bricks can be estimated at perhaps a few hundred, whereas tiles occur more sporadically. From their location on the site, one gets the impression that they must have been stored, if not on the main deck, then high up in the hold. Bricks typically measure around 29.5 × 14 × 7.5 cm and recovered specimens weigh approximately 5 kg each. Some of them are broken, or, possibly, cut, in halves. Roof tiles are of the 'Monk and Nun' type. None of the tiles observed on the wreck are preserved to their entire length. The best preserved retrieved specimen is 31.5 cm long. It tapers slightly towards one end, and widths thus vary between 11.5 and 13.7 cm. Bricks and tiles are all made of red-burning clays.

Metal Ingots

Introduction

Already in 2007, analysis of three ingots of each type was carried out using scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy, abbreviated SEM-EDS (Grandin, 2009). While SEM metallographic analysis can provide information regarding the production process, EDS analysis is restricted to point analysis. This might result in a chemical composition that is not representative of the whole ingot. Another problem is the low penetration depth of the electron beam, which may result in a composition that is more representative of the surface than of the interior of the ingot. For this reason, inductively coupled plasma mass spectrometry (ICP-MS) was carried out as part of the present study. High-resolution ICP-MS has the advantage of being able to measure the relevant elements down to 1 part per million (ppm). Nearly all elements of the periodic table can be analysed, although silver, tin, antimony, tellurium, lead, bismuth, phosphorus, sulphur, iron, cobalt, nickel, zinc, arsenic and selenium are particularly useful in this case. ICP-MS is thus currently the best choice for determining chemical composition. Additionally, it can be used to determine lead isotope ratios, which can provide provenance information for the ore. In this case, trace elemental data were obtained at the Laboratory for Material Sciences of the German Mining Museum, Bochum, while isotopes were measured at the Frankfurt Isotope and Element Research Center (FIERCE), Goethe University, Frankfurt am Main.

Type 1-Ingots

Type of Metal. Judging by their physical appearance, especially their discoid form and blistered surface, Type 1-ingots can be identified as so-called Reißscheiben. This type of ingot, Reißscheiben literally meaning 'ripped-out discs' and referring to the production technique (Figure 4), represents late medieval to early modern standard copper smelting operations (Weisgerber, 1999, p. 296; Zedler, 1742, p. 95). They have been found on a number of Northern European sites, dating from the early 15th to the late 17th century/early 18th century (for overviews, see Ossowski, 2014b, pp. 246-247; Werson, 2015, pp. 84-89; cf. Martinón-Torres et al., 2020).

Figure 4. The process of producing Reißscheiben, as described by Georgius Agricola in 1556 (Agricola, 1556/1950). Melted copper is poured into a crucible or, which is likely the case with the Skaftö ingots, a dug-out hole in the ground. Water is then sprayed on the surface of the molten copper in order to solidify the metal.
Immediately afterwards, the copper ingots are pulled out using an iron hook. This is repeated until there is no liquid metal left. Depending on the shape of the crucible/hole, the Reißscheiben get gradually smaller in diameter during the process (Anette Olsson, Bohusläns Museum).

All the Type 1-ingots retrieved from the wreck, nine specimens in total, were sampled and analysed (Figure 5). The results reveal those nine ingots to be chemically inhomogeneous, as five of them have a significantly different trace elemental composition than the other four. These two groups will hereafter be referred to as Type 1a and Type 1b, respectively. Type 1a has noticeably higher amounts of impurities like tin, sulphur, zinc and selenium, while being significantly lower in nickel, antimony and arsenic. Antimony ranges up to unusually high amounts in Type 1b-ingots (Table 1, Figure 6). The existence of two chemically distinct groups of Reißscheiben could also be recognized on the contemporaneous Mönchgut 92 shipwreck, discovered in the proximity of Rügen, Germany (Werson, 2015), but was not fully understood at the time. Interestingly, two types of Reißscheiben were also present in the Copper Ship, which foundered in Gdańsk Bay, according to recent analyses (Garbacz-Klemka et al., 2014). It remains uncertain whether the chemical patterns observed in the Skaftö material resemble those derived from the Copper Ship Reißscheiben, as it appears that only semi-quantitative methods (SEM-EDS) were applied in the latter case. At present, however, it seems that the Skaftö copper does not compare chemically with any other known 15th-century Reißscheiben finds (cf. Skowronek et al., 2021).

The process of producing copper from sulfidic ores during this period, as described by Suhling (1990, pp. 50-56), required different stages of roasting, smelting and refining, with the major goal of separating sulphides from the copper. As a final step, copper ingots were refined in an open oven called a Garherd (Rößler, 1700/1980). Here, two types of ingots were fabricated: one sort for casting (Garkupfer), and another one for forging (Hammergarkupfer) (Suhling, 1990, p. 55). On the Venetian market, these two copper sorts occur already in the 14th century as rame duro (hard copper) and rame dolce (sweet [malleable] copper), respectively (Braunstein, 1977, p. 79). In order to be malleable, the copper has to be as pure as possible. Therefore, the copper intended as rame dolce was kept much longer in the oven (Suhling, 1990, p. 55). However, it appears that the smelters of the Skaftö Reißscheiben did not have much experience with this production. While they managed to reduce sulphur, tin and zinc, the amounts of arsenic and antimony rose to a point where the copper would have been unsuitable for any forging, as it would become brittle and too hard (cf. Gowland, 1914, p. 53). They probably neglected the roasting of the ore, which is especially important if arsenic- and antimony-bearing Fahlores are present. Once alloyed with copper, arsenic and antimony are difficult to remove (McKerrell & Tylecote, 1972) and thus require roasting prior to smelting. Therefore, Type 1b-ingots likely represent a failed attempt to produce a purified copper sort out of Type 1a-ingots. The proper process, later to be named 'Deutscher Kupferprozess' (German copper process) (Suhling, 1990, p. 53), does not seem to have been mastered at the time.
In the mid-16th century, Georgius Agricola (1556/1950) points out that 'If the copper is not perfectly smelted the cakes will be too thick, and cannot be taken out of the crucible easily'. Compared to other Reißscheiben finds, such as the ones from Heligoland (Stühmer et al., 1978, p. 16), Wiltshire (Martinón-Torres et al., 2020, p. 39) or the Elbe (Althoff, 1995, pp. 41-42), the Skaftö Reißscheiben appear to be much thicker, which further supports the conclusion regarding improper smelting operations. All values in Table 1 are given in parts per million (ppm) except for copper, which is given in weight percent (wt%).

From the analysis, it appears that the Type 1a-ingots are mainly from the smaller assemblage, located amidships, while all of the analysed Type 1b-ingots derive from the large copper assemblage, which is located in the stern area (Figure 2). At first glance, it would thus seem that the two copper qualities were kept separated during transport. There is, however, one exception from this pattern. One of the analysed ingots, inventory No. 29276:57, which, according to the analysis, could be identified as a Type 1a-ingot (Table 1, Figure 5), was found in the larger assemblage, together with ingots similar in size and shape. For two of the other ingots, inventory Nos. 29274:1a and 29274:1b (both Type 1-ingots), their original positions are not known, since they were salvaged without prior documentation in conjunction with the discovery of the shipwreck in 2003.

Provenance. The lead isotope ratios are presented in Table 2. Type 1-ingots show some data spreading, ranging from ²⁰⁶Pb/²⁰⁴Pb = 18.07-18.41 and ²⁰⁷Pb/²⁰⁴Pb = 15.645-15.656, respectively. The isotope ratios form significant cluster groups. All the Type 1a- and one of the Type 1b-ingots (inventory S6:59) derive from ore deposits older than 400 Ma, indicating pre-Variscan orogeny. The remaining four Type 1b-ingots exhibit significantly higher lead isotope ratios, thus deriving from genetically younger ores formed during the Variscan orogeny (Figure 7).

Figure 6. Mean trace elemental composition of the two types of copper ingots, with Type 1a n = 4 and Type 1b n = 5. While Type 1a-ingots have higher amounts of impurities like tin (Sn), lead (Pb), sulphur (S) and zinc (Zn), amounts of arsenic (As), nickel (Ni) and antimony (Sb) are notably lower. It seems that some sort of refinement process lowered the amounts of rather volatile elements on the one hand but enriched those elements that are difficult to remove after smelting on the other hand. The high amounts of As and Sb in both types of copper ingots indicate the use of Fahlores (Tobias Skowronek).

While Variscan copper ore deposits are very common in Central Europe, pre-Variscan copper is rather rare. In both Bohemia and the Bohemian-Saxon Ore Mountains (Erzgebirge), copper deposits of the latter age can be found, but their chemical composition is inconsistent with the one that produced the Skaftö copper ingots, as expressed by their data plotting on different µ-lines (Figure 7). The St. Briccius Mine in the Erzgebirge is the only known deposit with a similar chemical composition and age, but as only one lead isotope ratio is available it remains unclear whether the data are reliable. The copper deposits of eastern Slovakia in the Spišsko-Gemerské area of Gelnica and Smolnik appear to be a much better match. Those deposits have both Lower Palaeozoic model ages (400-600 Ma) and the matching ore chemistry expressed by the 10 µ-line.
Their sulfidic polymetallic character (Cernysev et al., 1984, p. 312) would produce copper Reißscheiben similar to those studied here. For the remaining Type 1b-ingots, the copper deposits around Banská Bystrica (the former Neusohl) in central Slovakia, namely Špania Dolina (former Herrengrund), Poniky and Richterova (Schreiner, 2007, pp. 25-27), show the best correlation. They too have the Fahlore characteristics that would produce copper with high amounts of antimony and arsenic, equivalent to the Skaftö Reißscheiben (Hauptmann et al., 2016, p. 17).

Type 2-Ingots

Type of Metal. Three of the Type 2-ingots were analysed (Figure 8), one from each of the mapped assemblages. Judging from their chemical composition (Table 1), they can be identified as speiss. Speiss constitutes a mixture of elements such as iron, copper, nickel and cobalt, with major amounts of arsenic and/or antimony. A distinction has to be made between ferrous speiss, containing mainly iron and arsenic or antimony, and base-metal speiss, containing copper, nickel, iron, antimony, arsenic, cobalt, and often also sulphur and lead (Thornton et al., 2009, p. 308), which is the type present in the Skaftö wreck. Base-metal speiss forms when complex arsenic- and antimony-bearing ores are smelted (Bachmann, 1982, p. 29). It was regarded as an unwanted by-product, especially in medieval times, since copper can become trapped inside the speiss (Rehren et al., 1999, p. 77). Speiss can, on the other hand, act as a collector of precious metals, but its possible de-silvering during earlier periods is open to debate (Kassianidou, 1998). Here, however, the silver content is quite low and comparable to the amount in de-silvered copper ingots dating slightly later (Hauptmann et al., 2016, p. 15). It might well be the case that these ingots were formed unintentionally, as a by-product, in the fabrication of the Type 1 or Reißscheiben ingots, since both products are the result of the smelting of sulfidic (Fahl)ores.

Figure 7. Lead isotope ratios of the Skaftö ingots in relation to those of copper ores worked during the 15th century. The copper ingots form significant groups, where some have older and others younger geologic ages, well defined by the dividing 400 Ma line. All Type 2 (speiss) ingots plot with the younger group, indicating their similar origin. While the group deriving from younger geologic ages matches best with central Slovakian copper ores, the older group is likely to derive from eastern Slovakian ores. The crosses show the analytical error. Also note the different µ-lines, as they indicate different ore chemistry. Data sources: central Slovakia: Schreiner (2007, pp. 218-221); eastern Slovakia: Cernysev et al. (1984, p. 316); Vogtland, Erzgebirge and Bohemia: Niederschlag et al. (2003, pp. 79-81); Mansfeld: Niederschlag et al. (2003, p. 81) and Frotzscher (2012, pp. 126-131). Lead evolution after Stacey & Kramers (1975, p. 216) (Tobias Skowronek).

Provenance. The lead isotope ratios for the analysed Type 2-ingots are presented in Table 2. The three analysed ingots have lead isotope ratios of ²⁰⁶Pb/²⁰⁴Pb = 18.39-18.41 and ²⁰⁷Pb/²⁰⁴Pb = 15.64-15.65, respectively, and are thus rather homogeneous. They have the same lead isotopic characteristics as the majority of the Type 1b-ingots (Figure 7), underlining the assumption that they are a by-product (see above). Thus, it is likely that they too derive from central Slovakia.
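The grouping of ingots by their lead isotope ratios, as described in the provenance sections above, can be illustrated with a small sketch that assigns each measured ingot to the nearest candidate ore field in isotope-ratio space. This is purely an illustrative exercise and not the method used in the study, which rests on geological model ages and µ-values; the reference-field centres and example ratios below are rough placeholders spanning the quoted ranges, not published ore or ingot data.

```python
"""Toy nearest-field assignment of ingots from lead isotope ratios.

Illustrative only: the 'reference field' centres and example ingot ratios are
placeholders, and real provenance work uses geological model ages, mu-values
and full data fields rather than a simple nearest-centre rule.
"""
import numpy as np

# Example ratios (206Pb/204Pb, 207Pb/204Pb) spanning the ranges quoted in the text.
ingots = {
    "Type 1a (example)": (18.08, 15.646),
    "Type 1b (example)": (18.40, 15.655),
}

# Hypothetical reference-field centres, for comparison purposes only.
reference_fields = {
    "eastern Slovakia (placeholder)": (18.10, 15.647),
    "central Slovakia (placeholder)": (18.40, 15.654),
}


def nearest_field(ratio, fields):
    names = list(fields)
    centres = np.array([fields[n] for n in names])
    dists = np.linalg.norm(centres - np.asarray(ratio), axis=1)
    return names[int(np.argmin(dists))], float(dists.min())


for name, ratio in ingots.items():
    field, dist = nearest_field(ratio, reference_fields)
    print(f"{name}: closest placeholder field = {field} (distance {dist:.3f})")
```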
Lime

Introduction

Chemical analysis of the content of one of the lime barrels was conducted already in 2005 in order to define the type of lime present. Analysis showed that the sample consisted of calcium in the form of calcium carbonate (CaCO₃). However, due to budget constraints, it was not possible at that point to determine whether the original content of the barrel had in fact been chalk (calcium carbonate, CaCO₃), slaked lime (calcium hydroxide, Ca(OH)₂) or burnt lime (calcium oxide, CaO) which was slaked by seawater in conjunction with the foundering of the ship and thereafter carbonated (Wranne et al., 2005). In order to clarify this further, samples from three different barrels, situated in the after, amidship and bow sections of the wreck and collected during the 2006 field campaign, were provided to Torben Seir at SEIR-materialeanalyse A/S in Denmark for microscopic thin-section analysis (Seir Hansen, 2006). The objective of this analysis was to define the composition and structure of the material, in order to be able to determine the type of lime and, ultimately, also the provenance of the limestone.

Type of Lime

The analysed material can be described as an inhomogeneous and partially porous mass of lime, which contains unevenly distributed solitary sand grains measuring up to 0.7 mm, as well as grey-black lumps of aggregated sand grains measuring up to 10 mm. Microscopically, the lime appears to consist of an aggregation of small lime crystals (calcite, CaCO₃). Diffuse, irregular to rounded structures occur frequently in the samples but are unfortunately not possible to interpret further at this stage. As limestone fragments show evidence of having been heated, it is possible to conclude that the content of the barrels was calcium oxide, that is, burnt lime, a composition also commonly known as quicklime. The presence of large, well-developed crystals of calcium hydroxide (portlandite) points in a similar direction, as portlandite is a mineral that appears in products containing burnt limestone which has been in contact with water. Small pieces of charcoal are possibly residues of the firewood used in the burning process (Figure 9). Sand grains and pieces of sandstone have given the lime slightly hydraulic properties.

Burnt lime is transformed into calcium hydroxide in contact with water, a process known as slaking. This process results in a highly exothermic reaction and expansion of the material. As the water intrusion of the Skaftö lime barrels has been a slow process, lengthwise channels have gradually evolved in the lime mass. The presence of several generations of such channels indicates that slaking occurred after the lime was packed in barrels. If the lime had been slaked before packing, there would have been little or no sign of expansion. In contact with atmospheric air, slaked lime reacts with carbon dioxide and transforms into calcium carbonate. Similarly, calcium carbonate will form if slaked lime is exposed to bicarbonate from carbon dioxide dissolved in seawater. In the Skaftö case, this carbonation process has been ongoing since the day the barrels were deposited on the seafloor. Presently, the lime is almost fully transformed into calcium carbonate.

Provenance

It is possible to determine the provenance of the limestone used for the lime burning due to the presence of sand grains and small pieces of sandstone, together with residues of underburned limestone (Figure 10). Sand grains consist mainly of quartz and feldspar and occur as 0.2-1 mm nodules.
Similar grains also occur intercalated, together with mica minerals and opaque minerals interpreted as iron sulphide (pyrite), in up to 10 mm pieces of what is believed to be a fine-grained, slightly clayey sandstone. Both sand grains and sandstone are interpreted as being part of the original limestone. Sand grains are generally heavily affected by heat from the lime burning, as well as by the subsequent etching processes caused by the strongly basic slaked lime. Residues of underburned limestone occur as small pieces of texture-less, fine-grained limestone and up to 3 mm long shell fragments of the animal group Brachiopoda (Figure 11). The composition of the limestone shows that it is of sedimentary origin, containing grains of sand and, in some layers, calcareous siltstone or fine-grained sandstone. This type of limestone is formed in relatively nearshore environments, which would exclude the possibility of a Mesozoic or Tertiary age (e.g. chalk and so-called Danian limestone). Coastal occurrences of similar limestone can be found at isolated locations in Scania (Ignaberga and Hannaskog). Proper sandstone, however, is not present there.

Over the years, SEIR-materialeanalyse A/S has conducted a large number of lime and limestone analyses. Compared to this material, the Skaftö lime appears to show a particularly high resemblance to limestone derived from a retaining wall from the 16th or 17th century, located in the bastion Grå Munken at Varberg Fortress, Sweden. The lime from Grå Munken has a similar structure and composition to the analysed lime from the Skaftö wreck. The size of the sand grains, as well as their distribution and frequency, are in fact almost identical. Since the preservation of the original limestone was considerably better in the Varberg case, it was possible to fix the provenance to Gotland, more specifically the area of Halla, on the central part of the island, or Burgsvik on the southernmost tip (Seir Hansen, 2005). Based on the conducted analysis and the comparative material, Gotland is thus considered to be the most likely origin of the analysed Skaftö lime. It should however be noted that this determination is to be considered preliminary, since limestone occurrences in present-day Estonia have not yet been evaluated.

Figure 11. Thin-section photo (parallel polarizers) of lime, showing a shell fragment, which is probably from the animal group Brachiopoda. The fragment originates from the limestone used for lime burning. Sample 20 from an exposed lime barrel in the aft section of the wreck (Torben Seir).

Tar

Introduction

Already in 2005, analysis by means of thin layer chromatography (TLC) was performed on tar-like material from one of the barrels at the wreck site (Barrel W). This analysis confirmed that the content is indeed a tar, tentatively a wood-derived product or a mixture of a wood-derived product with one or more other substances (Wranne et al., 2005). In order to improve this chemical determination, further analysis was conducted by Sven Isaksson at Stockholm University's Archaeological Research Laboratory (Isaksson, 2009). This time the analysis comprised samples from two barrels: Barrel E and the aforementioned Barrel W.
Analyses of tars and pitches found on shipwrecks have been performed on several previous occasions (Bailly et al., 2016; Connan & Nissenbaum, 2003; Lange, 1983; Reunanen et al., 1989, 1990; Robinson et al., 1987; Stern et al., 2006, 2008) in attempts to identify the wood species from which the material was produced, to shed light on the manufacturing technique used, and to determine its geographical origin. For the latter, bulk stable light isotope (δD, δ13C and δ18O) analyses have proven successful in separating birch bark pitches from Northern and Southern Europe, while no clear difference was detected between samples from England, Norway, Denmark or Sweden (Stern et al., 2006). Most studies of the components of these materials are instead based on molecular analysis using gas chromatography mass spectrometry (GCMS), and this was also the method chosen for the samples from the Skaftö wreck.

Composition

The results of the analysis are summarized in Table 3, and a chromatogram is exemplified in Figure 12. The analysis shows that the main components of the residues from both barrels are tricyclic diterpenoids of the abietane and pimarane series. These are characteristic components of resins, pitches and tars from spruce or pine, that is, of trees of the Pinaceae family. In Northern Europe, pine trees were the main source for tar production in historical times. Even though it is possible to distil tar also from spruce, the comparatively low content of resin makes it less worthwhile (Svensson, 2007, p. 619). Thus, the tar transported on board the Skaftö vessel most likely constitutes pine tar. In earlier investigations of archaeological remains of prehistoric tar production facilities (Hjulström et al., 2006), the distribution of four main components was shown to be diagnostic, namely retene, abietic acid, methyl dehydroabietate and dehydroabietic acid. Retene is a neutral diterpene formed by the reduction of the corresponding resin acids. The content of retene in both samples (Figure 13) is on par with what has previously been reported for both tar and prehistoric tar production facilities.

Figure 12. Example of chromatogram recorded from the sample from Barrel W. Component numbers are found in Table 3 (Sven Isaksson).

Methyl dehydroabietate is present in low concentrations in resins from pine and spruce, but is also formed by reaction between methanol and the resin acid in connection with dry distillation (tar production). The relative content of methyl dehydroabietate is lower in both of the samples than in reference samples and in the prehistoric tar production facilities previously analysed (Figure 13), indicating either a different process or effects from deviating deposition conditions. The ratio of resin acids to neutral diterpenes is also relatively low, which indicates a high temperature in connection with tar burning (cf. Egenberg & Glastrup, 1999). This in itself points towards a different manufacturing process, with higher temperatures in comparison with previously analysed samples from prehistoric tar production sites (cf. Hjulström et al., 2006).
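The comparison presented in Figure 13 boils down to simple proportions of integrated peak areas. The sketch below illustrates that calculation in general terms; the peak areas and the grouping of components into resin acids and neutral diterpenes are placeholder assumptions for illustration, not the measured Skaftö values or Isaksson's exact procedure.

```python
# Sketch: relative abundance of diagnostic tar components from GCMS peak areas.
# Peak areas below are invented placeholders, not the Skaftö measurements.
diagnostic_peaks = {
    "retene": 120.0,                 # neutral diterpene
    "abietic acid": 45.0,            # resin acid
    "methyl dehydroabietate": 15.0,  # resin acid ester
    "dehydroabietic acid": 220.0,    # resin acid
}

total = sum(diagnostic_peaks.values())
relative_abundance = {name: area / total for name, area in diagnostic_peaks.items()}

for name, fraction in relative_abundance.items():
    print(f"{name:>24s}: {fraction:.2%}")

# A low ratio of resin acids to neutral diterpenes is taken in the text to
# indicate a high temperature during tar burning.
resin_acids = diagnostic_peaks["abietic acid"] + diagnostic_peaks["dehydroabietic acid"]
neutral_diterpenes = diagnostic_peaks["retene"]
print(f"resin acids / neutral diterpenes: {resin_acids / neutral_diterpenes:.2f}")
```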
Even though the outer surface was discarded during sampling, the marine environment seems to have affected the composition slightly. One such process that has been suggested is the degradation and modification of abietic-type resin acids, mainly through microbial hydrogenation, resulting in the formation of tetrahydroabietic acid and 18-norabieta-8,11,13-triene (Reunanen et al., 1990). 18-norabieta-8,11,13-triene is identified in both samples at low levels (peak number 3 in Table 3 and Figure 12), but only traces of tetrahydroabietic acid are found when extracting characteristic ion chromatograms. From the molecular ion m/z 376, and other ion fragments, peak number 11 in Table 3 and Figure 12 is suggested to contain the trimethylsilyl derivative of dihydroabietic acid. It is however co-eluting with other components, the most prominent of which most probably is a steroidal component, as suggested by characteristic ion fragments, and the secure identification of both compounds is thus inconclusive.

Figure 13. The relative abundance of the four components retene, abietic acid, methyl dehydroabietate, and dehydroabietic acid in the two samples (Barrel W and Barrel E), compared to the composition in samples of contemporary pine resin and pine tar (traditionally produced in a tar dale) from Sweden (Hjulström et al., 2006, p. 291), a contemporary pine tar from Finland, a sample from a tar barrel from the 18th century Russian frigate St. Nikolai found at Svensksund (Reunanen et al., 1989, p. 37), and pine tar residues from a tar dale (historic time) and a Late Iron Age funnel shaped tar pit, all from eastern central Sweden (Hjulström et al., 2006, p. 291) (Sven Isaksson).

Introduction

As part of the present study, a dendrochronological analysis of a total of seven samples from the timber cargo was carried out by one of the authors (Daly), who in 2012 did a similar analysis of barrels containing lime from the wreck (Daly, 2014a). Before this, dendrochronological analysis had been conducted by other researchers: of structural timbers from the ship itself (Linderson, 2004; reanalyzed by Krąpiec in 2006), of some barrel remains (Krąpiec, 2006), and of planks and boards from the cargo (Linderson, 2007). So far, all dendrochronologically analysed timbers from the wreck and cargo have been of oak (Quercus sp.). It should be mentioned, though, that a timber found in conjunction with the big plank stack and sampled for Umax analysis in 2005 proved to be of ash (Fraxinus excelsior). It is however uncertain whether this heavily degraded timber was actually part of the timber cargo (von Arbin, 2010, p. 20; Wranne et al., 2010). The present analysis used the same seven timbers that Linderson analysed in 2007 and was undertaken in the context of the research project TIMBER, which is directed by Daly at the Saxo Institute, University of Copenhagen. The project deals with the material and historical evidence for the trade of timber in Northern Europe c. AD 1200-1700. Preserved timber cargoes in shipwrecks are extremely rare, and the cargo of the Skaftö wreck is thus of great importance for this research. Analysis of three samples from the boards (Group 1; samples P1, P3 and P5) and four from the planks (Group 2; samples P2, P6, P7 and P8) was performed on duplicate samples retained at Bohusläns Museum. Linderson (2007) suggested that the cargo and the ship timbers of the Skaftö wreck have a very similar dating: the trees for the ship's hull were felled c. AD 1437-39, while trees for the timber cargo were felled c. AD 1437-41. The analysis of the lime barrels reaches a similar conclusion (Daly, 2014a). In his analysis of the timber cargo, Linderson identified two provenances: northern and southeastern Poland, respectively. He furthermore suggested that the ship timbers are from a more eastern area than the cargo. However, neither Linderson (2004, 2007) nor Krąpiec (2006) provided any correlation statistics to corroborate their conclusions.
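Before turning to the dating results below, it may help to spell out how a felling-date range is typically derived from a sample with partial sapwood. The sketch assumes a generic regional sapwood statistic; the 9-23 ring range and the sample values are illustrative placeholders, not the Wazny (1990) figures applied in the published analysis.

```python
# Sketch: estimating a felling-date range for an oak sample with partial sapwood.
# All numbers are illustrative placeholders, not the values from the Skaftö analysis.

def felling_range(last_ring_year: int, sapwood_rings_present: int,
                  sapwood_min: int, sapwood_max: int) -> tuple:
    """Return (earliest, latest) plausible felling year.

    last_ring_year: calendar year of the outermost measured ring.
    sapwood_rings_present: number of sapwood rings among the measured rings.
    sapwood_min/max: regional statistic for the expected number of sapwood rings.
    """
    earliest = last_ring_year + max(0, sapwood_min - sapwood_rings_present)
    latest = last_ring_year + (sapwood_max - sapwood_rings_present)
    return earliest, latest

# Hypothetical sample: outermost measured ring AD 1436 with 4 sapwood rings,
# and an assumed regional sapwood range of 9-23 rings.
print(felling_range(1436, 4, 9, 23))  # -> (1441, 1455)
```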
Dating

The result of the dating analysis is illustrated in the diagram in Figure 14. All seven analysed samples are from radially cleft planks and boards. Two of the three boards belonging to Group 1 have sapwood preserved: Sample P5 had four sapwood rings that could be measured and six unmeasured outermost on the sample. This is from a tree that was felled c. AD 1430-43. Sample P1 has 11 sapwood rings and is from a tree that was felled c. AD 1438-50. One of the plank samples belonging to Group 2, P6, has four sapwood rings preserved and, allowing for missing sapwood, is from a tree felled c. AD 1440-54.

Figure 14. The results of the dendrochronological dating of the timber cargo from the Skaftö wreck. The grey bars represent the chronological position of each sample. The dark grey ends represent the sapwood. The line symbols represent the probable date for the felling of each tree, using a sapwood statistic for northern Poland (Wazny, 1990). The orange line represents the combined results of all seven samples, suggesting felling took place c. AD 1440-43 (Aoife Daly).

There are slight discrepancies in the tree-ring count and the dated position of the respective timbers between Linderson's (2007) analysis and the current analysis. Possibly, this is due to the slight difference in how many rings are present at different positions along the length of the planks and boards. There is one exception to this. The current analysis provides a different dating position for P3, a sample with only heartwood. It is unclear whether this is because the duplicate sample contains fewer rings than the sample analysed by Linderson, or whether he identified a different position for this series. If we assume that the timber cargo is from trees felled at the same time, then we can combine the results of all seven analysed samples and suggest that this felling took place c. AD 1440-43, which allows for a slightly later date than that proposed by Linderson (2007). This is marked in the diagram with an orange vertical line (Figure 14). The dating of the three samples that have sapwood preserved does not, however, allow greater dating precision to determine whether they are from a single felling phase or not, and the heterogeneity of Group 1 might in fact suggest that the felling took place on separate occasions. Considering the homogeneity of the barrel material analysed previously (Daly, 2014a), the felling of oaks for the barrels might have taken place in spring or summer 1439. Taking the dating of the vessel itself into consideration, it seems quite clear from the dendrochronological results that the Skaftö vessel sank while it was very new.

Provenance

A matrix of internal correlation is presented in Table 4. This shows the correlation, the t-value (Baillie & Pilcher, 1973), between the tree-ring curve from each plank and each of the others. Those that display the highest similarity are grouped together. The timbers display a rather heterogeneous dataset. Four timbers form a small group, which was observed also by Linderson (2007). These are the larger planks, Group 2. An average of these is made (Z084M003). The remaining three boards, Group 1, date independently of the material from the Skaftö cargo.
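A brief sketch of how a cross-dating t-value of the kind reported in Tables 4-6 is commonly computed is given below. It follows the general Baillie and Pilcher (1973) approach of correlating standardized ring-width series over their overlap; the detrending choice and the short example series are simplified placeholders, not the procedure or data behind the published tables.

```python
# Sketch: a Baillie & Pilcher style t-value for two overlapping ring-width series.
# The series below are invented and far shorter than the multi-decade overlaps
# used in real cross-dating.
import math

def standardize(widths):
    """Log of each ring width relative to a 5-year running mean."""
    out = []
    for i in range(len(widths)):
        window = widths[max(0, i - 2): i + 3]
        out.append(math.log(widths[i] / (sum(window) / len(window))))
    return out

def t_value(series_a, series_b):
    """Pearson correlation over the overlap, converted to Student's t."""
    n = min(len(series_a), len(series_b))
    a, b = standardize(series_a[:n]), standardize(series_b[:n])
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    r = cov / math.sqrt(var_a * var_b)
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Invented ring widths (mm) for two samples aligned at the same start year.
plank_1 = [1.2, 1.4, 1.1, 0.9, 1.3, 1.6, 1.2, 1.0, 0.8, 1.1, 1.5, 1.3]
plank_2 = [1.1, 1.5, 1.0, 0.8, 1.2, 1.7, 1.3, 1.1, 0.9, 1.0, 1.6, 1.2]
print(round(t_value(plank_1, plank_2), 1))
```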
When we look at the correlations between planks belonging to Group 2 and tree-ring datasets for Northern Europe (Table 5), we see very high t-values with material from the southern or southeastern Baltic region, chiefly with datasets from around Gdańsk Bay. There is also very high agreement with the so-called Baltic 2 chronology, a chronology built with tree-ring datasets from oak panels used as supports for fine arts (Hillam & Tyers, 1995). We also see very high correlation with a range of shipwrecks and other materials made from southern Baltic oak. In his study, Linderson (2007) states that this material is from southeastern Poland but, as mentioned above, no correlation values are presented in his report. It is clear from Table 5 that the lower correlation with chronologies from the southern regions of modern-day Poland argues for a more northerly source for this timber. We still do not know where the trees that belong to the Baltic 2 art-historical group grew (but see Daly & Tyers, 2022). If it is oak timber rafted along the river Vistula (Wisła), how far up-river was this resource exploited in this early phase of the trade and transport of oak panels and planks from the southern and southeastern Baltic? The map in Figure 15 shows the distribution of the correlations for the Group 2 planks. Positions for the three main art-historical chronologies are postulated, marked with '?'. Of the existing chronologies for terrestrial sites in current-day Poland, the Baltic 2 chronology (Hillam & Tyers, 1995) correlates best with northeastern chronologies (Pułtusk) and with a large regional chronology from southeastern Poland, so in the map it is placed approximately between the Bug and Vistula Rivers in central Poland. The three boards belonging to Group 1 also date with southern Baltic material, but these correlations do not allow a clear suggestion of where, in this large region, these trees grew (Table 6). They might nevertheless come from a different region than the Group 2 planks. In contrast, when the tree-ring curves from the sampled lime barrels from the Skaftö wreck were compared with a wide range of chronologies from northern Europe, they also achieved high correlation with both art-historical chronologies Baltic 1 and Baltic 2, but higher again with tree-ring datasets from Vilnius (Pukienė & Ožalas, 2007; R. Pukienė, personal communication, 2019) and Klaipėda (Vitas, 2020). It thus appears that the barrel material might come from further east in the southern Baltic region than the Group 2 planks. Furthermore, the low correlation between the barrels and the planks and boards leads us to suggest that the oak trees for the two uses were felled in different locations. A total of 20 staves or heads from the lime barrels were analysed (Daly, 2014a), constituting a quite robust, well-replicated dataset. Though the number of cargo planks and boards analysed is rather low, a clear separation can be seen, in terms of the tree-ring correlations between the two cargo types and in terms of the chronologies that each group matches best with, suggesting quite separate geographical sources within the wider southern Baltic region.

Introduction

In order to determine the geographical origin of the clays used in the production of the bricks and roof tiles found on the Skaftö wreck (Figure 16), inductively coupled plasma mass atomic emission spectrometry (ICP-MA/ES) analysis was performed on samples extracted from material retrieved during the 2005, 2006 and 2008 campaigns.
For the present analysis, the amounts of twelve different elements were measured: aluminium, chromium, gallium, manganese, vanadium, calcium, magnesium, strontium, cerium, lanthanum, sodium and cobalt. Samples were taken from three different bricks and three different roof tiles. Analysis was carried out at OMAC Laboratories Ltd., Ireland, whereas the comparative analysis, statistical processing and interpretation were conducted by Torbjörn Brorsson of the Swedish company Ceramic Studies (Brorsson, 2019). The results are presented in Table 7.

Figure 15. Map showing the distribution of the correlations (t-value) for the Group 2 planks from the Skaftö wreck plank cargo. The larger the dot, the higher the t-value. For details of the method, see e.g. Daly, 2007. The river data are from Lehner and Grill (2013, www.hydrosheds.org, accessed 3rd March 2020) (Aoife Daly).

Provenance

To be able to determine geographical origin, results need to be compared with reference data from other analysed sites. These data were collected from Ceramic Studies' own extensive database, which currently contains information from almost 4000 unique sites in Northern Europe. In this case, comparisons were made with material from Sweden, Denmark, Germany, the Netherlands, Great Britain, Belgium, Poland, Estonia and Norway. As a first step, the six samples were analysed and compared to each other. This comparison shows that the brick samples Skaftö 1 and Skaftö 2 constitute a group of their own, and so do brick sample Skaftö 3 and tile sample Skaftö 6. These four samples are likely to belong to two different ceramic productions within the same geographical area. The sample Skaftö 5, however, forms a separate group. So does Skaftö 4 (Figure 17). Both constitute roof tile samples. They are likely to represent two separate productions, which may or may not belong in the same geographical area as the other four. Their similarities and differences are better highlighted, however, when compared to material of completely different origin. Comparisons have been made with ceramics from a large number of sites in Denmark, from Zealand in the east to Jutland in the west, as well as with ceramics from Belgium, Holland, Norway and western, southern and eastern Sweden, including Gotland. The Skaftö samples do not match any of these sites. A comparison with ceramics from northern Germany gives a similarly negative correlation: the chemical composition of the analysed samples differs from ceramics from Lübeck, Wismar, Bremen, Hamburg, Rostock and Greifswald, which makes a provenance in northern Germany unlikely. The analysis, however, indicates that the bricks and tiles may have been produced in a geographically adjacent area. Further comparisons with ceramics from the Baltic states and Poland were initially obstructed by the fact that there was very little analysed material to compare with. Available reference data concerned ceramics from a small number of sites in western Poland and Tallinn in present-day Estonia. For the sake of this study, ceramics from different excavations in Gdańsk were therefore analysed. Samples were kindly provided by Bogdan Kościński at the Gdańsk Archaeological Museum. A total of 18 samples, representing ceramic vessels, production waste and bricks, found in different parts of the town, were analysed.
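One common way to group samples of this kind by chemical composition, before comparing them with reference material, is to cluster them on their element concentrations. The sketch below is a generic illustration of that idea using hierarchical clustering; the element values are invented placeholders, and the procedure is not necessarily the one applied by Ceramic Studies.

```python
# Sketch: grouping ceramic samples by ICP element concentrations (ppm, invented values).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

samples = ["Skaftö 1", "Skaftö 2", "Skaftö 3", "Skaftö 4", "Skaftö 5", "Skaftö 6"]
# Columns: e.g. Al, Cr, Ga, Mn, V (a subset of the twelve measured elements).
concentrations = np.array([
    [68000, 90, 17, 450, 110],
    [67000, 92, 18, 460, 112],
    [72000, 80, 15, 520, 100],
    [55000, 60, 11, 300, 75],
    [60000, 70, 13, 350, 85],
    [71500, 82, 15, 515, 101],
], dtype=float)

# Log-transform and z-score each element so that no single element dominates.
logged = np.log10(concentrations)
scaled = (logged - logged.mean(axis=0)) / logged.std(axis=0)

# Ward linkage on the scaled data; cut the tree into (here) three groups.
tree = linkage(scaled, method="ward")
groups = fcluster(tree, t=3, criterion="maxclust")
for name, group in zip(samples, groups):
    print(f"{name}: group {group}")
```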
The comparison of the results reveals that samples Skaftö 1, Skaftö 2, Skaftö 3, Skaftö 5 and Skaftö 6 group perfectly with bricks from the 15th and 16th centuries found in Gdańsk. Possibly, they were even produced in the very same kilns. Sample Skaftö 4, which stems from a tile, groups similarly well with ceramics from the Old Town of Gdańsk, but also with production waste found in conjunction with a ceramic kiln in the same town (Figure 18). The analysis thus clearly shows that all six samples from the Skaftö wreck derive from ceramic productions in central Gdańsk/Danzig.

Metal Ingots

Copper has long constituted one of the most appreciated, valuable and widely used metals, and this was also the case in medieval Europe. Due to its different properties, it was used as raw material in the production of a wide array of everyday and household objects, but also as building material, particularly for roofing. Other important areas of use were the minting of coins and the manufacturing of various church equipment, such as church bells, baptismal fonts, candleholders and liturgical vessels. In the late medieval period, copper was also increasingly used for weapon manufacturing, not least the casting of bronze cannons. The trade in copper thus grew increasingly important during the course of the Middle Ages (Garbacz-Klemka et al., 2014, p. 301; Irsigler, 1979, p. 15). It has been shown here that the Reißscheiben retrieved from the Skaftö wreck most probably derive from two sources in the Carpathian Mountains, namely the area around Banská Bystrica in the central part of present-day Slovakia, and the Spišsko-Gemerské area of Gelnica and Smolnik in the eastern part of the country. At that time, these two areas constituted the main mining districts in what was then part of the Kingdom of Hungary (Możejko, 2014, pp. 65-66; Ŝtefánik, 2018, p. 785). While the former area has received much attention from historians, not least due to the well-known exploitation by the Fugger-Thurzo company in the 16th century (Vlachović, 1977, pp. 148-150), eastern Slovakian copper has largely been overlooked. However, recent research by Martin Ŝtefánik (2018) has shown that in the 15th century and earlier, the Spiš-Gemer region was one of the major providers of copper for the European market, and most recently, Miroslav Lacko (2016, 2019) has studied the trade networks and the different actors involved in this trade. Copper from both areas is known to have been exported via Danzig. Ingots were shipped on rivers such as the Poprad and Dunajec to the city of Kraków, from where they were transported further down the Vistula to Danzig, via the town of Thorn (Toruń) (Dollinger, 1970, p. 233; Lacko, 2016, p. 26; Możejko, 2014, pp. 65-71; Ŝtefánik, 2018, pp. 788-790). Most of the copper that ended up in Danzig seems to have been exported to the Western European market, mainly Flanders, Holland and England, either directly or via Lübeck (Dollinger, 1970, p. 222; Irsigler, 1979; Yrwing, 1966, p. 571). The main bulk, however, seems to have headed to Flanders, and Bruges in particular (Możejko, 2014, p. 67; Ŝtefánik, 2018, p. 791). At that time, Bruges, which was the location of one of the four Hansekontore, had become one of the main commercial centres in Northern Europe. It also served as a hub for the Mediterranean trade. From here, copper was distributed mainly to Venice, but also to other major ports along the Mediterranean coast (Elbl, 2007; Ŝtefánik, 2018, pp. 786-787).
The Skaftö copper cargo has a close parallel in the Copper Ship, where at least three stacks of Reißscheiben, each containing approximately ten ingots, were recorded in situ within the coherent hull structure, in addition to the large number of ingots that were lying scattered on the seabed outside the wreck. The ingots appear to have been stacked in the same manner as in the Skaftö wreck. At least 226 ingots, with a total weight of approximately 1.4 tons, were recovered from the Copper Ship (Ossowski, 2014b, p. 243). Thus, the average weight of an ingot was 6 kg. This can be compared to the Skaftö wreck, where approximately 100 visible ingots, distributed in two major assemblages comprised of three and six or seven discernible stacks, respectively, have been recorded. Based on retrieved specimens, their total weight has been estimated at between 1.5 and 3.5 tons, which gives an average ingot weight in the range between 15 and 35 kg. This clearly shows that the Skaftö Reißscheiben in general are both larger and thicker than their counterparts from Gdańsk Bay. Analysis reveals that the speiss ingots are likely to derive from the same source as the Type 1b Reißscheiben, namely central Slovakia, and the speiss may thus have been ferried along the same waterways together with the copper. As previously described, speiss was largely regarded as an unwanted by-product. As such, it is not known to have been traded on a regular basis. However, as shown by the Skaftö cargo, this material was apparently worth transporting for some reason. The question is whether the Skaftö speiss represents an occasional transport, or whether it is an indication of a trade not previously known from historical sources. Although the speiss in this case could be classified as rather poor (cf. Kleinheisterkamp, 1948), it is possible that the reason for trading the ingots was their still reasonable copper content (c. 30%). In the late medieval period, different technical and socio-economic factors severely affected the copper mining industry (Bartels & Klappauf, 2012, pp. 238-248), which could have made this rather unworkable material worth trading. Due to its low melting point, it is also possible that speiss could have been used in conjunction with soldering (Tylecote, 1976, p. 69).

Lime

As mentioned, lime is likely to have constituted the largest part of the cargo, at least in regard to volume. Analysis has revealed that it was so-called quicklime, or burnt lime. In the medieval period, burnt lime was mainly used for the production of mortar of different sorts. It had, however, a number of other applications as well. One such application was hide tanning (Granlund, 1963, p. 159). Due to its caustic properties, quicklime is also known to have been used as a defensive weapon, on land but possibly also in warfare between ships at sea (Sayers, 2006). Typically, it was transported in barrels due to the great risk associated with sea transport (Munthe, 1945, p. 1, footnote 1; Granlund, 1963, p. 158). When exposed to water, a highly exothermic reaction is initiated, which also leads to expansion of the material. For a ship at sea, this vigorous process could of course be disastrous. Possibly, this is also part of the explanation of why the vessel ultimately foundered. Analysis of the lime points towards a likely Gotlandic provenance. As previously stated, however, an Estonian origin cannot be fully ruled out at this point. At present, very little is known about early lime export from Gotland.
It has been argued that the burning of lime on the island was very limited in the medieval period and mostly served domestic needs (Sjöberg, 1972, p. 39; Steffen, 1940, p. 12). Moreover, it has been suggested that the export in the Middle Ages primarily concerned 'raw' lime, that is, unburnt limestone, and that it was not until the mid-17th century that the export of burnt lime really took off (Lisiński et al., 1987, p. 7; Sjöberg, 1972, p. 52). The supposed reason for this was a royal decree issued in 1649, which increased export fees on limestone in order to further the export of burnt lime (Steffen, 1940, p. 13). It is worth noting that the main part of the Gotlandic lime produced before the Swedish takeover in 1645 seems to have headed for ports in the southeastern part of the Baltic Sea, among them Danzig and Königsberg (present-day Kaliningrad) (Sjöberg, 1972, p. 44; Yrwing, 1960, p. 395). One of the earliest written mentions of lime export from Gotland dates to the year 1318 and concerns trade with the Teutonic Order in the latter town. The Lübeck Pfundzollisten reveal a recurrent import of Gotlandic limestone to Lübeck from the 1360s onwards (Yrwing, 1960, p. 395). The first time the export of burnt lime is mentioned is in 1460. This year, both limestone and burnt lime were shipped from Gotland to several ports in northern Germany, the eastern Baltic (among them Danzig), Denmark, Westphalia, the Rhine area, Holland and England (Granlund, 1963, p. 158; Munthe, 1945, p. 115). The sources, however, do not reveal whether the shipments consisted of quicklime or slaked lime. According to the geologist Henrik Munthe, there was no substantial export of quicklime from Gotland before the mid-17th century. He based this opinion on the fact that the use of barrels in conjunction with lime export first occurs in written sources around that time (Munthe, 1945, p. 115). The lime cargo of the Skaftö wreck, however, could indicate that export of Gotlandic quicklime packed in barrels may have taken place already in the 1440s, that is, approximately 200 years before the written evidence. Of course, an isolated ship find like the Skaftö wreck does not automatically reveal the extent and nature of this export. Markings, which could help in attributing lime barrels to particular merchants, either at the shipping port or at the port of delivery, have been recorded on two of the barrel staves recovered from the Skaftö wreck (Figure 19). The mark depicted in Figure 19 (inventory No. 29276:51) resembles merchant's marks from the 14th-17th centuries that have been recorded in Rostock, Greifswald and Lübeck (Nordell, 2014). The stave is however quite heavily eroded, and it is thus not possible to determine whether the mark is preserved in its entirety. Since even small changes to the layout of the mark would alter its possible affiliations in crucial ways, it is probably wise for now to treat this evidence very cautiously.

Tar

Tar remains the most anonymous cargo of the Skaftö wreck. Historically, wood tar has primarily been used for the impregnation and preservation of wooden items, buildings and other structures, not least boats and ships, but also of objects made from plant fibres (such as rope and fishing nets) and leather (e.g. shoes). Tar, and pitch, which is the further processed product, had numerous other applications as well. They were, for instance, used as sealants or adhesives. They could also be used for various medical purposes, such as treating psoriasis and other skin diseases (Granlund, 1974, pp. 417-418; Ossowski, 2014b, p. 269; Svensson, 2007, p. 613).
As shown, there are currently no reliable analytical methods available for determining the provenance of tar. In this respect, we are thus largely dependent on written sources. These sources tell us that tar from Sweden/Finland was exported already in the 14th century. The extent of this export, however, was probably relatively limited, at least compared to later centuries. It was not until the 17th century that Sweden/Finland became the major European supplier of tar (Villstrand, 1996, pp. 62-63). In medieval times, tar was produced also in Norway. The export of Norwegian tar, however, seems to have been very restricted during this period (Ropeid, 1974, p. 426). Among the countries surrounding the Baltic Sea, it appears as if the major tar-producing region at the time was Poland (Villstrand, 1996, pp. 62-63). Apparently, large quantities of tar were also produced on Gotland. Surviving customs records from the late 15th century reveal that some of this tar was exported to Danzig (Lauffer, 1894, p. 17; Yrwing, 1974, pp. 421-422). Based on written sources alone, the Skaftö tar is thus more likely to originate in either of these two regions than in Sweden/Finland. This conclusion might actually also be supported by the results of the GCMS analysis. As mentioned, the difference in composition between the now analysed tars and the comparative materials presented in Figure 13, which are all of Swedish or Finnish origin, suggests a difference in the manufacturing technique, which possibly indicates a provenance outside of mainland Sweden/Finland. Like the lime, the tar was shipped in barrels. They appear to be of approximately the same size as the tar barrels recovered from the Copper Ship in Poland. Here, capacities could be calculated at between 69 l and 99 l (Litwin, 1985, p. 47; Ossowski, 2014b, p. 269), which can be compared to the barrels on the Skaftö wreck, which likely held around 80 l.

Timber

Dendrochronological analysis shows that the planks and boards in the cargo are from different locations, possibly within present-day Poland. Major ports in this region at the time were Danzig (Gdańsk), Elbing (Elbląg) and Königsberg (Kaliningrad). All three towns were major exporters of southern Baltic timber at the beginning of the 15th century. Danzig, however, soon became the paramount actor within this highly specialized trade (Bonde et al., 1997, p. 203; Dollinger, 1970, p. 221; Haneca et al., 2005, p. 262; Wazny, 2005, p. 117). The towns were situated along Gdańsk Bay and the Vistula Lagoon and received shipments of timber from the interior, brought to, and along, the coast via the inland waterways. The watershed of the Vistula has long been identified as the main transport route for this southern Baltic timber, but the large region that drains also from the east, with major rivers like the Pregolya flowing to Königsberg, enabled wide exploitation of the forest resource also eastwards (Haneca et al., 2005, p. 262; Wazny, 2002, p. 316). In the 15th century, most of the southern Baltic timber seems to have headed to ports in Western Europe. Among the major importers at the time were Holland, Flanders, England and Scotland (Dollinger, 1970, p. 221; Haneca et al., 2005, p. 262; Wazny, 2005, pp. 121-122). Of the Flemish towns, Bruges seems to have taken a particularly active part in this trade, later superseded by Antwerp (Dollinger, 1970, pp. 246-247; Haneca et al., 2005, p. 262).
As shown by the dendroarchaeological record, this trade concerned, almost without exception, converted products from the parent tree, in the form of planks and boards of widely varying sizes. The Southern Baltic region was not the primary source for larger, bulky, structural timber, it seems (Daly, 2007, pp. 200-202; Wazny, 2005, p. 121). During this period, it was common to distinguish between different timber varieties. In surviving customs records and other contemporaneous sources, a number of varieties are mentioned, such as Baumholz, Bogenholz, Bottichholz, Clappholz, Flossholz, Gudholz, Remenholz and Wagenschot (Lauffer, 1894). A few of them are known only by name, whereas others are much better defined. Apparently, the terms alternately refer to conversion techniques, shipping forms, timber qualities, and the intended use of the timbers. A problem, however, is that the definitions often seem to have changed over time (Wazny, 2005, p. 119). Both timber sorts that have been recorded in the Skaftö wreck have close parallels in the timber cargoes of the Copper Ship and Skjernøysund 3. These slightly older vessels carried timber of a similar geographical origin as the ship wrecked at Skaftö (Daly, 2011a, 2020; Krąpiec & Krąpiec, 2014). In both cases, boards resembling Group 1 have been interpreted as semi-finished barrel staves, possibly Clappholz or Bottichholz, while planks resembling Group 2 have been identified as Wagenschot (Eng. wainscot) (Auer & Maarleveld, 2013, p. 28, 46; Ossowski, 2014b, pp. 254-261; Zwick, 2019, pp. 194-195). It appears as if the latter term in this period was used for high-quality planks extracted from straight-grown, fine-grained and knotless tree trunks (Haneca et al., 2005, p. 263; Wazny, 2005, p. 120). It is notable that all the analysed wainscot planks (Group 2) in the Skaftö wreck can be attributed to one distinct geographical region. Thus, it is possible that they represent one single production event. The origin of the boards (Group 1), on the other hand, may be a little more diverse. Historical records show that timber was often prepared into semi-finished products at, or close to, the timber-felling sites (Haneca et al., 2005, p. 262; Wazny, 2005, pp. 119-121). Apparently, it is this practice that is mirrored in the Skaftö material. An important reason for this was the difficulty of working oak after the wood had seasoned, which made it necessary to prepare the timbers as soon as possible after the trees were felled (Daly, 2007, p. 202). Besides, the handling and transport of semi-products was presumably more convenient than that of complete logs (Haneca et al., 2005, p. 262). More importantly, however, due to the relatively low value of wood in relation to the hold space it occupies, it must be carried as efficiently as possible to make trading economically viable. This means not shipping material that is only going to be waste, such as bark and sapwood, and converting the timber into forms that pack efficiently. The fact that the transport cost for southern Baltic timber sold in western markets in this period represented up to almost 80 percent of the retail price (Dollinger, 1970, p. 157) was obviously a big incentive to semi-finish at source. Another interesting feature is the manner in which the timber cargo was stowed on board the ship, which has close parallels in both the Copper Ship (Litwin, 1985, pp. 46-47; Ossowski, 2014b, p. 259) and the Skjernøysund 3 shipwreck (Auer & Maarleveld, 2013, p. 27, 46).
In all three cases, the timber was stacked in the bottom of the hold, close to the keel, on top of the mast-step buttresses. In Skjernøysund 3, as in the Skaftö wreck, spaces between the buttresses had been filled with shorter boards in order to make maximum use of the available storage space (Auer & Maarleveld, 2013, p. 27, 46). Similarly, some of the planks in the Copper Ship seem to have been deliberately cut in order to fit between the structural elements of the hull (Ossowski, 2014b, p. 254). Placement of the timber cargo in the bottom of the hold, underneath much heavier cargo types, may seem strange from a stability point of view. However, since the curvature of the hull is less pronounced in this part of a ship, this was probably the most beneficial arrangement in terms of cargo efficiency. In addition, the placement of planks may also have served the purpose of protecting the hull interior from heavier cargo items (Ossowski, 2014b, p. 259).

Bricks and Tiles

ICP-MA/ES analysis reveals that all of the analysed brick and tile samples derive from ceramic productions in present-day Gdańsk. On the European continent, bricks became increasingly common as building material, particularly in urban contexts, during the High Middle Ages. This was largely a response to the intense economic and demographic growth, as it enabled standardization, and thus rationalization, of the building trade (Debonne, 2014). In the Nordic countries, brick remained an exclusive material up until the end of the 16th century. Here, it was reserved primarily for prestigious sacred and profane buildings, such as cathedrals, churches, monasteries, palaces and castles (Andersson & Hildebrand, 1988, pp. 51-52). This being said, a load of bricks and tiles on the wreck of a 15th century trading vessel is not particularly surprising, especially in view of the other building materials found on the wreck site. It seems, however, as if bricks and tiles were mostly transported only shorter distances (Spufford, 2000, p. 160). Already at an early stage of the investigation, the question arose whether the Skaftö bricks could in fact originate from a fireplace, rather than being part of the ship's cargo. At that time, there were, at least to the knowledge of the corresponding author, few, if any, published examples of large brick-built ship hearths from the first half of the 15th century or earlier in Northern Europe (there are examples, however, of simple 'fireboxes' filled with sand, clay, and/or bricks, see e.g. Vlierman, 1992). This, together with the location of the bricks, which appeared somewhat odd in the context of food preparation; the occurrence of occasional roof tiles; and, most importantly, the lack of mortar on the surfaces of the bricks, eventually led to the exclusion of this possibility (von Arbin, 2010, p. 16, 2012). Since then, however, new and interesting comparative archaeological material has surfaced. In 2009, the wreck of a big cargo vessel was located in the River IJssel, close to the historic centre of Kampen in the present-day Netherlands. This vessel, termed the 'IJsselcog', is dated to around 1415-20 and was comparable in size to the Skaftö wreck. On its main deck, located to its starboard side, abaft the centre of the hull, the vessel had a large fireplace and a dome oven (Waldus et al., 2019). No evidence of a roof construction was found during excavation, which may indicate that cooking actually took place in the open air.
The hearth and oven were built from ordinary bricks, whereas the galley floor was made of glazed floor tiles. Interestingly, the bricks did not display any traces of mortar (W. B. Waldus, personal communication, 2019). This example shows that at least some of the larger vessels of the early 15th century were equipped with large brick-built fireplaces. Interestingly, the location of the IJsselcog galley seems to be almost identical with the assumed original location of the brick assemblage on the Skaftö wreck. The IJsselcog has been interpreted as a vessel built and equipped mainly for military purposes, such as the convoying of merchant ships. This conclusion is partially due to the size of the vessel, and partially to the size of the galley, which is considered to be over-dimensioned for an ordinary ship's crew (Waldus et al., 2019, pp. 485-491). This explanation may be valid also for the Skaftö wreck, which, however, apparently carried cargo at the time of sinking. To summarize, it is currently not possible to say with any certainty whether the bricks and tiles constituted cargo or not.

Discussion

Dendrochronological analysis of barrels containing lime, and, possibly, speiss, of planks and boards carried as cargo, and of timber from the ship itself, suggests that the vessel that foundered off Skaftö was fairly newly built when it set out on its final voyage, presumably sometime between the years 1440 and 1443. Judging from the archaeological and analytical data that have been accounted for here, it seems indisputable that the ship was heading from the Baltic Sea when it for some reason foundered en route. Based on current research, at least two presumptive shipping ports can be pinpointed within the Baltic Sea basin. One of them is Gotland, and possibly the town of Visby, from which the lime was likely exported. The other is one of the Hanseatic towns of Danzig, Elbing and Königsberg, which were all situated along Gdańsk Bay (Figure 20).

Figure 20. Map showing the origin of the various cargoes and the suggested planned route of the Skaftö vessel, based on investigations of the cargo and written sources. It should be noted that tar, which is likely to have originated either in Poland or in Gotland, has been omitted from the map. Borders and place names are modern-day (Anders Gutehall, Visuell Arkeologi).

Since Danzig, as previously stated, was the main shipping port for Hungarian copper and, in addition, had a dominant position in the southern Baltic timber trade at that time, it definitely stands out as the strongest candidate. As we have seen, Danzig also appears to be the origin of the bricks and tiles found on the shipwreck, even though it can be disputed whether these actually constituted cargo. Tar may have arrived in Danzig either by sea from Gotland or from inland Poland, transported primarily on river routes, namely the Vistula and its tributaries. In this period, and especially after the incipient decline of the Teutonic Order in the first decades of the 15th century, Danzig became the region's leading port in the transhipment of commodities between the east and the west (Dollinger, 1970, pp. 230-231). As such, it was certainly a place where goods from several different geographical sources may have been aggregated for a single shipload, as in the Skaftö case. Probably, Danzig was also where the vessel had its homeport. Although long-distance trade in southern Baltic oak for shipbuilding purposes can be evidenced dendrochronologically from the beginning of the 15th century (Daly, 2007, pp.
217-230), the absence of 'foreign' constructional elements among the timbers suggests that the ship was built regionally, that is, in the Vistula estuary or its vicinity. This assumption is supported also by the use of Drepanocladus mosses for luting, which could be considered a distinct regional feature (Filipowiak, 1994, p. 93; Gos & Ossowski, 2009). From written sources it is known that, next to Lübeck, Danzig was the biggest shipbuilding centre among the Hanseatic towns, a position it maintained until about 1450 (Dollinger, 1970, p. 144). Another important shipbuilding centre at that time, however, was the town of Elbing, situated some 55 kilometres to the southeast (Litwin, 1989, pp. 154-155). That the ship was not only built in this coastal region, but also operated from here, is indicated by the small ceramic assemblage that has previously been recovered from the wreck site. This material consists largely of ceramic types characteristic of the southern Baltic region (von Arbin, 2010, p. 18). The vessel may well have belonged to merchants resident in Danzig. However, considering the Teutonic Order's involvement in Danzig shipbuilding at the time, and particularly in the construction of big 'hulks' (cf. Możejko, 2014, p. 59), a Teutonic ownership, full or partial, cannot be ruled out either. Since the timber cargo was stowed in the bottom of the hold, and thus must have been the first cargo taken on board, the following two hypothetical scenarios for the vessel's last journey can currently be set up: (1) After having loaded timber, copper and speiss, and possibly also bricks and tar, in Danzig, the vessel set sail to Gotland, where lime and possibly tar were taken on board. (2) Lime, and possibly tar, were shipped separately to Danzig, where they were loaded onto the vessel together with the other cargoes. Based on the distribution of the various cargoes within the remaining hull structure, and given that the cargo was not restowed at any point during the journey, the last scenario is probably the most feasible. To what extent the distribution of the cargo actually corresponds to the original loading order of the vessel is of course difficult to tell. What we can observe at the wreck site today is at least partly due to the various site formation processes that have taken place during, and after, the foundering of the ship. For instance, some of the heavier cargo items, such as copper and speiss ingots, but also bricks and tiles, seem to have slid slightly towards the upper part of the starboard side as the vessel came to rest on the sea floor. Other parts of the cargo, such as the timber and the lime and tar barrels, appear to have moved very little, if at all. However, if we accept that the distribution is essentially correct, we must also assume that it is the result of deliberate considerations made by the people who once loaded the vessel. Considering the distribution of high-density goods relatively high up in the hold, it appears as if stability may not have been a decisive factor in the loading of the vessel. In other words, we need to look for other explanations as well. In the case of the Polish Copper Ship, it has been suggested that the division of the Reißscheiben ingots into a number of well-defined assemblages could mirror different ownership, and thus possibly also different presumptive buyers (Ossowski, 2014b, p. 243). This might also be the case with the copper and speiss found in the Skaftö wreck.
We should, however, also consider that the division (although apparently not consistently executed, as shown by the chemical analysis) might have been a means of separating the two chemically different but visually similar copper sorts from each other. From written sources, we also know that the distribution of goods on two or more ships heading to the same destination was common practice among medieval merchants in order to spread the economic risk connected with sea transport (Dollinger, 1970, pp. 155-156). Possibly, what we can observe today is partially the result of this practice. An alternative interpretation, which of course does not necessarily contradict the former, would be that different assemblages were meant to be distributed in different ports along the proposed route. Sequencing, that is, the stowing of cargo in the presumed order of unloading, prevents the need for time- and labour-consuming restowing, which, ultimately, saves money in the transhipment process. Both interpretations, however, imply that the cargo was intended for more than one purchaser. The composition of the cargo suggests that the intended destination(s) for the ship was most probably none of the contemporaneous Norwegian ports. Instead, the ship was likely aiming for the Western European market. Ports of call may have been situated in, for instance, England, Holland and/or Flanders (Figure 20). For vessels, both native and foreign, that took on cargo in Danzig at that time, Bruges (or, rather, Sluis, which in practice served as the port of Bruges after the silting up of the Zwin channel from the late 13th century onward, see e.g. Charlier, 2011, pp. 747-748) appears to have been the most frequented port of call (Litwin, 2014, p. 24). Bruges was the location of one of the four Kontore of the Hanseatic League, and, as we have seen, it was also a major entrepôt for both Hungarian copper and southern Baltic timber at the time. Coming from the Baltic Sea, the ship is likely to have headed through the Oresund, where dues had to be paid at the Danish Sound Toll in Helsingør, established in 1429. Unfortunately for our case, there are no surviving customs records until the end of that century (Gøbel, 2010, pp. 305-306). From the Oresund, the ship continued northwards, probably hugging the now-Swedish west coast closely. Presumably, the initial plan was to follow the coast to the Agder side of Viken, most probably to the area of Cape Lindesnes on the southwestern tip of Norway, which for many centuries served as a crossroads for overseas traffic (Stylegar, 2004, pp. 145-147). Here, the course would have been set southwards, either to England or to the western European mainland via the Frisian Islands (Figure 20). However, for unknown reasons the journey instead ended abruptly off the island of Skaftö.
The EZH2–PHACTR2–AS1–Ribosome Axis induces Genomic Instability and Promotes Growth and Metastasis in Breast Cancer

Aberrant activation of the histone methyltransferase EZH2 and of ribosome synthesis strongly associates with cancer development and progression. We previously found that EZH2 regulates RNA polymerase III–transcribed 5S ribosomal RNA gene transcription. However, whether EZH2 regulates ribosome synthesis is still unknown. Here, we report that EZH2 promotes ribosome synthesis by targeting and silencing a long noncoding RNA, PHACTR2-AS1. PHACTR2-AS1 directly bound ribosomal DNA genes and recruited the histone methyltransferase SUV39H1, which in turn triggered H3K9 methylation of these genes. Depletion of PHACTR2-AS1 resulted in hyperactivation of ribosome synthesis and instability of ribosomal DNA, which promoted cancer cell proliferation and metastasis. Administration of PHACTR2-AS1-30nt-RNA, which binds to SUV39H1, effectively inhibited breast cancer growth and lung metastasis in mice. PHACTR2-AS1 was downregulated in breast cancer patients, where lower PHACTR2-AS1 expression promoted breast cancer development and correlated with poor patient outcome. Taken together, we demonstrate that PHACTR2-AS1 maintains a H3K9 methylation-marked silent state of ribosomal DNA genes, comprising a regulatory axis that controls breast cancer growth and metastasis. Significance: These findings reveal that EZH2 mediates ribosomal DNA stability via silencing of PHACTR2-AS1, representing a potential therapeutic target to control breast cancer growth and metastasis.

Long noncoding RNAs (lncRNAs) comprise a large class of regulatory RNAs without protein-coding potential that are >200 nucleotides long (10,11). LncRNAs regulate cancer development and progression by influencing cell proliferation, metastasis, metabolism, and self-renewal (12-15). Histone methylation is an important means by which lncRNAs mediate gene expression (16). The histone methyltransferase SUV39H1 catalyzes H3K9 methylation at repetitive DNA sequences and regulates heterochromatin formation. RNA plays a critical role in targeting SUV39H1 to DNA sequences. Chromatin-associated RNA mediates the association of SUV39H1 with pericentric DNA (17). The telomeric lncRNA TERRA targets SUV39H1 to telomeres, promoting H3K9me3 accumulation at damaged telomeres and modulating telomere structures (18). Heterochromatin is formed at pericentromeres, telomeres, and rDNA loci. However, whether lncRNAs participate in targeting SUV39H1 to rDNA repeats is unknown. Here, we report that the EZH2-regulated lncRNA PHACTR2-AS1 mediates SUV39H1 localization to rDNA repeats and induces H3K9 methylation of rDNA, helping to suppress rRNA transcription. Moreover, PHACTR2-AS1 displays efficacy against tumor growth and metastasis in vivo. Our findings provide insight into the potential of PHACTR2-AS1 as a therapeutic target in breast cancer.

Materials and Methods

Cell culture and transfection

MCF7, MDA-MB-231, Hs578T, and HEK-293A cell lines were obtained from the Cell Resource Center, Peking Union Medical College (the Headquarter of National Infrastructure of Cell Line Resource, NSTI). The species origin of the cell lines was confirmed with PCR. The identity of the cell lines was authenticated with short tandem repeat profiling.
The cell lines were checked to be free of Mycoplasma contamination by PCR. Cell stocks were created within five passages, and all experiments were completed within ten passages. The same batch of cells was thawed every 1 to 2 months. The cells were grown in DMEM (Gibco; Thermo Fisher Scientific) supplemented with 10% FBS at 37°C under 5% CO2 in a humidified incubator.

Tumor specimens and in situ hybridization

The tissue microarrays of patients with breast cancer were purchased from National Human Genetic Resources Sharing Service Platform 2005DKA21300 (Shanghai Outdo Biotechnology Company Ltd.). Tissue sections were deparaffinized and rehydrated gradually. After being digested with pepsin, tissue samples were hybridized with a 5′-biotin-labeled LNA™-modified PHACTR2-AS1 probe (Exiqon) at 54°C overnight, and subsequently a streptavidin conjugated to Poly-Horseradish Peroxidase (Poly-HRP) detecting kit was applied (SP-9002, Zhong Shan Jin Qiao, Beijing, China). The scoring of staining was performed by three investigators, including a pathologist. Criteria for assessing staining intensities of PHACTR2-AS1 were classified into four levels: 0 = no staining, 1 = low staining, 2 = medium staining, and 3 = strong staining.

RNA in situ hybridization and immunofluorescence microscopy

Cells were briefly rinsed with PBS and fixed with 4% formaldehyde for 15 minutes at room temperature, followed by permeabilization with 0.1% pepsin for 1 minute at 37°C. After a short wash, cells were dehydrated gradually and air dried. A 5′-biotin-labeled LNA™-modified PHACTR2-AS1 probe (Exiqon) was diluted in hybridization buffer, denatured at 80°C for 2 minutes, and hybridized at 54°C for 30 minutes. After hybridization, cells were washed with 0.1× SSC at 65°C for 3 × 10 minutes, and streptavidin Alexa Fluor 488 conjugate (Invitrogen) was then added and incubated for 4 hours at 37°C. For colocalization, immunofluorescence was performed using anti-fibrillarin (Abcam), anti-UBF1 (Santa Cruz Biotechnology), anti-SUV39H1 (Abcam), and anti-γ-H2AX (Merck Millipore) antibodies. Images were captured with a confocal laser-scanning microscope (Carl Zeiss).

Ribosome fractionation

Cells were exposed to cycloheximide (100 mg/mL) for 15 minutes, then 2 × 10⁶ cells were lysed in 500 mL lysis buffer (9). After a 30-minute incubation on ice, the samples were centrifuged at 13,000 rpm at 4°C for 10 minutes. For fractionation, the lysates were loaded on 15%-45% sucrose gradients and separated by ultracentrifugation with a SW40 rotor (Beckman) at 39,000 rpm at 4°C for 3 hours. Linear sucrose gradients were prepared with a Gradient Master (Biocomp). The distribution of ribosomes on the gradients was recorded using a BIOCOMP Piston Gradient Fractionator equipped with a BIO-RAD ECONO UV Monitor (set at 260 nm).

RNA pull-down assay

Linearized plasmids of pcDNA3.1-PHACTR2-AS1/truncation mutants/antisense were used as DNA templates for in vitro transcription. The MEGAscript T7 Kit (Ambion) with biotin-16-UTP (Ambion) was used to produce biotin-labeled RNA, and the MEGAclear Kit (Ambion) was then applied to purify the RNA transcripts. Biotinylated RNA in RNA structure buffer (19) was heated to 95°C for 2 minutes, put on ice for 3 minutes, and then left at room temperature for 30 minutes to allow proper secondary structure formation. Folded RNA was then mixed with precleared cell lysates in binding buffer and incubated at room temperature for 1 hour. Streptavidin Dynabeads (Invitrogen) were added to each binding reaction and further incubated at room temperature for 1 hour.
The beads were washed briefly with binding buffer five times and boiled in SDS buffer. Then, the retrieved proteins were detected by Western blot analysis.

Dual luciferase reporter assay

Plasmids of pGL3-rDNA-IRES and pRL-TK were cotransfected using Lipofectamine 3000 Reagent (Invitrogen; Thermo Fisher Scientific). Firefly and Renilla luciferase activity was measured by the Dual-Luciferase Reporter Assay (#E1910, Promega), and Renilla activity was used to normalize firefly activity.

Chromatin immunoprecipitation

A chromatin immunoprecipitation (ChIP) assay was performed using the SimpleChIP Enzymatic Chromatin IP Kit.

PHACTR2-AS1-30nt-RNA and negative control were chemically synthesized with modifications from Ribobio Co. All 30 nucleotides were modified by 2′-O-methylation and 5′-cholesterol for in vivo RNA delivery, which is long-lasting in mice. The negative controls were purchased from Ribobio Co. For delivery of the methylated and cholesterol-conjugated RNA, 5 nmol RNA in 0.1 mL saline buffer was injected into the tail vein of NOD/SCID mice once every 3 days for 2-4 weeks.

Cell proliferation and colony formation

Cells were plated into 96-well plates at 3,000 cells/well. Ten microliters of WST-1 (Roche) was added per well and incubated for 2 hours at 37°C. Then the reaction mixture was measured in a microplate reader at 490 nm. For colony formation, 2 × 10³ cells were plated into 6-well plates. Two weeks later, cells were fixed, stained with crystal violet, and photographed.

LncRNA microarray analysis

Total RNA was isolated using TRIzol (Life Technologies). RNA quality and quantity were measured using an Agilent 2200 Bioanalyzer. The antisense RNA was generated using the Amino Allyl MessageAmp II kit (Life Technologies) and labeled with Cy5. Hybridization was carried out using the RiboBio RiboArray™ lncDETECT™ Human Array 1 × 40K (Ribobio Co.). The slides were scanned using the Axon GenePix 4000B microarray scanner (Axon Instruments). Scanned images were then imported into GenePix Pro Microarray Image Analysis Software for analysis. The raw data have been deposited in GEO under the accession code GSE147441.

Ethics

The Ethics Committee of Peking University Health Science Center has approved the mouse experiments (permit number: LA2014122) for this study. The handling of mice was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975 and the revised version of 1983. We also referred to the procedures by Workman and colleagues.

Statistical analysis

All data are presented as means ± SD of results from three independent experiments. Statistical significance was determined by the two-tailed Student t test. P < 0.05 was considered statistically significant.

Results

LncRNA PHACTR2-AS1 is a target gene of EZH2

Similar to protein-coding genes, lncRNAs are also subject to epigenetic regulation, especially H3K27 methylation mediated by EZH2 (21). To identify EZH2-regulated lncRNAs, we initially profiled the expression of lncRNAs in a pair of breast cancer cell lines (MCF7-control vs. MCF7-EZH2), and the top 10 downregulated lncRNAs are shown (Supplementary Fig. S1A). By examining the levels of these ten lncRNAs in human breast cancer tissues, PHACTR2-AS1 was found to be markedly downregulated (Supplementary Fig. S1B), and low PHACTR2-AS1 expression was observed in a panel of cell lines with the mesenchymal phenotype (Supplementary Fig. S1C).
To verify whether PHACTR2-AS1 was regulated by EZH2, we first estimated the levels of PHACTR2-AS1 with or without EZH2 overexpression. EZH2 overexpression led to inhibited PHACTR2-AS1 expression ( Fig. 1A; Supplementary Fig. S1D). In contrast, EZH2 knockdown restored the expression of PHACTR2-AS1 ( Fig. 1B and C; Supplementary Fig. S1E). Consistently, when cells were treated with GSK343, an EZH2 inhibitor, PHACTR2-AS1 expression was increased (Supplementary Fig. S1F and S1G). We then cloned the promoter of PHACTR2-AS1 into the upstream of firefly luciferase-coding region and examined the luciferase activity. Knockdown of EZH2 directly activated PHACTR2-AS1 promoter transcription ( Supplementary Fig. S1H). Furthermore, to understand how PHACTR2-AS1 was regulated by EZH2, we detected the occupancy of both EZH2 and H3K27 trimethylation at PHACTR2-AS1 promoter and found that EZH2-mediated H3K27 trimethylation indeed occurred at PHACTR2-AS1 promoter (Fig. 1D). To scrutinize how EZH2 was recruited to PHACTR2-AS1 promoter, we detected the regulation of some known EZH2-interacting transcription factors on PHACTR2-AS1 expression, including c-myc, Twist, YY1, and E2F6 (22). Among them, only YY1 knockdown can lead to the increase of PHACTR2-AS1 level (Fig. 1E). Transcription factor Yin Yang 1 (YY1) was reported to interact with EZH2 and recruit it to target genes (23). Our data found that YY1 occupied the promoter of PHACTR2-AS1 (Fig. 1F). YY1 knockdown could decrease the occupancy of EZH2 and H3K27 trimethylation at PHACTR2-AS1 promoter (Fig. 1G), indicating that YY1 is involved in the recruitment of EZH2 to PHACTR2-AS1 promoter. All these results showed that EZH2 suppressed PHACTR2-AS1 expression, identifying PHACTR2-AS1 as a target gene of EZH2 in breast cancer cells. PHACTR2-AS1 targets ribosome DNA and inhibits ribosome synthesis To explore the biological functions of PHACTR2-AS1, we first examined its subcellular localization. FISH followed by immunofluorescence showed a strong colocalization between PHACTR2-AS1 and nucleolar marker proteins (fibrillarin and UBF1), indicating that PHACTR2-AS1 was enriched in the nucleolus ( Fig. 2A; Supplementary Fig. S2A). The nucleolus contains abundant rDNA repeats, suggesting that PHACTR2-AS1 may be associated with rDNA. We found a marked enrichment of PHACTR2-AS1 at the D0 region, demonstrating that PHACTR2-AS1 directly bound the rDNA promoter ( Fig. 2B; Supplementary Fig. S2B). All rDNA promoter sequences in different chromosomes are identical. To discern whether PHACTR2-AS1 binding to rDNA blocked the occupancy of RNA Pol I at rDNA, we generated stable cell lines with PHACTR2-AS1 overexpression or downregulation ( Supplementary Fig. S2C) and measured the effect of PHACTR2-AS1 on the occupancy of RPA135, a subunit of the Pol I complex. Indeed, the presence of PHACTR2-AS1 inhibited the occupancy of RNA Pol I at the rDNA promoter (Fig. 2C), whereas PHACTR2-AS1 depletion released the inhibition of RNA Pol I occupancy ( Fig. 2D), strongly suggesting that PHACTR2-AS1 suppressed rRNA transcription. Next, we examined the regulation of PHACTR2-AS1 on the 47S precursor rRNA (pre-rRNA), a direct product of rRNA transcription. PHACTR2-AS1 overexpression led to a pronounced reduction of pre-rRNA expression in breast cancer cell lines ( Fig. 2E; Supplementary Fig. S2D). Treatment of PHACTR2-AS1 ASOs resulted in the increase of pre-rRNA, indicating the regulatory effect of PHACTR2-AS1 on pre-rRNA ( Fig. 2F; Supplementary Fig. S2E). 
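Expression changes such as those in Fig. 1A-C are commonly derived from qRT-PCR Ct values with the 2^-ΔΔCt method. The sketch below assumes GAPDH as the reference gene and uses made-up Ct values; it is not the authors' analysis script.

```python
# 2^-ddCt relative quantification; Ct values below are illustrative only.
samples = {
    "control": {"PHACTR2-AS1": 27.8, "GAPDH": 17.2},
    "shEZH2":  {"PHACTR2-AS1": 25.9, "GAPDH": 17.1},
}

def relative_expression(sample, calibrator="control",
                        target="PHACTR2-AS1", ref="GAPDH"):
    d_ct = samples[sample][target] - samples[sample][ref]            # delta Ct
    d_ct_cal = samples[calibrator][target] - samples[calibrator][ref]
    return 2 ** -(d_ct - d_ct_cal)                                   # 2^-(ddCt)

for name in samples:
    print(f"{name}: {relative_expression(name):.2f}-fold vs control")
```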
Furthermore, we detected changes in nascent pre-rRNA and found that PHACTR2-AS1 depletion led to increased nascent pre-rRNA transcription, suggesting that PHACTR2-AS1 specifically regulated pre-rRNA production (Fig. 2G). In agreement, luciferase reporter assays showed that PHACTR2-AS1 depletion resulted in significant activation of the rDNA promoter, indicating that PHACTR2-AS1 suppressed rRNA transcription (Fig. 2H). Production of pre-rRNA in the nucleolus is the initiating step of ribosome biogenesis, and the mature 80S ribosome, composed of a 40S and a 60S subunit, is ultimately formed. We found that PHACTR2-AS1 overexpression resulted in an obvious decrease in the levels of 40S, 60S, and 80S ribosomes, whereas PHACTR2-AS1 knockdown led to a marked increase in all three (Fig. 2I). Collectively, these results demonstrated that PHACTR2-AS1 prevented rDNA transcription and ultimately inhibited ribosome synthesis.

[Figure 1 legend. PHACTR2-AS1 is a target gene of EZH2. A, empty vector or Flag-EZH2 plasmid transfected, followed by qRT-PCR and Western blot analysis. B and C, control or EZH2 siRNA pools transfected into MDA-MB-231 (B) and Hs578T (C) cells, followed by qRT-PCR and Western blot analysis. D and F, ChIP assays on lysates from Hs578T cells, with qPCR quantification of ChIP-enriched DNA. E, siRNA pools transfected into Hs578T cells, followed by qRT-PCR; Western blot analysis showed the efficacy of the YY1 siRNA pool. G, ChIP assay on Hs578T cells treated with control or YY1 siRNA pools for 48 hours. Data represent the mean ± SD; *, P < 0.05; **, P < 0.01; ***, P < 0.001 (two-tailed Student t test).]

[Figure 2 legend, panels C-I. C and D, ChIP assays on lysates from Hs578T-control/PHACTR2-AS1 or MCF7-control/PHACTR2-AS1-shRNA cells. E, empty or PHACTR2-AS1-overexpressing vector transfected into Hs578T cells, followed by qRT-PCR. F, LNA-based ASOs transfected into MCF7 cells, followed by qRT-PCR; ASO efficacy was examined in both nucleus and cytoplasm. G, ASO-treated MCF7 cells pretreated with 5-FU for 4 hours and labeled with 4sU during the last hour; nascent pre-rRNA was measured by qRT-PCR and normalized to nascent GAPDH mRNA. H, luciferase reporter assays in ASO-treated MCF7 cells, normalized to Renilla. I, absorbance of 40S, 60S, and 80S ribosomes detected at 254 nm in the stable cells. Means ± SD from three independent experiments; *, P < 0.05; **, P < 0.01; ***, P < 0.001.]

Because PHACTR2-AS1 is a target gene of EZH2, we examined the effect of EZH2 on pre-rRNA and found that overexpression of EZH2 enhanced the level of pre-rRNA (Supplementary Fig. S2F). We also observed silencing of pre-rRNA upon EZH2 knockdown, and this silencing could be erased by PHACTR2-AS1 ASOs (Supplementary Fig. S2G). Furthermore, EZH2 knockdown sharply decreased the levels of 40S, 60S, and 80S ribosomes (Supplementary Fig. S2H). These results suggested that EZH2 may promote ribosome synthesis by suppressing PHACTR2-AS1.

PHACTR2-AS1 promotes H3K9 methylation of rDNA by recruiting SUV39H1
Given that various epigenetic modifications occur within rDNA loci, we first analyzed the presence of histone modifications at rDNA in breast cancer cells.
Among those modifications, H3K9me2 and H3K9me3 were markedly enriched throughout the rDNA-repeat region ( Supplementary Fig. S3A). Regions rich in H3K9 me2/3 display heterochromatin-mediated gene silencing (24). We therefore reasoned that H3K9 methylation may mediate the PHACTR2-AS1-dependent silencing of rRNA genes. We observed that PHACTR2-AS1 overexpression enhanced the abundance of H3K9me2/3 at rDNA promoter ( Fig. 3A; Supplementary Fig. S3B). Conversely, PHACTR2-AS1 knockdown in the nucleus led to reduced occupancy of both H3K9me2/3 at rDNA promoter ( Fig. 3B; Supplementary Fig. S3C). These results indicated that PHACTR2-AS1 suppressed rRNA transcription by enhancing the occupancy of methylated H3K9 at rDNA loci. Furthermore, we found that SUV39H1 siRNA prevented PHACTR2-AS1-induced suppression of pre-rRNA (Fig. 3H). The SUV39H1 inhibitor chaetocin partly reversed the silencing of pre-rRNA resulting from PHACTR2-AS1 overexpression (Fig. 3I). Consistently, a methyltransferase activity-deficient mutant (R235H) of SUV39H1 did not obviously decrease pre-rRNA ( Supplementary Fig. S3G), indicating that the methyltransferase activity of SUV39H1 was required for SUV39H1-dependent suppression of pre-rRNA. Furthermore, SUV39H1 depletion partly overcame the inhibition of RNA Pol I occupancy at rDNA resulting from PHACTR2-AS1 overexpression (Supplementary Fig. S3H). These data suggested that PHACTR2-AS1 recruited SUV39H1 and blocked RNA Pol I occupancy. Upon depletion of PHACTR2-AS1, rRNA transcription is reactivated, leading to increased pre-rRNA and ribosomes. PHACTR2-AS1 depletion induced genome instability Cells lacking H3K9 methylation display disorganization of the nucleolar structure (3). Because PHACTR2-AS1 deletion might help disrupt the nucleolar integrity, we analyzed the cellular localization of nucleolar protein fibrillarin to visualize alteration of nucleolar structure. Interestingly, the nucleoli in control cells presented large spherical masses, whereas cells lacking PHACTR2-AS1 were characterized by fragmented nucleoli with irregular masses of reduced size. Statistical analyses confirmed that the number of fragmented nucleoli increased significantly in PHACTR2-AS1-deficient cells versus control cells (Fig. 4A-C; Supplementary Fig. S4A and S4B), indicating that PHACTR2-AS1 depletion resulted in fragmentation of nucleolar structure. Next, we examined the effect of PHACTR2-AS1 on the formation of phosphorylated histone H2AX (gH2A.X) foci, which is critical for DNA damage responses and genome stability (27). PHACTR2-AS1 depletion resulted in a global increase of gH2A.X foci in the nucleus ( Fig. 4D and E; Supplementary Fig. S4C and S4D). In contrast, overexpression of PHACTR2-AS1 led to the decrease both of fragmented nucleoli and gH2A.X foci ( Supplementary Fig. S4E-S4H). Furthermore, ChIP analysis indicated that PHACTR2-AS1 knockdown led to a significant increase in gH2A.X levels at rDNA repeats (Fig. 4F), suggesting that PHACTR2-AS1 depletion potentially resulted in increased DNA damage. Furthermore, the number of rDNA copies decreased significantly in PHACTR2-AS1-shRNA cells compared with control cells (Fig. 4G), indicating that PHACTR2-AS1 depletion may disrupt rDNA stability. We also found that PHACTR2-AS1-depleted cells showed marked genomic abnormalities, including micronuclei and abnormal mitoses ( Fig. 4H and I). These findings suggested that PHACTR2-AS1 depletion disrupted the rDNA integrity, resulting in genome instability. 
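For per-cell readouts like the fragmented-nucleoli counts (n = 100 cells per condition), one simple way to compare two conditions is a 2×2 Fisher's exact test on fragmented vs. intact counts. The counts below are invented, and the original study reports Student t tests across replicate experiments rather than this per-cell test; the sketch is only a hedged illustration.

```python
from scipy import stats

# Illustrative counts out of 100 scored cells per condition.
control_frag, control_total = 12, 100
shRNA_frag, shRNA_total = 38, 100

table = [
    [control_frag, control_total - control_frag],   # control: fragmented, intact
    [shRNA_frag, shRNA_total - shRNA_frag],          # shRNA:   fragmented, intact
]
odds_ratio, p = stats.fisher_exact(table)
print(f"fragmented nucleoli: {control_frag}% vs {shRNA_frag}%, "
      f"Fisher exact p = {p:.3g}")
```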
PHACTR2-AS1 suppresses breast cancer cell growth and metastasis
Hyperactivation of ribosome biogenesis is associated with uncontrolled cancer cell proliferation, and genome instability induces malignant transformation of cells (6,28). PHACTR2-AS1 depletion markedly promoted proliferation, migration, and invasion of breast cancer cells (Fig. 5A-D). In contrast, PHACTR2-AS1 overexpression significantly inhibited cell proliferation, migration, and invasion (Fig. 5E-H). Furthermore, UBF1, a transcription factor of rDNA, was overexpressed to activate ribosome synthesis (Supplementary Fig. S5A). PHACTR2-AS1 overexpression overcame the increase in cell proliferation and migration resulting from UBF1 overexpression (Supplementary Fig. S5B and S5C). The increase in cell migration induced by PHACTR2-AS1 depletion could be rescued by actinomycin D, an inhibitor of rDNA transcription (Supplementary Fig. S5D), suggesting that dysregulation of rDNA synthesis mediates the role of PHACTR2-AS1 in cancer cells. Enhancement of genome instability induced by SUV39H deficiency promotes tumorigenesis, and PHACTR2-AS1 overexpression could rescue the effect of SUV39H1 knockdown (Supplementary Fig. S5E and S5F), indicating that SUV39H1 is involved in the role of PHACTR2-AS1 in cancer cells. To examine the role of PHACTR2-AS1 in vivo, control cells and PHACTR2-AS1-overexpressing cells were orthotopically implanted into the mammary fat pads of mice (Fig. 5I). After 4 weeks, tumor sizes were measured; tumors induced by PHACTR2-AS1-overexpressing cells were significantly smaller than those developed from control cells. Tumor volumes and weights were monitored weekly, revealing that PHACTR2-AS1 significantly inhibited tumor growth. Next, control cells and PHACTR2-AS1-overexpressing cells were injected intravenously into mice, and lung metastasis was determined at 6 weeks post-injection (Fig. 5J). PHACTR2-AS1 overexpression led to decreased lung metastasis of cancer cells. The efficacy of PHACTR2-AS1 overexpression was verified (Fig. 5K). Taken together, these results indicated that PHACTR2-AS1 suppressed breast cancer growth and metastasis in vitro and in vivo.

[Figure 3 legend. PHACTR2-AS1 promotes H3K9 methylation of ribosomal DNA by recruiting SUV39H1. A and B, ChIP assays on lysates from stable Hs578T-control/PHACTR2-AS1 or MCF7-control/PHACTR2-AS1-shRNA cells. C, RIP assay of endogenous SUV39H1 immunoprecipitated from MCF7 cells with an anti-SUV39H1 antibody; enrichment was determined as the amount of RNA recovered with SUV39H1 relative to IgG. D, costaining of PHACTR2-AS1 (RNA FISH, Alexa Fluor 488) and SUV39H1 (immunofluorescence, Alexa Fluor 568) in Hs578T cells. E-G, RNA pull-down assays incubating purified GST-SUV39H1, its different regions, or different PHACTR2-AS1 fragments with biotin-labeled RNA; biotin-labeled antisense PHACTR2-AS1 and GAPDH mRNA served as negative controls. H, empty or PHACTR2-AS1-overexpressing plasmid transfected into Hs578T cells with control or SUV39H1 siRNA, followed by qRT-PCR and Western blot analysis. I, PHACTR2-AS1-overexpressing cells treated with different doses of chaetocin for 48 hours, followed by qRT-PCR and Western blot analysis. Mean ± SD; *, P < 0.05; **, P < 0.01; ***, P < 0.001, Student t test; ns, nonsignificant.]

[Figure 4 legend. PHACTR2-AS1 depletion induced genome instability. A-C, immunofluorescence (confocal Z stack) and Western blot analysis of fibrillarin in stable MCF7-control/PHACTR2-AS1-shRNA cells; cells with fragmented nucleoli (arrows) were counted for statistical analysis (n = 100). D and E, immunofluorescence of γH2A.X to assess foci formation, with representative images and statistical analysis. F, ChIP assays with an anti-γH2A.X antibody. G, qPCR quantitation of rDNA repeats in genomic DNA, normalized to GAPDH. H and I, DAPI staining of stable MCF7-control/PHACTR2-AS1-shRNA cells; micronuclei and abnormal chromosomes (including anaphase bridges and lagging chromatin) were scored in interphase or anaphase cells (n = 100), respectively (arrows, individual abnormalities). *, P < 0.05; **, P < 0.01; ***, P < 0.001.]

[Figure 5 legend, continued. Representative bioluminescent images of tumor specimens in vivo and in vitro, with bioluminescence-based quantitation of primary tumor sizes; tumor volumes were measured every week and tumor weights were determined after dissection. J, both cell transfectants were injected into the tail vein of mice (n = 5/group) and lung metastasis was monitored at 6 weeks post-injection, with representative bioluminescent images, bioluminescent quantitation, and H&E-stained lung metastasis sections. K, PHACTR2-AS1 levels in both cell transfectants were verified by qRT-PCR. *, P < 0.05; **, P < 0.01; ***, P < 0.001.]

[Figure 6 legend. PHACTR2-AS1-30nt-RNA inhibited cancer cell growth and metastasis. A and B, empty, full-length PHACTR2-AS1, or PHACTR2-AS1 1766-1795 plasmid transfected into Hs578T cells, followed by qRT-PCR and ribosome detection. C-E, Hs578T cells transfected with empty or PHACTR2-AS1 1766-1795 plasmid for proliferation, migration, and invasion assays. F, Cy3-labeled PHACTR2-AS1-30nt-RNA transfected into Hs578T cells, which were fixed and stained with anti-fibrillarin. G-I, MDA-MB-231-Luc-D3H2LN cells inoculated onto the mammary fat pad of mice (n = 8/group); after 2 weeks, the mice were divided into two groups and injected with control RNA or PHACTR2-AS1-30nt-RNA through the tail vein once every 3 days, with representative in vivo bioluminescence images, bioluminescence-based quantitation of primary tumor sizes, and ribosomes extracted and fractionated from tumor specimens. J-L, MDA-MB-231-Luc-D3H2LN cells injected into mice through the tail vein, followed by tail-vein injection of control RNA or PHACTR2-AS1-30nt-RNA once every 3 days (n = 8/group), with representative bioluminescence images, bioluminescent quantitation of lung metastasis, and H&E-stained lung metastasis sections. *, P < 0.05; **, P < 0.01; ***, P < 0.001.]

A synthesized PHACTR2-AS1 fragment efficiently inhibited tumor growth and lung metastasis
Given that the region of PHACTR2-AS1 from nucleotides 1766-1795 was responsible for recruiting SUV39H1, we investigated whether this short region can function similarly to the full-length PHACTR2-AS1.
Our results showed that PHACTR2-AS1 1766-1795 contributed to decreases in pre-rRNA and ribosome levels and inhibited cell proliferation, migration, and invasion ( Fig. 6A-E). We also synthesized PHACTR2-AS1-30nt-RNA (nucleotides 1766-1795) labeled with fluorophore Cy3 and observed its location in the nucleolus (Fig. 6F). To evaluate the possible therapeutic potential of PHACTR2-AS1-30nt-RNA, we synthesized a durable PHACTR2-AS1-30nt-RNA ( Supplementary Fig. S6A). First, we injected the biotin-labeled PHACTR2-AS1-30nt-RNA into tail vein of mice bearing xenograft tumors. The location of biotin-PHACTR2-AS1-30nt-RNA was showed by avidin staining of freezing section. At about 72 hours, the intensity of biotin-PHACTR2-AS1-30nt-RNA in tumor cells reached a highest point, then began to decline and disappeared at 96 hours (Supplementary Fig. S6B and S6C), indicating that the retention-time of PHACTR2-AS1-30nt-RNA in tumor cells is about 72 hours. In addition, biotin-PHACTR2-AS1-30nt-RNA was clearly observed in tumor cells of lungs at 72 hours after injection ( Supplementary Fig. S6D). These data showed that the biotin-PHACTR2-AS1-30nt-RNA can be taken up by the tumor cells of both mice xenograft tumors and lung metastasis. Furthermore, MDA-MB-231-Luc-D3H2LN cells were orthotopically implanted into mammary fat pad of mice (Fig. 6G). Tumor growth was significantly slower in the PHACTR2-AS1-RNA-treated group compared with that in the control RNA-treated group (Fig. 6H), suggesting that long-lasting RNA fragments can effectively inhibit tumor growth. Importantly, PHACTR2-AS1-30nt-RNA decreased levels of 40S, 60S, and 80S ribosomes extracted from xenografts (Fig. 6I), indicating that ribosome synthesis mediated the inhibitory effect of PHACTR2-AS1 on tumor growth. In addition, 30-nt-RNA treatment enhanced the abundance of H3K9me2/3 at the rDNA loci ( Supplementary Fig. S6E and S6F), indicating the specificity and activity of the 30-nt-RNA. To further examine the role of PHACTR2-AS1-30nt-RNA in lung metastasis, MDA-MB-231-Luc-D3H2LN cells were injected into tail vein of mice, followed by tail vein injection of control RNA or PHACTR2-AS1-30nt-RNA for 4 weeks (Fig. 6J). Bioluminescence imaging showed that PHACTR2-AS1-30nt-RNA decreased breast cancer cell metastasis to the lungs ( Fig. 6K and L), suggesting that PHACTR2-AS1-30nt-RNA is of therapeutic value for blocking breast cancer lung metastasis. PHACTR2-AS1 is downregulated in patients with breast cancer To assess the role of PHACTR2-AS1 in patients with breast cancer, we stained PHACTR2-AS1 in 90 pairs of breast cancer tissues and normal tissues and found that normal breast tissues exhibited stronger PHACTR2-AS1 staining than cancer tissues ( Fig. 7A and B). In addition, PHACTR2-AS1 levels were higher in normal tissues than in some types of cancers, including pancreatic, kidney, lung, and rectal cancers ( Supplementary Fig. S7A). We also found that the patients with elevated PHACTR2-AS1 expression showed better overall survival than those with reduced PHACTR2-AS1 expression ( Fig. 7C; Supplementary Fig. S7B). Collectively, these findings indicated that PHACTR2-AS1 expression was suppressed in breast cancer and that lower PHACTR2-AS1 expression may predict poor outcomes for patients with breast cancer. Furthermore, we detected the EZH2 protein level by IHC staining and PHACTR2-AS1 RNA level by RNA in situ hybridization in human breast cancer tissue arrays. The expression of EZH2 was negatively correlated with the expression of PHACTR2-AS1 ( Fig. 
7D and E). The level of PHACTR2-AS1 is higher in the group of low expression of EZH2 than that in the group of high expression of EZH2 (Fig. 7F). Likewise, the level of EZH2 is also higher in the group of low expression of PHACTR2-AS1 than that in the group of high expression of PHACTR2-AS1 (Fig. 7F), suggesting that the protein level of EZH2 is negatively correlated with the RNA level of PHACTR2-AS1 in breast cancer tissues. Furthermore, we used 17 pairs of fresh breast cancer tissues and normal tissues to assess the RNA level of both PHACTR2-AS1 and EZH2 and the ribosomal level. PHACTR2-AS1 level was downregulated in breast cancer tissues compared with the normal tissues, whereas the RNA level of EZH2 was upregulated in cancer tissues (Fig. 7G), indicating the negative relation between PHACTR2-AS1 and EZH2 again (Fig. 7H). Next, we detected the ribosomal levels of the 17 patients and found that the ribosomes are markedly increased in 13 patients with breast cancer (Fig. 7I; Supplementary Fig. S7C). Collectively, these results suggested that EZH2 may enhance ribosome synthesis through silencing PHACTR2-AS1 expression to promote breast cancer development. Discussion Here, we report that PHACTR2-AS1 binds to rDNA and recruits SUV39H1, triggering H3K9 methylation at rDNA. H3K9 modification blocked the binding of RNA Pol I to rDNA, resulting in the suppression of ribosome synthesis. Upon depletion of PHACTR2-AS1 induced by EZH2-mediated H3K27 methylation, rRNA transcription is reactivated, leading to increased ribosome synthesis and genomic instability, both of which promote cancer proliferation and metastasis (Fig. 7J). Epigenetic regulation of rRNA genes is involved in dysregulating ribosome biogenesis during the malignant transformation of cells (29,30). NoRC (nucleolar remodeling complex) is critical for maintaining the constitutively silent state of an rDNA cluster by recruiting DNMT and HDAC (2). eNoSC (energy-dependent nucleolar silencing complex) contains SIRT1, SUV39H1, and NML (31). Under glucose starvation, eNoSC triggers rDNA silencing, thereby protecting cells from energy deprivation-dependent apoptosis. Thus, eNoSC is mainly responsible for precisely regulating rRNA synthesis under different energy conditions. Although lncRNA PHACTR2-AS1 interacts with SUV39H1, PHACTR2-AS1 is not required for rRNA gene silencing in response to glucose deprivation ( Supplementary Fig. S7D), suggesting that PHACTR2-AS1 is not a component of the eNoSC complex. Here, we found EZH2-mediated H3K27me3 inhibited PHACTR2-AS1 expression. EZH2 has been regarded as a biomarker for aggressive breast cancer (32). Blocking EZH2-mediated H3K27me3 disrupts the silent state of rRNA genes associated with PHACTR2-AS1, uncovering the relationship between EZH2 and rRNA synthesis. Hyperactivation of ribosome biogenesis was reported to play significant roles in cancer initiation and progression (33,34). Cancer tissues have lower rDNA copy numbers than normal tissues, despite increased rRNA synthesis and proliferation (35). Indeed, depletion of PHACTR2-AS1 led to the decreased rDNA copies, although rRNA synthesis was activated. It is possible that depletion of PHACTR2-AS1 induced the switch of silent rDNA to active rDNA or upregulated the transcription activity of active rDNA copies. PHACTR2-AS1 depletion can reduce SUV39H1 recruitment to rRNA genes, resulting in heterochromatin relaxation at rDNA. 
Heterochromatin relaxation may cause fragmentation of the nucleolar structure, DNA double-strand breaks, and loss of rDNA repeats, all of which lead to genome instability (36). Thus, PHACTR2-AS1 depletion enhances protein synthesis rates and increases genomic instability, both of which contribute to malignant cell transformation. Consistent with PHACTR2-AS1, depletion of TIP5 (a large subunit of NoRC) induced genomic instability and concurrently enhanced rRNA transcription, leading to malignant transformation (37). Chen and colleagues reported that PHACTR2-AS1 (also known as NR027113) regulated the PTEN/PI3K/AKT signaling pathway in hepatocellular carcinoma. Recently, they reported that this lncRNA also regulated ERK and AKT signaling and renamed it LncIHS according to its function in hepatocellular carcinoma (38,39). Since no obvious nucleolar localization was shown in HCC cells, PHACTR2-AS1 may have no effect on ribosome synthesis in HCC, which could explain the dichotomous roles of PHACTR2-AS1 in hepatocellular cancer and breast cancer. We have demonstrated that rDNA is the direct target of PHACTR2-AS1 in breast cancer cells. However, given the functional diversity of lncRNAs, we cannot exclude that there are other targets of PHACTR2-AS1 in breast cancer cells, which leaves an opportunity for future study. Ribosome biogenesis has been confirmed as a potential target for cancer treatment (40)(41)(42). Several features of lncRNAs determine their potential as therapeutic targets (43). One of the most widely used strategies is to block the function of cancer-upregulated lncRNAs by applying specific ASOs against the lncRNA (44,45). Another strategy is to restore lncRNA levels with synthetic RNA molecules, which is aimed at cancers with downregulated lncRNAs. However, this strategy has so far been difficult to apply because of the lengths of lncRNAs. In this study, we explored the functional fragment of an lncRNA and synthesized a durable PHACTR2-AS1-30nt-RNA molecule. Synthetic PHACTR2-AS1-30nt-RNA molecules formed a hairpin structure and mimicked the full-length lncRNA by inhibiting tumor growth and lung metastasis, providing insight into the potential of PHACTR2-AS1 as a therapeutic target in breast cancer.

Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
Luganda Text-to-Speech Machine

In Uganda, Luganda is the most widely spoken native language. It is used for communication in informal as well as formal business transactions. The development of technology startups related to TTS globally has mainly involved languages like English, French, etc. These languages are supported in TTS engines by Google, Microsoft, and others, allowing developers in those regions to innovate TTS products. Luganda is not supported because the language has not been built and trained on these engines. In this study, we analyzed the structure and constructions of the Luganda language and then proposed and developed a Luganda TTS. The system was built and trained using locally sourced Luganda text and audio. The engine is now able to capture text and read it aloud. We tested its accuracy using MRT and MOS. The MRT and MOS test results are quite good, with MRT giving better results; the overall score was 71%. This study will enhance previous solutions to NLP gaps in Uganda, as well as provide raw data so that other research in this area can take place.

Introduction
Luganda is part of the Niger-Congo family of languages, sometimes called the Bantu languages. It is a native language widely spoken by the Ganda people in Uganda. It is spoken by more than five million people, giving it the highest number of increasingly fluent speakers. Today, we witness advances in documented Luganda literature published in magazines, newspapers, etc. In 2013, Moses and Eno-Abasi emphasized that most African languages are resource-limited, with few or no linguistic resources such as textbooks [8]. Luganda is no exception, although a few researchers have done some work on NLP, e.g. [5]. This research draws on a demand for innovative technology products for Luganda linguistics, e.g. learning systems for linguistics students to practice with, speaking websites, and mobile apps. These products are the reason a Luganda Text-to-Speech Machine is needed. We used the MARYTTS (Modular Architecture for Research on speech sYnthesis) engine.

Related Work
A Text-to-Speech (TTS) synthesizer is a computer-based system that can read text aloud automatically. A speech synthesizer can be implemented in both hardware and software [1]. Speech synthesis complements other language technologies such as speech recognition, which aims to convert speech into text, and machine translation, which converts writing or speech in one language into writing or speech in another [3]. There are seven levels at which speech can be analyzed: acoustics, phonetics, phonology, morphology, syntax, semantics, and pragmatics [17]. TTS systems are divided into an NLP component and a digital signal processing component. The NLP component captures the raw input text, analyzes and processes it, and combines information used later for the synthesis; it performs the phonetic transcription, and its output is usually a sequence of phones supplied with prosody markers. The synthesis component then performs speech synthesis at the signal level, based on information from the NLP module.

Unit selection-based synthesis
Units are phonemes, the units of speech sound that distinguish one utterance from another in a language; these units of sound are either vowels or consonants of a particular language. To create a speech-unit inventory, a speech corpus is required and segmentation must be performed. Segmentation is the process of finding the boundaries of the selected speech units in the speech data.
The segmentation can be manual or automatic. The automatic segmentation is mostly used especially on large speech units' inventory. There are two mainly used methods of the automatic segmentation: a Hidden Markov Models (HMM)-based method and a Dynamic Time Warping (DTW)-based method. Because of the better consistency of segmentation the HMM-based method are preferred. Fig 1: The illustration of the segmentation of the word butiko on the speech units. [16].In unit selection, a speech database is designed such that each unit is available in various prosodic and phonetic contexts [18]. The char (text) input is captured as input to enable the utterance as output which defines the string of phonemes (vowels and consonants) required to synthesize the text, and is annotated with prosodic features (pitch, duration and power) which specify the desired speech output. Other Speech Synthesis Work Done in Uganda on Ugandan Languages The Language speech synthesis work and research done in Uganda is mainly related to natural language processing like software localization. Not much has been done to translate Ugandan local languages using a Text-to-Speech Machine. There have been software localization ventures for the Luganda [6] and Runyakitara languages. This was done on the Google interface translating it to Runyakitara [5] and [10]. The motivation behind this is that it can pave a way in bridging the Digital Divide between countries. People are motivated to embrace and participate in technologies that are in a language they are most familiar with, mainly their mother tongue [5]. Conclusion There is quite a lot of literature about Text-To-Speech machine focused on both natural language processing and speech synthesis. The researchers have found literature about Software localization in the Luganda language. However, there is no literature found about Text-To-Speech on Ugandan Languages. It is on this research gap analysis that this research has been done and the results are promising. Methodology This research shows a Luganda Text-to-Speech system using MARYTTS. The MARYTTS TTS (Text-To-Speech) synthesis system is a flexible and modular tool for research, development and teaching in the domain of Text-To-Speech synthesis [4]. MARYTTS [8] [9][7] [15] [14] is an open-source project, it is written in Java and includes a number of useful tools for adding support for a new language and adding new voices. The aim of these tools is to simplify the task of building new resources for TTS, their effectiveness can be seen from the fact that when MARYTTS was born it was originally developed for the German language; nowadays it makes available voices and support for the following languages: US English, British English, German, Turkish, Russian, Telugu, etc. Plain Text Plain text is the most basic, and maybe most common input format. Nothing is known about the structure or meaning of the text. The text is embedded into a MaryXML document for the following processing steps SABLE -annotated text and SSMLannotated text SABLE is a markup language for annotating texts in view of speech synthesis, and SSML is a markup language for annotating texts in view of speech synthesis. It was proposed by the W3C as a standard. 
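As a small illustration of the kind of annotation SSML provides, the snippet below assembles a minimal SSML string in Python. The element names follow the W3C SSML specification, but the xml:lang value and the sample sentence are placeholders, and how much of this markup a given MARY version honours is not something the text above specifies.

```python
from xml.sax.saxutils import escape

def to_ssml(sentence: str, number: int) -> str:
    """Wrap a sentence in minimal SSML, marking a number to be read as a cardinal."""
    return (
        '<speak version="1.0" xml:lang="lg">'          # "lg" locale is an assumption
        f"<s>{escape(sentence)} "
        f'<say-as interpret-as="cardinal">{number}</say-as>'
        "</s></speak>"
    )

# Replace the placeholder sentence with real Luganda text before sending it to a synthesizer.
print(to_ssml("sample Luganda sentence here", 150))
```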
Speech synthesis markup languages are useful for providing information about the structure of a document, the meaning of numbers, or the importance of words, so that this information can be appropriately expressed in speech (such as pausing in the right places, pronouncing telephone numbers appropriately, or putting emphasis on the word carrying focus). Such information may be provided by a human user or, more likely, by other processing units such as natural language generators, email processors, or HTML readers. Optional Markup Parser The MARY text-to-speech and markup-tospeech system accepts both plain text input and input marked up for speech synthesis with a speech synthesis markup language such as SA-BLE or SSML. . Both SABLE and SSML are transformed to MaryXML which reflects the modeling capabilities of this particular TTS system. MaryXML is based on XML Tokenizer The tokenizer cuts the text into tokens, i.e. words and punctuation marks. It uses a set of rules determined through corpus analysis to label the meaning of dots based on the surrounding contex Text Normalisation Module In the preprocessing module, organizes the input sentences into manageable lists of words. It identifies numbers, abbreviations, acronyms and idiomatic s and transforms them into full text when needed, those tokens for which spoken form does not entirely correspond to the written form are replaced by a more pronounceable form. Numbers: The pronunciation of numbers will highly depend on their meaning. Different number types, such as cardinal and ordinal numbers, currency amounts, or telephone numbers, must be identified as such, either from input markup or from context, and replaced by appropriate token strings. Abbreviations: Two main groups of abbreviations are distinguished: Those that are spelled out, such as "USA", and those that need expansion. Phonemisation The output of the phonemisation component contains the phonemic transcription. The Speech Assessment Methods Phonetic Alphabet (SAMPA) phonetic alphabet will be created for Luganda and adopted to be used for each token, as well as the source of this transcription (simple lexicon lookup, lexicon lookup with compound analysis, letter-to-sound rules, etc.). Inflection endings: This module deals with the ordinals and abbreviations which have been marked during preprocessing as requiring an appropriate inflection ending. The part-of-speech information added by the tagger tells whether the token is an adverb or an adjective. In addition, information about the boundaries of noun phrases has been provided by the chunker, which is relevant for adjectives. Lexicons: The pronunciation lexicon contains the grapheme form, a phonemic transcription, a special marking for adjectives, and the inflection information Letter-to-sound conversion: Unknown words that cannot be phonemised with the help of the lexicon are analyzed by a "letterto-sound conversion" algorithm. Letter-to-Sound rules are statistically trained on the MARY lexicon. Prosody Module The prosody rules were derived through corpus analysis and are mostly based on part-of-speech and punctuation information. Some parts-ofspeech, such as nouns and adjectives, always receive an accent; the other parts-of-speech are ranked hierarchically (roughly: full verbs > modal verbs > adverbs), according to their aptitude to receive an accent. This ranking comes into play where the obligatory assignment rules do not place any accent inside some intermediate phrase. 
According to a GToBI principle, each intermediate phrase should contain at least one pitch accent. In such a case, the token in that intermediate phrase with the highest-ranking partof-speech receives a pitch accent. After determining the location of prosodic boundaries and pitch accents, the actual tones are assigned according to sentence type (declarative, interrogative-W, interrogative-Yes-No and exclamative). For each sentence type, pitch accent tones, intermediate phrase boundary tones and intonation phrase boundary tones are assigned. The last accent and intonation phrase tone in a sentence is usually different from the rest, in order to account for sentence-final intonation patterns. Postlexical Phonological Process Once the words are transcribed in a standard phonemic string including syllable boundaries and lexical stress on the one hand, and the prosody labels for pitch accents and prosodic phrase boundaries are assigned on the other hand, the resulting phonological representation can be restructured by a number of phonological rules. These rules operate on the basis of phonological context information such as pitch accent, word stress, the phrasal domain or, optionally, requested articulation precision. Calculation of Acoustic Parameters This module performs the translation from the symbolic to the physical domain. The output produced by this module is a list containing the individual segments with their durations as well as F0 targets. This format is compatible with the MBROLA .pho input files. Synthesis At present, MBROLA is used for synthesizing the utterance based on the output of the preceding module. Due to the modular architecture of the MARY system, any synthesis module with a similar interface could easily be employed instead or in addition. Adding a new Language and Voice Module to MARYTTS This chapter covers the data preparation to train the HMM models for the new voice and the new language module that was added to the MARY system. Speech database Speech database [20] is one of the components used to train unit selection models for speech synthesis, a database was recorded and for each utterance the text annotation needs to be present in the selected sentences with in the database to be able to transcribe the speech sample and to train the unit selection models properly. Database Creation In this stage, we select certain data from the Wikipedia dump data. Factors considered while creating the database are the purpose of the corpus, size of the corpus, the number and diversity of speakers, the environment and the style of recording, phonetic coverage. The corpus can have phonetics described in the phonetic file close to natural speech and wide enough with phonetic units (e.g. phonemes or dip-hones). Also, the selected sentences should be reasonably short and easy to read. For the purpose of this project, a small database with only one speaker was created. The data for the corpus was prepared using the MARY system's building tools. The created database corpus consists of 511 sentences. Recording the Speech Database Using the created text database, the corresponding speech database was recorded by one female speaker in .wav files. We recorded using redstart a MARY TTS recording tool. The format of the audio data is the following: 16 kHz sampling rate, 16-bit samples, mono wav files. Integrating the New Language into MARY TTS Adding a new language to MARY TTS follows a guideline with subsections. 
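Before moving on to the integration steps, it can help to confirm that every recording in the database matches the format just described (16 kHz sampling rate, 16-bit samples, mono) and has a matching transcription file. A minimal check along these lines is sketched below; the wav/ and text/ directory layout mirrors the convention described in the next section and is otherwise an assumption.

```python
import wave
from pathlib import Path

EXPECTED = {"rate": 16000, "sampwidth_bytes": 2, "channels": 1}   # 16 kHz, 16-bit, mono

for path in sorted(Path("wav").glob("*.wav")):
    with wave.open(str(path), "rb") as w:
        format_ok = (w.getframerate() == EXPECTED["rate"]
                     and w.getsampwidth() == EXPECTED["sampwidth_bytes"]
                     and w.getnchannels() == EXPECTED["channels"])
    # Transcription file with the same base name, assumed to live in ./text.
    txt_ok = (Path("text") / f"{path.stem}.txt").exists()
    print(f"{path.name}: format {'OK' if format_ok else 'WRONG'}, "
          f"transcription {'found' if txt_ok else 'missing'}")
```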
The whole implementation was done on an Ubuntu Linux distribution, because Linux is the best platform for running the voice-building software. This research used a Core i3 laptop with 6 GB of RAM, as these were the resources available to the researcher, and they proved sufficient to achieve the goal of this research. Various software packages were installed to help in training the unit selection models and obtaining a synthetic voice.

Adding the New Language Module to MARY TTS
The MARYTTS language module uses minimal NLP components that handle phonemisation (turning text into sound sequences, i.e. grapheme-to-phoneme conversion); this is the linking point between text and speech, since it converts text to allophones. It is also where tokenization takes place and where prosodic events (accent and phrasing) are later derived from the text. Three major resources are taken into consideration when adding a new language: a complete set of phones with their articulation features, a dictionary containing words with their phonetic transcriptions, and a list of function words for the target language.

Adding a New Voice Module to MARY TTS
The guideline for creating a unit selection voice was followed to build a new unit selection voice. The following points are taken into consideration when building the voice. First of all, data from the created database need to be prepared in the following pattern: one sentence per file, the speech sound in a wav file, and the text representation of the sentence in a txt file; the names of the wav and txt files have to match because of the transcription alignment calculations, e.g. the files audio000.wav and audio000.txt need to contain the same sentence. Sound files have to be in the ./wav directory and text files in the ./text directory, placed in the selected working directory. These help in running the Voice Import tools together with the installed programs, which then prepare the data for the HTS toolkit, and unit selection models are trained according to the characteristics of the created database. This procedure takes a long time, and the output models and trees are used to create the new voice. In this research, the unit selection voice was built with voice-building software such as Praat, which computes pitch markers by aligning pitch marks to the nearest zero crossings; HTS, HTK, HDecode, the Speech Signal Processing Toolkit (SPTK), and hts-engine, among others, were also installed [22][23][24]. Edinburgh Speech Tools was installed as well.

MARYTTS Client
The MARYTTS client has a graphical user interface where we input Luganda text and process it to get Luganda words, tokens, intonations, parts of speech, and audio, among others. The image below allows input of text and outputs audio. The program expects text input in the top text area. The Play button sends a request to the MARY server for audio output and plays it as speech.

Evaluation of the synthesized speech quality
During evaluation, the intelligibility and the naturalness of the synthesized speech were tested. The intelligibility tests evaluated how well listeners understand the synthesized speech, with an emphasis on the perception of transition sounds; the naturalness tests evaluated the speech from an overall perspective.

Modified Rhyme Test (MRT)
The modified rhyme test (MRT) was used to test the intelligibility of the synthesized speech. MRT is one of the most widely used intelligibility tests. During the test, one word is played to the listener, whose task is to identify the sound by looking at the word lists in hand.
The test concentrates on consonants because the synthesis of consonants has the biggest influence on the intelligibility of the speech.

Results
As a result of adding the new language module and the new voice to the MARY system, several simple programs and scripts were created to ease database creation and system integration. The whole process was described step by step. The database corpus was created and the speech database was recorded. The new language module and the new voice were successfully integrated into the MARY system. A pull request was made to publish the voice in MARYTTS: https://github.com/marytts/marytts/pull/792. Meanwhile, the synthesized voice can be installed from Dropbox by following the instructions below:
• Install Java and add it to the system path.
• Clone MARYTTS from https://github.com/marytts/marytts (version 5.2-SNAPSHOT was used for testing).
• Download the target folder containing the Luganda voice from the Dropbox link: https://goo.gl/Qi6xDo
• Unzip it and place it at the root of the marytts project.
• Navigate to target/marytts-5.2-SNAPSHOT/bin/marytts-server on the command line. When the server is running, browse to http://localhost:59125/
• Use the text dataset at https://github.com/Nandutu/luganda_dataset to test the voice (the test/.txt folder and lg.txt); this was the small dataset used to train the voices.
A Luganda corpus was developed and used to build a Text-to-Speech machine; this machine can capture Luganda text and render a Luganda synthetic voice. Since no such prior speech synthesis work exists for Luganda, no comparison was made.
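With the server from the installation steps above running on localhost:59125, synthesis can also be requested programmatically instead of through the browser or the GUI client. The sketch below uses the commonly documented /process endpoint and parameters of the MARY HTTP interface; the LOCALE and VOICE values are placeholders that must match the installed Luganda language module and voice.

```python
import requests  # pip install requests

MARY_URL = "http://localhost:59125/process"   # default MARY TTS server address

params = {
    "INPUT_TEXT": "Oli otya",        # sample Luganda text
    "INPUT_TYPE": "TEXT",
    "OUTPUT_TYPE": "AUDIO",
    "AUDIO": "WAVE_FILE",
    "LOCALE": "lg",                  # assumed locale code for the Luganda module
    "VOICE": "luganda-voice",        # placeholder; use the installed voice's actual name
}

resp = requests.get(MARY_URL, params=params, timeout=30)
resp.raise_for_status()
with open("output.wav", "wb") as f:
    f.write(resp.content)
print("wrote output.wav,", len(resp.content), "bytes")
```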
The Role of Omega-3 Polyunsaturated Fatty Acids from Different Sources in Bone Development N-3 polyunsaturated fatty acids (PUFAs) are essential nutrients that must be obtained from the diet. We have previously showed that endogenous n-3 PUFAs contribute to skeletal development and bone quality in fat-1 mice. Unlike other mammals, these transgenic mice, carry the n-3 desaturase gene and thus can convert n-6 to n-3 PUFAs endogenously. Since this model does not mimic dietary exposure to n-3 PUFAs, diets rich in fish and flaxseed oils were used to further elucidate the role of n-3 PUFAs in bone development. Our investigation reveals that dietary n-3 PUFAs decrease fat accumulation in the liver, lower serum fat levels, and alter fatty acid (FA) content in liver and serum. Bone analyses show that n-3 PUFAs improve mechanical properties, which were measured using a three-point bending test, but exert complex effects on bone structure that vary according to its source. In a micro-CT analysis, we found that the flaxseed oil diet improves trabecular bone micro-architecture, whereas the fish oil diet promotes higher bone mineral density (BMD) with no effect on trabecular bone. The transcriptome characterization of bone by RNA-seq identified regulatory mechanisms of n-3 PUFAs via modulation of the cell cycle and peripheral circadian rhythm genes. These results extend our knowledge and provide insights into the molecular mechanisms of bone remodeling regulation induced by different sources of dietary n-3 PUFAs. Introduction The skeleton is a unique system that performs mechanical, storing, metabolic, and protective functions. Despite its inert appearance, bone is a highly dynamic organ undergoing constant remodeling in order to maintain optimum mechanical functions. Bone tissue is continuously degraded by bone resorbing cells-the osteoclasts-which are replaced by new bone-forming cells called osteoblasts [1]. The development and differentiation of these two distinct cells are tightly regulated by numerous endogenous substances, including growth factors, hormones, cytokines, and neurotransmitters. Although genetics contribute considerably to peak bone mass (PBM), an individual's full genetic potential can be achieved through proper nutrition and exercise [2]. Histological Staining of Growth-Plate (GP) Sections Various staining procedures were used to examine the tibial GPs. Tibiae samples were fixed overnight in 4% paraformaldehyde (PFA, Sigma, St Louis, MO, USA) at 4 • C, followed by 2 weeks of decalcification in 0.5 M EDTA pH 7.4. The samples were then dehydrated, cleared in histoclear (Bar-Naor), and embedded in paraffin blocks. Transverse tissue sections of 5 µm were prepared with Leica microtome (Agentec, Yakum, Israel) for histological staining [15]. The sections were stained in hematoxylin solution for 5 min followed by rinsing in tap water; then the sections were stained with eosin and rinsed again in tap water. For Safranin-O staining, Weigert's iron hematoxylin solution, fast green solution, and acetic acid were used. The sections were dried and DPX mounting was used for histology. To stain for alkaline phosphatase, 1-Step NBT/BCIP reagent (Thermo Fisher Scientific, Rehovot, Israel) was used according to the manufacturer's instructions. Sections were incubated in 0.25% naphthol AS-MX phosphate alkaline solution with fast blue RR salt (Sigma, St Louis, MO, USA), washed with PBS, and incubated with naphthol solution mixture for 1 h at room temperature. 
The resulting purple, insoluble, granular dye deposit indicated sites of alkaline phosphatase (ALP) activity [18]. Imaging and Measurement of GPs Stained transverse sections of tibiae were viewed using a light microscopy eclipse E400 Nikon with ×10, ×20, or ×40 objectives, using light filters. Images were captured by a high-resolution camera (Olympus DP 71) controlled by Cell A software (Olympus). Longitudinal median sections from the proximal tibia growth plate were subjected to histological staining and measurements. The thickness of the total GPs, proliferative zone (PZ) and hypertrophic zone (HA) as well as the number of cells were measured using a Cell A software (Olympus) with a measuring tool feature. Measurements were performed on these sections from 5 different animals from each group. In each slide, 10 random locations throughout the GPs were selected and measured [18,19]. Micro-CT Femur bones were slowly thawed to room temperature before scanning, which was performed using a Skyscan 1174 X-ray computed microtomograph with the following parameters: 50 kV X-ray tube, 800 µA, 0.25 mm aluminum filter at 3000 ms exposure time, and 6.4 µm high spatial resolution. For each specimen, a series of 450 projection images were obtained with a rotation step of 0.4 • , averaging two frames, for a total 180 • rotation. Flat field correction was performed at the beginning of each scan for a specific zoom and image format. A stack of 2-D X-ray shadow projections was reconstructed to obtain images using NRecon software (Skyscan, Bruker, Belgium) and dynamic image range, postalignment value, beam hardening, and ring-artifact reduction were optimized for each experimental set. Next, to perform a morphometric analysis of the images, we used CTAn software (Skyscan). Detailed 3-D analysis and reconstruction of the sample were performed using the custom software of the micro-CT device, yielding quantitative data. Morphometric parameters were calculated as suggested by the guidelines for bone microstructure assessment. Cortical analysis was performed on a standardized region of interest (ROI) in the mid diaphysis, equidistant from the ends of the bone, containing 150 slices and corresponding to 2.764 mm. The trabecular ROI of the femora consisted of 100 slices, equivalent to 1.86 mm, extending proximally from the end of the distal growth plate (GP) of each bone. Global grayscale threshold levels for the cortical region were between 60 and 255 and for the trabecular region, adaptive grayscale threshold levels between 57 and 255 were used [20]. Mechanical Testing A three-point bending experiment was conducted in order to characterize the mechanical properties of the bones. Right femora from the mice were tested using a custom-built micro-mechanical testing device. On the day of testing, each bone was slowly thawed and placed within a saline-containing testing chamber that rested on two supports. The supports were located equidistant from the ends of the bone, both in contact with the posterior aspect of the diaphysis. The distance between the stationary supports was set to 8 mm to ensure that the relatively tubular portion of the mid-diaphysis rested on these supports. An initial preload of 0.2 N was applied with a movable prong, which contacted the anterior surface of the bone at a point precisely in the middle between the two supports to hold the bone in place. Force was measured with a dedicated load cell. 
The experiment was conducted at a constant rate of 400 µm/min up to fracture, as identified by a sudden and significant (>40%) decrease in load. Force and displacement were recorded at a rate of 10 Hz by a custom designed software program. The resulting force-displacement data of each experiment were used to calculate whole bone stiffness (N/µ), yield load (N) and maximal load (N). The stiffness was calculated as the slope of the linear portion of the load-displacement curve. The yield point was defined as the load at which the load-displacement relationship ceased to be linear [21]. Bone RNA Extraction and RNA-Sequencing RNA extraction and sequencing were performed on bones from 6-week-old mice. Muscles, tendons, and ligaments were removed with a scalpel. The distal and proximal epiphyses were excised, and the diaphyseal bone marrow was removed by centrifugation at >15,000× g for 1 min at room temperature [22]. The resultant hollow bone shafts were individually flash frozen in liquid nitrogen and underwent manual pulverization. Total RNA was extracted and purified from mice ulna and humerus bones using TRI reagent (Sigma, USA) according to the manufacturer's protocol. Each RNA sample had a RNA integrity number-RIN > 6, indicating they were of sufficient quality to prepare sequencing libraries, which was performed using INCPM-mRNA-seq, based on the Transeq protocol. Briefly, the polyA fraction (mRNA) was purified from the total RNA, followed by fragmentation and the generation of double-stranded cDNA. Next, end repair, base addition, adapter ligation, and PCR amplification steps were performed. Libraries were evaluated by Qubit (Thermo Fisher Scientific) and TapeStation (Agilent). Sequencing libraries were constructed with barcodes to allow multiplexing of 18 samples in two lanes. Around 20-27 million single-end 60-bp reads were sequenced per sample on an Illumina HiSeq 2500 V4 instrument. Quality control analysis revealed that Q scores of all samples were~36, Q > 30 is considered a benchmark for quality in NGS. Bioinformatics Bioinformatic analyses were performed by the Grand Israel National Center for Personalized Medicine (G-INCPM) research facility, Weizmann Institute of Science, Rehovot, Israel. Poly-A/T stretches and Illumina adapters were trimmed from the reads using Cutadapt [23]; resulting reads shorter than 30 bp were discarded. Reads were mapped to the M. musculus reference genome GRCm38 using STAR [24], supplied with gene annotations downloaded from Ensembl (with the EndToEnd option and outFilter-MismatchNoverLmax set to 0.04). Expression levels for each gene were quantified with an HTseq-count [25], using the GTF file. Differentially expressed (DE) genes were identified using DESeq2 with the betaPrior, cooksCutoff and independent filtering parameters set to false [26]. Raw p values were adjusted for multiple testing using Benjamini and Hochberg's procedure. Pipeline was run using Snakemake [27]. Gene-Set Enrichment Analysis Ingenuity Pathways Analysis (IPA) version 4.0 (Ingenuity Systems, Mountain View, CA, USA) was used to search for possible biological processes, canonical pathways, and networks. A detailed description of IPA can be found at www.Ingenuity.com. To perform significance testing, these steps were followed: (1) the ratio of the number of DE genes from the uploaded data set that was mapped to an IPA pathway was divided by the total number of molecules that existed in the pathway. 
(2) Fisher's exact test was used to calculate the probability that the association between the genes in the uploaded data set and the canonical pathway was explained by chance alone. p values of < 0.05 were considered significant. (3) The Benjamini-Hochberg procedure was used to calculate the false discovery rate and to correct for multiple testing. Adjusted p values of < 0.05 were considered significant. (4) A log2 fold-change cutoff of >0.585 or <−0.585 (differential expression) was applied to all data sets. (5) Z-score analysis was used as a statistical measure of the match between the expected relationship direction and the observed gene expression of the uploaded dataset. Positive and negative z scores indicated up-regulated and down-regulated pathways, respectively. Statistical Analysis All data are expressed as means ± SD. The significance of differences between groups was determined by ANOVA using JMP 12.0.1 Statistical Discovery Software (SAS Institute 2000, Cary, NC, USA). Differences between groups were further evaluated by the Tukey-Kramer HSD test, considered significant at p < 0.05. Groups with different letters differ significantly from each other; groups that share a letter do not differ from each other. Results To study the effect of dietary n-3 PUFAs from different sources on skeletal development, 3-week-old female C57BL6 mice were divided into three groups and fed isocaloric diets that differed only in their source of fat (Table 1) for 3 or 6 weeks. The diets contained either corn oil (control), flaxseed oil rich in ALA (flaxseed) as a plant source of n-3 PUFA, or fish oil rich in EPA and DHA (fish) as an animal source of n-3 PUFA. Dietary n-3 PUFAs Decrease Fat Accumulation and Alter FA Content in Liver and Serum GC analysis of FAs showed the differences between the diets with respect to FA composition: the quantity of saturated fatty acids (SFAs) was highest for the fish oil diet and lowest for the flaxseed oil diet. Moreover, the control diet was rich in n-6 PUFAs, while both the flaxseed oil and fish oil diets were rich in n-3 PUFAs (Table 2). PUFAs are highly prone to oxidative degradation [28]. In order to ensure that the FA content of the diets remained as desired throughout the experiments, GC analysis was performed on diet samples that were handled in similar conditions to the actual feeding regime at three different time intervals (time 0, 24 h, 72 h). The results showed no modifications to the FA content of the diets, suggesting that no oxidation had occurred and therefore that the diets were suitable to use ( Figure 1). For fatty acid (FA) profiles, lipids were extracted, concentrated, and analyzed by gas chromatography–mass spectroscopy. Differences in FA content were calculated according to food consumption during the experiment. The rate-limiting enzyme in the LC-PUFA biosynthetic pathway is delta-6 desaturase, which is mostly expressed in the liver both in humans and in mice [29]. Thus, the effects of the different FAs on metabolism and growth and bone parameters also depend on the modifications that occur in the liver.
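The two summary indices that recur in the FA results, the n-6:n-3 ratio and the estimated desaturase activity, can be illustrated with a minimal sketch. The percentages below are hypothetical placeholders, and the product-to-precursor ratio shown is only one common way such activity indices are estimated, not necessarily the authors' exact formula.

```python
# Minimal sketch: summarizing a GC fatty-acid profile into an n-6:n-3 ratio and a
# product-to-precursor "estimated desaturase activity" index.
# All values are hypothetical, not the study's data.

fa_percent = {                      # % of total fatty acids (invented)
    "LA (18:2 n-6)": 20.0,
    "AA (20:4 n-6)": 5.0,
    "ALA (18:3 n-3)": 8.0,
    "EPA (20:5 n-3)": 2.0,
    "DHA (22:6 n-3)": 4.0,
}

n6 = fa_percent["LA (18:2 n-6)"] + fa_percent["AA (20:4 n-6)"]
n3 = (fa_percent["ALA (18:3 n-3)"]
      + fa_percent["EPA (20:5 n-3)"]
      + fa_percent["DHA (22:6 n-3)"])
n6_to_n3 = n6 / n3

# Assumed convention: estimated conversion activity toward AA expressed as a
# product-to-precursor ratio (AA relative to its dietary precursor LA).
estimated_conversion_index = fa_percent["AA (20:4 n-6)"] / fa_percent["LA (18:2 n-6)"]

print(f"n-6:n-3 ratio = {n6_to_n3:.2f}")
print(f"estimated LA-to-AA conversion index = {estimated_conversion_index:.2f}")
```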
Hepatic fat levels as measured by GC analysis were 1.6 times lower in the mice that were fed the flaxseed oil diet and two times lower in mice that were fed the fish oil diet compared to the control mice on the corn oil diet (Figure 2A), suggesting that consumption of dietary n-3 PUFAs reduced fat accumulation in the liver. Furthermore, dietary n-3 PUFA enrichment increased the hepatic levels of n-3 PUFAs; mice in the flaxseed oil group had 1.7 and 7.11 times higher levels of n-3 PUFAs in the liver compared with mice in the fish oil group and control mice, respectively ( Figure 2B). The total n-6 PUFA levels in the liver were significantly lower in the fish and flaxseed groups compared with the control group: 3.8- and 6.3-fold, respectively ( Figure 2C). As a result of these effects, the n-6 to n-3 ratio in the liver of mice in the flaxseed and fish groups was dramatically reduced ( Figure 2D). Values are expressed as means ± (SD) of n = 8 mice/group; different superscript letters indicate significant difference (p < 0.05), which was determined using a one-way ANOVA followed by Tukey's test. We also assessed the FA content in the serum in order to investigate its correspondence to the FA content in the liver. Total serum lipid content was significantly lower in the fish and flaxseed groups as compared with the control (Table 3), resembling the results obtained for the liver. This indicates that the serum lipid levels that the cells were exposed to were significantly affected by the experimental diets. Both treatment groups, flaxseed and fish, displayed significantly lower n-6:n-3 FA ratios compared with the control (11.5 and 15.5 times lower, respectively). Table 3. Hepatic and serum FA profiles. FA content was analyzed by GC performed on liver samples of 9-week-old mice and on serum samples of 6-week-old mice from all groups. Values are expressed as means ± (SD) of n = 8 mice/group (mg/100 mg liver or mg/mL serum). Different superscript letters indicate significant difference (p < 0.05), which was determined using a one-way ANOVA followed by Tukey's test. FAs-fatty acids, ALA-alpha linolenic acid, EPA-eicosapentaenoic acid, DHA-docosahexaenoic acid, LA-linoleic acid, AA-arachidonic acid, SFA-saturated fatty acid, MUFA-mono-unsaturated fatty acid. Similar to the result in the liver, consumption of both diets containing n-3 PUFAs was comparably associated with a significant reduction in serum AA levels as compared with the control group. However, the fish group had significantly higher serum AA levels, as compared with those in the flaxseed group (Table 3).
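The group comparisons reported here, and the superscript-letter notation used in the figures and tables, follow the one-way ANOVA plus Tukey post-hoc procedure described under Statistical Analysis. A minimal sketch of an analogous calculation is shown below; the values are hypothetical (the original analysis was performed in JMP with the Tukey-Kramer HSD test).

```python
# Minimal sketch of a one-way ANOVA followed by Tukey HSD pairwise comparisons,
# analogous to the group comparisons behind the superscript letters.
# Data are invented placeholders (n = 8 per group), not the study's measurements.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(10.0, 1.0, 8)    # e.g., hepatic fat level, arbitrary units
flaxseed = rng.normal(6.0, 1.0, 8)
fish = rng.normal(5.0, 1.0, 8)

print(f_oneway(control, flaxseed, fish))     # overall ANOVA F-test

values = np.concatenate([control, flaxseed, fish])
groups = ["control"] * 8 + ["flaxseed"] * 8 + ["fish"] * 8
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise tests that define the letter groupings
```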
Comparative analyses between the hepatic FA profiles of the mice that were fed one of the three diets and the FA profiles of the diet (Table 4) demonstrated the modifications that occurred in the liver. For instance, in the control group that consumed a corn-oil-based diet rich in LA (97.45% of the PUFAs in the diet) and low in ALA (1.7% of the PUFAs in the diet), hepatic levels of n-6 PUFAs were 32.67% higher compared with hepatic n-3 PUFAs (comparable to the diet). However, higher levels of DHA were detected in the liver of the control group compared to the diet, probably due to the conversions, albeit inefficient, of ALA to its derivatives. The estimated activity of delta-5 desaturase and delta-6 desaturase was significantly reduced in the flaxseed oil group as compared to the control group, suggesting that the conversion of LA to AA was inhibited by the presence of ALA ( Figure 2E,F). This is supported by the apparent reduction of AA from 18.89% in the control to 4.63% in the flaxseed oil and to 11.4% in the fish oil diet group. Interestingly, although the percentage of LA was 2.6 times higher in the flaxseed oil diet compared with the fish oil diet, in which the more mature form of n-3 PUFAs was abundant, mice that were fed the flaxseed oil diet had a lower percentage of hepatic AA, an n-6 derivative (2.4 times lower). Furthermore, while the percentages of EPA and DHA (n-3 derivatives) in the liver of mice that were fed the fish oil diet corresponded to their percentages in the diet, in mice that were fed the flaxseed oil diet, hepatic levels of EPA and DHA were high despite their low levels in the diet. These results suggest that ALA enrichment of the diet (as in the flaxseed diet) affects the biosynthetic pathway of PUFAs by diverting it toward n-3 PUFA conversions from ALA to EPA and DHA rather than to n-6 PUFA conversions from LA to AA. Effect of Omega-3 from Different Sources on Growth Pattern and Food Consumption Measurements of food intake, body weight, and tail length showed a standard growth pattern with no significant differences between the three groups ( Figure 3A-D). The growth pattern corresponded to the standard, with a rapid growth rate from 3 to 6 weeks of age and a slower rate from 6 to 9 weeks of age upon reaching sexual maturity ( Figure 3C,D). Femur length, an additional parameter of skeleton longitudinal growth, did not differ between the groups, either at 6 weeks old, or at 9 weeks old ( Figure 3E). Taken together, these results show that the different fat sources in the diet did not affect food consumption and the general growth pattern in the young mice. Since longitudinal bone growth originates from the growth plate (GP) [30], we further evaluated possible microstructure differences between the groups. Safranin-O staining analyses revealed that the GPs of mice in the flaxseed oil and fish oil groups had a regular structure of all the zones (resting, proliferating, and hypertrophic) with normal cell morphology ( Figure 4A). A key component of the hypertrophic matrix is ALP, an enzyme that is implicated in the mineralization process [31]. ALP staining is based on a colorimetric insoluble substrate of the enzyme, thus indicating ALP activity in situ. ALP activity did not differ between the groups (Figure 4).
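The growth-plate thickness summaries reported next are built up from repeated measurements (10 random locations per section, several animals per group), as described in the Methods. A minimal sketch of that kind of aggregation is shown below, using invented numbers rather than the study's measurements.

```python
# Minimal sketch of aggregating growth-plate (GP) thickness measurements:
# average the repeated locations within each animal, then summarize per group.
# The values here are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
# thickness in µm, shape = (animals per group, locations per section)
gp_thickness = {
    "control": rng.normal(150, 10, size=(5, 10)),
    "flaxseed": rng.normal(165, 10, size=(5, 10)),
    "fish": rng.normal(160, 10, size=(5, 10)),
}

for group, values in gp_thickness.items():
    per_animal = values.mean(axis=1)                 # mean of the 10 locations per animal
    group_mean = per_animal.mean()
    group_sd = per_animal.std(ddof=1)                # SD across animals
    print(f"{group}: {group_mean:.1f} ± {group_sd:.1f} µm")
```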
At the age of 9 weeks, the total GP thickness as well as the thickness of the specific regions was significantly higher in both the flaxseed and fish groups compared with the control mice ( Figure 4B). At the age of 6 weeks, the total GP thickness of mice that were fed the flaxseed oil or fish oil diet was 11% and 6% higher compared with mice in the control group, respectively ( Figure 4A). These differences were reflected in the PZs of the mice, while no differences were indicated between the groups in the HZ. No significant differences were found between the groups either at 6 weeks or at 9 weeks of age in the number of cells in the total GPs, PZs, and HZs (Table 5). These results imply that a diet rich in ALA or in EPA and DHA does not affect chondrocyte proliferation, but enhances matrix production, which leads to a thicker GP.
Some support for these results could be found in an in-vitro study that showed that treating chondrocytes with n-3 PUFAs resulted in an increase in cell differentiation and matrix production [15]. Furthermore, differences in GP width were not reflected in the femoral length, probably due to different measurement scales; while femoral length was measured on a scale of mm, GP width was measured using microscopic software on a scale of µm that allows detection of slighter changes. Diets Rich in n-3 PUFAs Improve Bone Mechanical Properties The skeleton plays a critical role in bearing functional loads, which depends on the bones' capacity to withstand fractures; it is therefore important to assess their mechanical properties. The biomechanical properties of the femur were evaluated in a three-point bending experiment (Table 6). At 6 weeks of age, we found that most of the tested parameters were significantly higher in mice that were fed the fish oil diet. A similar trend was observed in mice that were fed the flaxseed oil diet, yet to a lesser degree. Specifically, maximal load was substantially higher in both the flaxseed and fish oil groups compared with the control group. Additionally, femora of mice fed a fish oil diet were stiffer than those of the control group (Table 6). These parameters, measuring the stiffness, plastic, and elastic properties of the bone, indicate the ability to bear load. No differences were found between the groups at 9 weeks of age. Diet Containing Plant Source of n-3 PUFAs (Flaxseed Oil) Improved Trabecular Bone's Micro-Architecture The comparison between the groups demonstrated significant differences in trabecular bone micro-architecture. As expected, the trabecular number (Tb.N) for all the groups decreased slightly when the mice reached 6 to 9 weeks of age as a result of bone maturation [32] (Table 6). Both the 6- and 9-week-old mice in the flaxseed oil group had a significantly larger percent of bone volume (BV/TV), which increased by 30%, and a higher Tb.N, which increased by 20%, compared with the control group (Table 6, Figure 5). The group that was fed the fish oil diet had a slightly higher BV/TV and Tb.N compared with the control group, but this was not significant. Trabecular thickness and separation did not differ between the groups (Table 6). These data suggest that the presence of n-3 PUFAs from a plant source contributes to the process of bone development and optimizes characteristics of trabecular bone.
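Conceptually, the trabecular bone volume fraction (BV/TV) reported above is the proportion of voxels classified as bone after thresholding the reconstructed image stack. A minimal sketch of that idea is shown below on a synthetic array, using a simple global threshold at the lower bound of the trabecular range given in the Methods; the actual analysis was performed in CTAn, and the study used adaptive thresholding for the trabecular region.

```python
# Minimal sketch of the BV/TV idea: binarize a grayscale stack and take the
# fraction of "bone" voxels within the ROI. The array is synthetic, not real micro-CT data.
import numpy as np

rng = np.random.default_rng(2)
stack = rng.integers(0, 256, size=(100, 128, 128))   # stand-in for a trabecular ROI (slices, rows, cols)

threshold = 57                                       # lower bound of the trabecular threshold range
bone_mask = stack >= threshold                       # simple global threshold for illustration only
bv_tv = bone_mask.sum() / bone_mask.size             # bone volume / total volume
print(f"BV/TV = {bv_tv:.3f}")
```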
Diet Containing n-3 PUFAs from an Animal Source (Fish Oil) Increased BMD We found that diets rich in n-3 PUFAs had no significant effect on the geometrical features of the cortical bone (Table 6), although a trend towards increased cortical area fraction was detected in the mice consuming the flaxseed-based diet. Results showed that the BMD of 9-week-old mice on the fish oil diet was significantly higher compared with the mice in the flaxseed oil and control groups. At the age of 6 weeks, these differences were still not significant (Table 6), suggesting that the diet containing an animal source of n-3 PUFAs (fish oil) resulted in a higher accumulation of mineral in the young developing bones. The Effect of Different Sources of n-3 PUFAs on Bone's Transcriptional Regulation To verify the underlying mechanisms of the different n-3 PUFA effects on the bone phenotype of 6-week-old mice, we applied pairwise analysis of the signals from 19,040 genes. We identified 763 genes that were differentially expressed (DE) in at least one of the following comparisons: flaxseed versus control, fish versus control, or flaxseed versus fish, with a p value below 0.05 and a log-2-fold change greater than 0.585 (Table S1). Figure 6A summarizes the number of DE genes for each comparison. Twenty-two genes were up-regulated and 33 were down-regulated in the flaxseed group compared with the control group, 619 were up-regulated and 77 down-regulated in the fish group compared with the control group, and 34 were up-regulated and 227 were down-regulated in the flaxseed group compared with the fish group ( Figure 6B). IPA Pathway Enrichment Analysis of Genes Related to Dietary n-3 PUFAs The top signaling pathways related to bone development were identified in the transcriptome analysis (Table 7). The flaxseed-oil-based diet was enriched for genes implicated in the circadian rhythm, with about 20% of the pathway genes either up- or down-regulated compared with the control diet. The fish-oil-based diet altered the cyclin and cell-cycle pathways, with 40% of the pathway genes either up- or down-regulated compared with the control diet; and the comparison between the flaxseed oil diet and the fish oil diet indicated changes associated with the heme biosynthesis pathway.
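The DE-gene counts above follow from applying the stated cutoffs (p below 0.05 and an absolute log-2-fold change above 0.585) to the DESeq2 output. A minimal sketch of that filtering step is shown below on an invented results table; the gene names are drawn from those discussed later in the text, but the values are placeholders.

```python
# Minimal sketch of applying differential-expression cutoffs to a DESeq2-style results table.
# The table below is an invented placeholder, not the study's output.
import pandas as pd

results = pd.DataFrame({
    "gene": ["Bmal1", "Per2", "Cdkn1a", "GeneX"],
    "log2FoldChange": [0.9, -0.7, 0.6, 0.1],
    "padj": [0.01, 0.02, 0.04, 0.50],
})

cutoff_p, cutoff_lfc = 0.05, 0.585
de = results[(results["padj"] < cutoff_p) & (results["log2FoldChange"].abs() > cutoff_lfc)]
up = de[de["log2FoldChange"] > 0]
down = de[de["log2FoldChange"] < 0]
print(len(up), "up-regulated;", len(down), "down-regulated")   # -> 2 up-regulated; 1 down-regulated
```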
The differential expression cutoff was set at a p-value ≤ 0.05, a log-2-fold change ≥ 0.585, and minimum counts ≥ 30. Raw p-values were adjusted for multiple testing using Benjamini and Hochberg's procedure. Data were analyzed using IPA (QIAGEN Inc.). To further investigate the functional influence of the fish oil diet on cell-cycle progression, we used the IPA functional analysis tool, which indicated an activation during interphase and inhibition of re-entry to the cell cycle. On the basis of these two analyses, we inferred that the fish oil diet inhibited cell-cycle progression. Figure 6C presents the up- and down-regulated DE genes that were classified as involved in pathways related to the cell cycle. Despite the observed up-regulation of cyclins E, B, and A, which promote cell-cycle progression, the G1/S checkpoint pathway was significantly down-regulated; this is due to the complexity of the cell-cycle process, which consists of many regulators that influence its direction of action. Several genes that are known to induce cell-cycle arrest were up-regulated in the fish oil group, including RB1, p18, and p107, as well as the transcriptional repressors E2F-4, 7, and 8. Moreover, we observed that the expression of Cyclin Dependent Kinase Inhibitor 1A (CDKN1A, also known as p21), a negative regulator of cell-cycle progression, was 1.5 and 1.7 times higher in the fish oil group and flaxseed oil group compared with the control group, respectively. However, it is evident that the fish oil diet influenced cell-cycle genes to a higher extent compared with the flaxseed oil and control diets. We also used the IPA functional analysis tool to study the canonical pathways related to bone development; these included: the role of osteoblasts, osteoclasts, and chondrocytes in rheumatoid arthritis, sonic hedgehog signaling, the role of macrophages, fibroblasts and endothelial cells in rheumatoid arthritis, the osteoarthritis pathway, and Wnt/β-catenin signaling, which were significantly affected by one or more of the interventions ( Figure 6D). Results show that while the matrix protein collagen type 5α3 (COL5A3) expression was up-regulated in the flaxseed oil group compared with the control group, the expression of collagen type 27α1 (COL27A1) and 24α1 (COL24A1) was down-regulated in the fish group compared with the control group. The comparison of Matrix Gla protein (Mgp) expression between the flaxseed oil and fish oil groups showed an up-regulation in the former. The matrix metalloproteinases ADAMTS-4, 5, and 15 were up-regulated in both the flaxseed oil and fish oil groups compared with the control group. The osteoclastogenic markers calcitonin receptor (Calcr) and nuclear factor of activated T-cells, cytoplasmic 2 (Nfatc2) were significantly down-regulated in the fish group and flaxseed group compared with the control group. Periosteal osteoblast precursors are potential sources of new osteoblasts.
Differentiation of mesenchymal stem cells toward tissue-specific lineages, to either osteoblasts or adipocytes, is sensitive to environmental conditions such as dietary changes. Results showed that consumption of the fish oil diet led to an up-regulation in PPARγ gene expression compared with the control group ( Figure 6D). Comparative enrichment analysis of significantly regulated canonical pathways indicated an overlap between the different groups. Both diets containing oils rich in n-3 PUFAs resulted in an alteration to the circadian rhythm pathway ( Figure 6E). To better understand the involvement of n-3 PUFAs and the circadian rhythm in bone development, we focused on the specific genes affected and identified consistent patterns of gene expression. The expression of ARNT-like protein-1 (Bmal1) was significantly up-regulated, and the expression of the Nuclear Receptor Subfamily 1 Group D Member 1 (Nr1d1), Period2 (Per2), Period3 (Per3), Cryptochrome2 (Cry2), D-site albumin promoter binding protein (Dbp), and basic-helix-loop-helix (Bhlhe41) was down-regulated. Discussion This study describes the role of different dietary sources of n-3 PUFAs in skeletal development and bone quality. We found that dietary n-3 PUFAs contributed to improved mechanical and morphometric properties of the bone, hence to improved bone quality. Notably, the positive effects depend on the sources of n-3 PUFAs. Evidence presented over recent years has shown that LC-PUFAs, especially n-3 PUFAs such as EPA and DHA, are beneficial for bone health. Findings from observational and randomized controlled trials indicated that LC-PUFAs can enhance bone formation in adult and older men and women, increase PBM in adolescents, and reduce bone loss of postmenopausal and osteoporotic women as measured by dual energy x-ray absorptiometry (DEXA) [3,33,34]. Factors influencing initial bone development are of great importance because of the implications for bone health later in life. Women in particular are susceptible to bone mineral loss and compromised bone structure at a greater rate as they approach menopause and loss of endogenous estrogen production. We therefore selected female mice for our study and propose our findings as the basis for a preventative approach to promoting bone quality. Androgens and estrogens are considered major regulators of gender differences in bone metabolism. Regarding the consumption of PUFAs, Lau et al. suggested that the dietary recommendations for PUFA intake need to be gender-specific [35]. Dietary n-3 PUFAs Decrease Hepatic and Serum Fat Levels The different dietary sources of n-3 PUFAs contain FAs that differ in their biochemistry and their metabolic pathways. The FA content in the diet can modulate the composition of stored and structural lipids, including the FA profiles of tissues [36]. Our study demonstrates that dietary n-3 PUFAs at a physiological concentration (16% of kcal), without any additional dietary challenge, decrease fat levels in the liver and serum. Mice that were fed flaxseed oil or fish oil diets had significantly lower levels of total hepatic fat. Studies that have investigated the effect of n-3 PUFAs on hepatic FA content usually focused on alcoholic liver steatosis. For instance, Huang et al. reported that the fat-1 mice exhibited significantly reduced hepatic fat levels. They suggested that endogenous n-3 PUFAs have protective effects against alcoholic liver disease [37]. We also show the effect of n-3 PUFAs on hepatic FA content.
The hepatic levels of n-3 PUFA in the mice that were fed the flaxseed oil diet were higher than those whose n-3 PUFA intake came from fish and from those in the control group. The liver is a major site of LC-PUFAs biosynthesis [38], hence, their deposition is dependent not only on the intake of EPA and DHA but also on the intake of their precursor, ALA. The flaxseed oil diet, similar to the control diet, did not contain n-3 LC-PUFAs; nonetheless, the mice that were fed the flaxseed oil diet had higher levels of EPA as well as DHA in the liver compared with the control mice, demonstrating that ALA enrichment resulted in improved efficiency of ALA conversion to its derivatives. Still, the levels of hepatic EPA and DHA in mice from the flaxseed oil group were lower compared with mice in the fish oil group, possibly due to preferential use of ALA as an energy source, resulting from beta-oxidation or because of the inefficient conversion of ALA to EPA and DHA. Interestingly, despite comparable levels of dietary LA (18:2 n-6), the mice that were fed the flaxseed oil diet had the lowest level of the n-6 LC-PUFA, AA (20:4 n-6) in the liver compared with mice in the other groups. The differential metabolite composition may result from the competition between LA and ALA for the rate limiting enzyme delta-6 desaturase for their conversion to n-6 or n-3 LC-PUFAs, respectively. In the competition for the enzyme, a preference has been shown toward ALA [39], suggesting that the high amount of ALA in the flaxseed oil diet reduced production of the n-6 LC-PUFAs. Among other polyunsaturated free fatty acids (FFA), n-3 PUFAs have been recognized for their beneficial effects on the hepatic lipid metabolism as well as on the serum lipid profile as signaling molecules [40,41]. In this study, plasma was collected under fasting conditions, therefore we assume that the majority of FAs were present in serum in their free form, as FFAs. We focused on FFAs since FFA receptor 4 (FFAR4) is expressed in bone cells and preferentially binds n-3 PUFAs [15,42]. In our study it was expressed in the bone samples of all groups without significant differences. FFAR4 was shown to stimulate bone formation and suppress bone resorption upon activation by n-3 PUFA [43]. The activation of the FFAR4 pathway is possibly the underlying connection between nutrition, lipid metabolism, and bone metabolism and might be crucial to the protective effects of dietary n-3 PUFAs on bones. Diet Rich in n-3 PUFAs Affects Postnatal Skeletal Development and Bone Characteristics Micro-CT analysis suggests that dietary n-3 PUFAs affect bone quality by influencing different osseous tissues; while flaxseed oil improved trabecular bone properties, fish oil enhanced mineralization. This finding is supported by Lukas et al.'s recent work, which showed that rats that were fed a high-fat diet (26% Kcal), rich in DHA, had higher tibial BMD. In addition, rats on a high-fat diet (26% Kcal), rich in ALA, had an improved trabecular bone micro-architecture [7]. Our study exhibits analogous results despite using a balanced diet (16% Kcal) in a different animal model. The observed differences in bone micro-architecture and mineralization are expected to affect the mechanical behavior of long bones [44]. Lau et al. suggested that a reduction of the n-6:n-3 PUFA ratio and an increase in EPA and DHA are linked to greater bone strength in healthy young fat-1 male and female mice [33,35]. 
We found that diets rich in n-3 PUFAs improve bone mechanical properties in 6-week-old mice. However, this result was not indicated in 9-week-old mice, probably due to limitations in the three-point bending test of the whole bone. Reliable measurement of material properties requires the use of samples with well-defined geometry, like rectangular beams. Such samples are difficult to produce from thin cortices like those of mice. When whole bones are tested, results are dependent upon both the geometry of the bone and the mechanical properties of the material [21]. Osteoblasts and adipocytes derive from a shared pool of bone marrow mesenchymal stem cells, while the metabolic microenvironment, including nutrition, regulates bone marrow progenitor cell differentiation. Several studies showed a tradeoff between bone and fat mass, with the greater differentiation of adipocytes at the expense of osteoblasts, thereby leading to reduced bone mass. In this study we evaluated marrow fat accumulation using tibial histological sections. Thus, we conducted a double-blind evaluation based on visualization by eight independent examiners on six different slides from each group. Results, when calculated and quantified, show no significant differences between the treatments. Despite the impression of a higher level of fat globules in the control group samples as compared to the n-3-enriched groups, we could not establish this fact. We assume that upon higher levels of fat in the diets these differences could become more pronounced. Transcriptional Regulation of Bone by Different Sources of n-3 PUFAs To better understand the impact of n-3 PUFAs on the transcriptome of the developing bone, we performed RNA-seq analysis of the bones of mice whose PUFA intake originated from different dietary sources. Although a few studies have used RNA-seq analysis to reveal the DE genes in skeletal tissue [45][46][47], as far as we know, this is the first study that uses RNA-seq to study the effect of dietary n-3 PUFAs on the bone transcriptome in vivo. N-3 and n-6 PUFAs are ligands of PPARγ, which is known to influence distinct target genes in various cell types, including bone cells [50]. Furthermore, eicosanoids and lipid mediators produced from AA, EPA, and DHA also bind and regulate PPARγ [12]. One of the actions of PPARγ is to physically inhibit the translocation of NFκB to the nucleus. This might be the mechanism by which n-3 PUFAs perform their anti-inflammatory activity [51]. Additionally, PPARγ was found to stimulate adipocyte differentiation at the expense of osteoblast differentiation in bone marrow mesenchymal stem cells [51]. Although it can be argued that PPARγ activation increases bone resorption, its up-regulation in the mice that were fed the fish oil diet was not accompanied by an increase in bone resorption genes or decreased BMD. In mice on the flaxseed oil diet, nuclear factor of activated T-cells 2 (Nfatc2), a bone-resorption-related gene, was down-regulated compared with the mice on the fish oil diet and the control mice. The calcitonin receptor (Calcr) is expressed by mature osteoclasts and considered a bone resorption gene [52]; therefore, its down-regulation in the fish oil group compared with the control group might explain the higher BMD levels. Moreover, genes associated with bone formation were not differentially expressed. Interestingly, it has been suggested that the inhibition of PPARγ prevents the action of n-3 PUFAs during cell differentiation [53].
This hypothesis led us to focus on the role of n-3 PUFAs in cell-cycle progression and its relationship to the observed skeletal phenotype (Figure 7). Dietary n-3 PUFAs Alter Circadian Clock Gene Expression in Bone We observed a remarkable trend in both of the groups that were fed n-3 PUFA diets: the core circadian clock genes were altered in both. The presence of a peripheral clock in bone cells has increasingly been recognized as a major regulator controlling cellular functions [54]. Whereas the circadian clock in the suprachiasmatic nucleus (SCN) in the brain is mainly responsive to light, peripheral clocks are influenced by various factors, among them dietary macronutrients [55]. Current understanding of the circadian clock's role in bone metabolism mainly draws from in vitro studies as well as from transgenic models. Here, we show, for the first time, the relationship between dietary fats and modifications in the expression of circadian clock genes in the bone. Genes related to the circadian rhythm participate in the regulation of bone resorption directly through osteoclastogenesis inhibition and indirectly through influencing the osteoblasts' regulatory activity of osteoclastogenesis [54,56,57]. The classical clock machinery comprises two positive and two negative elements: Bmal1 and Clock are the former; Per and Cry are the latter [58]. These genes were found to be involved in the differentiation and proliferation of osteoblasts and osteoclasts.
We found that mice that were fed a diet containing either flaxseed oil or fish oil had a greater expression of Bmal1 in the long bones and a lower expression of the clock genes Bhlhe40, Cry2, Per2, Per3, and Dbp, as compared with the control mice. These results corroborate previous studies showing improved bone morphology along with changes in the expression of circadian clock genes. Takarada et al. reported that osteoblast-specific knockout (KO) of Bmal1 in mice resulted in significantly lower BV/TV, trabecular thickness, and Tb.N. Moreover, in the femur of Bmal1 KO mice, an increase was predominantly found in bone resorption marker genes but not in the expression of bone-formation-related genes [57]. Our results match the findings described by Takarada: up-regulation of Bmal1 was accompanied by improved trabecular bone properties and lower expression of genes associated with bone resorption, such as calcitonin receptor and Nfatc2, and no significant differences in bone-formation genes. These data further support the hypothesis that Bmal1 up-regulation in osteoblasts might inhibit bone resorption. The effects of the Per and Cry core circadian genes on osteoblasts are the opposite of those that Bmal1 and Clock exert. Osteoblast-targeted deletion of Per genes (Per1 and Per2) or Cry genes (Cry1 and Cry2) was associated with increased bone mass, a higher bone-formation rate, and lower osteoclast activity than that observed in control mice [59]. The morphological effects attributed to these genetic manipulations correlate with our data on bone phenotypes and down-regulation of Per and Cry genes (Figure 7). In a Bmal1 KO mouse model, Takarada et al. indicated hypersensitivity to 1,25(OH)2D3 and a common thread between the clock system and 1,25(OH)2D3 signaling in osteoblasts [57]. Moreover, in an in vitro experiment they found that Bmal1-deficient osteoblasts support osteoclastogenesis to a higher extent, suggesting that the bone modeling-remodeling process is regulated by an osteoblastic clock system through a mechanism related to the modulation of 1,25(OH)2D3-induced Rankl expression in osteoblasts. Additionally, the VITamin D and OmegA-3 TriaL (VITAL) is an ongoing clinical trial involving over 25,000 men and women across the United States. The main goal of VITAL is to determine whether vitamin D and/or n-3 PUFAs can prevent cancer, heart disease, and stroke. However, other health outcomes, such as the risk for bone fractures, are now being examined, as these interventions might have a potential therapeutic effect on bone [60]. Proliferation and differentiation of osteoblasts and osteoclasts and the cellular interaction between them are vital to the regulation of bone remodeling. Both VDR and PPARγ are ligand-activated nuclear transcription factors that are instrumental to bone health. In addition to its function in lipid metabolism, PPARγ is directly linked to the peripheral circadian clock [61]. PPARγ transcription is driven by Bmal1, and PPARγ in turn activates the transcription of Bmal1. In accordance with our RNA-seq results indicating activation of cell differentiation by n-3 PUFAs, PPARγ and 1,25(OH)2D3 were also reported to have antiproliferative effects. Therefore, it can be assumed that dietary n-3 PUFAs can influence the circadian clock through PPARγ and VDR, which might eventually lead to alterations in skeletal development. Our findings show a link between the consumption of n-3 PUFAs and bone circadian rhythm and improved bone phenotype.
Together with evidence from the bone-specific KO of clock genes, we suggest that the pivotal role of dietary n-3 PUFAs in regulating bone quality is mediated by the peripheral circadian clock system. The circadian clock pathway involves the negative autoregulation of the Per and Cry genes via the inhibition of the activators Bmal1 and Clock through the Per and Cry proteins [62]. An additional negative feedback that represses Bmal1 expression is mediated by the Nr1d1 protein, which is itself induced by Clock/Bmal [55]. One of the actions of circadian repressors Cry1 and Cry2 is modifying transcriptional activity through interacting with nuclear receptors such as PPARγ and vitamin D receptor, both are ligand-activated nuclear transcription factors that are crucial for bone health [63]. N-6 and n-3 PUFAs, eicosanoids, and other lipid mediators produced from AA, EPA, and DHA can bind and regulate PPARγ, which can disturb the translocation of NFκB to the nucleus. This action might be a mechanism by which n-3 PUFAs perform their anti-inflammatory activity [51]. Several molecular links between circadian clock and cell-cycle genes are known, one of which is shown; Nr1d1 binds to the same element present in p21 promotor and inhibits p21 transcription, leading to cell-cycle progression [64,65]. The mechanism by which n-3 PUFAs might regulate bone formation is through down-regulation of Nr1d1, which results in increased levels of p21, inhibition of cell cycle, and increased differentiation. Negative cell-cycle regulators (such as pRB and p107) inhibit the transcription of cell-cycle genes and promote differentiation [66,67]. Moreover, p107 is a vital component mediating the antiproliferative activity of 1,25(OH)D3; hence its up-regulation is thought to promote cell differentiation [68]. Conclusions Our results emphasize the importance of integrating various sources of n-3 PUFAs into the diet from an early age. Our findings also suggest that Omega-3 may be an important nutrient to include in diets that promote bone health especially in view of the general population's tendency over the past three decades toward excessive consumption of n-6 PUFAs [69].
Solitons, peakons and periodic cusp wave solutions for the Fornberg-Whitham equation In this paper, we employ the bifurcation method of dynamical systems to investigate the exact travelling wave solutions for the Fornberg-Whitham equation. The implicit expression for solitons is given. The explicit expressions for peakons and periodic cusp wave solutions are also obtained. Further, we show that the limits of soliton solutions and periodic cusp wave solutions are peakons. Introduction The Fornberg-Whitham equation has appeared in the study of qualitative behaviors of wave breaking [1,2]. It is a nonlinear dispersive wave equation. Since Eq.(1.1) was derived, little attention has been paid to studying it. In [3], Fornberg and Whitham obtained a peaked solution of the form $u(x,t)=A\exp\left(-\tfrac{1}{2}\left|x-\tfrac{4}{3}t\right|\right)$, where A is an arbitrary constant. In [4], we constructed a type of bounded travelling wave solutions for Eq.(1.1), which are called kink-like and antikink-like wave solutions. Unfortunately, the results in [3,4] are not complete. In the present paper, we continue to derive more travelling wave solutions for Eq.(1.1), so that we can supplement the results of [3,4]. The remainder of the paper is organized as follows. In Section 2, we discuss the bifurcation curves and phase portraits of the travelling wave system. In Section 3, we obtain the implicit expression for solitons and the explicit expressions for peakons and periodic cusp wave solutions. At the same time, we show that the limits of solitons and periodic cusp wave solutions are peakons. A short conclusion is given in Section 4. 2 Bifurcation and phase portraits of travelling wave system Let u = ϕ(ξ) with ξ = x − ct be the solution for Eq.(1.1); then it follows that ϕ satisfies an ordinary differential equation in ξ, where g is the integral constant. Let y = ϕ′; then we get a planar dynamical system. By the theory of planar dynamical systems (see [5]), for an equilibrium point of a planar dynamical system, if J < 0, then this equilibrium point is a saddle point; it is a center point if J > 0 and p = 0; if J = 0 and the Poincaré index of the equilibrium point is 0, then it is a cusp. By using the first-integral value and properties of equilibrium points, we obtain the bifurcation curves g_1(c), g_2(c) and g_3(c). Obviously, the three curves have no intersection point and g_3(c) < g_2(c) < g_1(c) for arbitrary constant c. Using the bifurcation method for vector fields (e.g., [5]), we have the following result which describes the locations and properties of the singular points of system (2.5). The phase portraits of system (2.5) are given in Fig.1. 3 Solitons, peakons and periodic cusp wave solutions The graphs of the homoclinic orbit, periodic orbit and their limit curve are shown in Fig.2. The following lemma gives the relationship of soliton solutions of Eq.(1.1) and homoclinic orbits of system (2.3). In Fig.2(a), the homoclinic orbit of system (2.3) can be expressed as (3.1). Substituting Eq.(3.1) into the first equation of system (2.3) and integrating along the homoclinic orbits, we have (3.5). It follows from (3.5) that (3.6) is the implicit expression for solitons for Eq.(1.1). We show the graphs of the solitons in Fig.3 under some parameter conditions. From Fig.3, we can see that when g_2(c) < g < g_1(c) and g tends to g_2(c), the solitons lose their smoothness and tend to peakons.
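For reference, the Fornberg-Whitham equation studied here is not reproduced in the extracted text; in the form commonly cited in the literature it reads as below, which is consistent with the peaked solution of [3] quoted in the Introduction (a peakon travelling with speed 4/3).

```latex
u_t - u_{xxt} + u_x + u\,u_x = u\,u_{xxx} + 3\,u_x u_{xx}
```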
Note the following facts: when g_2(c) < g < g_1(c) and g tends to g_2(c), the limit curve of such a homoclinic orbit of system (2.3) is a triangle composed of three line segments (see Fig.2(b)). Let g_2(c) < g < g_1(c) and let g tend to g_2(c); then we obtain the explicit peakon expression (3.17). Obviously, u has peaks at x − ct = 0. We show graphs of the peakons in Fig.4 under some parameter conditions. (2) In the phase portraits, the triangle curve corresponds to a peakon solution. We have the following lemma, similar to Lemma 3.1, which indicates the relationship of periodic wave solutions for Eq.(1.1) and periodic orbits of system (2.3). Conclusion In this work, by using the bifurcation method, we obtain the analytic expressions for solitons, peakons and periodic wave solutions for the Fornberg-Whitham equation, given as (3.6), (3.17) and (3.26), respectively. We also show the relationships among the solitons, peakons and periodic cusp wave solutions.
Estimation of Stature from the Length of Index Finger (2DL) and Ring Finger (4DL) in Nepalese Adults Establishing the identity of an individual in cases of mass disasters or when dismembered body parts are found is one of the significant Forensic investigations. Stature is a very important factor helping identity establishment in all cases of identification. With the aim to help cases of forensic identification, this study attempts to estimate the stature of an adult Nepalese from the length of the index finger (2DL) and the ring finger (4DL). The study was carried out on a randomly selected cross sectional sample of 250 adult Nepalese students of third and fourth year MBBS and BDS studying in Universal College of Medical Sciences Teaching Hospital (UCMSTH), Bhairahawa, Nepal. Stature, left and right 2DL and 4DL were measured and statistically analyzed. Pearson’s correlation coefficient was computed and Simple Linear Regression equations were derived to estimate stature from left and right 2DL and 4DL. The mean stature (171.06 cm) of males exceeds the mean stature (160.67 cm) of females. The Pearson correlation was higher between left and right 2DL and stature in males (Left 2DL r = 0.7 and Right 2DL r = 0.64) whereas it was higher between left and right 4DL in females (Left 4DL r = 0.8 and Right 4DL r = 0.8). Linear regression equations were derived for estimating stature in males and females from their left and right 2DL and 4DL. Scatter diagrams were plotted to see the association between the variables. Stature and body parts ratio being a population specific trait, we formulated simple linear regression equations to ascertain stature from 2DL and 4DL in the Nepalese population. INTRODUCTION Establishing the identity of an individual in cases of mass disasters or when dismembered/mutilated body parts are found is one of the most significant parts of Forensic investigation. Minor observations can play a vital role to help build the bigger picture. Estimating stature from measuring various body parts has been endeavored by many Anthropologists, Anatomists, and Forensic experts. Such computations are based on a principle that body parts have more or less consistent ratios comparative to the stature of a person [1,2]. For that purpose researchers have utilized various body parts like upper limb, lower limb, and dimensions of hand and foot [2]. Like other body parts, the length of the index and ring fingers also has more or less consistent ratios comparative to the stature of a person. The index finger is the second digit (2D) and the ring finger is the fourth digit (4D) of the hand [3]. Owing to the variations in morphology of different populations, the formula for stature estimation has to be population specific [13]. In view of lacking literature on the estimation of stature from the length of the index finger (2DL) and the length of the ring finger (4DL), the present research was taken up to report the correlation between index and ring finger length and stature in the Nepalese population. The aim of this study was to derive simple linear regression equations from the index and ring finger lengths in males and females to help estimate the stature in this population. MATERIAL AND METHODS The study was carried out on a randomly selected cross sectional sample of 250 adult Nepalese students (150 males and 100 females) of third and fourth year MBBS and BDS studying in UCMSTH, Bhairahawa, Nepal. 1. Measuring finger length (the method of [17] was employed): a.
Length of the Index finger (2DL) was measured as the distance from the midpoint of the metacarpo-phalangeal crease at the base of the index finger to the tip of the index finger. b. Length of the Ring finger (4DL) was measured as the distance from the midpoint of the metacarpo-phalangeal crease at the base of the ring finger to the tip of the ring finger. 2. Measuring stature: Stature was measured in centimeters to the nearest millimeter by making the subject stand erect and barefooted in anatomical position with the head in the Frankfort Horizontal Plane, from crown to heel, with a standard height-measuring instrument. The measurements were taken twice to avoid intra-personal variation and by two persons to avoid inter-personal variance. The mean of the measurements was then taken as the final measurement. Statistical Analysis The obtained data was tabulated and statistically analyzed using SPSS® for Windows, Version 12.0. Pearson's correlation coefficient was computed to understand the relationship between stature and 2DL and 4DL. Simple Linear Regression equations were derived to estimate stature from 2DL and 4DL, using stature as the dependent and 2DL and 4DL as independent variables. P < 0.05 was considered to be statistically significant. RESULTS The descriptive statistics of age, stature, length of left index finger (Lt. 2DL), length of right index finger (Rt. 2DL), length of left ring finger (Lt. 4DL) and length of right ring finger (Rt. 4DL) in males and females are shown in Table 1. The correlation observed between length and stature was statistically significant in all observations (Table 2, 3). The Pearson correlation obtained linking finger length and stature was found to be higher in females as compared to males. The correlation was higher between left and right 2DL and stature in males (Left 2DL r = 0.7 and Right 2DL r = 0.64) whereas it was higher between left and right 4DL in females (Left 4DL r = 0.8 and Right 4DL r = 0.8). Linear regression equations were derived for estimating stature in males (Table 2) and females (Table 3) from their left and right 2DL and 4DL. The associations between stature and left 2DL, right 2DL, left 4DL, and right 4DL among males and females are shown in scatter diagrams (Figures 1-8). DISCUSSION Forensic experts are investigating various body parts like head, face, hand, foot, phalanges, finger length etc. to estimate the stature of a person [1] because there are circumstances where only dismembered or mutilated body parts are available for medical examination. In mass disasters like airplane crash, earthquake, tsunami etc. parts of dead bodies are brought for identification of individuals, and if stature estimation can be done from that particular part, the time required for identification and possible victim matches will be lessened [14]. In situations where sophisticated methods are not available or where such methods have limitations, simple anthropological methods can have much usefulness [1]. Many studies have revealed the usefulness of finger measurements in stature estimation [1,4-16]. In a study involving five hundred Nigerians, Oladipo et al. found the mean stature in males and females to be 171.53 cm and 161.81 cm respectively [4]. In our study the mean stature of males was 171.06 cm and that of the females was 160.67 cm. Singh B et al.
studied the Kinnaur population of Himachal Pradesh and concluded that 2D provided the best stature estimates [14], which was also found in our study for males. A study on stature estimation from 2DL and 4DL in a North Indian population by Krishan et al. showed that stature can be estimated with reasonable accuracy from 2DL and 4DL [7], which was also observed in our study. However, the correlation coefficient obtained for males was 0.67 to 0.74 and for females was 0.36 to 0.53, which was lower than our findings. This disparity may be because that study was conducted on adolescents with age ranging from 14 to 18 years, whereas the age range for our study was 20 to 24 years. The hand-related measurement variables of males in Slovakia [18], Turkey [19], Egypt [20], Mauritius [21], North India [7,9,14] and South India [5] have been found to be larger than those of females, which was also found in our study. When comparing the parameters with gender difference, Bardale RV et al. [1] found the correlation coefficient to be higher in females than in males while estimating stature from 2DL and 4DL, which was also found in our study. However, a study by Oladipo et al. [3] and another study by Krishan et al. [7] found a higher correlation coefficient in males as compared to females. This variation may be owing to population differences, as those studies [3,7] were done in Nigerian and North Indian populations respectively, but our study was done in a Nepalese population. It has been stated by Myung MK and Yun H that the shapes of hands differ according to gender and race, and thus it is of immense significance to devise an equation in consideration of these variances to estimate stature [22]. Our attempt to find the correlation between the lengths of the index and ring fingers and the stature of males and females showed a statistically significant positive correlation between them, which was valuable in estimating the stature of a Nepalese adult. Similar findings have been documented by many researchers [1,4-16,22], but stature and body-part ratios being a population-specific trait, we pioneered the formulation of simple linear regression equations to ascertain stature from 2DL and 4DL in the Nepalese population. CONCLUSION Estimation of human stature to ascertain identity from various body parts is an important task. This being a quantitative trait affected by genetics and environment, population-specific studies are indispensable. The outcomes of our present study show that the lengths of the index and ring fingers can be used fruitfully to envisage the stature of a Nepalese adult. However, we recommend further specific studies involving different ethnic groups and races in Nepal for stature estimation. ACKNOWLEDGMENT The authors are grateful to all the subjects who participated voluntarily and cooperated for the success of the study.
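The regression approach used in this study, with stature as the dependent variable and finger length as the predictor, can be sketched briefly as follows. The numbers below are hypothetical illustrations, not the study's measurements, and the fitted coefficients are therefore purely illustrative.

```python
# Minimal sketch of deriving a stature-estimation equation from finger length with
# simple linear regression and reporting the Pearson correlation.
# The measurements are invented placeholders, not the study's data.
from scipy.stats import linregress

finger_length_cm = [6.9, 7.1, 7.4, 7.6, 7.8, 8.0, 8.2]          # e.g., left 2DL
stature_cm = [158.0, 161.5, 164.0, 167.5, 170.0, 172.5, 176.0]

fit = linregress(finger_length_cm, stature_cm)
print(f"Pearson r = {fit.rvalue:.2f}")
print(f"Stature (cm) = {fit.intercept:.1f} + {fit.slope:.1f} x 2DL (cm)")  # regression equation
```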
Auditing the Representation of Females Versus Males in Heat Adaptation Research
Women remain substantially underrepresented in sports science and sports medicine (SSSM) research, with just 4%-23% of studies across subdisciplines examining female-only populations (Cowley et al., 2021; Hutchins et al., 2021; Kuikman et al., 2022, 2023; Smith et al., 2022a). This is likely underpinned by numerous factors, including limited access to female participants (Emmonds et al., 2019), alongside deterrents around the perceived or real complexity and costs of implementing studies that consider menstrual status (Bruinvels et al., 2017). There is now recognition of the need to address the underrepresentation of female participants in SSSM research, including investigations of their specific responses to evidence-based protocols (Elliott-Sale et al., 2021). Indeed, SSSM guidelines are largely developed from research undertaken on male participants, without consideration of differential responses that may arise with sexual dimorphisms or sex-related sporting characteristics (Smith et al., 2022a). To streamline and prioritize research, a standardized audit protocol has been developed (Smith et al., 2022b) to identify themes in SSSM where there is minimal information on female-specific responses and a likelihood of unique or different sex-related considerations associated with optimal practice.
A research theme that would benefit from an audit of female representation is heat adaptation. Heat adaptation is induced via repeated exposures to artificial (i.e., heat acclimation) or naturally occurring (i.e., heat acclimatization) hot environments that induce physiological adaptations and subsequent improvements in exercise performance (Périard et al., 2021). Anecdotally, most heat adaptation literature appears to be conducted in men, meaning that current heat adaptation guidelines (Racinais et al., 2015) are likely underpinned by male-focused research. If correct, this may not be optimal for female athletes, given that they differ from males in body surface area (Anderson, 1999; Gagnon & Kenny, 2012a), sweating capacity (Gagnon & Kenny, 2011, 2012b), and their thermoregulatory profile across the menstrual cycle (MC; Charkoudian & Stachenfeld, 2014; Giersch et al., 2020). Furthermore, there is debate regarding whether the heat adaptation time course differs between sexes (Mee et al., 2015; Shapiro et al., 1980), with a recent female-only meta-analysis suggesting some physiological adaptations are likely induced more rapidly than previously stated (Kelly et al., 2023). Accordingly, to improve our understanding of the evidence base underpinning current guidelines, we conducted an audit to assess the quantity and impact of female representation in the published heat adaptation literature.
Methods
This audit was conducted according to established methods (Smith et al., 2022b).
Search Strategy
An electronic literature search of PubMed was conducted using the standardized search terms of the audit methodology (Smith et al., 2022b), with heat adaptation terms of (a) (acclim* OR adapt* OR thermoreg* OR sweat* OR training) AND (b) (heat OR hot OR warm). A complete list of search terms is available in Supplementary Material S1 (available online). Searches were restricted to original research papers on human participants, published in English, without date restrictions and current to February 14, 2023. Review articles were screened for additional relevant papers not detected in the primary search.
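For illustration only, a minimal Python sketch of how the Boolean search string described above could be assembled programmatically is given below. The audit itself ran the search directly in PubMed; submitting the query through NCBI's E-utilities is mentioned in the comments as a possibility, not as part of the original protocol.

```python
# Assemble the PubMed Boolean query from the two term groups quoted above.
heat_adaptation_terms = ["acclim*", "adapt*", "thermoreg*", "sweat*", "training"]
environment_terms = ["heat", "hot", "warm"]

group_a = " OR ".join(heat_adaptation_terms)
group_b = " OR ".join(environment_terms)
query = f"({group_a}) AND ({group_b})"

print(query)
# -> (acclim* OR adapt* OR thermoreg* OR sweat* OR training) AND (heat OR hot OR warm)

# The string could be pasted into the PubMed search box or, hypothetically,
# submitted via NCBI's E-utilities API using an HTTP client of choice.
```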
Data Extraction Papers were screened using Covidence systematic review software (version 2636, Veritas Health Innovation) by a combination of two independent authors (Kelly, Smith, Brown, Jardine, Convit, Carr, and Snow), with conflicts resolved by a third author (Carr and Snow).Inclusion criteria comprised: a.Heat adaptation being the primary or secondary outcome of interest and heat adaptations induced and/or observed via exercise-based heat acclimation (Périard et al., 2021), passive heat acclimation (i.e., sauna and spa; Heathcote et al., 2018), and heat acclimatization.Heat acclimatization studies were separated into either heat acclimatization (relocation) or seasonal heat acclimatization (Brown et al., 2022).Studies employing multiple methodological approaches (i.e., exercise and passive heat acclimation) were termed "combined heat exposure." b.A minimum ambient temperature or wet-bulb globe temperature of 25 °C (Benjamin et al., 2019;Guy et al., 2015) was required for heat adaptation intervention studies. If a paper included several separate studies, each data set was individually analyzed.Hereafter "paper" refers to the entire publication, and "study" refers to the discrete investigations within a paper.A summary of extracted data per the audit framework (Smith et al., 2022b) is displayed in Table 1. Statistical Analyses Analyses were conducted using Stata 17 (StataCorp.2021.Stata: Release 17. Statistical Software.StataCorp LLC) with significance accepted at an α level of p < .05.Frequency-based metrics (population, athletic caliber, menstrual status, research theme, and heat adaptation exposure) were reported as counts and percentage(s) of the total studies/participants. Inspection of histograms for journal impact factor (IF), Altmetric scores, Field-Weighted Citation Impact (FWCI), and male/female sample sizes revealed a positive skew.A Mann-Whitney U test was used to examine median numbers of male and female participants, and a Kruskal-Wallis test used to determine significant differences in journal IF, Altmetric scores, and FWCI.A post hoc Dunn test with Bonferroni correction was used to assess differences, with data reported as median ± interquartile range.The number of studies reaching Altmetric scores >20 was assessed in a binary manner to describe studies receiving greater attention than others. Results A total of 477 studies involving 7,707 participants across 595 heat exposures were included (Figures 1 and 2a).The majority of participants were male (86.8%; n = 6,686 males; Figure 2a), while 26.4% of studies included at least one female participant.A comprehensive summary of results is provided in Supplementary Material S2 (available online). Population and Sample Size Male-only participants are included 16 times more than femaleonly cohorts in heat adaptation research (n = 5,672 and n = 360, respectively; Figure 2a).In studies evaluating sex differences, the number of male and female participants was similar; however, males outnumbered females in mixed-sex cohorts and male versus female (MvF) subanalysis (Figure 2b).Female-only cohorts accounted for 6.1% of studies, while of the 126 studies that included one (or more) female participant(s), 54.0% utilized a mixed-sex cohort design (Figure 2b). 
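As a minimal sketch of the comparisons described in the Statistical Analyses section above (a Mann-Whitney U test for male versus female sample sizes and a Kruskal-Wallis test across populations), the following Python/SciPy version may help make the procedure concrete. The audit itself used Stata 17, and all values below are illustrative placeholders rather than the audited data.

```python
import numpy as np
from scipy import stats

# Illustrative per-study sample sizes (placeholders, not audit data).
male_n   = np.array([12, 9, 15, 8, 20, 10, 11, 14])
female_n = np.array([4, 6, 3, 8, 5, 2, 7, 4])

# Mann-Whitney U test comparing median male vs. female sample sizes.
u_stat, p_mwu = stats.mannwhitneyu(male_n, female_n, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mwu:.3f}")

# Kruskal-Wallis test across population groups (e.g., journal impact factor).
if_male_only   = np.array([3.1, 2.9, 3.4, 3.2])
if_female_only = np.array([3.5, 3.6, 3.3, 3.7])
if_mixed       = np.array([3.0, 3.2, 2.8, 3.1])
h_stat, p_kw = stats.kruskal(if_male_only, if_female_only, if_mixed)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.3f}")

# A post hoc Dunn test with Bonferroni correction, as used in the audit, could
# follow for pairwise contrasts (e.g., via the scikit-posthocs package).
```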
Female-and male-only studies had a median sample size of 12 and 13, respectively (p = .318,Figure 2c).In mixed-sex cohort studies, there was a consistently larger median (±interquartile range) sample size of men (9 ± 7) than women (4 ± 6, p < .001),with no difference in the median sample sizes in MvF design features and subanalyses (p = .579and p = .872,respectively, Figure 2c).There were 36 male and seven female para-athletes represented in studies, with no female-only para-athlete studies.Since 1943, the annual average number of participants involved in heat adaptation research has been 82 men compared to 12 women.The more recent average, from 2012 to 2022, involves an increase in total participants (322 men and 44 women, annually) without any change in this ratio (7 male: 1 female participant).There was an annual average (±standard deviation) of 0.4 (±0.6) studies in which females were investigated in isolation, compared to 4.4 (±5.3) male-only studies (Figure 2d).From 2012 to 2022, there were 0.8 (±0.9) female-only studies compared to 15.1 (±5.4) maleonly studies published yearly (Figure 2d). Athletic Caliber Only 31.0%(n = 2,387) of all participants were able to be classified to a specific tier, irrespective of sex, with the remainder of studies providing insufficient information.Of the total classifiable population, most (n = 986) were classified as Tier 2 caliber (Table 2), with 39.9% (n = 407) of female participants and 29.6% (n = 1,980) of male participants classified (Table 2). Menstrual Status Forty-seven of the 126 studies (37.3%) that included female participants attempted to define menstrual status, comprising 20 studies examining naturally menstruating (NM) women, five with hormonal contraceptive (HC) users, 21 utilizing a mixed but identifiable female cohort, and one examining menstrual irregularities.Regarding classification, 34 studies that defined menstrual status were ungraded (unable to classify menstrual status due to insufficient information), with 11.5 studies rated as bronze and 1.5 studies as silver standard (Figure 3).No studies achieved a gold standard of menstrual classification (Smith et al., 2022b; Figure 3).Studies including NM women achieved a mean of 0.35 of the five criteria required to justify eumenorrheic classification (Elliott-Sale et al., 2021), with a high score of 4 achieved by one female-only study. Performance Versus Health Research Themes The majority of participants (70.4%, n = 5,427) were in studies examining indirect markers of performance/health, with female participants accounting for 11.9% (n = 647) of the total sample in this theme (Table 3).When stratified by study design, the percentage of studies investigating indirect makers of performance/health outcomes was 69.0% for female-only, 74.9% for male-only, and 61.8% for mixed-sex cohort studies (Figure 4).There were no studies in female-only or MvF investigations targeting health outcomes (Figure 4). 
Journal and Study Impact The median (±interquartile range) IF of journals where studies were published was 3.19 (±0.93), with no differences between populations (all p > .05;Table 4).Forty-nine percent of studies were eligible for Altimetric scores (based on year of inception).The median Altmetric score was 8.0 (±20.0)across all studies and populations, with no between-population differences (all p > .05;Table 4).There were 38 (10.8%) male-only studies, five (17.2%) female-only studies, 17 (25.0%)mixed-sex cohorts, one (4.3%)MvF design features, and zero (0%) MvF subanalysis with Altmetric scores >20, with no between-population differences (all • Silver/bronze (achieve some methodological considerations but not all) • Ungraded (menstrual status defined but insufficient methodological control to award bronze/silver/gold) • Unclassified (insufficient information to classify participants) Research theme • Performance outcomes (performance outcome following an intervention or associated with a topic of interest) • Health outcomes (related to health status or condition) • Indirect markers of performance/health (studies measuring a physiological/psychological adaptation or response) Study impact For studies with multiple heat adaptation exposures, each investigation was included, and total number of exposures counted separately. p > .05).Sixty-one percent of studies were eligible for FWCI (based on year of inception), with a median score of 0.75 (±1.26) across all studies.There were no between-population differences (all p > .05),with MvF subanalysis the only population to achieve a median FWCI >1 (median score of 1.74; Table 4). Heat Adaptation Exposure Our examination of the type of heat exposure involved in studies found 437 exercise heat acclimation interventions, while 77 involved passive heat acclimation (e.g., spa and sauna), 34 examined heat acclimatization (relocation), 32 seasonal heat acclimatization, and 15 used a combination of heat exposures.Exercise heat acclimation interventions included 5,405 participants, with 14% of those being female (Figure 5a).The percentage of studies within population groups investigating exercise heat acclimation ranged between 71% and 93%, compared to 3% and 17% for passive heat acclimation (Figure 5b).There was no heat acclimatization (relocation) or combined heat exposure studies involving MvF design features (Figure 5b).There were 25 female-only exercise heat acclimation interventions (compared to 314 male-only interventions).Of the 23 studies that utilized MvF design features, 21 studies focused on exercise heat acclimation, one examined seasonal heat acclimatization, and one investigated passive heat acclimation exposure. 
Discussion
We examined the representation of female and male participants involved in investigations of heat adaptation using a recently developed audit methodology (Smith et al., 2022b). We aimed to assess the quantity and impact of female representation in the published heat adaptation literature currently underpinning best-practice guidelines. Women accounted for only 13% of the total participants, with most studies investigating exposures involving exercise heat acclimation. No study achieved gold-standard methods for the classification and control of menstrual status. Of all female participants, only 13% could be classified as Tier 3 (highly trained/national) or greater. These results demonstrate that the direct relevance of current guidelines for heat adaptation practices for female athletes is limited.
Population and Sample Size
Women accounted for 13% of the total participant pool, consistent with prior work examining female representation in the broader exercise thermoregulation research (11.6%-17.8%; Hutchins et al., 2021), and at the lower end compared with other studies profiling female inclusion using the same audit tool (11.0%-71.0%) among other SSSM subdisciplines (Kuikman et al., 2022, 2023; Smith et al., 2022a, 2022c). Female-only heat adaptation regimens accounted for 6.1% of all studies, consistent with the prevalence of female-only studies previously reported for carbohydrate fueling (4%-6%; Kuikman et al., 2022, 2023), and in SSSM research more broadly (Cowley et al., 2021). Of note, we identified 22 papers that failed to state participant sex or male-to-female ratios, including seven papers published between 2016 and 2021, alluding to a default interpretation of male participants as the norm in heat adaptation research.
Studies examining mixed-sex cohorts included approximately twice as many male as female participants. Possible explanations for this may be that females are harder to recruit (Emmonds et al., 2019), a reduced capacity to fund or support the involvement of women (e.g., costs to facilitate MC control; Smith et al., 2022a), or that females are included to boost total sample sizes rather than to allow between-sex investigations. Given the differing body surface area (Gagnon & Kenny, 2012a), sweating capacity (Gagnon & Kenny, 2011, 2012b), and varied thermoregulatory profile across the MC or with HC use (Charkoudian & Stachenfeld, 2014; Giersch et al., 2020), a mixed-sex cohort design without statistical power to examine responses between sexes may not be appropriate in some investigations of heat adaptation, particularly those of a mechanistic nature. Heat adaptation studies comparing responses between the sexes were also limited (6.1%), consistent with previous audits (Kuikman et al., 2022; Smith et al., 2022a, 2022c). The complex and often costly nature of study designs associated with high-quality female research (e.g., MC phase verification via blood samples and ovulation testing) may discourage researchers from undertaking such investigations. The current gap in knowledge around sexual dimorphisms in heat adaptation calls for the urgent attention of heat researchers, and for careful consultation of the current best-practice guidelines for including females in SSSM research, including a degree of MC phase control and HC use reporting (Elliott-Sale et al., 2021).
Athletic Caliber
The training/performance level of participants could not be classified in 69% of the studies included in this audit, indicating that, irrespective of sex, there is currently poor consideration or identification of the caliber of athletes involved in heat adaptation research. The absence of appropriate classification of study participants (De Pauw et al., 2013; Decroix et al., 2016; McKay et al., 2022) may prevent study results from being correctly applied or translated to relevant athletic populations (Kuikman et al., 2022; Smith et al., 2022a, 2022c). Of those who could be classified, most participants were categorized to lower athletic calibers (Tiers 0-2; 75.6% of male and 67.3% of female participants), suggesting that caution is needed when transferring the results of studies to high-performance athletes. However, of the studies of female athletes classed in Tiers 3-5 (32.7%; n = 133), 53% of studies (k = 9) were published since 2012, suggesting that in recent years there has been an increased focus within the literature on female-specific research, more robust study designs, and/or increased reporting of heat adaptation interventions. There was one female-only study classified as Tier 5 (n = 12), compared to three male-only studies (n = 15), identifying a general lack of investigations of participants of the highest athletic caliber, irrespective of sex. Given the near parity between the sexes as competitors in major sporting competitions (e.g., 49% of athletes at the 2020 Tokyo Olympic Games, held in 2021, were female; International Olympic Committee, 2021), the underrepresentation of high-caliber female athletes in heat adaptation research presents a key area for future study. The challenges of conducting high-quality research in this population, which include the difficulty of making changes to periodized training programs that are specifically targeted to major national or international events and championships, while maintaining the requirements for robust research study designs, are acknowledged.
Figure 3 - The number of studies including females classified according to the standard of the methodological control concerning ovarian hormonal profiles (Smith et al., 2022b); unclassified: k = 79.
Menstrual Status The classification and methodological control of menstrual status was extremely poor, with only 10% of studies reporting adequate methodological design around the categorization of menstrual status.Of the 126 studies including female participants, 63% made no attempt to classify menstrual status which is consistent with previous SSSM audits (Kuikman et al., 2022;Smith et al., 2022aSmith et al., , 2022c).The highest level of classification and methodological control identified across studies in this audit of both NM females and those using HC was "silver standard," defined as achieving some methodological considerations but not all (Smith et al., 2022b).Moreover, no study achieved the five criteria required to justify NM females as eumenorrheic (Elliott-Sale et al., 2021), meaning their true menstrual status was unconfirmed or inconclusive.Twenty-one studies (45%) included female participants with varying menstrual status (e.g., NM females and those using HC), which is likely representative of the current ratio of females using HC (50%), who are NM or with menstrual irregularities (50%; Martin et al., 2018), and reflective of the challenges of recruiting a sufficiently large population with uniform MC status.Importantly, when investigating this population, appropriate reporting of the MC phases for female participants is suggested within the current literature as outlined in recently published recommendations (Elliott-Sale et al., 2021) to account for the different hormonal profiles of females. The consideration and control of the MC and HC use in mechanistic heat adaptation research are an important design factor, especially for measures of resting and exercise core temperature.The dynamic hormonal profile of the MC leads to changes in core temperature, including 0.3 °C increase in resting core temperature during the luteal phase (Kolka & Stephenson, 1997;Stephenson & Kolka, 1993).This temperature increase at rest (Giersch et al., 2020) could potentially be mitigated by behavioral thermoregulation-mediated adjustments in pacing during selfpaced exercise in the heat (Lei et al., 2017).Moreover, the type of progestin and ratio of progestin to estradiol in HC use can also influence resting and exercise core temperature (Baker et al., 2020;Rogers & Baker, 1997).A key design consideration is the scheduling of testing sessions, such as heat tolerance tests, which are typically conducted at fixed intensities, to occur within the same phase of the MC and/or HC use.If they were to occur in differing phases of the MC, that is, follicular compared to luteal phase, and/ or differing phases of HC use, the change in core temperature at comparative time points may mask any influence of heat adaptation, that is, identification of possible core temperature reductions with heat adaptation.While "gold standard" (Smith et al., 2022b) is the highest level of MC control, "silver standard" (Smith et al., 2022b) is likely a more feasible method to implement within mechanistic heat adaptation research of females, allowing physiological changes to be more confidently attributed to the heat adaptation exposure.Figure 4 -The percentage of studies in each research theme: performance (direct performance outcomes), health (outcomes related to health status/ condition), and indirect associations with performance/health (physiological or psychological adaptation/response that may subsequently transfer to athletic performance/health; Smith et al., 2022b).MvF design features refer to studies with a 
purposeful methodological design to investigate differences in the intervention response between the sexes, while MvF subanalysis describes studies in which sex-based comparisons were completed within the statistical procedures, but this was not a primary study aim.Note.FWCI = Field-Weighted Citation Impact; IQR = interquartile range; MvF = male v female.MvF design features refer to studies with a purposeful methodological design to investigate differences in the intervention response between the sexes, while MvF subanalysis describes studies in which sex-based comparisons were completed within the statistical procedures, but this was not a primary study aim. Research Themes Outcomes of heat adaptation research typically focus on physiological changes such as core temperature, heart rate, and sweat rate.Accordingly, 73% of studies in this audit examined indirect markers of performance/health, with the distribution similar for female-only (69.0% of studies), male-only (74.9%), and mixed-sex cohorts (61.8%).There were 79 male-only studies (n = 1,384 males) investigating performance outcomes compared to just nine female-only (n = 125 females) investigations.The classification of only 133 female athletes as Tiers 3-5 (highly trained/national or greater; McKay et al., 2022) in this audit likely limits the applicability of current study findings to high-performance female athletes.The inclusion of elite female athletes in studies examining performance outcomes following heat adaptation exposures therefore presents a clear area for future research.There were no female-only or MvF investigations (either subanalysis or design features) in the health research theme.Yet, in the 2020 (2021) Tokyo Olympic and Paralympic Games, females were reported to be at a significantly greater risk of all illness (Soligard et al., 2023) and registered more illnesses compared to males (Derman et al., 2023), respectively.Unsurprisingly, investigation of immune and/or wellness markers following heat adaptation is limited in females (Alkemade et al., 2022).For high-caliber female athletes, an evidence-based understanding of illness risk following heat adaptation is likely of great interest, and an area of future research. Journal and Study Impact The IF of journals in which studies were published was similar to that previously reported across other SSSM disciplines (Smith et al., 2022a(Smith et al., , 2022c)), with a range from 3.54 for female-only studies and 2.97 for MvF design feature studies.The absence of between-population differences in this audit might suggest minimal incentive for researchers to conduct more challenging research designs, such as including adequate MC control in heat adaptation investigations, for a potential reward for publication in higher IF journals.However, this is speculative. Altmetric scores were available for 49% of studies in this audit (i.e., those published since 2012), with a median Altmetric score of 8.0 and no between-population differences.When considering studies achieving Altmetric scores >20 (delineating studies receiving greater attention than their peers; Smith et al., 2022b), there were also no statistically significant population differences.This suggests that despite the small absolute number of femaleonly and MvF studies in this audit, interest does not differ between populations in heat adaptation literature. 
In this audit, 61% of studies were eligible for FWCI (i.e., those published since 1996), with a median score of 0.75 and no between-population differences. MvF subanalysis was the only population to achieve a median FWCI >1 (median score of 1.74), indicating that the outputs of this research are cited more often than expected according to global averages. It should be noted, however, that this subgroup had the lowest representation of studies (k = 6 studies in total) and should therefore be interpreted with caution.
Heat Adaptation Exposure
Exercise heat acclimation was the most frequently investigated exposure, with 437 regimens across 363 studies, including 5,405 participants, of which 730 (13.5%) were female. The representation of women ranged between 9% and 19% across all heat adaptation exposures, with no studies employing MvF design features for heat acclimatization (relocation) or combined heat exposures.
The use of heat adaptation protocols (i.e., heat acclimation/acclimatization) prior to competition in the heat by elite athletes, both male and female, has been investigated in recent years (Galan-Lopez et al., 2023; Périard et al., 2017; Racinais et al., 2020, 2022), with a greater percentage of females than males training in the heat prior to some competitions (Périard et al., 2017; Racinais et al., 2020). However, in one recent study, females were reported to exhibit less heat-related best-practice knowledge compared with males, with females more likely to report not knowing the maximum environmental conditions and wet-bulb globe temperature expected at the competition location (Galan-Lopez et al., 2023). In the same study, only 33.3% of female athletes reported undertaking heat acclimatization and only 12.5% reported utilizing heat acclimation (Galan-Lopez et al., 2023). Given the limited number of heat acclimatization (relocation) studies including female participants in this audit, and the absence of studies utilizing MvF design features, future research is required to help inform evidence-based heat adaptation recommendations for female athletes. This includes a specific focus on improving the educational resources available to female athletes to translate research knowledge into practice.
Recommendations for Female Heat Adaptation Research
This audit has confirmed and emphasized that females are underrepresented in heat adaptation research. This finding highlights the need for more female-focused heat adaptation research to form the basis for evidence-based guidelines for females preparing for exercise and/or sporting competition in hot conditions.
The following specific recommendations are therefore provided to sports science researchers:
• Researchers are encouraged to refer to current best-practice guidelines on the inclusion of females in sport and exercise research (Elliott-Sale et al., 2021) when designing research studies, and to employ methodological designs that appropriately consider and report the potential impact of the female MC on study outcomes.
• Researchers are encouraged to conduct heat adaptation research in high-caliber female athletes (Tiers 3-5) with a focus on performance outcomes. This should include the profiling of thermoregulatory and physiological adaptations to inform evidence-based guidelines for heat adaptation protocols to prepare female athletes for competition in hot environments.
Conclusions
Our audit demonstrates the underrepresentation of women across all categories of heat adaptation research, with females accounting for just 13% of all participants. This is compounded by inadequate classification and control of menstrual status, alongside a lack of elite female athletes as participants. As such, the specific applicability of current research to the high-performance female athlete is limited. Researchers planning future heat adaptation interventions in female athletes are advised to adopt methodological approaches that consider the potential impact of sexual dimorphism on study outcomes. Methodological considerations include MC control within study designs and robust classification of participants, specifically regarding athletic caliber. The inclusion of these elements within the study design will contribute valuable data on the time course of physiological adaptations, providing a basis for future recommendations and evidence-based guidelines for female athletes preparing for exercise or competition in hot conditions.
Figure 1 - Flowchart demonstrating the screening process for the audit and the total number of individual studies included for extraction. The total number of heat adaptation exposures included within each study was counted, with studies often investigating one or more heat adaptation exposures.
Figure 2 - (a) The total number of male and female participants included in the audited studies, (b) the percentage of studies published in each population, (c) the median (±IQR) number of male and female participants per study, and (d) a histogram displaying the number of heat adaptation studies published per year in male- or female-only cohorts between 1943 and 2023. MvF design features refer to studies with a purposeful methodological design to investigate differences in the intervention response between the sexes, while MvF subanalysis describes studies in which sex-based comparisons were completed within the statistical procedures but were not a primary study aim. IQR = interquartile range; MvF = male v female.
Figure 5 - (a) Total number of male and female participants and (b) the percentage of studies in each heat adaptation exposure. MvF design features refer to studies with a purposeful methodological design to investigate differences in the intervention response between the sexes, while MvF subanalysis describes studies in which sex-based comparisons were completed within the statistical procedures, but this was not a primary study aim.
Table 1 - A summary of extracted data from heat adaptation studies included in this audit, according to established methods (Smith et al., 2022b); menstrual status was classified per best practice as previously established (Elliott-Sale et al., 2021).
Table 2 - Total number and percentage of male and female participants classified into exercise training tiers (McKay et al., 2022). Note: only classified participants are displayed in the table.
Traceability Technology of DC Electric Energy Metering for On-Site Inspection of Chargers The on-site inspection of high-power DC chargers results in new DC high-current measurement and DC energy traceability system requirements. This paper studies the traceability technology of electric energy value for automotive high-power DC chargers, including: (1) the traceability method of the built-in DC energy meter and shunt of the charger; (2) precision DC high current and small precision DC voltage output and measurement technology. This paper designs a 0.1 mA ∼ 600 A DC high current measurement system and proposes a 0.005 level DC power measurement traceability system. The uncertainty evaluation experiment of the DC power measurement calibration system and the high-power DC charger’s on-site calibration experiment results verify the method’s effectiveness and feasibility in this paper. The experimental results show that the combined standard uncertainty of the DC power metering verification system can be 0.0451%. Introduction The AC power transmission, transformation, and electricity consumption have dominated the power structure in the world.Therefore, in electric energy measurement, AC power is the mainstay.However, with the rapid development of distributed power sources and electric vehicles, the DC power supply equipment has been widely used in recent years.In such cases, the accuracy of DC power measurement must be guaranteed to ensure the fairness and impartiality of trade settlement in DC power measurement [1][2][3].Hence, studying and establishing a DC energy value traceability system is significant. The DC power supply device is a considerable part of the electric vehicle charging facilities, called a DC charger (referred to as a charger).The core of DC charger energy measurement is its built-in DC energy meter [4][5][6].Therein, the current commonly used in DC charger is up to 300 A, and the DC current in rail transit often exceeds 1000 A. In terms of DC verification traceability equipment, Fluke 6100 A can only trace DC current or voltage; Fluke 5520 A can achieve power calibration, and the measurement accuracy of DC voltage can reach 50 ppm; LMG 95 can measure DC voltage of 600 V, and its DC current measurement range is 150 mA-20 A, and its DC power measurement accuracy is 0.03%.However, none of the above devices have the function of accumulating electricity, and then they cannot directly verify the DC energy meter. With the wide application of high-power DC chargers, the development trend of DC energy metering and verification equipment is high current and high precision [7][8][9].All countries have gradually established and improved regulations and standards involving DC energy measurement, where more than ten international standards for DC chargers have been released for this issue [10][11][12].For example, IEC 62052-11: 2020 "Electricity Metering Equipment-General Requirements, Tests and Test Conditions-Part 11: Metering Equipment" and IEC 62053-41: 2021 "Electricity Metering Equipment-Particular requirements-Part 41: Static Meters for DC Energy (Classes 0.5 and 1)", have been prepared by IEC Technical Committee 13: Electrical energy measurement and control.IEC 62052-11 specifies the general requirements for DC energy meters and AC energy meters, including such requirements and associated test methods as nominal value, structure, marking, electrical, climate, EMC, etc. [13]. 
On-site verification of chargers is one of the critical tasks to ensure fair trade.On-site verification of DC chargers needs to solve the problem of DC high current measurement [14][15][16][17].At present, the resistance method, Hall method, and DC comparator are mainly utilized, and their accuracy can reach: 0.2%/year, 0.5%/year, and 5 ppm/year, respectively.The DC comparator is a traditional DC measurement and comparison method with a narrow dynamic range and cannot track the rapid changes in current during the charging process.In addition, as the equipment of chargers, the current often contains a certain AC component.When the charger is verified on-site, it is necessary to achieve broadband current measurement and comparison [18][19][20].Paper [21] studied the integrated dual-mode DC power metering device for DC distribution networks.However, the effective voltage measurement range is 375 V, which is not suitable used in the scenarios of the DC voltage higher than 375 V.In [22], the DC electric energy measurement method for the domestic electric vehicle charging facilities is explored, in which the allowed measuring range of the current is lower than 250 A. Whereas, at present, the charging current is always higher than 600 A. In [23], the anti-DC bias energy meter based on the magnetic-valve-type current transformer is proposed.In the proposed approach, the magnetic-valvetype current transformer is used as the current sensor in the meter instead of the conventional current transformer.Although the proposed method can validly resolve the issue of the DC bias, how to integrate it to form a DC electric energy metering system for the on-site inspection of the DC charger is fuzzy.With the technological advancement of related equipment, e.g., batteries and conductive cables, and users' requirements for shortening the charging time, the DC current level will be larger, and the accuracy of DC energy measurement will be higher [23].With the technological advancement of related equipment, e.g., batteries and conductive cables, and users' requirements for shortening the charging time, the provision of DC current level will be larger, and the accuracy of DC energy measurement will be higher [24]. To the best of the authors' knowledge, the method for on-site inspection of high-power DC chargers is still insufficient.Aiming at the on-site inspection requirements of high-power DC chargers, this paper investigates the traceability method of DC chargers, the traceability methods of the builtin measuring instruments (DC energy meters and shunts) of chargers, and high-precision large DC current and DC.The output and measurement technology of precise micro voltage realizes highprecision measurement of DC current with a range of 0.001-600 A and DC measurement traceability with an accuracy of 0.005.It attains the uncertainty evaluation of the traceability system and conducts verification experiments on DC chargers.The contributions of this article are summarized as follows: (1) the traceability method of the built-in DC energy meter and shunt of the charger is proposed. (2) The technology of the precision DC high current and small precision DC voltage output and measurement is given.(3) Design a high-precision DC current measurement system with a range of 0.001-600 A, and propose a DC energy measurement traceability system with an accuracy of 0.005. 
On-Site Inspection of DC Charger 2.1 Overall Plan for On-Site Inspection of DC Chargers The on-site verification of the DC charger can be roughly classified into two kinds of methods: actual load verification and virtual load verification.The actual load verification is implemented when charging the electric vehicle.This method can only verify the charger under the existing working conditions and cannot verify the entire metering performance of the charger.The virtual load verification provides different load points for the on-site verification through the virtual power load to confirm the full range of the charger.Regardless of whether it is a virtual load or an actual load, a standard electric energy meter, i.e., a charging machine field calibrator, is required to compare with the measurement data of the impact machine. According to the calibration requirement, the rated voltage of the charger is 100∼750 V, the reference current is 10∼500 A, and the accuracy is Class 1 or 2. The first step in verifying the DC charger is for the standard device to confirm the status and parameters of the charger.In other words, the standard device should exchange information with the charger to determine whether the charger and its parameters can be verified.This process is called interaction. The second step is the formal calibration.The DC charger is connected to the charger calibrator (built-in special charging socket) through a particular electrical connection line.The other end of the charger calibrator is linked to the power load.When the charger provides DC power, the charger calibrator should accurately measure its electric energy value.In Fig. 1, the working principle of the calibration is depicted.The detailed process is as follows: (1) connect the charging gun of the DC charger to the input end of the charger calibrator; (2) connect the output end of the calibrator to the load; (3) adjust the parameters of the load; (4) compare the electric energy values measured by the calibrator and the tested charger at different load points simultaneously; (5) calculate the error of the tested charger. High-Precision Large DC Current Output Technology The verification device should output the high voltage and large current for the verification of the direct-input energy meters and shunts.Given the measurement range of the DC energy meter, the maximum voltage and current are 1000 V and 600 A. Although the 1000 V DC voltage output technology has been explored diffusely, the high-precision and stable DC current output source of up to 600 A are problematic in the current DC energy meter verification technology. In contrast, two leading methodologies exist for outputting high-precision large DC current. (1) The adjustable voltage source is used to output the high voltage, which is converted into current by the shunt.The output of the adjustable voltage source is controlled by the negative feedback circuit for the voltage at both ends of the shunt to attain the stability and accuracy of the large current output [25]. 
(2) Use the switching power supply to generate a large current.This scheme is basically the same as the previous scheme.The output of the power supply is adjusted through negative feedback as well.The difference is that the feedback adjustment is performed by monitoring the magnitude of the current through the Hall sensor in the negative feedback loop.By monitoring the output of the Hall device and the size of the compensation current, the primary current value can then be obtained.Through this method, the large current generated by the switching power supply can be monitored.Furthermore, the negative feedback and numerical output are obtained. Precision Small Signal Voltage Output Technology When outputting a small DC voltage, it is easily affected by noise and temperature changes, thus affecting the output stability and measurement accuracy.Therefore, it is necessary to keep the temperature of the measurement loop uniform to reduce the error caused by temperature when designing the hardware.Furthermore, the four-wire terminal button method and the anti-interference line can be employed to measure small signals to reduce the influence of noise and keep the output stable and accurate.In Fig. 1, the typical precision small-signal voltage source schematic block diagram is presented.The precision small-signal voltage output first provides a voltage reference signal through the central control unit.This signal and the feedback signal reach a stable voltage value through the integrator and output a DC small voltage with high stability and low noise through the power amplifier.What's more, the output voltage collects its voltage signal through the 1000:1 precision V/V conversion standard and converts it into a standard small voltage signal.The instrument amplifier amplifies the signal and then uses it as a feedback signal, thus forming a closed-loop negative feedback system. The design uses constant temperature and ultra-low noise voltage reference, precision V/V conversion standard, and high-resolution ADC and DAC to achieve a small voltage signal accuracy of 0.01% to 0.05%.This meets the requirements of indirect-connected electric energy meters and traceability requirements. The Main Challenges of On-Site Verification of Chargers Compared with the laboratory, the on-site calibration of the charger is relatively harsh, majorly reflected in: (1) the temperature and humidity range is large −40°C∼55°C, 20%∼90% RH; (2) the dynamic range of the power supply of the charger is wider, and the surge current is larger, especially the fast DC may generate a large current of several hundred Amperes; (3) on-site calibration work requires that the test equipment is portable and has certain seismic performance. The above problems require that the equipment used for the on-site verification of the charger should have good environmental adaptability.The solutions are as follows: (1) The current measurement circuit of the primary standard used for on-site verification of the charger cancels the mechanical or electrical contacts, To ensure that there is no instantaneous short circuit (such as protection, onetime shift, etc.) 
in the current loop, the instrument should have a solid ability to resist sizeable current impact.(2) All instruments and equipment components adopt wide temperature devices and cooperate with excellent circuit design and manufacturing process to minimize the impact of large-scale changes in temperature and humidity on measurement.(3) The major standard devices are installed in the seismic portable instrument box.The box body is equipped with rollers, which are easy to drag; the interior is filled with a large amount of buffer material. Design of DC Energy Value Traceability System According to the on-site inspection requirements of DC chargers, this paper constructs a DC energy value traceability system.It is necessary to measure the magnitude of voltage, current, and time to measure DC energy.Therefore, to realize the traceability of DC energy measurement, it is required to trace values of voltage, current, and time, respectively. In the DC energy value traceability system designed in this paper, the accuracy of DC energy is up to 0.005%.The accuracy of time measurement can be up to 10 −6 , and the impact on electrical energy is negligible.Therefore, the DC voltage and current measurement error should be less than 0.0025%.The traceability system diagram of the DC energy value designed in this paper is shown in Fig. 2. The principle of DC voltage's traceability: a voltage comparison test is performed by outputting a stable voltage signal from a high-stable DC source and connecting the voltage measurement circuit of the eight-and-a-half-digit meter in parallel with the voltage measurement circuit of the detected DC standard energy meter.To ensure the measurement accuracy of the eight-digit half-digit meter, a 0.001-level precision AC and DC voltage divider is used to perform V/V conversion.The DC large voltage of 10 mV∼1150 V is converted into a DC voltage of 1 V and then through the 3458 A digital multi-purpose.The overall measurement accuracy equals to the proportional accuracy of the voltage divider (10 ppm) + the 1 V range measurement accuracy of the 3458 A (about 4.5 ppm), which is better than 0.0025% (25 ppm). The principle of DC current traceability: connect the current measurement circuit of the eight-anda-half-digit meter in parallel with the current measurement circuit of the tested DC standard electric energy meter to conduct a current comparison test.Therefore, it requires measuring the DC voltage indirectly.Furthermore, it demands measuring the DC current of 0.1 mA∼600 A with accuracy within 0.0025%.The maximum DC current measurement capability of mainstream commercial 8 1 / 2 -digit digital multi-meters is 20 A. Hence the indirect transfer measurement method must be the same as the voltage.Here the current range is divided into two cases: 0.1 mA∼100 A and 100 A∼600 A: (a) 0.1 mA∼100 A DC current It is planned to utilize a precision AC-DC coaxial shunt for I/V conversion.Convert the large DC current into a DC voltage of 1 V, and then measure it with a 3458 A digital multi-meter.The overall measurement accuracy = the proportional accuracy of the coaxial shunt (20 ppm) + 3458 A 1 V range measurement accuracy (about 4.5 ppm), better than 0.0025% (25 ppm). 
(b) 100 A∼600 A DC current It is planned to use the DC proportional standard for I/I conversion.Convert the large DC current into a DC current of 1 A first.Secondly, convert it into a DC voltage of 1 V through a standard resistance of 1 Ω.Lastly, employ a 3458 A digital multi-meter to measure the voltage across the standard resistance.Overall measurement accuracy = DC proportional standard proportional accuracy (1 ppm∼2 ppm) + 1 Ω precision resistance accuracy (<10 ppm) + 3458 A 1 V range measurement accuracy (about 4.5 ppm), better than 0.0025% (25 ppm). DC energy meters can be divided into direct and indirect access.Direct access means that high voltage U and high current I are directly fed into the electric energy meter.The indirect access type generally converts a large DC current (0∼300 A) into a small DC voltage (0∼75 mV) through a shunt and then sends it to the electric energy meter.When the indirect access type is adopted, the shunt's performance also needs to be verified besides the verification of the DC energy meter.Therefore, the DC electric energy meter verification needs to consider the direct-connected electric energy meter, the indirect-connected electric energy meter, and its external DC shunt.These three types of appliances can be verified through the DC energy meter (shunt) verification device. Traceability of DC Voltage and Current 4.1 DC Voltage Traceability Method In the scheme designed in this paper, the large DC voltage is converted into a small voltage of 1 V through precise V/V conversion and then sampled by a high-precision digital multi-meter to output the sampling signal of the voltage U1. It is necessary to measure the large DC voltage of 10 mV∼1150 V with accuracy within 0.0025%, and the results of the DC voltmeter cannot be directly obtained.On the one hand, the maximum DC voltage measurement capability of the mainstream commercial 8 1 / 2 -digit digital multi-meters is only 1050 V. On the other hand, the digital multi-meter's input impedance decreases as the voltage range increases, such as the 1000 V range input impedance of 3458 A. It is 10 MΩ, and the calorific value is proportional to U2/R.Using its large voltage range for a long time will cause the instrument to heat up and affect the measurement accuracy.Therefore, it is necessary to use an indirect measurement method for the DC voltage.Firstly, a 0.001-level precision AC-DC voltage divider is used for the V/V conversion, which converts the DC large voltage of 10 mV∼1150 V into a DC voltage of 1 V. Then the DC voltage of 1 V is measured by the digital multi-meter 3458 A. The overall measurement accuracy equals to the proportional accuracy of the voltage divider (10 ppm) + the 1 V measurement accuracy of the 3458 A (about 4.5 ppm), which is better than 0.0025% (25 ppm). DC Current Traceability According to the requirements of the traceability system, it requires measuring DC large currents ranging from 0.0001 to 600 A with accuracy within 0.0025%.The maximum DC current measurement capacity of mainstream commercial 8 1 / 2 -digit digital multi-meters is 20 A, which must be the same as the voltage.Use indirect transfer measurement.This paper divides the current range into 0.1 mA∼100 A and 100∼600 A to design, respectively. 
(1) 0.1 mA∼100 A DC current: In this paper, a precision AC-DC coaxial shunt is used for I/V conversion; the large DC current is converted into a DC voltage of 1 V and then measured by a 3458A digital multi-meter. The overall measurement accuracy = the proportional accuracy of the coaxial shunt (20 ppm) + the 3458A's 1 V range measurement accuracy (about 4.5 ppm), which is better than 0.0025% (25 ppm).
(2) 100∼600 A DC current: In this paper, the DC proportional standard is used for I/I conversion; the large DC current is converted into a DC current of 1 A, then converted into a DC voltage of 1 V through a 1 Ω precision resistor, and finally measured by a 3458A digital multi-meter. The overall measurement accuracy = the DC proportional standard's ratio accuracy (1∼2 ppm) + the 1 Ω precision resistance accuracy (<10 ppm) + the 3458A's 1 V range measurement accuracy (about 4.5 ppm), which is better than 0.0025% (25 ppm).
Traceability of DC Power
The principle diagram of the traceability of the DC energy value is shown in Fig. 3. To meet the technical requirements for verifying DC energy meters, the DC precision micro-voltage output of the meter verification device must satisfy the following: when the verification device adjusts the output mV-level voltage signal, the minimum value is ±20 μV, the basic error limit is ±1 μV, and the voltage output stability is 300 nV/min. However, when the verification device verifies the DC energy meter, the thermal electromotive force and contact electromotive force of materials such as wires and terminals introduce large errors into the verification process. Therefore, to remove the influence of interfering signals such as thermoelectric and contact electromotive forces on the verification device, we adopt the four-wire measurement mode to measure the DC precision micro voltage and design a precision DC small-signal voltage standard source in combination with the small-signal testing process. Moreover, we increase the input resistance of the DC energy meter and shunt to reduce the influence of the wires on the measurement accuracy.
Uncertainty Experiment
In the on-site verification of electric vehicle chargers, the measurement error obtained when the on-site tester verifies the tested charger is η1 (%), the measurement error introduced by the electric vehicle charger on-site tester is η2 (%), and the measurement error introduced by rounding off the error data of the electric vehicle charger under inspection is η3 (%). The measurement error of the electric energy of the electric vehicle charger under inspection is η (%):
η = η1 + η2 + η3 (1)
Then the combined uncertainty is
uc(η) = √(c1²·u²(η1) + c2²·u²(η2) + c3²·u²(η3)) (2)
Among them, c1, c2, and c3 are the sensitivity coefficients of error propagation obtained from formula (1). In formula (2), u(η1) represents the standard uncertainty introduced by the measurement repeatability of the electric vehicle charger under test and is evaluated as a Type A uncertainty; u(η2) is the standard uncertainty introduced by the measurement error η2 of the electric vehicle charger field tester; and u(η3) is the standard uncertainty introduced by rounding off the error of the tested electric vehicle charger, η3, which is evaluated as a Type B uncertainty.
Uncertainty Evaluation Experiment
(a) Standard uncertainty u(η1) introduced by the measurement repeatability of the tested electric vehicle charger: For a Class 1 electric vehicle charger, the data obtained by repeating the measurement 10 times at a reference voltage of 200 V and a reference current of 100 A are shown in Fig. 4.
Note that the relative error is defined as the ratio of the absolute error of the measured value to the actual measured value; the relative error is dimensionless, and the unit of the absolute error and the actual measured value is kWh.
Figure 4: Measurement repeatability results
The average of the 10 measurements in Fig. 4 is about 0.3%, and the experimental standard deviation of a single measurement is S = 0.0489%. Because each verification result is taken as the average of two measurements, the standard uncertainty is
u(η1) = S/√2 = 0.0489%/√2 ≈ 0.0346%
(b) Standard uncertainty u(η2) introduced by the measurement error η2 of the DC electric energy metering and verification system of the electric vehicle charger: Assuming that a 0.005-level system is used for the electric vehicle charger's DC energy metering and verification, the maximum allowable error is e1 = ±0.005%, which is uniformly distributed with k = √3; then
u(η2) = 0.005%/√3 ≈ 0.0029%
(c) Standard uncertainty u(η3) introduced by rounding off the tested electric vehicle charger error η3: According to the data rounding rules, the rounding interval of the measurement results of a Class 1 charger is 0.1%, so the half-width of the interval is 0.05%. The standard uncertainty introduced by rounding the charger error η3 is
u(η3) = 0.05%/√3 ≈ 0.0289%
According to (2), the combined standard uncertainty of the system is
uc(η) = √((0.0346%)² + (0.0029%)² + (0.0289%)²) ≈ 0.0451%
Consequently, uc(η) = 0.0451%. These results indicate that the proposed method is robust against the identified uncertainty sources, and the combined standard uncertainty of the DC power metering verification system is 0.0451%.
On-Site Verification Experiment of Charger
The designed DC power metering and verification system is used to verify a DC fast charger for electric vehicles produced by XXX Company (Changsha, China). The main parameters of the tested electric vehicle DC fast charger are listed in Table 1. In the traceability process, the energy pulse output frequency f0 of the DC standard energy meter is calculated from the currently measured power PX of the DC standard energy meter, its voltage range URG, its current range IRG, and its electric energy pulse output constant k.
The energy pulse can be calibrated by measuring the output frequency of the standard energy meter's energy pulse with a frequency meter. First, obtain the measured power PX of the standard electric energy meter. Second, employ the frequency meter to measure the pulse frequency issued by the standard electric energy meter, denoted f1. The relative error of the electric energy pulse of the tested standard electric energy meter is then (f1 − f0)/f0 × 100%.
The results of the pulse-method working-error test are shown in Fig. 5. The working error was tested 8 times at 200 V, 100 A, and 3 times each at 200 V, 200 A and 200 V, 10 A.
Figure 5: Relative error of charger pulse method test
The experimental results verify that the designed DC electric energy metering device achieves high accuracy at the 0.005 level. Therefore, the built DC electric energy metering equipment is an effective tool for DC high-current measurement and DC energy traceability.
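As a minimal numerical cross-check of the uncertainty budget above, the following Python sketch reproduces the combined standard uncertainty from the values quoted in this section; the pulse frequencies at the end are illustrative placeholders, not measured values from the experiment.

```python
import math

S = 0.0489                        # experimental std. dev. of a single measurement, %
u_eta1 = S / math.sqrt(2)         # repeatability; result taken as mean of two readings
u_eta2 = 0.005 / math.sqrt(3)     # 0.005-level verification system, uniform distribution
u_eta3 = 0.05 / math.sqrt(3)      # rounding half-width for a Class 1 charger, uniform

# Combined standard uncertainty per formula (2) with unit sensitivity coefficients.
u_c = math.sqrt(u_eta1**2 + u_eta2**2 + u_eta3**2)
print(f"u_c(eta) = {u_c:.4f} %")  # ~0.0451 %

def pulse_relative_error(f1_hz: float, f0_hz: float) -> float:
    """Relative error of the energy pulse: (f1 - f0) / f0 * 100 %."""
    return (f1_hz - f0_hz) / f0_hz * 100.0

# Illustrative pulse frequencies only.
print(f"pulse error = {pulse_relative_error(1000.5, 1000.0):.3f} %")
```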
Comparison with the State-of-the-Art Method

Several methodologies for DC electric energy metering have been proposed previously. Compared with the methods presented in [21][22][23][24], the advantages of the DC energy metering method proposed in this paper are as follows: 1) A wider voltage operating range of the charger is covered. The maximum DC voltage measurement capability of mainstream commercial 8½-digit digital multimeters is only 1050 V, so the result cannot be obtained directly from a DC voltmeter. In contrast, the equipment designed in this paper can measure large DC voltages of 10 mV∼1150 V with an accuracy within 0.0025%. 2) A wider current operating range of the charger is covered. The maximum DC current measurement capacity of mainstream commercial 8½-digit digital multimeters is 20 A, which, as with the voltage, cannot cover the required range directly. The equipment designed in this paper can measure large DC currents ranging from 0.0001 to 600 A with an accuracy of 0.0025%. 3) The uncertain impact of on-site operation of chargers is considered. Compared with the laboratory, on-site operation of the charger involves: (1) a large temperature and humidity range, −40°C∼55°C and 20%∼90% RH; (2) a wider dynamic range of the charger's power supply and larger surge currents, since fast DC charging may draw currents of several hundred amperes; and (3) the requirement that on-site calibration equipment be portable and have a certain degree of vibration resistance. Considering these on-site influences, the equipment designed in this paper exhibits good environmental adaptability, whereas the state-of-the-art methods often ignore the aforementioned uncertainty influences [21][22][23][24].

The experimental results have verified that the equipment designed in this paper provides a high-precision DC current measurement system with a range of 0.001-600 A, that the DC energy measurement traceability system achieves an accuracy of 0.005, and that the combined standard uncertainty of the DC power metering verification system is 0.0451%. Hence the proposed method for DC high current measurement and DC energy traceability is effective and can resolve the emerging issues of the on-site verification of DC chargers.

Conclusion

Aiming at the problems of DC high current measurement and DC energy traceability that urgently need to be solved in the on-site verification of DC chargers, this paper studies the energy value traceability method for high-power DC chargers for automobiles, analyses the traceability method for the charger's built-in DC energy meter and shunt, investigates the output and measurement technology of high-precision large DC current and DC precise micro-voltage, designs a high-precision DC current measurement system with a range of 0.001-600 A, and proposes a DC energy measurement traceability system with an accuracy of 0.005. The combined standard uncertainty of the DC power metering verification system is 0.0451%. The experimental results of verifying a Class 1 DC charger show that the system can meet the field verification requirements of high-power DC chargers. The impact of harmonics is not addressed in this work; in the future, the influence of harmonics on measurement accuracy will be explored further. Furthermore, as electric vehicle penetration increases, the interaction of different electric vehicles with DC electric energy metering will also be explored.
Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

Figure 1: Schematic diagram of precision small-signal voltage source output
Figure 2: DC energy value traceability system diagram
Figure 3: Schematic diagram of traceability of DC energy value
Table 1: Main parameters of the inspected electric vehicle DC fast charger
2023-01-12T18:16:31.666Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "d61fedd3848dab09280976d16c636cb4b9d405b8", "oa_license": "CCBY", "oa_url": "https://file.techscience.com/files/energy/2023/TSP_EE-120-3/TSP_EE_22990/TSP_EE_22990.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ec90816a8bae7d0d85b5f804d0f688b2570df8a5", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [] }
16906325
pes2o/s2orc
v3-fos-license
Rabbit platelet bactericidal protein. The heat-stable antibacterial activity of rabbit serum against Gram-positive microorganisms has been shown to reside in a cationic protein fraction of platelet lysosomal granules. The activity is released during platelet aggregation. No plasma or serum component is required for the bactericidal effect. The platelet bactericidin resembles the antibacterial proteins of leukocyte granules both in cellular localization and in biochemical characteristics. It can be differentiated from platelet factor 4, the antiheparin factor, which is also a basic protein in platelet granules. The antibacterial effect of the platelet bactericidin may be related to the metabolic activity of the organisms. This antibacterial activity of platelets may represent another means by which platelets can participate in host inflammatory defense reactions. permeability-enhancing factors (7,8). Normal human platelets contain similar basic proteins which are not bactericidal but are associated with enhancement of vascular permeability (9). Rabbit platelets have not been studied previously for their content of basic proteins. The studies reported here were designed to determine whether the heatstable bactericidal activity of rabbit serum is related to platelet cationic proteins. Our observations indicate that cationic proteins which are present in platelet lysosomal granules have bactericidal activity. These proteins are released from the platelets during the process of aggregation and represent the source of the rabbit serum bactericidin. Materials and Methods Preparation of Rabbit Platelet Acid Extract.--Healthy, outbred, adult New Zealand White rabbits were used for all studies. Rabbits were bled from the ear artery by plastic cannula (Medicut, A. S. Aloe Co., St. Louis, Mo.) into plastic tubes containing 0.1 vol of 3.8% trisodium citrate. Platelet-rich plasma (PRP) I was separated by centrifugation at 250 g for 30 rain at room temperature in an International model UV centrifuge (International Equipment Company, Needham Heights, Mass.), and the platelets were sedimented from the PRP in siliconized oil bottles by centrifugation at 1000 g for 30 min at room temperature. Contamination of platelets by leukocytes was less than 1 cell/104 platelets. The platelet buttons were pooled and washed six times in Alsever's solution and four times in Gaintner buffer. Washed platelets were resuspended in 4 vol of 0.2 • H2SO4 and stirred overnight at 4°C, or were sonicated for 30 sec in normal saline using a Branson sonifier (Branson Instruments, Inc., Stamford, Conn.) (4 amp, setting 4) before acid extraction. The rabbit platelet acid extract (RPAE) was dialyzed against 0.01 M phosphate-buffered normal saline at pH 5.6 before use in bacteriologic assays. For preparation of subcellular fractions, washed platelets were suspended in 0.44 ~ sucrose containing 0.001 ~t ethylenediaminetetraacetate (EDTA), homogenized in the cold, and subjected to sucrose density gradient ultracentrifugation by the method of Marcus et al. (10). All preparations were sterilized by filtration through a 0.45 ~ pore membrane filter (Millipore Corp., Bedford, Mass.) before use. Bactericidal Assay.--Bacillus subtilis was grown for 18 hr at 37°C with shaking in Trypticase soy broth (Baltimore Biological Laboratories, Baltimore, Md.) containing lVfv added dextrose. The bacteria were collected by centrifugation and resuspended in 0.15 • saline containing 0.1% bovine serum albumin (Pentex Biochemical, Kankakee, Ill.). 
A stock suspension of bacteria was prepared by adjustment to an optical density at 650 nm of 0.270, and a 1 : 1000 dilution in saline-albumin solution was used to inoculate test samples. Platelet acid extract dialyzed against buffered saline was serially diluted in saline-albumin solution in sterile plastic test tubes. After inoculation with B. subtilis the tubes were incubated at 37°C with mixing on a Lab Tek aliquot mixer (Ames Co., Inc., Elkhart, Ind.) for 1 hr. Aliquots were then pipetted into Petri dishes and pour plates were made with melted Trypticase soy agar. Surviving colonies were counted after overnight incubation. Duplicate samples agreed within 10%. A 50% reduction in bacterial colony count was considered significant killing. to a platelet count of 500,000/mm 3 with platelet-poor plasma (PPP). Rabbit platelets were washed three times in Alsever's solution at room temperature and resuspended in Gey's buffered salt solution, pH 7.0, containing 0.35% bovine albumin and 1% dextrose. Soluble collagen 2 (11) (4 mg/ml in 0.5 M CaC12), bovine thrombin (Parka, Davis & Company, Detroit, Mich.), and adenosine diphosphate (ADP) (Pabst Research Laboratories, Milwaukee, Wis.) (2 X 10 -6 ~) were added to 1.2-ml samples of the platelet preparations and aggregation was observed and recorded on an Aggregometer (Chronoqog Corp., Broomall, Pa.). Immediately after aggregation of the platelet preparation, the supernatant was separated by centrifugation at 1000 g and either assayed for bactericidal activity directly or extracted with 0.2 N H2SO4 and then assayed. Platdet Factor 4 Assay.--Antiheparin activity (platelet factor 4) of the platelet extract was measured using the thrombin clotting time method of Poplawski and Niewiarowski (12). The test system comprised 0.3 ml fresh citrated rabbit PPP, 0.1 ml saline or platelet extract, 0.1 ml heparin (0.2 units/ml). 0.1 ml of bovine thrombin (10 units/ml) was added and clotting time was recorded during mixing at 37°C. Heparin concentration was selected to yield a clotting time at least three times that of a saline-PPP control. Partially purified platelet factor 4 was prepared by the zinc acetate precipitation method of Niewiarowski et al. (13). Characterization and Purification of Rabbit Platelet Acid Extract.--Rabbit platelet acid extract dialyzed against 0.02 • sodium acetate buffer, pH 6, was applied to diethylaminoethyl (DEAE) cellulose columns equilibrated with the same buffer. In some studies the buffer system included 0.05 M NaC1.2-ml fractions were collected with a Buchler fraction collector (Buchler Instruments, Inc., Fort Lee, N. J.) equipped with an LKB Uvicord recorder (LKB Instruments, Inc., Rockville, Md.). For elution, a gradient of 0-0.5 ~ NaC1 in 0.02 • sodium acetate buffer, pH 6, was applied. Samples of representative fractions were adjusted to pH 5.6 with 0.01 ~ NaOH, sterilized by filtration, and assayed for bactericidal activity and for platelet factor 4 activity. The active material from the DEAE columns was concentrated, by ultrafiltration and subjected to gel filtration on G-75 Sephadex (Pharrnacia Fine Chemicals, Uppsala, Sweden). The molecular weight of effluent peaks was estimated by protein markers of known molecular weight. Active fractions were concentrated and analyzed by acrylamide gel electrophoresis using the Reisfeld buffer system (14). 
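To make the read-out of the bactericidal assay described above concrete, the following sketch shows how percent survival and the 50% killing criterion could be computed from duplicate pour-plate counts. All counts and function names are hypothetical and purely illustrative of the arithmetic, not data from the study.

```python
# Hypothetical illustration of the bactericidal read-out: percent survival is
# the colony count of the treated sample relative to the untreated control,
# and a reduction of 50% or more is scored as significant killing.

def percent_survival(treated_cfu: float, control_cfu: float) -> float:
    return 100.0 * treated_cfu / control_cfu

def significant_killing(treated_cfu: float, control_cfu: float) -> bool:
    return percent_survival(treated_cfu, control_cfu) <= 50.0

# Example with made-up duplicate plate counts (colonies per plate)
control = (412 + 398) / 2
treated = (37 + 41) / 2
print(f"survival: {percent_survival(treated, control):.1f}% "
      f"-> significant killing: {significant_killing(treated, control)}")
```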
Assay for lysozyme was performed by radial diffusion of RPAE in agar containing nonviable Micrococcus lysodeikticus, using normal serum containing known concentrations of lysozyme as control. Pepsin digestion of the RPAE was performed according to the method of Bailey et al. (15).

RESULTS

The protein fraction extracted from washed rabbit platelets by dilute H2SO4 killed Bacillus subtilis suspended in saline-albumin solution. The bactericidal activity was directly related to the protein concentration of the RPAE (Table I). Concentrations of 0.5 μg protein/ml, or greater, killed more than 50% of the bacterial inoculum (1 × 10³-5 × 10⁵ organisms/ml). Although different preparations of rabbit platelet extracts varied in bactericidal activity, all demonstrated a similar range with greater than 90% killing at concentrations of 5 μg/ml. Species sensitive to the bactericidal effect of RPAE included several strains of Staphylococcus albus and one strain of S. aureus (giorgio). Gram-negative organisms tested (Escherichia coli and Salmonella newport), as well as most strains of S. aureus, were not susceptible to killing by RPAE. The bactericidal spectrum of RPAE is essentially the same as that reported for rabbit serum by Hirsch (3). The bactericidal effect of the RPAE was rapid (Fig. 1). Significant killing of stationary-phase cultures (18 hr growth) occurred within 30 min of incubation of bacteria with RPAE and was complete after 90 min. The rates of killing were similar at concentrations of 10²-10⁵ bacteria/μg of platelet protein. The fraction of bacteria surviving was slightly greater with larger inocula. Bacteria in the logarithmic phase of growth (4 hr) were killed more completely and rapidly than bacteria from stationary-phase cultures, with greater than 90% killing achieved by 10 min exposure. The bactericidal effect of the RPAE was temperature dependent (Table II). The maximal killing occurred when incubation with RPAE was carried out at 37°C. Cold was inhibitory, with more than 16 times as much RPAE necessary to produce significant killing at 4°C. The bactericidal activity was adsorbed from the platelet extract by exposure to high concentrations of heat-killed bacteria (Table III). The bacteria were not agglutinated by the RPAE. Platelet-free rabbit plasma, clotted to form serum, had no significant bactericidal activity when diluted more than 1:5 (Fig. 2). Normal rabbit serum prepared from whole blood was bactericidal at dilutions of 1:40-1:80. RPAE contained from 5 to 10 μg of protein/ml extracted from rabbit serum. The addition of 1-20 μg RPAE protein/ml to diluted serum prepared from platelet-free plasma restored the bactericidal activity found in normal rabbit serum. The bactericidal capacity of the RPAE was neither enhanced nor inhibited in the presence of serum.

FIG. 1. Bactericidal effect of rabbit platelet acid extract as a function of incubation period. Colonies of B. subtilis surviving 5, 30, 60, and 90 min incubation with RPAE at 37°C at different bacterial concentrations (curves A through E, from 3 × 10⁸ bacteria/ml for A down to 6 × 10⁴ bacteria/ml for E); RPAE concentration, 5 μg/ml.

Properties of the Rabbit Platelet Bactericidal Extract.--The bactericidal activity of the RPAE was nondialyzable (Table III). The activity was optimal over the pH range of 5.6-7.2. Bacterial growth was itself inhibited by incubation at more acid or alkaline pH conditions. The biologic activity of the RPAE was stable to heating to 56°C for 30 min or to 80°C for 15 min, but activity was lost after prolonged boiling. RPAE retained full activity for prolonged periods
when kept at low pH (in dilute H2SO4) but activity was lost rapidly upon storage at alkaline pH and at low ionic strength, e.g. when dialyzed against 0.01 M sodium phosphate buffer without saline, or when dialyzed against distilled water. In the presence of 0.15 M NaCl activity was retained during storage at −20°C for several weeks. Pepsin digestion resulted in complete loss of bactericidal activity. No lysozyme could be demonstrated in the RPAE. Bactericidal activity was precipitated from the RPAE by addition of ethanol to a final v/v concentration of 20%. This precipitate, redissolved in saline, demonstrated bactericidal activity against B. subtilis at a concentration of 0.3 μg protein/ml. Heparin inhibited the bactericidal effect of RPAE at concentrations as low as 0.1 unit/μg of platelet protein (Table IV). The concentration of RPAE used in the heparin inhibition experiments was 17 times that required to produce 90% bacterial killing. RPAE excess was utilized in these studies in order (Table V). The membrane fraction was inactive even at high protein concentration.

Release of Bactericidal Activity During Platelet Aggregation.--Bactericidal activity was released when rabbit platelets were aggregated in citrated platelet-rich plasma. Washed platelets suspended in buffered saline also released bactericidal activity after aggregation by collagen (Fig. 3). The bactericidal material appeared in the supernatant plasma or saline after sedimentation of the aggregated platelets. Some bactericidal activity was present in low dilutions of platelet-poor plasma after centrifugation, probably reflecting mechanical damage to and release from platelets during the centrifugation procedure.

FIG. 2. Bactericidal activity of normal rabbit serum prepared from platelet-free plasma, and rabbit platelet acid extract, plotted as bacteria surviving against the reciprocal of the dilution of the factor tested: dilutions of normal rabbit serum; dilutions of serum prepared from platelet-free plasma; and dilutions of RPAE, 120 μg/ml, in the presence of platelet-free serum.

Aggregation inhibitors such as imipramine and aspirin significantly decreased the release of bactericidal activity from platelets exposed to aggregating doses of collagen (Fig. 3). Acid extracts of the supernatant fluid remaining after sedimentation of aggregated platelets retained the bactericidal activity. Collagen-induced platelet aggregation released bactericidal activity from platelets in proportion to the concentration of collagen used (Fig. 4). The upper panel of Fig. 4 depicts the aggregation curves recorded by the platelet Aggregometer for three concentrations of collagen and for collagen in the presence of the inhibitor, imipramine. The greater the protein concentration of collagen used, the greater was the aggregation reaction monitored. The lower panel of Fig. 4 shows the bactericidal activity released into supernatant plasma at the end of the aggregation reaction. The degree of bactericidal activity similarly corresponded to the collagen concentration and to the degree of platelet aggregation. When collagen-induced platelet aggregation was inhibited by imipramine, the release of bactericidal activity was no greater than the activity in platelet-poor plasma alone.
Release of bactericidal activity, therefore, closely paralleled platelet aggregation by collagen. Adenosine diphosphate-induced platelet aggregation was associated with only minimal release of the bactericidal activity into the supernatant (Fig. 5). At a low concentration, ADP produced reversible platelet aggregation which was not associated with significant release of bactericidal activity. Thrombin-induced aggregation of washed platelets released bactericidal activity similar to that released by collagen.

Purification of Bactericidal Material.--The rabbit platelet acid extract was partially purified by chromatography on DEAE cellulose (Fig. 6). Bactericidal activity was confined to a small protein peak which was not retarded. The remainder of the protein, which was eluted with a gradient of NaCl, had no antibacterial activity, nor did it alter the activity of the bactericidal peak when recombined with the latter. Cationic acrylamide gel electrophoresis of the bactericidal peak revealed two protein bands. Gel filtration of the bactericidal peak taken from DEAE cellulose was carried out using Sephadex G-75. Two peaks were detected, of approximately 40,000 and 10,000 mol wt. Both of these peaks demonstrated bactericidal activity.

* Antiheparin activity was tested by adding fractions to a thrombin clotting time system in which the clotting time in seconds was prolonged by addition of heparin sufficient to yield three times the baseline clotting time. The test system consisted of 0.3 ml fresh rabbit citrated, platelet-poor plasma, 0.1 ml test material, and 0.1 ml of heparin (0.2 units/ml); 0.1 ml thrombin (10 units/ml) was added as the stopwatch was started. All materials were incubated at 37°C and all tests were performed in duplicate.

Antiheparin Activity and Rabbit Platelet Acid Extract.--The rabbit platelet acid extract demonstrated antiheparin activity similar to that of platelet factor 4 when tested in a thrombin clotting time system (Table VI). That is, RPAE added to a mixture of fresh plasma and heparin counteracted the prolongation of thrombin time produced by heparin. The bactericidal peak from DEAE cellulose chromatography of RPAE also possessed antiheparin activity. Platelet factor 4, prepared directly from washed rabbit platelets, demonstrated two to three times as much antiheparin activity per microgram of protein as did the RPAE preparations. Separation of the platelet factor 4 activity from the bactericidal activity of the RPAE was achieved by several methods. Separation of the two activities was first accomplished by ethanol precipitation. The fraction of RPAE precipitated by 20% (v/v) ethanol demonstrated antiheparin activity (Table VI). This fraction was also bactericidal. However, the residual portion of the RPAE not precipitated by 20% ethanol was equally bactericidal but lacked antiheparin activity. Zinc acetate precipitation was used to deplete the RPAE of platelet factor 4 (PF 4). The PF 4-depleted RPAE remained fully bactericidal but lost all antiheparin activity. Finally, separation of platelet factor 4 and bactericidal activities was achieved by Sephadex gel filtration of the DEAE cellulose bactericidal peak (Fig. 7). Fig. 7 demonstrates that two peaks were eluted from the Sephadex gel column after application of the DEAE cellulose bactericidal peak. The fraction of mol wt 10,000 possessed antiheparin activity. The fraction of mol wt 40,000 lacked antiheparin activity.
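The molecular-weight estimates quoted above (approximately 40,000 and 10,000) rest on comparing elution volumes with protein standards of known molecular weight, as stated in the Methods. A minimal sketch of such a calibration is given below; the standards, elution volumes and fitting choice are invented for illustration and do not reproduce the authors' column data.

```python
import math

# Minimal sketch of molecular-weight estimation from gel filtration:
# log10(MW) is treated as approximately linear in elution volume over the
# working range of the column. Standards and volumes below are invented.

standards = [(67000, 11.2), (25000, 14.0), (13700, 15.8)]  # (MW, elution volume in ml)

# Least-squares fit of log10(MW) = a * volume + b
n = len(standards)
xs = [v for _, v in standards]
ys = [math.log10(mw) for mw, _ in standards]
x_mean, y_mean = sum(xs) / n, sum(ys) / n
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
b = y_mean - a * x_mean

def estimate_mw(elution_volume_ml: float) -> float:
    """Interpolate an apparent molecular weight from the standards curve."""
    return 10 ** (a * elution_volume_ml + b)

print(f"peak at 12.6 ml -> ~{estimate_mw(12.6):,.0f} mol wt")
print(f"peak at 16.1 ml -> ~{estimate_mw(16.1):,.0f} mol wt")
```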
Platelet factor 4, prepared according to the method of Niewiarowski, was moderately bactericidal to B. subtilis, but required a protein concentration of 30 μg/ml or greater for significant bactericidal effect, whereas an acid extract of the platelet residue after removal of PF 4, entirely lacking in antiheparin activity, was strongly bactericidal.

DISCUSSION

These studies demonstrate that the heat-stable antibacterial activity of rabbit serum not only is platelet dependent, but resides preformed within the platelets as one or more cationic protein constituents of platelet lysosomal granules. The platelet bactericidal proteins bear significant resemblance to the antibacterial proteins found in blood leukocytes by virtue of their cellular localization in granules, heat stability, absence of dependence on serum complement, positive charge, and inactivation by anionic materials. Whereas leukocyte bactericidal proteins are released into intracellular vacuoles during phagocytosis, the platelet bactericidal proteins are released into the bloodstream during platelet aggregation and during blood coagulation. Characterization of the platelet bactericidal protein fraction by ion exchange chromatography and gel filtration has revealed a low molecular weight protein of strongly positive charge which appears distinct from the antiheparin factor, platelet factor 4. This protein is much less stable than platelet factor 4 under conditions of low ionic strength and alkaline pH. In these characteristics the platelet bactericidal protein also closely resembles the leukocyte bactericidal proteins. The crucial mechanism in the development of serum antibacterial activity against Gram-positive microorganisms is the platelet aggregation which occurs concomitantly with blood coagulation, rather than coagulation per se. Blood coagulation involves the activation and cascading interaction of a series of protein precursors present in the blood plasma; platelets normally play a catalytic role by providing a specialized surface (platelet factor 3) which accelerates this series of fluid-phase reactions to form a blood clot. Coagulation may, however, occur normally in the presence of severe thrombocytopenia. During injury to a blood vessel, exposure of subendothelial collagen acts as an initiator both to coagulation and to the aggregation of platelets, so that the two processes normally occur together (16). Platelet aggregation may take place quite independently of blood clotting. In vitro, evidence that the release of antibacterial activity depends only upon platelet aggregation and not on coagulation is derived from the observations that (a) washed platelets suspended in saline release bactericidal protein upon aggregation in the absence of plasma; (b) the coagulation of platelet-free plasma does not yield bactericidal activity in the serum produced; and (c) platelet aggregation in anticoagulated platelet-rich plasma releases bactericidal activity although clotting does not occur. The studies reported here suggest that the release of bactericidal activity from platelets requires irreversible platelet aggregation by surface active agents. Thus collagen and thrombin, but not small doses of ADP, produce release of bactericidal activity. It would appear that both aggregation and the platelet release reaction involving platelet degranulation are prerequisites for liberation of bactericidal proteins.
This is supported by the data showing that the amount of bactericidal activity released is directly related to the degree of platelet aggregation produced by collagen, a surface-active aggregating agent (Fig. 4). Inhibitors of platelet aggregation effectively block the release of bactericidal activity into the supernatant (Figs. 3 and 4). The minimal effect of ADPinduced aggregation in releasing bactericidal activity also supports the concept that this granule-bound component only leaves the platelet under conditions of profound platelet disruption, for rabbit platelets exhibit only reversible aggregation on exposure to ADP (17). The relationship between the platelet bactericidal protein and platelet factor 4 (PF 4) is a close one. PF 4 is a low molecular weight, positively charged protein which possesses marked antiheparin activity. It is released by saline extraction of frozen-thawed platelets and by platelet aggregation. It is stable to heating and to storage and is precipitable by zinc ions. No previous assay of PF 4 for antibacterial activity has been reported. We have found that PF 4 has a modest bactericidal effect on B. subtilis at high protein concentrations. The crude rabbit platelet acid extract possessed PF 4 activity. However, after precipitation of PF 4 with zinc ions, the platelet acid extract retained strong bactericidal activity. Conversely the platelet residue after extraction of PF 4 possessed antibacterial activity which was acid extractable. This suggests that crude rabbit platelet acid extract contained a different and more potent bactericidal material as well as PF 4. Both activities appeared together in the cationic bactericidal peak on DEAE cellulose, as might be expected for two basic proteins. The two activities were separable by gel filtration on Sephadex. Many positively charged proteins and peptides have been reported to demonstrate antibacterial activity, and this may account for the bactericidal activity associated with PF 4. However, the data reported here indicate that the lysosoreal bactericidal protein of platelets is indeed separable from platelet factor 4. The bactericidal effect of the rabbit platelet cationic protein suggests that the bacteria need to be metabolically active. This is indicated by the greater killing effect at 37°C than at lower temperatures, and by a more rapid effect on rapidly growing organisms than on bacteria in the stationary growth phase. Bacteria exposed in the cold remove the bactericidal potency of the platelet extract yet are not killed, suggesting that adsorption of the protein onto the negatively charged bacterial surface may not completely explain the bactericidal effect. Basic proteins easily adsorb to negatively charged surfaces, even to the surfaces of artificial particles such as phosphatidylserine vesicles (18). However, only certain of such adsorbable proteins alter the cation permeability of such particles. Previous studies suggest that basic proteins may act as metabolic inhibitors or may alter membrane permeability. Amano et al. (19) demonstrated that plakin, a water extract of horse platelets, caused marked inhibition of oxygen uptake by susceptible aerobic bacteria. Zeya and Spitznagel (7) showed similar inhibition of oxygen consumption by bacteria exposed to cationic proteins prepared from rabbit leukocyte granules. They demonstrated that the bacteria underwent changes in cell permeability and correlated leakage of nucleotide components with bacterial killing after incubation with leukocyte cationic proteins. 
The sensitivity of rapidly growing B. subtilis to killing by platelet cationic protein, and the similarity of this platelet factor to the cationic proteins of leukocyte granules, strongly suggests that the bactericidal mechanism may similarly involve alterations in permeability and oxidative metabolism. What is the possible physiological role for a platelet-derived bactericidal substance? Different species of animals vary widely in serum beta lysin activity. Cationic extracts of normal human platelets demonstrate no bactericidal activity. Whether human platelets can develop such activity as part of the inflammatory response has not been determined. Many foreign particles and materials which may gain access to the bloodstream during infection or tissue injury are capable of producing platelet aggregation (20). Local intravascular aggregation of platelets by appropriate circulating stimuli, such as bacteria, bacterial products such as endotoxin, or antigen-antibody complexes, may initiate the release of bactericidal platelet protein into the blood. Hunder and Jacox demonstrated that rabbits given intraperitoneal injections of endotoxin developed an increased level of serum bactericidal activity (4). Des Preset al. showed that incubation of rabbit platelet-rich plasma with endotoxin released bactericidal activity (21). Recently, Clawson and White reported the aggregation of platelets by bacteria (22). The release of locally high concentrations of platelet bactericidal protein may permit killing of susceptible bacteria trapped in the platelet aggregates. The question whether subsequent ingestion of platelet-bacterial clumps by polymorphonuclear leukocytes is facilitated by the release of platelet constituents in the aggregates, coating the bacteria, remains to be answered. These observations suggest that the platelet, like the leukocyte, is capable of participating in the development of the inflammatory response in a variety of ways. Platelet constituents released after interaction of platelets with inflammatory stimuli include nucleotides, vasoactive amines, antiheparin factor, and cationic proteins possessfing permeability-enhancing and bactericidal capacity. These factors may be present in relatively high concentrations in the vicinity of the platelet aggregate at the vascular endothelial surface. Their role in affecting or altering that surface remains to be explored. SUMMARY The heat-stable antibacterial activity of rabbit serum against Gram-positive microorganisms has been shown to reside in a cationic protein fraction of platelet lysosomal granules. The activity is released during platelet aggregation. No plasma or serum component is required for the bactericidal effect. The platelet bactericidin resembles the antibacterial proteins of leukocyte granules both in cellular localization and in biochemical characteristics. It can be differentiated from platelet factor 4, the antiheparin factor, which is also a basic protein in platelet granules. The antibacterial effect of the platelet bactericidin may be related to the metabolic activity of the organisms. This antibacterial activity of platelets may represent another means by which platelets can participate in host inflammatory defense reactions. The authors thank Miss Jean Esposito for excellent technical assistance.
2014-10-01T00:00:00.000Z
1971-10-31T00:00:00.000
{ "year": 1971, "sha1": "5148a93fdff55ee58b1b0c90ea4eabc34d52ac4a", "oa_license": "CCBYNCSA", "oa_url": "http://jem.rupress.org/content/134/5/1114.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "5148a93fdff55ee58b1b0c90ea4eabc34d52ac4a", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
7953404
pes2o/s2orc
v3-fos-license
Real life persistence rate with antimuscarinic treatment in patients with idiopathic or neurogenic overactive bladder: a prospective cohort study with solifenacin Background Several studies have shown that the antimuscarinic treatment of overactive bladder is characterized by low long-term persistence rates. We have investigated the persistence of solifenacin in real life by means of telephonic interviews in a prospective cohort. We included both patients with idiopathic overactive bladder as well as neurogenic overactive bladder. Methods From June 2009 until July 2012 patients with idiopathic or neurogenic overactive bladder who were newly prescribed solifenacin were included. In total 123 subjects were followed prospectively during one year by means of four telephonic interviews, which included questions about medication use and adverse events. Results After one year 40% of all patients included was still using solifenacin, 50% discontinued and 10% was lost to follow-up. In the neurogenic group 58% was still using solifenacin versus 32% in the idiopathic group after one year (p < 0,05). The main reasons to stop solifenacin were lack of efficacy, side effects and a combination of both. Conclusions This prospective cohort study showed a real life continuation rate of 40% after 12 months. This continuation rate is higher than found in most other studies. The use of regular telephonic evaluation might have improved medication persistence. The findings of this study also suggest that patients with neurogenic overactive bladder have a better persistence with this method of evaluation compared to patients with idiopathic overactive bladder. Trial registration This study was retrospectively registered on march 17, 2017 at the ISRCTN registry with study ID ISRCTN13129226. Electronic supplementary material The online version of this article (doi:10.1186/s12894-017-0216-4) contains supplementary material, which is available to authorized users. Background Antimuscarinics are the first-line therapy in the treatment of overactive bladder (OAB). This applies to idiopathic OAB (iOAB) as well as neurogenic OAB (nOAB). The use of antimuscarinics in patients with iOAB is characterized by very low persistence rates. Results from short-term studies show discontinuation rates ranging from 4 to 31% [1]. The long-term persistence to antimuscarinics in OAB is not well investigated. A systematic review conducted by Veenboer et al. found that persistence beyond 1 year rarely exceeded 10% of the patients [2]. These data might even represent an overestimation of the persistence because reviews of medical claims data show much higher discontinuation rates (up to 83% within the first 30 days) [1]. Furthermore, patients who have collected the prescribed medications might not use them because of other reasons, like fear for adverse effects. Regarding the use of antimuscarinics in the treatment of nOAB much less studies have been performed compared to iOAB. Patients with nOAB are a heterogeneous group with different underlying neurologic conditions, such as multiple sclerosis, spinal cord injury, Parkinson disease, cerebral palsy and meningomyelocele [3]. Patients often suffer from incontinence, urgency, frequency or impaired bladder emptying. It has been shown that the use of antimuscarinics in this group is associated with better patient-reported cure/improvement compared to placebo. However, there is a higher incidence of adverse events [4]. 
This prospective study was carried out to investigate the persistence rate in real life among patients with idiopathic or neurogenic OAB who were prescribed solifenacin. We followed them during one year by means of telephonic interviews. Furthermore, we wanted to investigate the reasons why patients stopped taking their medications. Third, we wanted to investigate if we could find any differences between patients with idiopathic OAB versus neurogenic OAB. Methods This study was undertaken at the urology department of the Erasmus University Medical Center, Rotterdam, The Netherlands. The ethics committee of the hospital approved the study protocol. The inclusion was carried out from June 2009 until July 2012. After giving informed consent, patients older than 18 years and newly prescribed solifenacin because of complaints of idiopathic or neurogenic OAB, were included. Solifenacin, under the trade name Vesicare, is a urinary antispasmodic of the anticholinergic class. It is produced by Astellas Pharma BV. It is available in 5 and 10 mg. The starting dose was chosen by the doctor who prescribed the solifenacin but could be adjusted during the study period. Because this observational study investigated the persistence rate in real life in patients who had been prescribed solifenacin by their own doctor, they had to collect the solifenacin themselves at a pharmacy of choice. Patients who had used anticholinergic drugs less than 7 days before they started solifenacin were excluded. Participants were allowed to continue possible other urologic medications, for example alfa-blockers, but not other anticholinergic drugs. Telephonic surveys were taken at 1, 3, 6 and 12 months after starting solifenacin. The patients were asked whether they were continuing the medication. They were also interviewed about possible side effects and if they had discontinued the therapy, what had been reasons for stopping. Statistical analysis was performed using SPSS statistical software. The Chi-square test was used to evaluate the differences between groups. Results During the study period a total number of 123 patients were included in this study. Twelve patients were lost to follow-up. Table 1 displays the demographic characteristics. Eighty-three patients received solifenacin because of idiopathic OAB and 40 patients because of neurogenic OAB. Among this group 17 patients had a spinal cord injury, 10 multiple sclerosis. The rest was diagnosed with other conditions as you can find in Fig. 1. After one year 40% of all patients included were still using solifenacin, 50% discontinued and 10% was lost to follow-up. Table 2 shows the persistence rate after one year in patients with idiopathic OAB and neurogenic OAB. Persistence in the neurogenic group was 58% versus 32% in the idiopathic group (p < 0,05). The main reasons to stop taking solifenacin were lack of efficacy (39%), side effects (30%) and a combination of both (13%). Of the total group of 111 interviewed patients 64 patients (58%) experienced side effects within one year. Most common side effects were dry mouth, constipation, blurred vision, dry eyes and abdominal pain. Discussion Antimuscarinic drugs have been available for many years for the treatment of OAB. OAB is a chronic condition and long-term effective treatment might be of importance for the quality of life. Unfortunately, adherence and persistence to antimuscarinics are poor. 
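As an illustration of the group comparison described in the Methods, the chi-square test on persistence at one year could be set up as follows. The cell counts are back-calculated approximations from the reported group sizes (40 neurogenic, 83 idiopathic) and persistence rates (58% vs. 32%), not the raw study data, and the code is a sketch rather than the authors' actual analysis.

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 persistence table at 12 months.
# Rows: neurogenic OAB, idiopathic OAB; columns: still using, not using.
# Counts are approximations derived from the reported percentages.
table = [[23, 17],   # neurogenic OAB (~58% of 40 still on solifenacin)
         [27, 56]]   # idiopathic OAB (~32% of 83 still on solifenacin)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p < 0.05, as reported
```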
OAB medication is known to have the lowest persistence in comparison to other chronic oral medication like cardiovascular, antidiabetic and osteoporosis treatments [5]. Regarding the use of benign prostatic hyperplasia (BPH) medication, a large population-based cohort study using an administrative prescription database showed that the persistence was 29% after one year [6]. Doses of solifenacin succinate (5 mg and 10 mg) once daily (OD) have proven to be effective [7][8][9]. Haab et al. showed that 81% of the patients completed 40 weeks of open-label treatment with only 4.7% discontinuation because of adverse events [10]. Clinical and prescription database studies demonstrated much lower continuation rates, varying from 9 to 35% [11][12][13][14][15]. In our study we found a continuation rate of 40% after 12 months. This continuation rate is higher than found in most other studies. We think that this difference might be explained by the fact that the patients received telephonic interviews regularly. This is somewhat in line with other studies, which suggest that compliance to OAB therapy improves with patient education about OAB and its treatment [16,17]. Furthermore, an additional difference in our study was the possibility of adjusting the medication dose during the study period. Patients who complained about side effects could receive a lower dose, whereas people who had little effect could receive a higher dose. This possible adjustment might have contributed to a higher persistence. This observation might encourage other caregivers to regularly evaluate patients who receive antimuscarinic medication for OAB. A possible tool for the future is the use of Short Message Service (SMS) to improve utilization of and adherence to anticholinergic medication. It is a simple and inexpensive strategy, which has been proven to help patients take their medications on time [18]. Furthermore, it has been used to increase medication adherence for a variety of medication classes in the short term [19][20][21][22]. This tool could educate people with OAB and help them to improve persistence with antimuscarinic medication in the long term. A large screening survey performed in the USA to identify patient-reported reasons for discontinuing overactive bladder medication found that the most mentioned reasons were: "didn't work as expected", "switched to new medication", "learned to get by without medication" and "I had side effects" [23]. These reported reasons are similar to those in our study, where the main reasons to stop taking the medications were lack of efficacy (39%), side effects (30%) and a combination of both (13%). A possible confounder of our study is that Dutch patients usually have to pay a part of the medication costs themselves when the product is still patented. No one reported these costs as a reason to stop, but we did not ask explicitly. As mentioned before, antimuscarinic treatment in patients with neurogenic OAB has not been thoroughly evaluated. Treatment for neurogenic OAB is important in order to provide more bladder control, decrease urinary incontinence and, therefore, decrease the risk of decubitus ulcers, prevent UTIs and ultimately preserve renal function [24]. Antimuscarinics are advised as a first-line medical treatment, but data on persistence in nOAB are lacking [25]. A study on the epidemiology and healthcare utilization of neurogenic bladder patients performed in the US found that 71.5% were using one or more OAB drugs during the study period of one year.
Only 29% of the patients continued that therapy. Another 38% of the patients stopped and did not restart, 34% stopped and restarted [24]. This suggests that neurogenic bladder patients are not adequately managed. In our study 32% of the patients with neurogenic OAB discontinued versus 58% of the patients with idiopathic OAB, which was a significant difference. This suggests that patients with neurogenic OAB have a better persistence compared to patients with idiopathic OAB. Conclusions This prospective cohort study showed a real life continuation rate of solifenacin of 40% after 12 months. This continuation rate is higher than found in most other studies. The use of regular telephonic evaluation might have improved medication persistence. This observation should be further investigated. The findings of this study also suggest that patients with neurogenic overactive bladder have a better persistence with this method of evaluation compared to patients with idiopathic overactive bladder. Additional file Additional file 1: (Dataset 1). Real life persistence Solifenacin. This is a data set for a study of the real life persistence rate with antimuscarinic treatment in patients with idiopathic or neurogenic overactive bladder. (PDF 56 kb) Abbreviations OAB: Overactive bladder; iOAB: Idiopathic overactive bladder; nOAB: Neurogenic overactive bladder; UTI: Urinary tract infection; BPH: Benign prostatic hyperplasia; SMS: Short message service
2017-06-28T04:37:38.488Z
2017-04-13T00:00:00.000
{ "year": 2017, "sha1": "f39b6b63abe892f19e6f15a1e5ba60ed251baa7d", "oa_license": "CCBY", "oa_url": "https://bmcurol.biomedcentral.com/track/pdf/10.1186/s12894-017-0216-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f39b6b63abe892f19e6f15a1e5ba60ed251baa7d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252316039
pes2o/s2orc
v3-fos-license
Identification of NHLRC1 as a Novel AKT Activator from a Lung Cancer Epigenome-Wide Association Study (EWAS) Changes in DNA methylation identified by epigenome-wide association studies (EWAS) have been recently linked to increased lung cancer risk. However, the cellular effects of these differentially methylated positions (DMPs) are often unclear. Therefore, we investigated top differentially methylated positions identified from an EWAS study. This included a putative regulatory region of NHLRC1. Hypomethylation of this gene was recently linked with decreased survival rates in lung cancer patients. HumanMethylation450 BeadChip array (450K) analysis was performed on 66 lung cancer case-control pairs from the European Prospective Investigation into Cancer and Nutrition Heidelberg lung cancer EWAS (EPIC HD) cohort. DMPs identified in these pre-diagnostic blood samples were then investigated for differential DNA methylation in lung tumor versus adjacent normal lung tissue from The Cancer Genome Atlas (TCGA) and replicated in two independent lung tumor versus adjacent normal tissue replication sets with MassARRAY. The EPIC HD top hypermethylated DMP cg06646708 was found to be a hypomethylated region in multiple data sets of lung tumor versus adjacent normal tissue. Hypomethylation within this region caused increased mRNA transcription of the closest gene NHLRC1 in lung tumors. In functional assays, we demonstrate attenuated proliferation, viability, migration, and invasion upon NHLRC1 knock-down in lung cancer cells. Furthermore, diminished AKT phosphorylation at serine 473 causing expression of pro-apoptotic AKT-repressed genes was detected in these knock-down experiments. In conclusion, this study demonstrates the powerful potential for discovery of novel functional mechanisms in oncogenesis based on EWAS DNA methylation data. NHLRC1 holds promise as a new prognostic biomarker for lung cancer survival and prognosis, as well as a target for novel treatment strategies in lung cancer patients. Background Lung cancer represents the leading cause of cancer-related deaths. Despite the improvements in therapies, it still accounts for about one fifth of all cancer deaths [1,2]. According to the World Health Organization (WHO), the mortality rate of lung cancer was 83% in 2018 [3]. This observation probably stems from the fact that most lung tumors are diagnosed at advanced stages III and IV [4]. Most patients suffering from lung cancer are former or current smokers. However, the lifetime risk for developing lung cancer varies between studies from 6.7% in males and 4.1% in females [5] to 8.8% in males and 6.5% in females [6]. This reflects inter-individual differences in lung cancer susceptibility, which are not only underlined by results from genome-wide association studies (GWAS) [7], but also by changes in DNA methylation upon smoking [8][9][10][11]. Epigenome-wide association studies (EWAS) have been conducted for a wide range of diseases such as cancer, which are, at least in part, linked to the individual lifestyle [12,13]. EWAS have the common aim to identify epigenetic variants by means of blood DNA methylation changes on an epigenome-wide level in pre-diagnostic samples to estimate their association with disease risk. Recently, a multicenter lung cancer EWAS identified CpGs sensitive to smoke exposure which were hypomethylated in pre-diagnostic blood from lung cancer patients. 
Stringent analyses point out that the observed hypomethylation may explain the effect of tobacco exposure on lung cancer risk [9][10][11]. The Heidelberg sub-cohort of the European Prospective Investigation into Cancer and Nutrition (EPIC HD) provided one of the validation sets applied in two previous lung cancer EWAS [9,14], which found the strongest associations of cg05575921 and cg03636183 hypomethylation with lung cancer risk. In EPIC HD, we obtained HumanMethylation450 BeadChip array (450K) data of circulating lymphocyte DNA from 66 healthy pre-diagnostic lung cancer cases and an equal number of matched control subjects. In this discovery cohort, we aimed to identify DNA methylation alterations associated with lung cancer. The top differentially methylated positions (DMPs) and differentially methylated regions (DMRs) in lymphocyte DNA identified from EPIC HD were tested for differential methylation in lung tumor versus adjacent normal lung tissue using existing data sets from The Cancer Genome Atlas (TCGA) as well as two independent replication sets. We primarily focused on those DMPs and DMRs located in regulatory regions important for gene expression (i.e., close to annotated transcription start sites). To address functional roles of these DMRs in lung tumorigenesis we performed in depth in vitro characterization in lung cancer cell lines and report here on a novel gene, NHLRC1, involved in AKT activation. Differentially Methylated CpGs from EPIC HD EWAS In order to identify differentially methylated sites in lung cancer patients versus controls, 66 pre-diagnostic blood samples from smoking EPIC Heidelberg participants (Table 1), who developed lung cancer at a later point of time, were screened together with 66 matched controls with Illumina HumanMethylation450 BeadChip arrays (450K). Table 1. Characteristics of the EPIC HD discovery blood sample set. EPIC HD 450K Discovery Sample Set (Blood Samples) Total sample number cases and controls (n) 132 Mean age at baseline (years) 56 (range: Mean time from blood draw to diagnosis (years) 4.6 (range: 1. EPIC HD differential methylation was calculated in 63 sample pairs as intra-pair differences and resulted in the identification of 1,106 significantly differentially methylated positions (DMPs). The autosomal genome-wide p-values of intra-pair methylation differences for every CpG probe on the 450K array are displayed in Figure 1A. The vast majority of those CpGs were hypermethylated and a particularly high fraction of the hypermethylated DMPs was found in gene regulatory regions close to transcription start sites (TSS, Figure 1B). By contrast, gene desert regions showed more hypomethylated sites. We ranked EPIC HD DMPs by mean differential methylation across all sample pairs. The six top hypo-and hypermethylated DMPs listed in Table 2 were further considered for analysis comparing lung tumor tissue with normal adjacent tissue from TCGA data. Analysis of TCGA 450K data from lung squamous cell carcinoma (SCC, LUSC) and lung adenocarcinoma (ADC, LUAD) of the top-ranked EPIC HD DMPs revealed that these candidate CpGs were also differentially methylated in lung tumor versus normal tissue. Strikingly, the top two hypomethylated (cg22586603, cg13291208) and the top two hypermethylated (cg06646708, cg17242351) CpGs from EPIC HD were highly significantly differentially methylated in lung TCGA data sets. 
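As a rough sketch of the paired analysis described above (intra-pair differences of 450K beta values, ranked by mean differential methylation), one could proceed as follows. The synthetic data, the choice of a Wilcoxon signed-rank test and the array layout are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired DMP analysis on 450K beta values:
# rows = CpG probes, columns = matched case/control pairs. Data are synthetic.
rng = np.random.default_rng(0)
n_probes, n_pairs = 1000, 63
beta_cases = rng.beta(2, 2, size=(n_probes, n_pairs))
beta_controls = rng.beta(2, 2, size=(n_probes, n_pairs))

# Intra-pair differences per probe (case minus matched control)
delta = beta_cases - beta_controls
mean_delta = delta.mean(axis=1)

# Paired non-parametric test per probe (one possible choice)
p_values = np.array([wilcoxon(delta[i]).pvalue for i in range(n_probes)])

# Rank probes by mean differential methylation, as described in the text
order = np.argsort(-np.abs(mean_delta))
for i in order[:5]:
    print(f"probe {i}: mean delta-beta = {mean_delta[i]:+.3f}, p = {p_values[i]:.3g}")
```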
While differential methylation in regions located in the open sea can also impact cancer risk, e.g., by overlapping an enhancer or an insulator region, for this study we focused our analysis on those CpGs that were annotated as being proximal to a TSS and for which additional 450K probes existed within a region of 500 bp up-or downstream, in order to maximise chances of focusing on regions where differential methylation has an impact on gene expression. This limited our analyses to cg06646708 located 374 bp upstream of the annotated TSS of the ubiquitin E3 ligase NHL Repeat Containing 1 (NHLRC1, Figure 2A) and to cg17242351, intragenic to the actin alpha cardiac muscle 1 (ACTC1, Supplementary Figure S1A). In addition, the latter overlaps an annotated transcript for the long non-coding RNA LOC101928174. Detailed analysis of the ACTC1 region showed that three CpGs (cg10580056, cg03844894 and cg05432213) adjacent to cg17242351 were hypermethylated in lung cancer versus normal tissue and in addition, ACTC1 expression was reduced in lung tumor tissue from TCGA (Supplementary Figure S1B-E). cg06646708 is framed by two 450K probes (cg18068140 and cg18232313) within the NHLRC1 upstream region, which were uniformly hypomethylated with elevated gene expression in lung tumor tissue from TCGA (lung adenocarcinoma (ADC): Figure 2B,C and squamous cell carcinoma (SCC): Supplementary Figure S2A,B. Hypomethylation of this DMR was also observed in TCGA bladder, liver and breast tumors (Supplementary Figure S3). Notably, the 450K analysis of EPIC HD blood samples had revealed cg06646708 as hypermethylated, which is contrary to the TCGA data set. Given that this site was among the top ranked DMPs in the EPIC HD study, appears as hypervariable in blood samples as well as in tumors, and may be involved in regulating NHLRC1 gene expression in lung tissue, we nevertheless considered it as worthy of further investigation. Current smokers (n) 45 Former smokers (n) 54 Unknown smoking status (n) 1 Mean pack years (py) 46 (range: 1-150) Number of patients considered in the analysis (n) 94 * * 6 patients did not meet quality control criteria. To validate the ACTC1 and NHLRC1 lung tumor DMRs and to confirm their differential expression in lung tumors, we performed sequence-specific methylation-sensitive mass spectrometry (MassARRAY) and real-time PCR (rtPCR) in 94 lung squamous cell carcinoma (SCC) and in lung adenocarcinoma (ADC) versus paired adjacent normal lung tissue samples from current or former smokers of the lung tumor versus adjacent normal sample set (Table 3). By MassARRAY, we were able to quantitatively determine DNA methylation of a 399 bp region within the ACTC1 gene surrounding cg17242351 (Supplementary Figure S1A) and a 256 bp region upstream of NHLRC1 containing cg06646708 ( Figure 2A). Thereby, we were able to confirm the hypermethylation of the ACTC1 DMR in tumors and link it to decreased gene expression (Supplementary Figure S1F,G), as well as the NHLRC1 DMR hypomethylation in line with the observed TCGA data (ADC: Figure 2D and SCC: Supplementary Figure S2C). Notably, NHLRC1 expression was increased by 5.4-fold in ADC ( Figure 2E) and 3.6-fold in SCC (Supplementary Figure S2D). By analysing an independent sample set of tumor versus adjacent normal (Replication Set II, Supplementary Table S1), we confirmed the NHLRC1 DMR hypomethylation (Supplementary Figure S4). The observed hypomethylation of the NHLRC1 DMR in MassARRAY and increased gene expression is congruent with the TCGA data. 
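The expression fold-changes reported above come from real-time PCR. One common way to obtain such values is the 2^(−ΔΔCt) method, sketched below with invented Ct values and an unspecified housekeeping reference gene; the study's exact normalization strategy is not detailed here, so this is only an illustration of the calculation.

```python
# Illustrative 2^(-ddCt) calculation for relative NHLRC1 expression in tumor
# versus adjacent normal tissue. Ct values and the reference gene are invented.

def fold_change(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    d_ct_tumor = ct_target_tumor - ct_ref_tumor      # normalize to reference gene
    d_ct_normal = ct_target_normal - ct_ref_normal
    dd_ct = d_ct_tumor - d_ct_normal
    return 2 ** (-dd_ct)

# Example: NHLRC1 vs. a housekeeping gene in one tumor/normal pair
print(f"fold change = {fold_change(26.1, 18.0, 28.9, 18.4):.1f}x")
```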
In conclusion, the EWAS results have pointed us to regions that may be functionally involved in deregulated gene expression in cancer. We hypothesize that the DMR containing cg06646708 upstream of NHLRC1 regulates NHLRC1 gene expression in lung tumor tissue. This was demonstrated by our MassARRAY and gene expression analysis in lung tumor versus adjacent normal lung tissue, as well as in TCGA data from different cancer tissues. Although the ACTC1 DMR locus also shows substantial changes in DNA methylation and gene expression, we did not focus on this region further. Since this CpG overlaps with a long non-coding RNA, we cannot exclude a potential formation of a RNA:DNA hybrid (R-loop) influencing DNA methylation and gene expression via alternative pathways as shown elsewhere [15,16]. Thus, ACTC1, for which high expression is shown to indicate glioblastoma progression [17], remains a promising candidate to be investigated in a future study. NHLRC1 Expression Is Epigenetically Regulated To assess the regulatory potential of the identified DMR in NHLRC1, we analysed ChIP-seq data of activating and repressive histone marks from the Encyclopaedia of DNA Elements (ENCODE) project from the lung cancer cell line A549 and the normal lung fibroblast cell line NHLF [18]. We found the activating histone marks H3K4me1, H3K4me3 as well as H3K27ac enriched in the A549 lung cancer cell line compared to the normal lung fibroblast cell line NHLF, whereas the repressive mark H3K27me3 was absent in both cell types (Supplementary Figure S5). This indicated that altered epigenetic regulation might be the cause for elevated NHLRC1 transcription in tumor tissue. To test the impact of DNA hypomethylation on NHLRC1 transcription, we treated A549 and H1299 lung cancer cell lines and BEAS 2B normal lung cells with different concentrations of 5-Aza-2 -deoxycytidine (DAC), an approved epigenetic drug that leads to a global loss of DNA methylation through inhibition of DNA methyltransferases [19,20]. We used increasing DAC concentrations ranging from 10 nM to 1000 nM and determined the optimal DAC treatment efficacy for each cell line by measuring global DNA methylation of the long-interspersed elements (LINE1) by pyrosequencing ( Figure 3A). H1299 and BEAS 2B showed the lowest LINE1 methylation with 500 nM DAC and A549 with 100 nM DAC. Next, we analysed the methylation of the DMR upstream of NHLRC1 ( Figure 3B), which exhibited decreased methylation for DAC treated compared to the control cells concordant with the observed DNA demethylation of LINE1. The bronchial epithelial BEAS 2B cells showed DNA methylation resembling normal lung tissue methylation levels ( Figure 3B, right panel). In contrast, the same region was less methylated in untreated A549 and H1299 lung cancer cells ( Figure 3B, white bars of left and middle graphs). The NHLRC1 baseline expression in lung tumor cells was approximately 2-fold higher compared to the expression level in normal bronchial BEAS 2B cells ( Figure 3C). In line with this, we observed higher levels of NHLRC1 expression induced upon demethylation in DAC-treated BEAS 2B cells compared to lung cancer cell lines ( Figure 3D). To confirm the regulatory activity of the NHLRC1 upstream region we performed a sequence-specific DNA-methylation-dependent luciferase reporter assay comparing unmethylated and in vitro methylated NHLRC1 upstream sequences. 
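How such reporter read-outs are typically turned into the fold-activities reported next can be sketched as follows. The use of a Renilla co-transfection control, the replicate handling and all numbers are assumptions made for illustration, not details taken from the study.

```python
# Toy normalization for a dual-luciferase promoter assay: firefly counts are
# normalized to a Renilla co-transfection control, then expressed relative to
# the empty vector. All numbers are invented.

raw = {                                   # (firefly, renilla) luminescence counts
    "empty vector":                 (1.2e4, 9.0e4),
    "NHLRC1 region, unmethylated":  (1.5e5, 9.5e4),
    "NHLRC1 region, methylated":    (2.0e4, 8.8e4),
}

normalized = {name: firefly / renilla for name, (firefly, renilla) in raw.items()}
baseline = normalized["empty vector"]

for name, value in normalized.items():
    print(f"{name}: {value / baseline:.1f}x over empty vector")
```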
The NHLRC1 upstream sequence showed significant promoter activity, which was on average 11-fold (A549; Figure 3E) and 17-fold (H1299; Figure 3F) higher upon demethylation compared to the empty vector. Supplementary Figure S6 depicts the region tested with the luciferase reporter assay, as well as the corresponding in vitro DNA methylation measurements with MassARRAY. These results fit with the observation that chromatin remodeling via posttranslational histone modifications and transcription factor recruitment for the initiation of gene transcription occurs upstream of transcription start sites [21,22].

NHLRC1 Stimulates Proliferation, Viability, Migration and Invasion

Since NHLRC1 was overexpressed in tumor tissue, we aimed to characterize NHLRC1 functions in more detail. To do this, we performed RNAi-mediated knock-down (KD) of NHLRC1 in A549 and H1299 cells with highly specific siRNA pools [23]. NHLRC1 siRNA KD was confirmed by rtPCR (Figure 4A,B). We measured the numbers of proliferating and viable cells 96 h post-transfection. NHLRC1 KD cells proliferated less (Figure 4C,D) and were less viable compared to control-treated cells (Figure 4E,F). In addition, NHLRC1 KD attenuated transwell lung cancer cell migration (Figure 4G,H) and basement membrane invasion (Figure 4I,J). In brief, the overexpression of NHLRC1 in tumor cells is associated with higher proliferation rates and an increased ability for migration and invasion, which is characteristic of cancer cells.

NHLRC1 Regulates AKT Activation and Modulates Expression of AKT-Regulated Genes in Lung Cancer Cells

A striking observation in our RNAi experiments was the downregulation of phosphorylated AKT at serine 473 (pAKT Ser473) in A549 and H1299 cells upon NHLRC1 KD (Figure 5A,D). The ratio of total AKT to pAKT Ser473 was approximately 1:1 in both control-treated lung cancer cell lines (Figure 5B,E), but in siRNA-treated cells we found this ratio shifted towards 1:2 and 1:3, respectively. This indicates that NHLRC1 loss alone was sufficient to attenuate oncogenic PI3K-AKT-mTORC2 signalling. To rule out an off-target effect of the NHLRC1 siRNA pool, we transiently overexpressed NHLRC1 or an NHLRC1-C26S catalytic mutant reported in [24] in H1299 cells. Notably, we observed a 2-fold upregulation of pAKT Ser473 only for cells overexpressing the wild-type NHLRC1 but not for the C26S mutant (Figure 5G,H). This was still dependent on upstream PI3K signalling, since we found attenuated pAKT Ser473 upon PI3K inhibitor treatment in H1299 cells (Figure 5G).
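The knock-down and expression fold-changes quoted in this section are the kind of values typically derived from rtPCR Ct data. The sketch below shows the standard 2^-ddCt (Livak) calculation under the assumption of a housekeeping reference gene; the Ct values are invented purely for illustration, and the paper does not publish its quantification script.

```python
def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression (sample vs. control) by the 2^-ddCt (Livak) method."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_sample - d_ct_control)

# Invented Ct values: NHLRC1 in siNHLRC1-treated vs. control-treated cells,
# normalised to an assumed housekeeping gene. Not measured data.
fold = ddct_fold_change(ct_target_sample=27.5, ct_ref_sample=18.0,
                        ct_target_control=24.8, ct_ref_control=18.1)
print(f"NHLRC1 expression after knock-down: {fold:.2f}-fold of control")
```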
Previously, it has been shown that pAKT Ser473 translocates to the nucleus and phosphorylates FOXO transcription factors, causing the export of FOXO from the nucleus [25]. Hence, active PI3K-mTOR signalling is crucial for cancer cell proliferation, since FOXO is prevented from regulating the expression of its target genes [26], as reviewed in [27]. Examples of FOXO-regulated genes are the apoptosis regulator tumor necrosis factor-related apoptosis-inducing ligand (TRAIL [28]), BCL2 like 11 (BIM [29]), p53 upregulated modulator of apoptosis (PUMA, also known as BCL2 binding component 3 [30]), the cell cycle promoting genes CyclinD1 and CyclinD2 [31,32], RB transcriptional corepressor like 2 (RBL2 [33]), cyclin dependent kinase inhibitor 1A (CDKN1A [34]) and the DNA repair enzyme growth arrest and DNA damage-inducible 45 (GADD45 [35]). Here, we investigated the effects of NHLRC1 knock-down on the expression levels of these FOXO-regulated genes. The analysis revealed that TRAIL was more than 100-fold upregulated in A549 NHLRC1 KD cells (Figure 5C), but not expressed in H1299 cells (data not shown). Furthermore, BIM and PUMA were upregulated in A549 (Figure 5C) and H1299 cells (Figure 5F) upon NHLRC1 KD. In addition, the FOXO-repressed genes CyclinD1 and CyclinD2 showed reduced expression in H1299 (Figure 5F) and, to a lesser extent, in A549 (Figure 5C). In contrast, RBL2 and CDKN1A were only slightly increased in A549 (Figure 5C) and H1299 (Figure 5F). GADD45 did not change upon NHLRC1 KD in A549 (Figure 5C) and was only marginally increased in H1299 (Figure 5F). CDKN1A and TRAIL expression were not detected in H1299 cells (not shown). This suggests an activation of intra- and extracellular apoptosis signalling upon NHLRC1 knock-down.

Discussion

Previously, EWASs have identified DNA methylation markers associated with lung cancer risk [9,14]. Here, we investigated differentially methylated positions identified by Illumina HumanMethylation450 BeadChip arrays of prediagnostic blood samples from the EPIC Heidelberg cohort. The analysis of the top differentially methylated sites showed that they were also differentially methylated in lung tumors compared to adjacent normal tissue. Differential methylation at cg06646708 was of particular interest, since its hypomethylation was associated with overexpression of the closest gene, the ubiquitin E3 ligase NHLRC1, not only in lung tumors, but also in TCGA data from different tumor sites. Recently, NHLRC1 methylation was linked with survival rates of SCC patients, with decreased survival in patients with lower methylation levels [36]. Comparing the consistent hypomethylation (and upregulation) in tumors to the hypermethylation (and downregulation) observed in the prediagnostic blood samples from cases versus controls in the 450K EPIC HD analysis indicated that this is a tissue-specific hypervariable DNA methylation site. Hypervariability in DNA methylation in tumors at sites of methylation boundary shift in CpG shores has previously been associated with deregulated expression of cell cycle genes, and as such these sites have been identified as important regions on which to focus future epigenetic investigations [37]. NHLRC1 was previously functionally characterized only in Lafora disease, a neurodegenerative type of myoclonus epilepsy [38,39].
Its cellular functions include the regulation of the mTOR pathway, microRNA processing body formation, glucose metabolism and autophagy [24,40,41]. However, these findings were linked to loss-of-function mutations in NHLRC1 in the context of Lafora disease. Here, we demonstrated a novel mechanism in lung tumorigenesis resulting from epigenetic upregulation of NHLRC1 via DNA hypomethylation. Several ubiquitin ligases have been linked to malignant alterations through deregulation of oncogenic cellular processes such as proliferation, apoptosis and cell cycle regulation [40]. NHLRC1 was reported to contribute to p53 inactivation through nuclear-to-cytoplasmic translocation of homeodomain-interacting protein kinase-2 (HIPK2). Thus, elevated NHLRC1 expression in lung tumor tissue may be a mechanism to inhibit TP53-pathway-regulated induction of apoptosis in lung cancer [41]. The tripartite motif containing 32 (TRIM32), another RING-type ubiquitin E3 ligase structurally highly similar to NHLRC1, was shown to be deregulated in several malignancies, leading to direct TP53 proteasomal degradation [42-45]. mTORC2 activation stimulates the central oncogenic downstream target AKT, which triggers cell proliferation, survival and chemotherapy resistance in lung and breast tumors [46]. Our results for NHLRC1 knock-down in A549 and H1299 cells are in line with the consequences of pAKT Ser473 loss. The observed changes in gene expression of the pro-apoptotic genes BIM, PUMA and TRAIL in NHLRC1 KD cells suggest that attenuated pAKT Ser473 leads to nuclear retention of FOXO3a. For instance, induction of PUMA causes intracellular apoptosis signalling in prostate cancer cells [47]. Similar effects were observed for lung cancer cells treated with cisplatin, a major first-line therapy for advanced non-small cell lung cancer [48,49]. Cisplatin interferes with PI3K signalling and stimulates the reactivation of FOXO3a, emphasizing the therapeutic importance of this mitotic cell signalling pathway. In summary, epigenetic deregulation of the DMR upstream of NHLRC1 leads to its upregulation and, in turn, stimulates pAKT Ser473. This highlights the diverse routes for PI3K pathway activation apart from genomic mutations. Hence, the ubiquitin system represents a promising target for further investigation. In line with the work of Li et al., NHLRC1 could be a promising new prognostic biomarker for lung cancer survival, as well as a target for new treatment strategies in lung cancer patients [36].

Study Population

The epigenome-wide HumanMethylation450 BeadChip array (450K; Illumina, San Diego, CA, USA) was used for DNA methylation profiling of 66 healthy pre-diagnostic blood samples from incident lung cancer cases, including adenocarcinoma (ADC), squamous cell carcinoma (SCC), small cell lung cancers and uncharacterized lung cancers, and individually matched control blood samples from the Heidelberg component of the EPIC study (EPIC HD), conducted in accordance with the Declaration of Helsinki. Informed consent was obtained from all study subjects. The ethics committee of the Medical Faculty of the University of Heidelberg approved the use of these EPIC samples in this nested substudy (S-627/2013) within EPIC (GEK; 13/94). Detailed information on EPIC HD is given elsewhere [9]. The present study was based on 211 incident lung cancer cases identified by July 2015.
Never-smokers, study participants with any diagnosed neoplastic disease, as well as those with a lung cancer diagnosis less than one year after blood draw, were excluded in order to minimize tumor-specific changes in peripheral blood methylation. After accounting for these exclusion criteria, the 66 samples from lung cancer cases with the shortest time to lung cancer diagnosis were selected. Controls were individually matched to cases based on sex, age (±5 years), smoking status (current or former) and pack years of smoking (py; ±3 py). The detailed sample set statistics are shown in Table 1 and included 25 ADC, 15 SCC, 19 small cell lung cancers and 4 uncharacterized lung cancers. Details on the compositions of the lung tumor versus adjacent normal lung tissue replication sets I and II are given in Table 3 (Replication Set I) and Supplementary Table S1 (Replication Set II).

DNA Isolation

EPIC HD laboratory procedures were carried out at the DKFZ and LGC Bioscience. Buffy coat DNA was isolated at LGC Bioscience using the company's standardized protocols and returned to the DKFZ for the DNA methylation screen. DNA of the tumor versus normal replication set was provided by Lung Biobank Heidelberg, a member of the accredited Tissue Bank of the National Center for Tumor Diseases (NCT) Heidelberg, the BioMaterialBank Heidelberg (BMBH) and the Biobank platform of the German Center for Lung Research (DZL). All participants gave their informed consent. The study was approved by the ethics committee of the Medical Faculty of the University of Heidelberg (Nr. 270/2001) and conducted in accordance with the Declaration of Helsinki. DNA was isolated with the AllPrep DNA/RNA Mini Kit (Qiagen, Hilden, Germany). Only tumor tissues with ≥50% viable tumor cells were used.

HumanMethylation450K BeadChip Array-Based Analyses of EPIC HD Samples

HumanMethylation450K BeadChip arrays were conducted by the DKFZ core facility for Genomics and Proteomics according to the manufacturer's instructions. Data analysis was conducted with RStudio (version 0.98.1091) [50] using RnBeads (version 0.99.17) [51]. Quality control measures included removal of probes overlapping known SNPs, probes not analysed in all samples or probes in a non-CpG context, normalization with the beta-mixture quantile dilation method (BMIQ) [52], as well as gender inference based on sex chromosome signal intensities. A total of 63 sample pairs passed the stringent quality control criteria and entered the differential methylation analysis. The blood cell type composition of every sample was estimated with the bioinformatics algorithm developed by Houseman and colleagues [53]. Principal component analysis of the cell type estimates was performed, and the first two principal components were used to adjust the observed intra-pair methylation differences in linear regression models for every CpG. p-values were corrected for multiple testing using the Benjamini-Hochberg (BH) method [54]. A p-value threshold of 0.05 was applied and the resulting CpGs were ranked by mean differential methylation across all sample pairs.

HumanMethylation450K DNA Methylation Data Analyses of TCGA Data for Tumor versus Adjacent Normal Tissue

All 450K data for primary tumor and adjacent normal samples for lung ADC (n Tumor = 361, n Normal = 43) and lung SCC (n Tumor = 424, n Normal = 33) available by July 2014 were downloaded from the TCGA data portal. Differential methylation analyses were performed using the package RnBeads (version 0.99.17) [51].
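A minimal sketch of the per-CpG analysis of the EPIC HD samples described above: intra-pair methylation differences are regressed on the first two principal components of the estimated blood cell-type composition, the intercept is tested against zero, and p-values are adjusted with the Benjamini-Hochberg method. Array sizes, variable names and the simulated data are placeholders, and the original analysis was run in R with RnBeads rather than in Python.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_pairs, n_cpgs, n_cell_types = 63, 1000, 6          # placeholder dimensions

# Simulated inputs: intra-pair beta-value differences (case minus control) and
# estimated blood cell-type fractions per pair (Houseman-type estimates).
diff = rng.normal(0.0, 0.05, size=(n_pairs, n_cpgs))
cell_fractions = rng.dirichlet(np.ones(n_cell_types), size=n_pairs)

# First two principal components of the cell-type estimates as covariates.
pcs = PCA(n_components=2).fit_transform(cell_fractions)
design = sm.add_constant(pcs)                          # intercept + PC1 + PC2

pvals = np.empty(n_cpgs)
for j in range(n_cpgs):
    fit = sm.OLS(diff[:, j], design).fit()
    pvals[j] = fit.pvalues[0]                          # test: mean adjusted difference == 0

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
ranking = np.argsort(-np.abs(diff.mean(axis=0)))       # rank CpGs by mean differential methylation
print(f"{reject.sum()} of {n_cpgs} simulated CpGs significant at BH-adjusted p < 0.05")
```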
Tumor versus adjacent normal data analyses for tumor entities other than lung were conducted using TCGA Wanderer [55].

For MassARRAY analysis, bisulfite-converted genomic DNA was amplified with region-specific primers (Supplementary Table S2) with the HotStarTaq polymerase kit (Qiagen). PCR products were treated with shrimp alkaline phosphatase (SAP), and 2 µL of SAP-treated PCR product were then in vitro transcribed with T7 polymerase, RNase A-cleaved, de-salted and finally subjected to matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry [56]. Knowledge of the expected sequence and the mass distinction of the different fragments generated from the PCR amplicon allowed quantification of DNA methylation for each CpG unit, which contained 1 to 3 cytosines. Raw results are displayed as ratios of methylated and unmethylated detected fragments and were analysed with the EpiTyper 2.0 analysis software.

DNA Methylation Analysis by Pyrosequencing

Genomic DNA was bisulfite-treated as described above. PCR and sequencing primers were designed using the PyroMark assay design software 2.0 (Qiagen) and sequenced on a PyroMark Q24 (Qiagen) according to the manufacturer's instructions. The primer sequences are given in Supplementary Table S2.

Proliferation assays using 4 µg/mL bisBenzimide (Hoechst 33342; Sigma Aldrich) were performed in vitro in A549 and H1299 cells treated with siNHLRC1 and a siRNA control pool, and cell viability was determined by incubating cells with 0.1 µg/mL Calcein-AM (Sigma Aldrich) as previously published [59,60]. Live cell staining was performed at 37 °C/5% CO2 for 30 min, followed by cell lysis (5 M NaCl, 10% Triton X in 1x PBS). Readouts were performed in a Tecan 200 plate reader (Grödig, Salzburg, Austria) at excitation/emission = 350 nm/460 nm for bisBenzimide and 485 nm/525 nm for Calcein, in technical quadruplicates.

Transwell Migration-Invasion Assays

Migration-invasion assays were performed in siNHLRC1-treated A549 and H1299 cells. Twenty-four hours post-starvation, cells were seeded into 0.8 µm transwell-membrane chambers covered with (for invasion) or without (for migration) 1x basement membrane extract (Trevigen, Gaithersburg, MD, USA). The assay and the data analysis were performed adhering to the manufacturer's instructions.

Statistical Analysis

p-values for functional analyses, including the luciferase assay, NHLRC1 siRNA knock-down and expression, invasion and proliferation assays, were calculated with a two-sided paired Student's t-test with a confidence interval of 0.95 in RStudio [50].

Conclusions

In conclusion, this study has shown that there is a large potential for the discovery of novel functional mechanisms in oncogenesis based on EWAS DNA methylation data. The approach detailed here of analysing a top differentially methylated site from an EWAS study holds promise for the identification of further new mechanisms involved in cancer formation.

The lung tumor versus adjacent normal lung tissue replication set was obtained from current or former smokers diagnosed with lung SCC or ADC who underwent surgery at the Thoraxklinik Heidelberg, with approval from the ethics committee of the Medical Faculty of the University of Heidelberg ("Immunologische und molekularbiologische Untersuchungen" 270/2001). All samples were processed anonymized, i.e., with lab identification numbers only.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: Since participants of this study did not agree for their data to be shared publicly, supporting data are not available.
Astrobiocentrism: reflections on challenges in the transition to a vision of life and humanity in space

Abstract

Astrobiocentrism is a vision that places us in a scenario of confirmation of life in the universe, either as a second genesis or as an expansion of humanity in space. It raises consistent arguments in relation to questions such as: what would happen to knowledge if life were confirmed in the universe, and how would this change the way we understand our place in the cosmos? Astrobiocentrism raises a series of reflections in the context of confirmed discovery, and it develops concepts that work directly with what would happen after irrefutable evidence has been obtained that we are not alone in space. Unlike biocentrism or ecocentrism, the astrobiocentric view is not limited to the Earth-centric perspective, and for this reason it incorporates a multi-, inter- and transdisciplinary understanding. Therefore, the aim of this paper is to reflect on the astrobiocentric issues related to the challenges and problems of the discovery of life in the universe and the expansion of mankind into space. Here we explore some aspects of the transition from biogeocentrism to astrobiocentrism, astrobiosemiotics, homo mensura, moral community, planetary sustainability and astrotheology.

Introduction

Astrobiology is the science that studies the possibility of life in the universe, but it can also study the expansion of humanity to other habitable environments in the cosmos (Chon Torres, 2020). However, despite not having empirical confirmation of a discovery of this magnitude, the research carried out to try to detect extraterrestrial life allows the disciplines involved to acquire more and better knowledge, including astronomy, biology and chemistry, just to name a few (Kwon et al., 2018; Space Studies Board, 2019). That is to say, in the field of the natural sciences there is an important advantage. On the other hand, in the social sciences and humanities there is also an acquisition of knowledge, at the level of law, politics and philosophy, among others. Thus, from an epistemological perspective, astrobiology brings together, but does not unify, the knowledge and methodologies of its component disciplines.
As astrobiology is an academic discipline closer to a transdisciplinary way of working (Santos et al., 2016; Chon-Torres, 2021), it allows the communication of knowledge acquired from different disciplines, resulting in an interconnected understanding that respects methodological differences. That is, even though the study phenomenon is the same, such as the possibility of life in the universe or our expansion in it, it lets each discipline develop the knowledge it needs to address the aspect that requires it, so that it finally manages to find a nexus with the other disciplines involved that the researcher in charge needs. Let us imagine that, in the study of the possibility of life in the universe, it is possible to determine the presence of life in an environment outside the Earth. First of all, to gain knowledge about which planets are suitable candidates requires knowledge of astronomy, specifically radio astronomy. However, to know what you need to look for, you must first have knowledge of the basics that enable life, for which you draw on biology and, more recently, also on systems chemistry. At some point in this investigation, knowledge of planetary geology may be relevant, as well as other disciplines, each one offering information to complete the transdisciplinary puzzle that astrobiology presents us. However, these sciences would not be the only ones, since we also have the social sciences and humanities. The natural sciences, insofar as they investigate on the basis of evidence and scientific verification, have a greater impact on the discovery aspect. The empirical sciences discover, while the social sciences and humanities have a greater incidence in the creation of constructs or concepts to understand reality (Restrepo, 2020). This division is not absolute, because after all the natural sciences need to elaborate theories within a conceptual network, at its own level of autonomy, that can explain the laws that are discovered, just as the social sciences and humanities are based on a shared world, which ends up being observable but which needs a narrative to make sense of it. In the social sciences and humanities there is a greater degree of interpretation of the same phenomena (such as understanding the development of society from the right or the left, as ideology), or sustaining an action from a utilitarian morality or one based on Kantian ethics. It is not the same to replicate an experiment in a laboratory (even when it is not completely attainable due to the challenges associated with subjectivity and interpretation, as in the fields of ecology and taxonomy) as to expect a psychotherapy to have the same result in all people. However, each academic discipline (such as the natural sciences, social sciences and humanities) has a way of working that respects a procedure based on logical criteria, generally known as the scientific method.
It would be complicated to accept that cognitive processes, for example, can be reduced to laws of physics (Restrepo, 2020). What we could do is to establish a provisional division between disciplines that can be explained and derived from physics, and others whose dynamics cannot be reduced only to the former. Each discipline has its own modus operandi, respecting the disciplinary matrix (Kuhn, 2012) of its academic neighbours, although maintaining some connection as they develop in the same reality. In this scenario, astrobiology crosses a number of disciplines, not all of which are reduced to the dynamics of the laws of physics, such as philosophy, law, sociology and history, just to mention a few. In this sense, research on the presence of life in the universe, or on our expansion in it, represents a development in each area involved. However, only in the scenario where it is possible to confirm that extraterrestrial life exists, or that humanity has achieved considerable development as a multi-planetary being (Chon-Torres and Murga-Moreno, 2021), could we be talking about the emergence of a vision that is not limited to life on Earth: that is, astrobiocentrism.

Astrobiocentrism is the scenario where life outside the Earth is confirmed, but also where humanity manages to become multiplanetary (Chon-Torres, 2022). In any case, it is a situation where life manages to take presence beyond the Earth. In this context, academic disciplines experience a change in relation to the discoveries made in their areas of knowledge. On the other hand, we currently live within biogeocentrism (Chela-Flores, 2022), that is, a vision of life in the universe conditioned by the related knowledge we have on Earth. In the natural sciences, it is impossible to get out of this panorama as long as there is no evidence, because it depends on the discovery to be able to propose an adequate basic theory about life in the universe. On the other hand, in the spectrum of the social sciences and humanities, insofar as constructs are handled, we can pose scenarios in the form of mental experiments, or hypothetical situations where our ethics are faced with new challenges (e.g. what should we do if life is discovered in a space mining settlement; are we the guardians of life in the universe; etc.). Thus, at least as far as the reflection of the latter disciplines is concerned, it is possible to reflect on what would happen if both the discovery of life in the universe and the expansion of mankind in it were to take place. Thus, we have the transition from biocentrism to astrobiocentrism; the approach of a moral community independent of the informational code, which would come about due to changes in the process of multiplanetarity; a semiotics of life outside the Earth; a sustainable interplanetary development; and astrotheology.

Astrobiocentrism beyond biocentrism

The new topic of astrobiocentrism discusses how our scientific and humanistic concepts would change as soon as extraterrestrial life were to be firmly confirmed (Chon-Torres, 2022). Such an event would occur when we accept multiple data from either different missions for solar system exploration, or reliable results from observatories, including radio telescopes. Anticipating such a step forward in our knowledge, it is timely to confine our discussion to what we mean by a 'second genesis' (McKay, 2001; Chela-Flores, 2009).
Astrobiology is an attempt that guides us to understand life in the universe in scientific terms. It is a tool for the search for an answer that science can provide regarding the question of the fitness of the universe for the emergence and evolution of life. We can even attempt to formulate questions that are of interest beyond science, namely in the humanities (Chela-Flores, 2005). We postpone the closely related consideration of the implications of the expansion of humanity into space (Chon-Torres and Murga-Moreno, 2021). The topic we will cover in this section is only biogeocentrism.

These are questions that would be simpler to understand with more than a single genesis in the universe (a 'second genesis'). Extrapolating elsewhere in the universe our present understanding of the emergence of life gives us an objective for astrobiology, since it allows us:

• To provide theoretical bases for the eventual detection of extraterrestrial life (Aretxaga-Burgos and Chela-Flores, 2012).
• To elaborate a strategy for obtaining reliable data on extant, or extinct, life. An alternative option would be identifying biosignatures, with the payloads of the forthcoming exploration of the Jovian system, from the point of view of biogeochemistry (Chela-Flores, 2022).

Consequently, astrobiology can be understood as the scientific exploration of the universe searching for life elsewhere. This new science maintains that it is not premature to discuss the implications of abandoning what some philosophers have called biocentrism: a scientific question that can be approached with specific missions supported by several national space agencies, or with the eventual success of the SETI Project (Drake and Sobel, 1992). It is remarkable that there are ways, through radio telescopes, to investigate whether our level of intelligence is not unique, namely the capacity of extraterrestrial civilizations to produce technological signatures that humans can detect (SETI, 2023), eventually going beyond the restricted philosophical doctrine of biocentrism. Regarding the universe as being biocentric goes back in time for over a century: Lawrence Henderson added a significant step to Darwinism (Henderson, 1913).

Henderson's contribution is one of the earliest appearances of biocentrism in questions relevant to the emergence, evolution and distribution of life in the universe. However, Aretxaga (2004) has underlined that the term 'biocentrism' arises in at least another two contexts besides astrobiology: one of them is in the field of philosophy, the other is environmental science.

In the standard evolutionary theory, Darwin had argued in favour of the fitness of organisms for their environment. The contribution of Henderson to Darwinism was that, in addition to the fitness of organisms to the environment, we should also profitably consider the fitness of the environment itself (Gingerich, 2008). The often-quoted citation from Henderson's (1913) article is the following:

The properties of matter and the course of cosmic evolution are now seen to be intimately related to the structure of the living being and to its activities; they become, therefore, far more important in biology than has been previously suspected. For the whole evolutionary process, both cosmic and organic, is one, and the biologist may now rightly regard the universe in its very essence as biocentric.
Henderson's suggestion refers to the relationship between the chemistry and environmental conditions that allow life to arise and evolve on Earth. In his book, fitness is not just limited to Earth. For clarity, within the astrobiological context, the word 'biogeocentrism' was coined (Chela-Flores, 2001) as a term that reflects a tendency observed in some contemporary scientists and philosophers, according to which life is only likely to have occurred on Earth.

Space mining in an astrobiological scenario

Practically all exploration and research programmes which are focused on the human and robotic study of space and the utilization of its geological resources coincide in the need for applying a multidisciplinary approach to achieve the main scientific and technological goals. As Angel Abbud-Madrid, Director of the Center for Space Resources at the Colorado School of Mines in Golden, Colorado, has stated, 'space mining has matured to the point where there are dozens of startup companies, even larger firms, addressing aspects of what's called the "space resources value chain"'. He also advanced several pertinent questions and standpoints on how this process would be developed (Abbud-Madrid, 2018).

However, there is an astrobiocentrist bias whose implications are usually undervalued (or at least not considered in their whole dimension). Thus, it undermines the geoethical and astrobioethical principles (Martínez-Frías et al., 2010, 2011; Martínez-Frías, 2016; Chon-Torres, 2018), the way of thinking and acting, which should also be involved in the In Situ Resource Utilization (ISRU) concept and its wide spectrum of socioeconomic and cultural considerations.

In order to take this astrobiocentrist bias into account, it would be relevant to establish a basic guideline, which could be used as a roadmap, depending on several factors. This guideline would also be useful for facing some key questions regarding the space mining of extraterrestrial natural resources, before developing any procedure. Here, some of these questions are put forward:

□ Where will the space resources be mined, on the Moon or on the asteroids?
□ Besides, in what area of the Moon (maria, terrae) will the mining take place, or in what type of asteroids (e.g. carbonaceous, metallic, silicate-rich)?
□ Is the space mining going to be part of the activities of a great private industrial company? If so, will these activities be developed under some special control or supervision?
□ Will they be exclusively part of its corporate assets, or will they be widely and publicly used for the sustainability of humankind, particularly in the least developed countries?
□ Is the ISRU only devoted to satisfying the needs of a small community of space astronauts, or will it be a massive exploitation?
□ Will the investigation and exploration of space resources be exclusively characterized by a scientific interest (e.g. mineralogenetic and metallogenetic studies) to understand the origin and distribution of the mineralization processes and the petrogenetic and geochemical distribution of the mineral concentrations, as part of the evolution of solar system bodies (asteroidal, lunar or planetary)?
All these questions, and probably many others, display a clear astrobiocentrist perspective (Garvin, 2005). In fact, all our activities in space follow this human-related guideline, and we can only do our best to avoid our own human customs and preconceptions. Anyway, it is clear that appropriate geoethical and astrobioethical protocols and codes of good practice are needed in any mining activity related to space resources, in order to (at least) palliate the astrobiocentrist bias. In spite of that, such protocols and codes are neither unambiguous nor standard. Therefore, they should be tailor-made considering, among others: a) the type of natural resources to be mined; b) their distribution and location in the asteroidal, lunar or planetary body; c) the different, more or less harmful, extractive methodologies and their environmental consequences and d) the type of utilization and profit and the general goals of their mining.

Carrying out previous assessments related to ISRU on selected outcrops at Earth analogues (e.g. Lanzarote) (Martínez-Frías et al., 2017), and also using asteroidal, lunar and martian simulants, is of great importance to validate scientific models and to test technological prototypes, potential extraction routines and their environmental damage. These efforts can provide information about the best methodological way to proceed and can uncover the strengths and weaknesses of the extractive procedures (also from the geoethical and astrobioethical perspectives).

As previously defined, the next space missions will be astrobiocentric (Garvin, 2005), and this affects almost all of our future activities beyond our planet. Space mining, in its whole dimension and its polyhedral, inter- and transdisciplinary development and approach, is a perfect example of it. In addition, if, as occurs in some mining areas of our planet, we were also to attempt to use microorganisms to help extract some metals of interest (or in relation to any other biomining activity), the ethical issues would be still much more complex than when using exclusively abiotic procedures.

Despite all these subjects arousing great interest and being tackled from different viewpoints in the framework of planetary protection, there is no clear and appropriate legislation yet. Space mining incorporates many different industrial and scientific innovations, and it can be, as any human activity, positive or negative depending on many variables. Ethics and the way of thinking and acting about its development are also key factors to be taken into account.
Astrobiosemiotics

The search for extraterrestrial life is more likely to yield indirect evidence, such as fossilized remains or chemical traces in exoplanet atmospheres, rather than live organisms for laboratory study. Much of the focus of the astrobiological search is to look for those signs of life, biosignatures, that indicate certain biochemical processes that could have their origin in extraterrestrial biological activity (Lineweaver, 2008; Horneck et al., 2016; Cavalazzi and Westall, 2018). In other words, the astrobiologist (the interpreter) makes connections between the expression (the biosignature) and the content (the living organism). The astrobiologist is thus engaged in a meaning-making semiosis, where the sign (as expression) stands for something (its object). The sign does not include its meaning; rather, the meaning is attributed through the elaboration of an interpreter. So, for something to be meaningful, an interpreter is needed, a human being (or other meaning-making creature) who endows the sign with a meaning. In that perspective, biosignatures are not solely 'out there'; instead, they emerge in the interaction between our minds and the outer world. This astrobiosemiosis is thus triadic: it contains expression, object and interpreter, which in our case correspond to 'biosignature', 'life' and 'astrobiologist'.

In astrobiology, biosignatures are a very diverse and inhomogeneous set of phenomena. Biosignatures can be of various kinds, such as fossils, molecules, traces, artefacts, structures, electromagnetic waves, etc. They can refer to chemical substances (such as elements or molecules), but also to physical features (structures, sizes or morphology) and physical phenomena (electromagnetic radiation, or light and temperature). They can vary in scale from atomic to planetary magnitude, or perhaps even beyond. They can be searched for both by in situ investigations and through remote indirect sensing, e.g. atmospheric spectroscopy, chemical disequilibrium or isotope ratios (Hegde et al., 2015), on our nearest planets and moons as well as in other solar systems. These signatures are meant to be evidence for either living or dead life, present or past, distinctive from an abiogenic background.

We propose here that a semiotic analysis of the sign relations of biosignatures could bring some semiotic order to this seemingly chaotic variation of signs. It turns out that the semiotic function of these signs varies a lot, and each has its own epistemological problems and semiotic peculiarities. The problem of biosignatures is very much a semiotic problem: how can meaning be discovered, invented, deciphered and interpreted? Astrobiosemiotics, as we understand it, focuses on how astrobiologists, as interpreters, establish connections between things, between the expression (the biosignature) and the content (the living organism), in various forms of semiosis, as icons, indices and symbols of life. Through a sincere analysis of the sign relations of biosignatures, we can achieve a more well-grounded knowledge about the living Universe.
In order to uncover the meaning-making strategies in the search for biosignatures, we rest on cognitive semiotics and related fields of research (Sonesson, 2007, 2009; Zlatev, 2012, 2015; Dunér and Sonesson, 2016) that study meaning-making structured by the use of different sign vehicles, and the properties of meaningful interactions with the surrounding environment. A semiotic approach towards the semiosis of biosignatures has first been elaborated by Dunér (2018, 2019); however, there are also a few earlier examples of studies that put forward the relevance of semiotics for the construction and decoding of interstellar messages (Vakoch, 1998a, 1998b; Dunér, 2011; Sonesson, 2013; Saint-Gelais, 2014).

The first problem that arises in a situation of interpreting a biosignature is realising that it really is a sign at all, that it contains an expression that refers to a content, leading to an interpretive process by the interpreter. Which signatures (phenomena) have meaning and which are just meaningless noise? The signifier (the biosignature) is directly given, but the signified (life) is only indirectly present, through the link with the signifier. As interpreters, the astrobiologists determine the relation between the signifier and the signified by picking out those elements they assume to be relevant. Even though astrobiologists have good reasons to believe that the connection they infer between the expression and the content, between the biosignature and the living organism, is scientifically correct, they need to rule out other explanations of the sign relation. The 'biosignature' might not be a true biosignature at all; instead, it might be caused by an unknown or known abiotic process.

Biosignatures are, in semiotic terms, very diverse phenomena. Depending on how the interpreter makes or interprets the connection between the expression and the object, there are basically three types of sign relations: icon, index and symbol. Based on Peirce's (1932) three sign relations (icon, index and symbol), one could at least reveal some peculiarities of the semiosis of biosignatures. The meaning of the relation between expression and content that the interpreter experiences is based on either similarity (iconicity), proximity (indexicality), or habits, rules or conventions (symbolicity). Thus, we can detect three general kinds of biosignatures in the semiotic sense: bioicons, bioindices and biosymbols.

Biosignatures that share a similarity with living organisms, for example fossils, are in our terminology bioicons, namely a sign relation based on similarity, where the expression shares some of the object's properties. The most obvious examples of bioicons are body fossils, the imprints of the hard parts of animals and plants, where the imprints of skeletons or foliage allow us, based on morphologic similarity, to establish a link between the fossilized structure and the living thing. Bioicons are not just of a visual nature, a similarity based on morphology or structure; they could exist in any sense modality. Based on chemical analyses, the researcher sees similarities between the expression and the content, not because of structural similarity, but because they share some chemical properties.
Bioindices are biosignatures that have a connection to their objects (the living organisms) by contiguity. In other words, the connection between the expression and the content is not based on similarity, but on indexicality. Perhaps the clearest examples of bioindices are atmospheric, chemical biosignatures that refer to biological processes, such as the metabolism of living organisms, discovered through spectroscopic methods (Arnold et al., 2002; Catling and Kasting, 2007; Arnold, 2008; Seager, 2014; Seager and Bains, 2015). The argument of spectroscopic analysis starts from the following premises: (P1) that life produces certain gases as a by-product of metabolism; (P2) that some of these gases will accumulate in the atmosphere and (P3) that these gases show a unique spectrum. From these premises, which are believed to be sufficient for detection, the astrobiologist concludes that life could, in theory, be detected. Bioindices call for a profound empirical knowledge of recurrent connections between object and expression. The challenge here is to distinguish those bioindices indicating existent life from features that are a result of known or unknown abiotic processes.

Searching for extraterrestrial intelligence by means of radio astronomy has been an exciting challenge ever since the start of Project Ozma in 1960 (Sagan, 1975; Weston, 1988; Tarter, 2001; Drake, 2011; Shuch, 2011; Dunér, 2015, 2017; Traphagan, 2015; Vakoch and Dowd, 2015; Cabrol, 2016). The problem of interstellar communication, however, lies not so much in the physical or technological constraints, even though they strongly challenge our scientific and technological skills, but in the cognitive and semiotic problems that decoding an interstellar message would provoke (Dunér, 2011, 2014; Dunér et al., 2013). The problem with symbolic messages, symbols, is that they are conventional, or arbitrary, as de Saussure (1995) called them. In contrast to icons and indices, biosymbols are completely arbitrary and depend on the socio-cultural context.

To conclude, the general epistemological problem of biosignatures is to recognize the signatures as meaningful, as signatures of life; that is, as an expression that refers to a content (i.e. life). Second, one needs to establish the connection between the expression and the object, the biosignature and the biological process that we call life, arrive at a certain degree of certainty, and be able to rule out other explanations for the signatures that are not of biological nature.

Astrobiosemiotics is an emerging interdisciplinary field that delves into the interpretation of signs and symbols in the context of astrobiology. It seeks to understand and categorize biosignatures, indicators of life, as distinct from abiotic signatures that arise from non-living processes. The significance of this field to astrobiocentrism, which places the search for extraterrestrial life at the core of astrobiological studies, is profound. Astrobiosemiotics provides the philosophical and methodological frameworks necessary to discern meaningful patterns indicative of life in the cosmic tapestry, amidst the multitude of signals that the universe presents.
By developing a semiotic understanding of the cosmos, researchers can justify the search for life beyond Earth with a more nuanced approach. This involves not just the detection but also the interpretation of signals, which could potentially reveal the presence of extraterrestrial intelligence. It encourages us to explore the importance of meaning-making in a vast and seemingly indifferent universe, prompting us to ask not only if we are alone, but also how we might communicate with and understand beings that are fundamentally different from ourselves. Astrobiosemiotics, therefore, not only supports astrobiocentrism but expands its vision by emphasizing the need for a deeper comprehension of life's signs across the cosmos.

Homo mensura?

The true meaning of the phrase 'homo mensura' uttered by Protagoras is debated. It is often translated as 'humankind is the measure of all things' and is commonly interpreted as signifying that all meaning and all values are in some sense created by us humans. We do not know if this is a correct interpretation of Protagoras, but it seems nonetheless to say something about a deep-seated attitude among many humans. What does this attitude imply, and how much sense does it make when discussing future interactions with extraterrestrial life?

When non-astrobiologists think about extraterrestrial life, they seem most of the time to think about complex life rather than microbial life (Offerdahl et al., 2002; Oreiro and Solbes, 2017). In many cases, when talking to the public about astrobiology, it becomes clear that what really interests people is the chance of finding life with a level of intelligence (to the extent that it makes sense to talk about intelligence in terms of levels) similar to ours and with motivations, emotions, sense organs and perspectives similar to ours (Chon-Torres et al., 2020; Schwartz, 2020). This is not at all surprising. Grounding one's expectations in what is closest to oneself is quite natural. We can call this epistemic anthropocentrism, on a par with epistemic geobiocentrism. There is probably also an element of wishful thinking in here. We hope to meet other beings that we can communicate with and have some kind of meaningful exchange of knowledge and perspectives. Our focus on life with human-like properties is probably not only epistemic, however, but maybe to an even higher degree a matter of axiological anthropocentrism. That is, the more similar to humans another life form is, the more valuable it becomes (axiology = value theory). The most important and most troublesome form of anthropocentrism in this as well as other contexts seems to be ethical anthropocentrism, stating that only human beings have moral standing, or in other words, that we have no moral obligation to care about the interests of non-humans.

It cannot be denied that the way we value others, and even the moral status we adjudge others, is often strongly tainted by prejudice. We tend to value other humans, as well as non-humans, based on how close they are to us in some sense, socially, culturally, biologically and geographically. Traditionally, we also tend to assign moral standing only to members of our own species, and if we go further back in time, not always even to all members of our species, at least not to the same degree.
Ethical anthropocentrism can have different bases. It may be that we only accept humans as moral objects just because we are the same species, but it can also be because we believe that humans have a certain property that only humans have and that is the proper basis for moral status.

Historically, it has been common to claim that only humans stand in a special relation to the god of one's choice. Descartes (1998) claimed, for instance, that only humans have moral standing because only humans have an immortal soul given to each of us individually by God. Until quite recently, it has been commonly believed that only humans are conscious or have feelings, commonly referred to as sentience. In more recent times, and even today, intelligence in a wide sense has been the property of choice for the defenders of anthropocentrism (Carruthers, 1992; Kant, 1998; Smith, 2009; Hart, 2010), either directly or indirectly, under the assumption that it is a necessary prerequisite for some other property like the ability to express or defend one's own interests.

Anthropocentrism is under heavy fire today (Singer, 1979, 1993, 1995; Regan, 1986, 2001; DeGrazia, 1996; Jamieson, 1998), both because it is becoming increasingly clear that there is a considerable overlap between humans and non-humans when it comes to the properties referred to as necessary for moral status (like intelligence), and because it is increasingly questioned whether any property other than the ability to experience things in a positive or negative way is even relevant for having morally relevant interests.

Even so, anthropocentrism is still the theory that, consciously or unconsciously, determines most policy as well as everyday behaviour among humans on planet Earth, so when we discuss our relations with extraterrestrial life, we still need to consider the implications of anthropocentrism.

If we use the property of belonging to our species as the basis for anthropocentrism, then there is no way any extraterrestrial life can have status as moral objects for us, no matter how intelligent they are and no matter how similar they are to us in other respects (Persson, 2012, 2019, 2021). They will not belong to our species and thus, according to this way of reasoning, we will not have any moral obligation to consider their interests. At first glance, this makes our life as explorers much easier, since we can do what we want on and with their worlds without having to care about what they think. In practice, however, it is more complicated than this. If we encounter extraterrestrial life forms with capabilities surpassing ours (which might be linked to higher intelligence, but not necessarily), it could pose significant risks. Should these beings have advanced abilities, or if we cannot dismiss the chance that they do, it becomes crucial for our safety to consider their perspectives. Moreover, since human actions in extraterrestrial realms could affect all of humanity, we arguably have an ethical obligation to be mindful of how these advanced aliens may perceive us, prioritizing human welfare.
What will happen if we base moral status on intelligence, directly or indirectly? If we encounter life with less complex cognitive processes than ours, they will clearly not count. They may still have value for us, however, that would make us want to preserve them and their environments, for instance as study objects (Cockell, 2005, 2011a, 2011b; Persson, 2013, 2019). This kind of reasoning will in some cases motivate certain restrictions on our part, but it will still be on our terms and for our sake.

What if we ever encounter extraterrestrial life with cognitive processes as advanced as or more advanced than our own? Then they would need to be regarded as moral subjects, requiring us to consider their interests (Persson, 2012, 2013, 2021). This is a positive outcome for them, but the implications for us are uncertain. We cannot assume they reason as we do. If their cognitive capabilities are far beyond ours, to the extent that there is no common ground, we could find ourselves in a challenging situation (Persson, 2019).

Is there anything we can do (other than evolving) to increase the chances that an intelligent extraterrestrial life form will see us as having moral status, preferably on the same level as them? We could abandon anthropocentrism on Earth and hope that when they meet us, they will be inspired to do the same. At least it would be inconsistent for us to object if they act similarly. Most probably, however, this will not make a difference. If they really consider it necessary for someone to have the same degree of cognitive awareness as they do, or to belong to their species, in order to count morally, there is only a small chance that they will change their mind because of us. Our only hope in this case is instead that an advanced degree of cognitive awareness is somehow connected with a less narrow perspective on what it takes to have moral standing, so that they might exhibit a more inclusive approach than we do.

An interesting complication that mixes epistemological anthropocentrism and ethical anthropocentrism is how to measure intelligence in a life form that is radically different from us, how to compare their intelligence with ours, and even whether we will be able at all to recognize an intelligent extraterrestrial life form as intelligent. These worries may seem improbable, but these complications are present already when measuring intelligence in other species on Earth, and even more so when trying to make inter-species comparisons of levels of intelligence. The problems will not be any smaller when dealing with organisms that are even more different from us, or that have a very different behaviour pattern or very different interests than we do. In principle, a highly intelligent extraterrestrial species should count as moral objects according to ethical anthropocentrism, but in practice this may not happen if we cannot even recognize them as being intelligent. One might say that epistemic anthropocentrism leads to a failure of ethical anthropocentrism. Correspondingly, and for the same reasons, the extraterrestrials may not recognize us as being intelligent even if we are on the same 'level' of intelligence as they are, but that would clearly not be a case of epistemic anthropocentrism leading to the failure of ethical anthropocentrism, but of epistemic extraterrestrial-ism leading to the failure of ethical extraterrestrial-ism.
Humanity as a shared moral community

But what becomes of our sense of humanity, and of the importance of humanity, if we acknowledge anthropocentrism as a prejudice, or at least as a problematic bias? Might a concern about anthropocentrism require us to abandon our very identity as humans and begin to think of ourselves as something else? For example, as rational agents, or as part of the larger body of Earthlings (human and nonhuman), or as one particular group of sentient or rational beings out of many groups of such beings. A difficulty here is that such an attitude of indifference towards our humanity may not be advisable, even if it were psychologically available to us.

It might not be a good idea given that the recognition of a shared humanity has been pivotal to combatting discrimination and extreme forms of injustice. While identification as human is sometimes regarded as anthropocentric prejudice or, in discussions of animal ethics (Singer, 1993; Regan, 2004), as 'speciesism' on a par with antisemitism and racism more generally, the recognition of a shared humanity, independent of perceptions of race, has historically played a crucial role in combatting both. Any notion of political equality depends upon an ability to answer the simple question: 'equally to what?' And while there may be a case for the inclusion of non-humans within our conception of the common good (Donaldson and Kymlicka, 2011), any workable way of doing so will have to acknowledge the typical differences between humans and non-humans, even if these differences collapse in marginal cases.

Doing so does not, however, even need to be seen as an appeal to species membership. If, for example, we were suddenly to discover that half of us belonged to one species while the other half belonged to another species of hominids, it would not diminish the importance of having a shared overall conception of humanity. A conception which carries obligations to non-humans, as well as entitlements to certain kinds of equality. Indeed, such a conception of a shared humanity can be seen as a precondition of shared failures in our treatment of non-humans, either terrestrially or following contact with life elsewhere (Diamond, 1991). And shifting away from anthropocentrism and an overly Earth-focused way of thinking about life requires that some ways of acting should count as moral failures on our part, while other ways of acting count as morally praiseworthy.
What is at work in such an approach is a way of thinking about humanity as a moral community rather than primarily thinking about humanity as a species. This is an idea which has already been put forward in work on ethics in the tradition of Wittgenstein (Cockburn, 1990; Gaita, 2001, 2004), in the context of deliberation about outer space (Milligan, 2015a, 2015b) and in the context of deliberation about future generations (Wallace, 2021). Of course, there are various biological traits, or species traits, which make our shared human ways of living, experiencing, being vulnerable and responding to others, possible. But it is these things that we typically value and want to continue, through future generations of beings in many ways like ourselves who may or may not belong to the same species as ourselves. If our descendants really do survive for much longer than the longue durée in which we think about past human history, it is unlikely that they will continue to belong to the same species, given current technological trends and emerging technologies of genetic modification. They will still be our descendants, but their continuity with us may be best thought of as the continuity not of a species, but of a moral community, namely, a community with similarities and overlaps in our ways of living, experiencing, being vulnerable and responding to others. If our remote descendants were incapable of love, or compassion, or hope or fear, it would make more sense to say that our kind of moral community had been replaced by another and very different sort of moral community. And again, this would be the case irrespective of various similarities or changes at the level of DNA, biology and genetics.

Thinking of our humanity in this way, as a moral community, rather than thinking of it primarily in terms of species membership associated with distinctive biological characteristics, will better equip us for contact with other life forms and for change in the ways that we interact with other life forms here on Earth. It allows us a chance to recognize that a shared conception of humanity is a historic accomplishment, without trying to fix some set of biological traits as an everlasting ideal that other beings (terrestrial and otherwise) might then fail to meet. As a final qualification, none of this means that thinking about our moral community in terms of the concept of 'humanity' will itself go on forever. But even if the idea of humanity should at some point become outdated, there may be an ongoing need for a conception of moral community or of multiple moral communities, which plays many of the same unifying, differentiating and obligation-conferring roles.
Planetary sustainability in the context of a non-terrestrial life form scenario

Planetary sustainability was first coined by NASA and further developed by researchers at the University of Bern (NASA, 2014; Losch et al., 2019). In this sense, it is a consideration of all dimensions of sustainability on a planetary scale, including our space environment. Most of all, this means considering aspects like the use of Earth orbits, the problem of space debris, the ambiguity of space tourism, eventual space mining, and settlements on the Moon and beyond, and asking whether they contribute to humankind's long-term survival. Sustainable development is a 'development that meets the needs of the present without compromising the ability of future generations to meet their own needs' (World Commission on Environment and Development, 1987), and survival is certainly a very essential need in this regard.

Our modern civilization is already largely dependent on satellites, and the monitoring of the UN's Sustainable Development Goals is also largely pursued by the use of those devices (United Nations, n.d.). If we take our society into space, which we need to do to survive in the long run because time on Earth is limited by the slow expansion of the Sun, important ethical issues might arise when facing potential extraterrestrial life. This is currently mainly discussed under the heading of planetary protection, 'the practice of protecting solar system bodies from contamination by Earth life and protecting Earth from possible life forms that may be returned from other solar system bodies' (NASA, n.d.). We do not want to bring our life to other celestial bodies in the first place, because this would obscure the possibility of finding traces of extraterrestrial life on those bodies, and eventually create false positives. Yet there could indeed be ancient traces of extraterrestrial life on Mars, or microbial lifeforms in the under-surface oceans of ice moons such as Europa. And planetary protection then means the protection of Earth against microbial contamination from those places. Thus, the commitment to create harmless systems for both groups of living beings is born, and given the advance of astrobotany and the still largely neglected astrozoology, it is necessary to understand and constantly monitor the role of plants and animals in the transport or transmission of unwanted microbes, so as not to endanger life in the places we intend to colonize, or our own biological diversity on Earth.

In the context of a non-terrestrial life form scenario, we would have to consider our ethical stance towards life, and particularly towards extraterrestrial life. We pondered the ethical options elsewhere from a sustainability perspective (Losch, 2019b). There remain fundamental questions, like 'what is a (living) system? What and where are the fuzzy borders of living and not-living? Are there any limits, or how deep are we connected with our environment, where do feelings belong and what is mind?' (Losch, 2017, 2019a). Our ethics, however, is meant to guide us as beings who can make mindful decisions. That is why we promote a ratiocentrist framework which attempts to include all the other possible stances as complementary perspectives: because it is reasonable to consider their moment of truth.
We understand planetary sustainability to be a consideration of all dimensions of sustainability on a planetary scale, including our space environment. Now, our space environment is huge. We have already discovered thousands of exoplanets and can expect that there are billions in the cosmos. Will we, one day, even encounter extraterrestrial intelligent life? Even if such an encounter is not very likely, as the universe is expanding and the distances grow, a ratiocentrist framework would be wide enough to allow for ethical exchange in such a case. What sounds very futuristic is actually an old idea: Immanuel Kant already defined his philosophy to be ratiocentric, with ethical extraterrestrials in mind (Losch, 2016).

Astrotheology

Work at the frontier of the humanities, specifically theology, and the natural sciences often focuses on methodological concerns. It focuses on the 'and' of this intersection: the additive way in which two distinct fields inform one another while remaining methodologically distinct. Astrotheology does something slightly different, though, that is in tune with the tendency in astrobiocentrism to consider where our concepts about life in the universe fundamentally change in the light of a future confirmation of life on other worlds. Astrotheology drives toward transdisciplinary possibilities, not only multidisciplinary or interdisciplinary approaches to the integration of different fields. In short, astrotheology is a form of theological reflection in which new answers must be developed to respond to fundamental shifts in the existential questions driving theological reflection: questions otherwise undeveloped in other theological approaches.

The reason for this change is that most theological reflection is fundamentally geocentric. While ostensibly theologies address the nature of divine power expressed throughout the vastness of the cosmos, they most often remain tied to articulating the highly local ways in which the possibilities and conditions for life to emerge on Earth are significant for theology as a process of distinctly human meaning-making. If we take seriously our ever-developing understanding of the scope of the cosmos and its propensity for the occurrence of life, we must ask how this affects our language about God's self-communication, such that we question our axiomatic assumptions about living things and the natural world and how we subsequently act responsibly in the light of this fecund cosmos.

John MacCarthy has helpfully described this in terms of the significance of the prefix 'astro-'. When this prefix is linked to a more traditional academic field of study, there is an amplification effect (MacCarthy, 2017). 'Astro-' serves to link the current field of study 'with cosmic scales of time and space, with quantum physics, with planetary sciences, and the like'. Understanding 'astro-' as an amplicative prefix, we should expect to see (and are seeing) all sorts of new fields arise. These are not merely subdivisions of the hard sciences, like astrophysics, astrochemistry and astrobiology, but also fields imagining the wider social implications of space research, such as astrosociology, astroethics, astroanthropology and astroeconomics. By doing this, the prefix has an abductive effect on the field of study to which it is attached. 'Abductive' indicates two things about the quality of the resulting inferences that are drawn.
First, it indicates that the body of observations from which inferences in the given field of study are drawn is inherently incomplete. Second, it indicates that the rhetorical force of the inferences that can be made in an 'astro-' field is correspondingly broader. The breadth of possible yet unrealized observation has the effect of making the explanatory statements of astro-fields operate like a general rule from which subsequent deductive reasoning might proceed. In the case of astrotheology, there is an abductive shift in the intensely personal existential question driving much theological thinking, 'Why do I exist?' The assumed personal focus in this question is deemed inherently incomplete and significantly widened: 'Why do we (or any living things for that matter) exist?' Astrotheology fundamentally shifts the existential quality of questions driving theological reflection, and thus demands an astrobiocentric perspective from which to shape any theological effort at meaning-making.

Another way to describe this would be that astrotheology has to begin from the distinct existential questions that result from a self-understanding driven by the fundamental interdisciplinary insights of astrobiology regarding living systems. In astrobiology one cannot study either living systems or habitability in isolation; the discussion is complementary, since the living system and the habitable environment are co-constitutive. In turn, astrotheology needs language that captures the extent of this mutuality. The concept of 'intra-action' employed by Karen Barad's understanding of agential realism provides a helpful conceptual tool in this regard. Interaction assumes the prior existence of entities that relate to one another. Intra-action connotes the priority of the phenomenon as a holistic unit. As she variously describes it, intra-action indicates that '[r]eality is composed not of things-in-themselves or things-behind-phenomena, but of things-in-phenomena' (Barad, 2007).

Rather than thinking about the intersection of a living system and the habitable conditions allowing such a system to arise as distinct phenomena interacting (distinct parts of a greater whole), in astrobiology living systems and environmental habitability form an intra-active phenomenon that is itself ontologically primordial. The meaningfulness of a living system is not something that can be determined in contradistinction to its habitable environment: these concepts work in intra-active tandem, as things-in-a-phenomenon, as a meaningful unit of existence that we cannot tear asunder without violating what constitutes a sufficient understanding of either part in itself. In so doing, life is better understood as a planetary or statistical quality of certain phenomena, not a descriptor of specific organisms. Bluntly put, the shift in our thinking would entail something like claiming that it is not that a bacterium, bug, plant or human being is alive; it is that the systems in which those creatures appear are living, in a way that we might contrast with systems in which such features could not or do not appear, which would be non-living. The driving questions for an astrotheology would have to address the existential threats, questions and meaning-making processes of significance to these living systems as they appear in all their distinctiveness. This requires a decentring of the human in theological reflection to account for the anthropocentric and geocentric biases that emerge in the more intensely personal reflections of traditional theological models.

Discussion
1. Astrobiocentrism expands the concept of biocentrism to consider the implications for science and humanity if extraterrestrial life is confirmed. This paradigm shift, anticipated by data from space missions and observatories, moves towards the idea of a 'second genesis', the emergence of life elsewhere in the universe. This notion challenges biogeocentrism, the view that life is unique to Earth, and promotes the search for extraterrestrial biosignatures as part of astrobiology's goal. Henderson's early 20th-century biocentric view of the universe as inherently related to life's structure and evolution lays the foundation for this discussion, integrating Darwin's fitness concept with the environment's suitability for life, thus advocating for a universal application of biocentric principles beyond Earth.

2. Space mining represents a multidisciplinary challenge poised at the intersection of technological advancement and ethical consideration. As the industry matures, with startups and established companies joining the 'space resources value chain', questions arise about the astrobiocentric bias, often overlooked, that influences geoethical and astrobioethical principles. These questions extend to ISRU and its socio-economic and cultural impacts. To address this bias, tailored protocols considering resource types, extraction methods, environmental impacts and usage goals are necessary. Moreover, these practices must be informed by thorough scientific assessments and validations, reflecting the complexities of extraterrestrial mining and the ethical implications of using either abiotic or bio-mining techniques in space resource extraction. As space endeavours continue to be human-centric, the need for clear legislation and ethical guidelines becomes paramount to guide the industrial and scientific aspects of space mining, ensuring the sustainability and responsibility of off-world activities.

3. Astrobiosemiotics bridges astrobiology and semiotics, focusing on interpreting biosignatures as evidence of life. It is based on the premise that meaningful signs, or biosignatures, emerge from the interaction between our cognition and the cosmos. Biosignatures, such as molecular traces or fossils, are diverse, ranging from atomic to planetary scales, and require an interpreter, the astrobiologist, to connect the sign with its life-related meaning. This interpretative process is a semiotic act involving icons, indices and symbols, each with unique epistemological and semiotic challenges. Astrobiosemiotics aims to bring order to the study of these biosignatures, enhancing our understanding of life in the universe through the structured interpretation of these signs. It underscores the importance of human cognition in ascribing meaning to potential evidence of extraterrestrial life, thereby enriching the search for life beyond Earth with a nuanced appreciation of universal semiosis.
4. The Protagorean maxim 'homo mensura' encapsulates a potentially anthropocentric outlook that has profound implications for the ethical consideration of extraterrestrial life. This perspective inherently values life forms with human-like intelligence or characteristics, often overlooking less complex organisms. Ethical anthropocentrism, which prioritizes human interests, is challenged by the possibility of encountering intelligent extraterrestrial beings. If such beings' cognitive capabilities are on par with or surpass ours, they could necessitate moral consideration, which would demand a reassessment of our anthropocentric ethics. Additionally, recognizing intelligence in radically different life forms poses epistemological and ethical dilemmas. The encounter with extraterrestrial intelligence would not only test our capacity to identify and value other forms of cognition but also force us to confront the limitations and biases of our anthropocentric worldview.

5. Reconceptualizing humanity as a moral community rather than a species addresses the problem of anthropocentrism. It acknowledges our shared human identity's role in fighting discrimination, suggesting that our commonality transcends species and includes ethical obligations to non-humans. This moral community perspective equips us to ethically engage with extraterrestrial life and adapt our interactions with terrestrial life. It emphasizes a continuity of values like compassion and vulnerability over mere biological traits, preparing us for a future where humanity may evolve beyond current definitions, yet maintain the essence of our moral and social bonds.

6. Planetary sustainability, as formulated by NASA and expanded by scholars, encompasses the stewardship of Earth and its extraterrestrial environs, including space debris, space tourism and off-world colonization. It underscores development that does not hinder future generations' needs, a principle integral to our survival as our civilization relies on satellites and contemplates space expansion due to the Sun's life cycle. The concept raises ethical questions, especially when considering extraterrestrial life, which is governed by planetary protection policies to avoid biological cross-contamination. A ratiocentric ethical framework suggests a comprehensive approach that acknowledges the value of all life, inviting a broad, inclusive dialogue on sustainability within the vastness of space.
7. Astrotheology, an emerging discipline at the intersection of theology and the natural sciences, ventures beyond traditional multidisciplinary approaches, aiming for a transdisciplinary integration. It is distinct from conventional theology, which tends to be geocentric, focusing on human-centric interpretations of divine power. Astrotheology, on the other hand, considers the implications of potential extraterrestrial life for theological concepts, thereby questioning our fundamental assumptions about life and the universe. This field emphasizes the need to expand our theological perspective to include astrobiocentric considerations, recognizing that our understanding of life and its existential questions should not be limited to Earth. The term 'astro-', as an amplifying prefix, extends the scope of various fields, including theology, to cosmic scales, incorporating insights from astrophysics, quantum physics and planetary sciences. Astrotheology thus challenges and widens the scope of traditional theological inquiries, shifting from individual existential questions to broader considerations of life's existence in the universe. This approach necessitates new language and concepts, such as 'intra-action' as opposed to 'interaction', to better understand the co-constitutive nature of living systems and their environments. Ultimately, astrotheology calls for a decentring of human perspectives in theology, acknowledging the anthropocentric biases of traditional models and embracing a broader, more inclusive view of life's significance in the cosmos.
2024-02-11T16:34:31.954Z
2024-02-08T00:00:00.000
{ "year": 2024, "sha1": "f93db973aaedd47be0a3cab41fa18f352230f6df", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/38546738032189AD82AD1E1CB22CC401/S1473550424000016a.pdf/div-class-title-astrobiocentrism-reflections-on-challenges-in-the-transition-to-a-vision-of-life-and-humanity-in-space-div.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "c0dc5685e78873f58cb418635c722932f6530839", "s2fieldsofstudy": [ "Philosophy", "Physics" ], "extfieldsofstudy": [] }
1654807
pes2o/s2orc
v3-fos-license
Valproic acid inhibits neural progenitor cell death by activation of NF-κB signaling pathway and up-regulation of Bcl-XL Background At the beginning of neurogenesis, massive brain cell death occurs and more than 50% of cells are eliminated by apoptosis along with neuronal differentiation. However, few studies were conducted so far regarding the regulation of neural progenitor cells (NPCs) death during development. Because of the physiological role of cell death during development, aberration of normal apoptotic cell death is detrimental to normal organogenesis. Apoptosis occurs in not only neuron but also in NPCs and neuroblast. When growth and survival signals such as EGF or LIF are removed, apoptosis is activated as well as the induction of differentiation. To investigate the regulation of cell death during developmental stage, it is essential to investigate the regulation of apoptosis of NPCs. Methods Neural progenitor cells were cultured from E14 embryonic brains of Sprague-Dawley rats. For in vivo VPA animal model, pregnant rats were treated with VPA (400 mg/kg S.C.) diluted with normal saline at E12. To analyze the cell death, we performed PI staining and PARP and caspase-3 cleavage assay. Expression level of proteins was investigated by Western blot and immunocytochemical assays. The level of mRNA expression was investigated by RT-PCR. Interaction of Bcl-XL gene promoter and NF-κB p65 was investigated by ChIP assay. Results In this study, FACS analysis, PI staining and PARP and caspase-3 cleavage assay showed that VPA protects cultured NPCs from cell death after growth factor withdrawal both in basal and staurosporine- or hydrogen peroxide-stimulated conditions. The protective effect of prenatally injected VPA was also observed in E16 embryonic brain. Treatment of VPA decreased the level of IκBα and increased the nuclear translocation of NF-κB, which subsequently enhanced expression of anti-apoptotic protein Bcl-XL. Conclusion To the best of our knowledge, this is the first report to indicate the reduced death of NPCs by VPA at developmentally critical periods through the degradation of IκBα and the activation of NF-κB signaling. The reduced NPCs death might underlie the neurodevelopmental defects collectively called fetal valproate syndrome, which shows symptoms such as mental retardation and autism-like behavior. Background The importance of cell death for normal brain morphogenesis in the developing nervous system has been acknowledged since the beginning of modern neuroscience era [1,2]. Programmed cell death is a normal and physiological process to allow proper development of structure and function. In nervous system, more than half of the neurons die via apoptotic cell death during the developmental course [3][4][5][6]. At the level of the individual cell, apoptosis is triggered by a wide spectrum of stimuli during embryonic development, not only in response to stress and disease but also as a part of normal tissue homeostasis [3]. Because of the physiological role of cell death during development, aberration of normal apoptotic cell death is detrimental to normal organogenesis. Excessive cell death can result in functional deficits from the loss of specific cell populations, which occurs during age-associated neurodegenerative disorders [7,8], while reduced cell death may results in overgrowth and functional disorganization of the organ. 
In peripheral system, the loss of cells during development regulates sculpting digits, removing a tail, or eliminating lymphocytes of unwanted specificities [9]. Although the reduced neuronal death is not considered detrimental in vertebrates in the laboratory environment, reduced neuronal death in vivo may alter functional and structural properties of nervous systems leading to the developmental disorders with abnormal brain function. Apoptosis occurs in not only neuron but also in NPCs and neuroblast [10][11][12]. When growth and survival signals such as EGF or LIF are removed, apoptosis is activated concomitant with the induction of differentiation [13,14]. To investigate the regulation of cell death during developmental stage, it is essential to investigate the regulation of apoptosis of NPCs. Valproic acid (VPA), discovered as an anti-convulsant drug and is also used as the anti-bipolar disorder drug, regulates several signaling pathways in brain cells. VPA inhibits class I and II HDACs. In vitro studies showed that VPA specifically triggers phosphorylation of ERK, the upstream modulator of AP-1, without alteration of JNK and p38 pathways [15]. It also inhibits GSK-3βmediated phosphorylation of proteins including β-catenin. In addition, VPA has been implicated in the regulation of LOX, PPARs, PTEN pathways [16]. Through the regulation of the above mentioned pathways, VPA has been generally considered to be neuroprotective. Of interest is the recent reports suggesting the upregulation of the cytoprotective protein Bcl-2 by VPA in neuron [17]. As we reported previously, VPA induces developmental defects when administered at developmentally critical periods [18], which includes functional deficits including mental retardation as well as structural abnormalities such as neural tube defects and the overgrowth of brain. In this study, we investigated whether VPA protects NPCs from cell death and if so, the mechanism by which VPA mediates the effects. Animals Sprague Dawley (SD) rats were used throughout this study. Pregnant rats were injected with VPA or normal saline at E12 and brain tissues were dissected out from E14 and E16 embryos. Animal handling was in accordance with national guidelines and approved by the 'Seoul National University Institutional Animal Care and Use Committee (SNUIACUC)'. Neural progenitor cell culture The preparation of cortical progenitors from embryos was based on the method previously described and slightly modified by us [19,20]. NPCs were prepared from E14 embryos of SD rats. Cortices were dissociated into single cells by mechanical trituration and the cells were incubated with Dulbecco's modified Eagle's medium/F12 (DMEM/F12) supplemented with B27-serum free supplement, 20 ng/ml EGF and 10 ng/ml FGF in a 5% CO 2 incubator. EGF and FGF were added every day and the cells grew into floating neurospheres. The primary neurosphere was dissociated into single cells with trypsin-EDTA (GibcoBRL, a subsidiary of Invitrogen, Carlsbad, CA) and the cells were incubated as neurospheres in EGF and FGF containing media. This procedure was repeated and neurosphere colonies were again dissociated into single cells and plated on poly-Lornithine coated plates with DMEM/F12 media containing 20 ng/ml EGF. The purity of culture was checked by immunostaining using an antibody against nestin, which is a marker for NPCs. In this study, 95% of cells were positive to nestin. Next day, the media was removed and NPCs were incubated with fresh growth factor-free media. 
One hour later, reagents were treated to NPCs culture. Protein samples were harvested 8 hours after VPA treatment for Western blot. Samples were fixed with 4% PFA 8 hours after VPA treatment for immunocytochemistry. For RT-PCR analysis, cellular RNA was harvested 2 hours after VPA treatment. Preparation of whole brain lysate Whole brains were taken from embryonic day 14 and 16 animals. For the Western blot and RT-PCR, homogenized brain tissues were prepared in lysis buffer and Trizol respectively. Lysates were diluted by 2X sample buffer and adjusted to 1 μg/μl concentration of protein after the BCA protein assay. FACS analysis NPCs dissociated into single cells were used to detect ratio of dead cells by FACS analysis. Approximately 1 × 10 6 cells were used for each analysis. NPCs were trypsinized and PBS containing 1% FBS was added. After PBS washing, cells were resuspended in PBS containing 1% FBS and 0.5 μg/ml propidium iodide (PI). The cell suspension was kept for 5 min at room temperature, and then ratios of positive or negative PI signal were measured by flow cytometry (FACS Calibur System, BD Biosciences, San Jose, CA). The experimental data were analyzed using CellQuest software. Preparation of nuclear and cytoplasmic fractions Nuclear extracts were prepared according to a method published previously [22]. Briefly, the cells in dishes were washed with PBS. Cells were then scraped, transferred to microtubes, and allowed to swell after the addition of 100 μl of hypotonic buffer containing 10 mM HEPES, pH 7.9, 10 mM KCl, 0.1 mM EDTA, 2 mM dithiothreitol (DTT), and 0.5 mM phenylmethylsulfonylfluoride. The lysates were incubated for 10 min in ice and centrifuged at 7,200 g for 5 min at 4°C. Supernatants were used as cytoplasmic fractions. After washing, pellets containing crude nuclei were resuspended in 50 μl of extraction buffer containing 20 mM HEPES, pH 7.9, 400 mM NaCl, 1 mM EDTA, 10 mM dithiothreitol, and 1 mM phenylmethylsulfonylfluoride and then incubated for 1 hour in ice. The samples were centrifuged at 12,000 g for 10 min to obtain supernatants containing nuclear fractions. The association of Bcl-xL promoter region with NF-B: Chromatin immunoprecipitation (ChIP) analysis ChIP analysis for the Bcl-xL promoter region was performed based on the method previously described [23]. NPCs were treated with VPA and 8 hours later, cells were washed with PBS. NPCs were prepared and crosslinked with 1% formaldehyde for 20 min at room temperature. Then formaldehyde was quenched with 125 mM glycine for 5 min at room temperature. Cells were scraped and collected by centrifugation (2,000 g for 5 min at 4°C), then washed twice with cold PBS. Pellets were resuspended with lysis buffer and centrifuged several times at 12,000 g for 1 min at 4°C and the supernatant was removed. The nuclear pellet was washed with 1 ml lysis buffer, by resuspending the pellet, followed by centrifugation. To shear the chromatin, the washed pellet was sonicated after resuspended in 1 ml of lysis buffer. The lysates were cleared by centrifuging at 12,000 g for 10 min at 4°C and the supernatants were retained. For IP, an antibody against NF-B p65 was added to samples and the tube was rotated for 12 hours at 4°C. For mock IP, we incubated samples with beads without antibody. The precipitated chromatin was cleared by centrifugation at 12,000 g for 10 min at 4°C and the top 90% of cleared chromatin was transferred to a tube with protein A-agarose slurry and the tubes were rotated at 4°C for 45 min on a rotating platform. 
The slurry was washed at 2,000 g for a few seconds and the supernatant was removed. The beads were washed 5 times with 1 ml cold lysis buffer; then 100 μl of 10% Chelex 100 slurry was directly added to the washed beads and boiled for 10 min. After centrifugation at 12,000 g for 1 min at 4°C, supernatants were transferred to a new tube. PCR amplification was carried out for 34 cycles, and PCR products were separated on 1.5% agarose gels. The PCR amplification was performed for 34 cycles (94°C, 0.5 min; 57°C, 0.5 min; 72°C, 1 min) with the following oligonucleotide primer sets and analyzed by DNA gel electrophoresis: For Bcl-XL, Forward primer: 5'-GGGAGTGGTCTTTCCGAA-3'; Reverse primer: 5'-CTCCATCGACCAGATCGA-3'.

Statistical analysis Data were expressed as the mean ± standard error of the mean (S.E.M.) and analyzed for statistical significance using one-way analysis of variance (ANOVA) followed by the Newman-Keuls test as a post hoc test; a P value < 0.05 was considered significant.

VPA reduced NPC cell death We first investigated whether VPA protects cultured NPCs from cell death. To induce cell death by withdrawing growth factors, we changed the medium to fresh DMEM/F12 without growth factors. VPA (0.2 or 0.5 mM) was added at the time of the media change. To induce stimulated cell death, 100 nM staurosporine or 100 μM H2O2 was also added to NPCs 1 hour after VPA treatment in some cases. Cells were trypsinized and stained with PI solution 8 hr after the media change. After growth factor deprivation, approximately 14% of NPCs showed shrunken morphology and were positive for PI. In FACS analysis, the ratio of PI-positive dead cells was increased by staurosporine or H2O2 treatment. VPA at 0.2 and 0.5 mM decreased cell death in the basal condition as well as in staurosporine- or H2O2-stimulated conditions (Figure 1A, B). To visualize the protective effect of VPA, we stained NPCs with PI solution, which gave results similar to the FACS analysis (Figure 1C, D). In this condition, staurosporine and H2O2 did not change either NF-κB expression or translocation to the nucleus (data not shown).

Valproic acid induced Bcl-XL expression via the NF-κB signaling pathway To determine the molecular mechanism of the VPA-induced suppression of NPC death, we investigated the Bcl-XL and NF-κB signaling pathways. Bcl-XL is a well-known anti-apoptotic molecule and is highly expressed in NPCs as well as in the brain during the developmental period. We first investigated whether VPA changes the expression level of Bcl-XL in NPCs. VPA (0.2 mM and 0.5 mM) increased Bcl-XL protein and mRNA expression in a concentration-dependent manner in Western blot (Figure 2A, B), immunocytochemistry (Figure 2C) and RT-PCR (Figure 2D), respectively. VPA also increased Bcl-XL expression and inhibited the PARP-1 cleavage and caspase-3 activation induced by staurosporine or H2O2 (Figure 3). Next, we investigated the involvement of the NF-κB signaling pathway. It was previously reported that NF-κB activation drives the Bcl-XL promoter to increase the protein expression in hippocampal CA1 cells [24]. The NF-κB pathway affects a myriad of cellular responses including cell death and survival, which is mediated at least in part by the regulation of the expression of Bcl-XL. We hypothesized that VPA regulates the NF-κB signaling pathway and subsequently increases the expression of Bcl-XL. Although VPA did not affect the expression levels of NF-κB p65 and NF-κB p50, it decreased the level of IκBα, a biological inhibitor of NF-κB (Figure 4A, B).
The decreased level of IκBα was also confirmed by immunocytochemistry (Figure 4C). To determine whether the decreased level of IκBα mediates the activation of the NF-κB pathway, we investigated the nuclear translocation of NF-κB by performing Western blot of the cytoplasmic and nuclear fractions of the NPC culture. Although the level of NF-κB in the cytoplasmic fraction remained constant, the NF-κB level in the nuclear fraction was significantly increased (Figure 5A, B). In addition, much of the NF-κB immunoreactivity was localized in the nucleus, co-stained with DAPI, in the VPA group, whereas almost all of the NF-κB immunoreactivity was localized in the cytoplasm in the control group (Figure 5C). Furthermore, 1 hour pretreatment with 10 μM of TDZD-8, an NF-κB inhibitor, suppressed the VPA-induced NF-κB p65 nuclear translocation and Bcl-XL expression (Figure 5D, E). To unequivocally demonstrate the role of the NF-κB pathway in VPA-induced Bcl-XL expression, we performed a ChIP assay. VPA significantly increased the interaction between NF-κB p65 and the Bcl-XL promoter region (Figure 6A, B). These results suggest that VPA activates NF-κB by reducing the level of IκBα, which may up-regulate Bcl-XL expression to inhibit apoptosis of NPCs. Because VPA may trigger ERK phosphorylation [15], we tried to examine ERK activation by VPA; however, we did not observe a consistent increase in ERK activation by VPA (data not shown).

VPA induces Bcl-XL expression in developing rat brain We injected 400 mg/kg of VPA or normal saline into pregnant rats at E12 to investigate whether VPA inhibits cell death in vivo. Although the level of PARP-1 cleavage in the embryonic cortex was not significantly changed at E14, it was decreased at E16 by VPA injection. Similar to the in vitro results, the level of IκBα was markedly decreased, whereas that of Bcl-XL was significantly increased at E16 (Figure 7A, B). The expression of Bcl-XL mRNA was also increased in the E14 and E16 brains of VPA-injected rats (Figure 7C). The effect of a single injection of VPA was not sustained, and no other definite changes of PARP-1 cleavage, IκBα or Bcl-XL were detected at E18 and postnatal day 2 (data not shown). These results suggest that prenatal exposure to VPA may reduce the physiological apoptotic cell death of NPCs in vivo by a mechanism involving degradation of IκBα and overexpression of Bcl-XL.

VPA suppressed Bax expression in cultured NPCs and developing rat brain VPA not only up-regulated anti-apoptotic Bcl-XL expression, but also down-regulated the expression of pro-apoptotic Bax, which inhibits Bcl-XL by hetero-dimerizing with it [25], both in cultured NPCs (Figure 8A, B) and in developing rat brains (E14 and E16, Figure 8C). Double immunostaining of NPCs with antibodies against Bax and COX4, a marker of mitochondria, showed that the expression of Bax in COX4-positive mitochondria was also decreased, as was the case in the cytosol. The up-regulation of Bcl-XL with concomitant down-regulation of Bax by VPA may facilitate the inhibition of apoptotic cell death of NPCs. A growing number of studies have reported that NF-κB activation is not only involved in the nervous system response to injury or inflammation, but also in neuronal survival in the developing brain as well as in the adult nervous system.
Although many studies have suggested that VPA down-regulates NF-κB activity, possibly via an increase in acetylation of NF-κB, which may contribute to the anti-inflammatory effects of VPA in the nervous system [28][29][30], at least one study suggested that VPA protects neurons from oxidative stress-induced cell death by acetylation-induced activation of NF-κB, although it is still possible that inhibition of JNK activity mediates the observed protective effects of VPA [31]. A similar pro-survival role of VPA has been reported in hippocampal NPCs [32]. In addition to the above reports, we here provide a new mechanism for the regulation of the NF-κB pathway by VPA, i.e. the downregulation of IκB. The different effects of VPA on the regulation of NF-κB activity in different experimental conditions suggest that the regulatory effects of VPA may differ in the context of dosing and the time window of treatment in different cell types. At present, the mechanism by which VPA down-regulates IκB is not clear. Although the transcriptional or translational down-regulation of IκB is also possible, one interesting hypothesis is the rapid degradation of IκB by VPA, especially considering the rapid degradation of IκB by the proteasomal pathway (for a review, see [33]). In fact, VPA induced the proteasomal degradation of HDAC2, which might be regulated by the induction of the E2 ubiquitin conjugase Ubc8 and increased ubiquitination [34]. The concentration of VPA (0.2-0.5 mM) used in this study is similar to the concentration of VPA required for the reduction of HDAC2 protein levels as well as that required for inhibition of HDAC enzymatic activity. In our experiment, we also observed that 0.2 or 0.5 mM VPA strongly increased histone acetylation in NPCs, which may suggest that the concentration of VPA used in this study is in the range of HDAC inhibitory concentrations as well as that required for the regulation of ubiquitination-dependent HDAC degradation. In another study, VPA decreased steroid secretion by increasing the ubiquitination and degradation of SF-1 in similar dose ranges [35]. Although the ubiquitination-dependent regulation of key regulatory molecules in NPCs by VPA is an emerging area of investigation, which we hope to explore further using a series of biochemical and molecular biological tools in the future, these results suggest that VPA might regulate the ubiquitination and degradation of signaling molecules in a clinically relevant concentration range. Alternatively, VPA may induce the phosphorylation of IκB for its degradation through the activation of the PI3K-Akt-GSK3β pathways or the ERK 1/2 pathways, the two well-known target pathways regulated by VPA. Those two possibilities are now under active investigation in this laboratory.

Regulation of the expression of the anti-apoptotic protein Bcl-XL by VPA Although Bcl-2 is the prototypical anti-apoptotic protein and has been extensively studied, Bcl-2 levels decline rapidly during development [36] and targeted disruption of Bcl-2 resulted in only subtle neurodevelopmental abnormalities [37]. These observations suggest that other Bcl-2 family members may play a more significant role in the development of the embryonic brain. Bcl-XL is another anti-apoptotic Bcl-2 family member which is expressed at relatively high levels in the nervous system [38,39]. In contrast to Bcl-2, high levels of Bcl-XL are maintained throughout development into adulthood [36,[38][39][40], and Bcl-XL-/- mice die during the embryonic period [36] with extensive apoptotic cell death in the developing nervous system [41].
In our experiment, VPA up-regulated Bcl-XL expression and inhibited NPC cell death in the basal condition as well as in the pro-apoptotic conditions induced by staurosporine and H2O2.

VPA-induced abnormality in cell death during development may implicate the underlying mechanism of hyperneurogenesis in some developmental disorders In our study, VPA inhibited the apoptotic cell death of NPCs. The decreased apoptosis may contribute to an increased number of NPCs, resulting in hyper-differentiation of neurons in VPA-treated subjects. VPA is a potent teratogen and causes behavioral and neuroanatomical abnormalities similar to those seen in autism [42][43][44][45]. Interestingly, one of the anatomical features observed in autism patients is macrocephaly with increased neuron density [46,47]. The prenatal VPA exposure model is one of the most widely used animal models of autism [43]. These results suggest that exposure to VPA during developmentally critical periods may contribute to anatomical abnormalities similar to those of autism, possibly via increased NF-κB activation and decreased apoptotic cell death. Although it is also possible that increased proliferation of NPCs may lead to increased neuronal density, which is under active investigation in our lab, the involvement of a similar mechanism, i.e. decreased apoptotic cell death, in other neurodevelopmental disorders such as tuberous sclerosis and Cowden syndrome would be an intriguing topic to be resolved in future studies.

[Figure 8 legend fragment: middle, NPCs stained with COX4, a marker of mitochondria; right, merged image of Bax and COX4 staining. (C) VPA was injected into pregnant rats at E12 and the Bax level in embryonic brain was determined at E14 and E16; the level of Bax in brain was decreased at both time points. Scale bar represents 10 μm.]

One of the important issues in this study is the clinical relevance of the concentration of VPA used here. In many studies, 0.2-2 mM VPA has generally been used in neurons [48] as well as neural stem cells [49] without any immediate toxicity [50,51], which is higher than the VPA concentration used in this study (0.2-0.5 mM). The concentration of VPA used in our in vitro study (0.2 mM or 0.5 mM) corresponds to 28.8 μg/mL or 72.1 μg/mL, which is slightly higher but well within the clinical concentration range of VPA (see the worked conversion after the Conclusion below). In an animal model of autism, 400 mg/kg or 600 mg/kg of VPA has routinely been used to mimic human VPA exposure during pregnancy [43]. Serum or plasma VPA concentrations are generally in a range of 20-100 μg/mL during controlled therapy against epilepsy or bipolar disorder, with the free concentration of VPA in blood and brain ranging from 7% to 28% of the total levels. Rats treated with 400 mg/kg VPA may have blood VPA concentrations of 30-50 μg/mL [52], and the concentration of VPA in brain was one fifth of the blood concentration [53]. In this regard, the brain VPA concentration in animals injected with 400 mg/kg VPA can be estimated at approximately 6-10 μg/mL, similar to the clinical VPA concentration in human brain (7.5-20.8 μg/mL).

Conclusion In this study, we provided evidence suggesting that VPA can suppress the cell death of NPCs via regulation of the NF-κB pathway and Bcl-XL expression. Prenatal exposure to valproic acid produces neurodevelopmental and somatic abnormalities collectively called fetal valproate syndrome, which includes behavioral and anatomical symptoms similar to those seen in autism [42,45].
Defining the role of the diminished apoptotic cell death by exposure to VPA during developmentally critical period on the manifestation of the anatomical and behavioral defects would provide more insights into the pathogenesis of the neurodevelopmental disorders.
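As a numerical check on the dose comparison made above, here is a short worked conversion between molar and mass concentrations; it assumes the molecular weight of valproic acid (approximately 144.2 g/mol), a value not stated in the text:

\[
0.2\ \mathrm{mM} \times 144.2\ \mathrm{mg/mmol} \approx 28.8\ \mu\mathrm{g/mL}, \qquad
0.5\ \mathrm{mM} \times 144.2\ \mathrm{mg/mmol} \approx 72.1\ \mu\mathrm{g/mL},
\]

which reproduces the in vitro values quoted above and places them within the 20-100 μg/mL therapeutic serum range cited in the Discussion.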
2016-05-12T22:15:10.714Z
2011-07-04T00:00:00.000
{ "year": 2011, "sha1": "dc424a196a6e76815e2bd576e02ccd3c7aae6728", "oa_license": "CCBY", "oa_url": "https://jbiomedsci.biomedcentral.com/track/pdf/10.1186/1423-0127-18-48", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9e4b0fb2f3b6cc2e433fc0b500d196bd329fe27d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
261530958
pes2o/s2orc
v3-fos-license
Classification of Lifshitz invariant in multiband superconductors: an application to Leggett modes in the linear response regime in Kagome lattice models Multiband superconductors are sources of rich physics arising from multiple order parameters, which show unique collective dynamics including Leggett mode as relative phase oscillations. Previously, it has been pointed out that the Leggett mode can be optically excited in the linear response regime, as demonstrated in a one-dimensional model for multiband superconductors[T. Kamatani, et al., Phys. Rev. B 105, 094520 (2022)]. Here we identify the linear coupling term in the Ginzburg-Landau free energy to be the so-called Lifshitz invariant, which takes a form of $\boldsymbol{d}\cdot\left(\Psi^{*}_{i}\nabla\Psi_{j} - \Psi_{j}\nabla\Psi^{*}_{i}\right)$, where $\boldsymbol{d}$ is a constant vector and $\Psi_{i}$ and $\Psi_{j}$ $(i\neq j)$ represent superconducting order parameters. We have classified all pairs of irreducible representations of order parameters in the crystallographic point groups that allow for the existence of the Lifshitz invariant. We emphasize that the Lifshitz invariant can appear even in systems with inversion symmetry. The results are applied to a model of $s$-wave superconductors on a Kagome lattice with various bond orders, for which in some cases we confirm that the Leggett mode appears as a resonance peak in a linear optical conductivity spectrum based on microscopic calculations. We discuss a possible experimental observation of the Leggett mode by a linear optical response in multiband superconductors. I. INTRODUCTION Superconductivity embraces rich order parameter dynamics, as shown by the macroscopic Ginzburg-Landau (GL) and microscopic BCS theories.They tell us that there are typically two types of collective modes in singleband superconductors: one of them is the Higgs (amplitude) mode [1][2][3][4][5][6][7] and the other is the Nambu-Goldstone (phase) mode [8,9], the latter of which is lifted up to the plasma frequency by the Anderson-Higgs mechanism [10,11].What remains at low energy is the Higgs mode, which constitutes a massive excitation on top of the continuum of quasiparticle excitations. Since Higgs mode does not linearly couple to electromagnetic fields in ordinary situations, previous studies have focused on the investigation of nonlinear optical responses of superconductors [12][13][14][15][16][17][18][19][20][21].In conventional superconductors, the energy scale of the Higgs mode is usually a few meV, which is in the frequency range of THz lasers.That is why one had to wait for the arrival of highintensity THz lasers and techniques like THz pump-THz probe experiments and third harmonic generations [22][23][24][25][26]. Another experiment that has observed the Higgs mode is the Raman spectroscopy in 2H-NbSe 2 , where the charge density wave (CDW) phase coexists with the superconducting phase.In this particular situation the Higgs mode becomes Raman active, and Raman experiments have observed the signal prior to the development of THz lasers [27,28] (see also [29][30][31][32] for recent research). 
The physics of superconductors with multiple order parameters is even richer than the single-band case. Many superconductors of interest, such as iron-based superconductors [33], MgB2 [34], niobium-based superconductors [35], and Kagome superconductors [36][37][38][39][40][41][42][43], are all multiband superconductors, and it is natural to consider those cases. Two-band superconductors, for instance, have four real collective modes. Two of them are amplitude modes, and the others are phase modes, one of which is just an overall phase and is absorbed into electromagnetic fields due to the Anderson-Higgs mechanism. The remaining phase mode corresponds to fluctuations of the phase difference between the two order parameters, which is called the Leggett mode [44]. So far, there is an example of the observation of the collective phase fluctuations by Raman spectroscopy [45]. Other examples of collective phase fluctuation have been studied in the nonlinear response regime [46][47][48][49][50][51][52][53][54][55]. We also note that phase solitons [56,57] arise from the multiband nature of superconductors with a nontrivial geometry like a ring.

Recently, it has been shown that a term containing only a single spatial derivative, responsible for the linear-order Leggett-light coupling, could in principle appear in the GL free energy, and its existence was demonstrated in a one-dimensional two-band superconducting model [58]. In general, however, it is not clear under what condition the Leggett mode would appear in the linear response regime, or which crystal symmetry could host the linear Leggett mode response. Particularly, it is not known whether the Leggett mode can appear in the linear response in dimensions higher than one.

[Figure 1 caption: A schematic picture of the free energy of a superconductor with three order parameters. In multiband superconductors, the fluctuations of the relative phase between two order parameters (the Leggett mode) generally constitute a massive mode. A three-band superconductor has two such phase modes, with characteristic frequencies ω_L1 and ω_L2. In contrast, the overall massless phase mode (Nambu-Goldstone mode) is lifted to the plasma frequency due to the Anderson-Higgs mechanism. For simplicity only one of the Leggett modes is shown and the other Leggett and Higgs modes are not depicted. The colored arrows show the oscillation of the Leggett mode (blue and purple arrows indicate oscillating directions at different oscillation phases).]

In the present work, we point out that the single-derivative term corresponds to the so-called Lifshitz invariant d · (Ψ*_i ∇Ψ_j − Ψ*_j ∇Ψ_i) (i ≠ j), which is invariant under symmetry operations of the system. The possibility of such an antisymmetric term appearing in the GL free energy has been studied by Lifshitz [59] in the context of the stability of second-order phase transitions. Dzyaloshinskii has also discussed the term considering the helicoidal structure in antiferromagnets [60]. The Lifshitz invariant is linear with respect to the spatial gradient of the order parameter, so it has been supposed to appear in inversion-symmetry-broken systems. The Lifshitz invariant is known to emerge, e.g., in noncentrosymmetric superconductors [61][62][63], parity- and time-reversal symmetry broken superconductors [64,65], commensurate-incommensurate transitions [66,67], liquid crystals [68], and as the Dzyaloshinskii-Moriya interaction term in magnets [69,70]. This linear gradient term modifies the free energy and causes various different states
such as nonuniform superconducting states in noncentrosymmetric superconductors, magnetic skyrmion [71], and instability around the phase transition point.Group theoretical classification of this term has been given in the incommensurate phase transition [66] and solids [72], but not in multiband superconductors.We here classify all combinations of irreducible representations of order parameters in crystallographic point groups that permit the presence of the Lifshitz invariant. We also found that the inversion symmetry is not crucial, and, the system could have the Lifshitz invariant even in the presence of the inversion symmetry.The condition for the Lifshitz invariant to appear in multiband superconductors with sublattice degrees of freedom is determined by the induced representation of the crystallographic point group, which is induced by the trivial representation of the site-symmetry group for each Wyckoff position of lattice sites.As a result, a wide range of multiband superconducting systems are shown to be able to have the Lifshitz invariant. This paper is organized as follows.In Sec.II, we review the GL free energy framework of a two-band superconductor.While we focus on the two-band case, the argument can be straightforwardly extended to cases of a larger number of bands.Within the GL approach, we perform a group theoretical analysis of the Lifshitz invariant in Sec.III.We apply the group theoretical techniques to some models in Sec.IV to see that the inversion symmetry is not crucial to discuss the Lifshitz invariant in multiband superconductors.In Sec.V we study a family of Kagome models and explicitly compute signatures of collective phase modes in the linear optical conductivity using an imaginary time path integral approach.The paper is summarized in Sec.VI.We set ℏ = 1 throughout the paper. II. GINZBURG-LANDAU FREE ENERGY This section reviews a phenomenological theory of multiband superconductors within the GL free energy framework [58].Fig. 1 depicts the schematic picture of the free energy of a three-band-superconductor, which we will microscopically consider later in Sec.V. To illustrate the physics of the Lifshitz invariant, however, it is sufficient to study the case of two order parameters.The argument can be generalized to arbitrary n-band superconductors in a straightforward manner.The single-band case is described in detail, e.g., in the review [6]. A. Two-band superconductor We shall consider the GL free energy density F for a two-band superconductor given below [58], where a i = a 0,i (T − T c ), a 0,i and b i are positive constants, T is the system's temperature, T c is the transition temperature, m * i is the effective electron mass, and D = −i∇ − e * A is the covariant derivative with an electric charge of a Cooper pair e * = 2e and the electromagnetic vector potential A. See Appendix.A for a microscopic derivation.The first line in Eq. ( 1) describes the free energy density of two independent single-band superconductors with complex order parameters Ψ i , each describing a Mexican hat potential below the critical temperature T c . The remaining terms represent couplings between the two order parameters, with coefficients ϵ, η and a constant vector d.Here, the term proportional to ϵ corresponds to the Josephson (proximity) coupling.The term proportional to η is interpreted as a drag effect [87][88][89]. 
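As a rough sketch of the structure of Eq. (1), assuming the standard two-band form built from the couplings named above (the precise normalizations and complex-conjugation bookkeeping are those of the original Eq. (1) and Ref. [58], not fixed by this sketch), one may write

\[
F \simeq \sum_{i=1,2}\left[a_{i}|\Psi_{i}|^{2}+\frac{b_{i}}{2}|\Psi_{i}|^{4}+\frac{1}{2m_{i}^{*}}|\boldsymbol{D}\Psi_{i}|^{2}\right]
+\epsilon\left(\Psi_{1}^{*}\Psi_{2}+\mathrm{c.c.}\right)
+\eta\left[(\boldsymbol{D}\Psi_{1})^{*}\cdot(\boldsymbol{D}\Psi_{2})+\mathrm{c.c.}\right]
+\left[\boldsymbol{d}\cdot\left(\Psi_{1}^{*}\boldsymbol{D}\Psi_{2}-\Psi_{2}\boldsymbol{D}\Psi_{1}^{*}\right)+\mathrm{c.c.}\right],
\]

where the bracketed sum corresponds to the "first line" referred to above (two decoupled single-band free energies), the ϵ term is the Josephson coupling, the η term is the drag coupling, and the single-derivative d term reduces to the Lifshitz invariant discussed in Sec. II B when A = 0.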
Of particular relevance in the context of the present study is the term proportional to d.It is responsible for inducing the Leggett mode in the linear response regime as we will see below [58].We will discuss this in Sec.II B that the vector d may be interpreted as an "internal field" that induces a flow of the phase.We note that there is a slight difference in the d term between Eq. (1) and that of [58].However, the difference is only the total derivative, and hence is not physically relevant. We now expand the free energy around the mean-field ground state Ψ 1,0 and Ψ 2,0 , where H i and θ i describe amplitude and phase fluctuations, respectively.The overall phase θ 1 + θ 2 can be removed by gauge transformation due to the Anderson-Higgs mechanism.The only relevant phase degree of freedom will be θ 1 − θ 2 .In general, one has n − 1 phase degrees of freedom, where n is the number of order parameters.In the expansions, we only keep terms including the electromagnetic vector potential up to the second order.Additionally, we restrict ourselves to the uniform limit ∇H i = ∇θ i = 0.A uniform solution usually has a lower free energy.In the presence of the Lifshitz invariant, however, it is not obvious.As we show in Appendix.B, if the magnitude of the vector d is sufficiently small, the uniform solution has a lower free energy, and the order parameters are not spatially modulated at the ground state. We obtain where and The first term, F 1 , represents the nonlinear coupling between the amplitude fluctuation (Higgs mode) and the external field for each band [6].This coupling yields the third harmonic generation responses of Higgs modes in multiband superconductors. The second term, F 2 , is a linear coupling of the electromagnetic vector potential to the collective modes.It describes the collective mode contribution to the linear response.The real part induces the linear response of the Higgs modes, while the imaginary part is responsible for the Leggett mode. In superconductors, the particle-hole symmetry is an effective (approximate) low-energy symmetry, that acts as Ψ i → Ψ * i , or It is thus clear that only the Leggett mode linear response contribution is invariant under the particle-hole symmetry.The constant vector d is in fact purely imaginary according to the microscopic calculation (see Appendix.A).Consequently, the amplitude contributions are suppressed in the linear response.Potential observations of collective modes in the linear response therefore require a multiband structure of superconductors.The term F 2 may further be restricted by spatial symmetries of the underlying crystal lattice.Symmetry requirements that allow for the presence of the linear Leggett coupling will be discussed in Sec.III. B. The Lifshitz invariant The term F 2 in Eq. ( 5) originates from the expression in Eq. 
(1).For simplicity we consider the case without the vector potential A, and we put D = −i∇.We also set d = id I because the real part Re[d] is suppressed by the particle-hole symmetry and it is actually zero according to the microscopic mean-field calculation (see Appendix.A, particularly Eqs.(A28) and (A36)).We then obtain The above term takes the form of the so-called Lifshitz invariant [59].In the context of the Dzyaloshinskii-Moriya interaction, the vector d I can be interpreted as an"internal field".Moreover, the vector (Ψ * 1 ∇Ψ 2 − Ψ 2 ∇Ψ * 1 ) (and the term with interchanged 1 and 2) is similar to the form of the quantum mechanical current where the usual probability density (ψ * ψ) has been replaced by an overlap of superconducting order parameters (Ψ * 1 Ψ 2 , Ψ * 2 Ψ 1 ).Since the overlap is determined by the phases of order parameters, this current transfers the phase.In this sense, the vector d I is understood as a "field" that drives the phase flow.Because of the "internal field" d I , it is possible for the phase to couple to the external electromagnetic field to activate the Leggett mode in the linear response regime. III. GROUP THEORETICAL CLASSIFICATION OF LIFSHITZ INVARIANT This section presents the symmetry analysis of the Lifshitz invariant in multiband superconductors.Although the Lifshitz invariant has been studied in many contexts [61][62][63][66][67][68][69][70], it has never been discussed in the linear response of multiband superconductors as far as we know.The Lifshitz invariant is usually associated with the broken inversion symmetry.However, in multiband superconductors the inversion symmetry itself is not crucial to induce the Lifshitz invariant, and the broken inversion symmetry is neither a necessary nor sufficient condition to have the Lifshitz invariant. The basic strategy to determine whether the Lifshitz invariant is allowed or not is as follows [90].We consider the representation of the term Ψ * i ∇Ψ j (i ̸ = j).After calculating the direct product of the representations of the order parameters and spatial gradient and decomposing it into the direct sum of the irreducible representations, we check whether the term has a trivial irreducible representation or not.Since the free energy must be invariant under symmetry operations, the Lifshitz invariant is allowed to exist if the term has the trivial irreducible representation, but is not allowed if the term does not have the trivial irreducible representation. 
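As a concrete illustration of this strategy, the short Python sketch below carries out the check for the point group D_3h (the same example worked out later in this section): it forms the characters of Ψ*_i ∇Ψ_j from the standard D_3h character table and uses the reduction formula to ask whether the trivial representation A′_1 appears. The character-table values are standard; the function and variable names are ours.

```python
# Sketch: check whether Psi_i* grad Psi_j contains the trivial irrep (here for D_3h).
# Characters are real for D_3h, so no complex conjugation is needed in the products.
import numpy as np

# Classes of D_3h: E, 2C3, 3C2', sigma_h, 2S3, 3sigma_v
class_sizes = np.array([1, 2, 3, 1, 2, 3])
chars = {                       # characters of the irreducible representations
    "A1'":  np.array([1,  1,  1,  1,  1,  1]),
    "A2'":  np.array([1,  1, -1,  1,  1, -1]),
    "E'":   np.array([2, -1,  0,  2, -1,  0]),
    "A1''": np.array([1,  1,  1, -1, -1, -1]),
    "A2''": np.array([1,  1, -1, -1, -1,  1]),
    "E''":  np.array([2, -1,  0, -2,  1,  0]),
}
order = class_sizes.sum()       # |D_3h| = 12

# grad transforms like (x, y, z): (x, y) belong to E', z belongs to A2''
chi_grad = chars["E'"] + chars["A2''"]

def multiplicity(chi_product, irrep="A1'"):
    """Reduction formula: number of times `irrep` appears in a product rep."""
    return int(round(np.sum(class_sizes * chars[irrep] * chi_product) / order))

def allows_lifshitz(rep_i, rep_j):
    """Does Psi_i* grad Psi_j contain the trivial representation A1'?"""
    chi_product = chars[rep_i] * chi_grad * chars[rep_j]
    return multiplicity(chi_product) > 0

print(allows_lifshitz("A1'", "A2''"))   # True : Lifshitz invariant allowed
print(allows_lifshitz("A2'", "E''"))    # False: no invariant component
```

The same two helper functions can be reused for any point group by replacing the class sizes and character table.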
To identify the representation of the order parameter, we need to specify the physical degrees of freedom that the order parameter has.As an example, let us assume that the pairing symmetry is s-wave and the order parameter has sublattice degrees of freedom, i.e., the order parameter is defined on each lattice site in the unit cell.These order parameters on different sites can be interchanged by symmetry operations, and may belong to a nontrivial representation, which determines whether the Lifshitz invariant can appear or not.We first give a general procedure to derive the representation of the order parameters with sublattice degrees of freedom, which can be constructed from the induced representation induced by a site-symmetry group (a subgroup of the crystallographic point group that fixes a certain lattice site).The obtained representation is used to see whether the system can have the Lifshitz invariant or not, which results in a classification table of pairs of the order parameter representations for each crystallographic point group allowing the Lifshitz invariant. Let us consider the general construction of the representation of the order parameter induced by sitesymmetry groups [91].When a group G and a subgroup H of G are given, a left coset decomposition of H in G is given by where g α ∈ G.The induced representation of G written by ρ H;G := ρ H ↑ G is produced by each representation ρ H of H. We can explicitly construct a representation ρ H;G from the representation ρ H .To be precise, if the rows and/or columns of ρ H are labeled by i and j, then the rows/columns of ρ G can be labeled by iα and jβ.Here, α and β vary over the cosets g α H in Eq. ( 9).Then we can define the representation ρ H;G as where h ∈ G and This is the general construction derived from group theory. In our case, the group G corresponds to the crystallographic point group that describes the whole system, while the group H corresponds to the subgroup of G that describes the site-symmetry group.If one takes the trivial representation of H (i.e., the one-dimensional representation [ρ gives the representation of the sublattice degrees of freedom of the superconducting order parameter, where α and β correspond to sublattice indices. Here we assume that the superconducting pairing (swave, p-wave, etc.) and the sublattice degrees of freedom are transformed independently under symmetry operations.That is, the representation is assumed to be the direct product of the pairing and sublattice degrees of freedom.In addition, when we consider the representation of Ψ * i ∇Ψ j , the product of two superconducting pairings always yields the trivial irreducible representation because the two order parameters have the same pairing symmetry. The site-symmetry group H depends on each lattice site in the unit cell.However, if one classifies lattice sites in the unit cell by Wyckoff positions, then for each Wyckoff position the site-symmetry group is isomorphic to each other.Thus, the site dependence of H in each Wyckoff position does not affect the resulting induced representation. When different sites belong to the same Wyckoff position, more precisely we need to consider the orbit of the group, and this allows for the correspondence with the sublattice degrees of freedom.However, when the group orbits are different but at the same Wyckoff position, they will only appear identical in their representation.Model (b) below is one of the examples of this case. 
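For reference, the character of the induced representation can be written with the standard Frobenius formula; the notation below follows the coset decomposition just introduced, and the closing remark about counting fixed sites is the shortcut used for the concrete models later on.

```latex
% Character of the induced representation \rho_{H;G} = \rho_H \uparrow G:
\chi_{\rho_{H;G}}(h)
  = \sum_{\alpha\,:\; g_\alpha^{-1} h\, g_\alpha \,\in\, H}
    \chi_{\rho_H}\!\big(g_\alpha^{-1} h\, g_\alpha\big),
\qquad h\in G.
% For the trivial representation of H (\chi_{\rho_H}\equiv 1) this simply counts the
% cosets g_\alpha H -- equivalently, the lattice sites of the Wyckoff position --
% that are left invariant by h.
```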
After obtaining the representation of the order parameter, we calculate the direct product of the representations Ψ * i ∇Ψ j , which is then decomposed into the direct sum of the irreducible representations by the reduction formula.The representation of ∇ is solely determined from the crystallographic point group.After checking whether the trivial representation is contained in Ψ * i ∇Ψ j , we can classify pairs of the irreducible representations of the order parameters that permit the Lifshitz invariant to show up. To show how our classification is obtained, let us take D 3h as an example.Since ∇ has the same transformation property as the coordinate r = [x, y, z] T , the representation of ∇, ρ ∇ , follows from the direct sum of the representations of the basis functions x, y, and z.The basis functions and direct products of the representations of crystallographic point groups are in detail given in [92].In the case of D 3h , z belong to A ′′ 2 , and x and y belong to the representation E ′ .Then ρ ∇ is given by the direct sum of these representations: Now we turn to the representations of order parameters.For the pair (Ψ i , Ψ j ) = (A ′ 1 , A ′′ 2 ), for instance, we can evaluate the representation of Ψ * i ∇Ψ j as which allows the Lifshitz invariant because it has the trivial irreducible representation 2 , E ′′ ), on the other hand, we can calculate as which does not allow the Lifshitz invariant since it does not have A ′ 1 .Here we give Table I/II, which lists all the possible pairs of representations of the order parameters (Ψ i , Ψ j ) for each crystallographic point group without/with inversion symmetry that allows the existence of the Lifshitz invariant Ψ * i ∇Ψ j .A similar classification for D 2h point group has recently been reported in Ref. [65].In Table I and II, we notice that there are many possible combinations of the order parameter representations that allow the existence of the Lifshitz invariant, both in systems with and without inversion symmetry.In the presence of inversion symmetry, the allowed representations are always combinations of gerade and ungerade, since ∇ is parity odd. We note that the results in Table I and II are universal, and do not depend on what kind of physical degrees of freedom the representation of the order parameters corresponds to.They are not even limited to superconductors but can be applied to any systems having multiple order parameters.In the present paper, we primarily consider the case of multiband superconductors having multiple degrees of freedom.If we assume that the order parameters have orbital degrees of freedom which also transform independently under symmetry operations, the argument can easily be extended to include orbital degrees of freedom.When the order parameters have multiple degrees of freedom in addition to the band indices like Ψ i,a , where a is the additional degrees of freedom, the vector d can in principle be a tensor.Even in this case the results of the classification remains the same and the argument does not change.Although we later study s-wave superconductors to see the Leggett modes in a linear response in simple and concrete models, the results in the table are directly applicable to any superconducting gap symmetries since the order parameter is assumed to be represented as a direct product of the pairing and sublattice components, and the product of the two identical pairing symmetries (e.g., p-wave and p-wave) always yields the trivial irreducible representation. IV. 
APPLICATION OF THE GROUP THEORY TO SEVERAL MODELS

We apply the general group theoretical classification obtained in the previous section to several models to check the validity of our approach. In the following, we assume s-wave superconductivity with a single atomic orbital on each lattice site in each model. As seen in the previous section, it is not necessary to break the inversion symmetry for the Lifshitz invariant to appear in multiband superconductors, and we will see concrete examples below. We start by analyzing the Rice-Mele model [93] for a two-band superconductor previously considered in Ref. [58], reproducing the earlier result that the Lifshitz invariant appears there, which confirms the consistency of our approach. We then consider several different models with on-site pairing interactions: a honeycomb model with on-site potentials as an example of a system without inversion symmetry, and a family of Kagome models, where we discuss the lowering of symmetry obtained by introducing different hopping parameters. We additionally consider a Kagome lattice model with a charge-density wave pattern, as found to occur, e.g., in CsV3Sb5 [39,40]. These models are summarized in Fig. 2. We can cover all the cases with/without the inversion symmetry and with/without the Lifshitz invariant in these models. We take (a) and (b) to confirm the consistency with the previous research of Ref. [58]; (c) is an example of a system without inversion symmetry and without the Lifshitz invariant; (d) is a case with inversion symmetry but without the Lifshitz invariant; (e) is a case without inversion symmetry but with the Lifshitz invariant; (f) is a variant of (e); and (g) is an example of a system with inversion symmetry and the Lifshitz invariant. Table III below summarizes all the cases.

Before we delve into the specific models, let us review a useful method (a short numerical sketch of it is given after the model description below). We practically do not have to construct the induced representation ρ_H;G explicitly. Instead, for each symmetry operation of G we can evaluate the character χ of the induced representation directly by considering how the lattice sites in the unit cell are interchanged by the symmetry operation. The resulting character table uniquely determines the decomposition of the induced representation into a set of irreducible representations by the reduction formula.

The Rice-Mele model without on-site potentials (Su-Schrieffer-Heeger (SSH) model [94]). We shall first consider the Rice-Mele model without on-site potential, which has been studied in the context of collective modes in Ref. [58]. It is a model with two sites in the unit cell, one orbital per site, and attractive on-site pairing. We treat the on-site interaction on the mean-field level by introducing two order parameters Ψ1, Ψ2. The model is depicted in Fig. 2(a), where the two-site unit cell is shown by the yellow rhombus. Single and double lines connecting sites correspond to hoppings t ̸= t′.
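The following small Python sketch implements the character-counting shortcut described above for a two-site unit cell under C_i = {E, i}; the two site permutations encode a bond-centered and a site-centered inversion, respectively, and numerically reproduce the decompositions discussed in the next paragraphs. Representing the permutations as dictionaries is our own choice for illustration.

```python
# Sketch: character-counting shortcut for the induced representation, illustrated
# for the two-site unit cell of the Rice-Mele chain under C_i = {E, i}.
import numpy as np

def permutation_character(perms):
    """chi(g) = number of sites mapped onto themselves by the operation g."""
    return np.array([sum(p[s] == s for s in p) for p in perms])

# C_i character table: rows are the irreps evaluated on (E, i)
irreps = {"Ag": np.array([1, 1]), "Au": np.array([1, -1])}

def reduce_rep(chi):
    """Multiplicity of each irrep via the reduction formula (|C_i| = 2)."""
    return {name: int(round(np.dot(row, chi) / 2)) for name, row in irreps.items()}

# (a) bond-centered inversion exchanges the two sites: chi = (2, 0)
perms_a = [{0: 0, 1: 1},   # identity E
           {0: 1, 1: 0}]   # inversion i
# (b) site-centered inversion leaves both sites fixed: chi = (2, 2)
perms_b = [{0: 0, 1: 1},
           {0: 0, 1: 1}]

print(reduce_rep(permutation_character(perms_a)))  # {'Ag': 1, 'Au': 1} -> Ag + Au
print(reduce_rep(permutation_character(perms_b)))  # {'Ag': 2, 'Au': 0} -> 2Ag
```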
The system has a bond-centered inversion symmetry, resulting in the point group C i = {E, i}.The two sites in the unit cell are exchanged under inversion, meaning that the subgroup is C 1 .Therefore, the two-dimensional reducible representation ρ, under which the pair of order parameters (Ψ 1 , Ψ 2 ) transform, has the following characters: It is then clear that the decomposition of ρ C1;Ci into irreducible representations of C i is ρ C1;Ci = A g ⊕ A u .The two order parameters can be decomposed into a symmetric A g component and an anti-symmetric A u component T1u (A1g, T1u), (A2g, T2u), (Eg, T1u), (Eg, T2u), (T1g, A1u), (T1g, Eu), (T1g, T1u), (T1g, T2u), (T2g, A2u), (T2g, Eu), (T2g, T1u), (T2g, T2u) Table III.List of the point groups (PGs), site-symmetry groups (SSGs), and the presence or absence of inversion symmetry and the Lifshitz invariant for the models shown in Fig. 2 Model PG SSG Inversion symmetry Lifshitz invariant (a) Next, we assess the transformation properties of the Lifshitz invariant Ψ * i ∇ α Ψ j .The derivative transforms like a vector, i.e., under the representation 3A u of C i .Thus, the Lifshitz invariant transforms as The Lifshitz invariant includes six invariant components that transform under the trivial representation A g and is thus allowed in the free energy.These correspond to A g ∇ α A u and A u ∇ α A g .Terms of the form A g ∇ α A g , A u ∇ α A u must vanish due to the symmetry.These results are summarized in the second row of Table II. In summary, the symmetry analysis shows that the Leggett mode can appear in the linear response for the Rice-Mele model without on-site potential, consistent with that in Ref. [58]. The Rice-Mele model with a uniform hopping.Next, we consider the Rice-Mele model with uniform hopping (i.e., t = t ′ ).We also introduce an on-site potential that is different for the two sublattices, as indicated by blue and red colors in Fig. 2(b).The point group of the system is again C i , but the inversion is now site-centered and does not exchange sublattices in contrast to the previous case, meaning that the subgroup is C i .The characters of the representation ρ Ci;Ci of the two on-site order parameters (Ψ 1 , Ψ 2 ) are from which it follows that ρ Ci;Ci = 2A g .In this representation, the term Ψ * i ∇Ψ j no longer contains an invariant, since it transforms as 2A g ⊗ 3A u ⊗ 2A g = 12A u .Hence the Lifshitz invariant cannot appear in this model and the Leggett mode does not appear in the linear response regime.This result is consistent with Ref. [58]. The previous models both have C i symmetry, yet only the first case exhibits a linear collective mode response.This example illustrates that both the point group and the specific representation of the order parameters determines the presence of the Lifshitz invariant, according to the classification of Table I and Table II. For simplicity, we denote ρ H;G as ρ and χ ρH;G as χ from now. Honeycomb lattice.We shall next study a Honeycomb model with broken inversion symmetry, illustrated in Fig. 
2(c). Red and blue colors indicate different on-site potentials. The point group of the system is D3h, which lacks inversion symmetry. The two superconducting order parameters supported on each sublattice transform under a representation ρ whose characters show that ρ decomposes into two copies of the trivial irreducible representation of D3h, i.e., ρ = 2A′1. The Lifshitz term then transforms as 2A′1 ⊗ (A′′2 ⊕ E′) ⊗ 2A′1 = 4(A′′2 ⊕ E′), which does not contain any invariant and must therefore be absent, in accordance with Table I. Even though the present model lacks a center of inversion, it still does not exhibit a linear optical collective mode response. The presence of the Lifshitz invariant is therefore not a simple consequence of broken inversion symmetry; instead, all point group symmetries need to be carefully taken into account.

Kagome lattice. We next turn to a model of the Kagome lattice depicted in Fig. 2(d). It has three sites per unit cell, supporting three superconducting order parameters. Nearest-neighbor sites are connected by identical hopping parameters t, resulting in the point group D6h. The three order parameters transform according to a representation ρ whose characters show that ρ is reduced to A1g ⊕ E2g. We find that neither A1g nor E2g can support the Lifshitz term, since they are both even under inversion.

To change the symmetry, we add a trimerized hopping pattern to our model as shown in Fig. 2(e). Here, straight and dashed lines correspond to t ± δt, respectively. The point group of the system is reduced to D3h. The characters of ρ change accordingly, and ρ decomposes into A′1 ⊕ E′. In this representation, the Lifshitz invariant is symmetry-allowed according to Table I, where we see that the combinations (A′1, E′) as well as (E′, E′) give rise to a linear optical collective mode response.

We can further reduce the symmetries of our model by introducing sublattice-dependent on-site potentials shown by white and black squares in Fig. 2(f). The point group of the system is reduced to the subgroup C2v, yielding the representation ρ = 2A1 ⊕ B1 of the three on-site order parameters. The Lifshitz term is still present, since lowering the point group leads to even fewer symmetry restrictions on the free energy.

Kagome lattice with CDW. Finally we shall examine a Kagome lattice model with a charge-density wave (CDW) in a tri-hexagonal pattern (3Q-pattern). Such structures have, e.g., been experimentally suggested in CsV3Sb5 [39,40]. This material is also suggested to have a different CDW structure from the 3Q-pattern (the so-called star of David [37,38]). Nevertheless, both CDW patterns have the same symmetry, and the group theoretical results can be applied to either pattern. The CDW unit cell is shown in Fig. 2(g), where straight and dashed lines correspond to different hoppings. The point group of the system is D6h. The model has 12 on-site order parameters that transform according to a representation ρ whose characters are obtained by counting the sites left fixed by each operation. The reduction formula gives the decomposition of ρ into irreducible representations. Importantly, ρ includes both even and odd irreducible representations, which indeed leads to allowed Lifshitz terms and an optical collective mode response according to Table II. This symmetry analysis suggests that CsV3Sb5 might be an interesting experimental platform for the study of superconducting collective modes in the linear response. V.
MICROSCOPIC CALCULATION OF COLLECTIVE MODES AND LINEAR OPTICAL CONDUCTIVITY In this section, we perform microscopic calculations of collective modes and the linear optical conductivity, since the group-theoretical classification just gives the necessary condition for the Lifshitz invariant and we should examine how the contribution of the Lifshitz invariant quantitatively appears in the optical responses.We compute the optical conductivity σ(ω) within the effective action approach in imaginary time, focusing on the fluctuations of the order parameters.We use the Kagome lattice model as a concrete example.In the following, we abbreviate the k-dependence of functions whenever appropriate.Our starting point is the Hamiltonian H = H 0 + H int that describes a multiband superconductor with a singlet on-site pairing, where is the kinetic part, and is the interaction part.We have set the volume size of the system to one.Here c † and c are the creation and annihilation operators, α = 1, 2, . . .n is the band index, ξ α,α ′ (k) represents the matrix elements of the kinetic term, and σ =↑, ↓ denotes spin.To include the effect of an external electromagnetic field by a vector potential A, we replace k by k−eA (Peierls substitution) and expand the kinetic part as where e is the electric charge.Since we are interested in the linear optical conductivity, we only need to expand ξ αα ′ to the first order in the vector potential A. Then the full Hamiltonian is given by where We use the path integral approach in imaginary time τ [95][96][97][98].The partition function of the whole system can be written as c] with the Euclidean action We perform the Hubbard-Stratonovich transformation to decouple the fermionic interaction [Eq.( 27)], introducing bosonic fields ∆ α (τ ) = ∆ 0α +∆ xα (τ )−i∆ yα (τ ) that have the saddle-point contribution ∆ 0α and fluctuating real and imaginary parts, ∆ xα (τ ), ∆ yα (τ ). After performing the path integration over the fermionic degrees of freedom, we divide the action into the mean-field part and the fluctuation part, where we take the vector potential A up to the first order.Integrating out the fluctuations ∆ µα (µ = x, y) and after analytic continuation of the Matsubara frequency iΩ → ω + i0 + , we obtain the fluctuation part of the effective action S FL as with a, b = x, y in the two-dimensional case.Here we have introduced the current-current correlation function Φ ab (iΩ): with the velocity operator with ∂ a = ∂/∂k a and Green's function H BdG (k) is the Bogoliubov-de Gennes Hamiltonian where ∆ 0 is defined by the saddle-point equation To simplify the notation, we have introduced the band representation τ µα,jl := ⟨φ j |τ µα |φ l ⟩ and v jl := ⟨φ j |v|φ l ⟩ with j, l = 1, 2, • • • 2n, where τ xα and τ yα are the generalized Pauli matrices where [A α ] γγ ′ = δ αγ δ γγ ′ , |φ l ⟩ is the l-th eigenvector of H BdG (k) with an eigenvalue E l and E lj := E l − E j .We have also introduced the polarization bubble Π(iΩ), the effective interaction U eff within the random phase approximation (RPA), and the vector Q a (iΩ), Here we introduce f jl := f (E j ) − f (E l ), where f is the Fermi distribution function at zero temperature, i.e., f = 1 for occupied bands and f = 0 for unoccupied bands.We can get the real frequency forms of U eff and Q a by analytic continuation iΩ → ω + i0 + .The diagram for the effective interaction U eff (ω) is depicted in Fig. 3(a). 
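To make the mean-field ingredients above concrete, the sketch below assembles H_BdG(k) from a given multiband kinetic matrix ξ(k) and on-site gaps Δ_α and iterates the zero-temperature gap equation. The two-band chain used for ξ(k), the Nambu-basis conventions, and all parameter values are illustrative assumptions, not the Kagome model or the parameters studied later in the paper.

```python
# Sketch: BdG Hamiltonian and self-consistent on-site gap equation for a
# multiband s-wave superconductor at zero temperature. The kinetic matrix used
# here (a simple two-band chain) and all parameters are placeholders.
import numpy as np

def xi(k, t=1.0, tp=0.7, mu=1.0):
    """Placeholder 2x2 Bloch Hamiltonian with two hoppings t, tp in the unit cell."""
    h12 = t + tp * np.exp(-1j * k)
    return np.array([[-mu, h12], [np.conj(h12), -mu]])

def bdg(k, delta):
    """Assumed convention: H_BdG(k) = [[xi(k), diag(delta)], [diag(delta)^+, -xi(-k)^*]]."""
    d = np.diag(delta)
    return np.block([[xi(k), d], [d.conj().T, -xi(-k).conj()]])

def gap_equation(U=2.5, nk=400, n_iter=200):
    ks = 2 * np.pi * np.arange(nk) / nk
    n = 2                               # number of bands / sublattices
    delta = np.full(n, 0.1 + 0j)        # initial guess
    for _ in range(n_iter):
        new = np.zeros(n, dtype=complex)
        for k in ks:
            e, w = np.linalg.eigh(bdg(k, delta))
            occ = e < 0                 # occupied quasiparticle states at T = 0
            # anomalous average <c_{-k,down,a} c_{k,up,a}> from the eigenvectors
            # (particle rows 0..n-1, hole rows n..2n-1 in this basis convention)
            for a in range(n):
                new[a] += np.sum(w[a, occ] * np.conj(w[n + a, occ]))
        delta = U * new / nk            # Delta_a = U <cc> averaged over k
    return delta

print(np.abs(gap_equation()))
```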
We take the functional derivative of S FL with respect to A b (−ω) to obtain the linear optical conductivity σ ab (ω) via the current density j b (ω) = E a (ω)σ ab (ω) (E a (ω) is the electric field): where [σ and we used E a (ω) = iωA a (ω).Here σ 1 (ω) is responsible for the quasi-particle response, and σ 2 (ω) for the collective mode one.The diagrams for the linear optical conductivities are shown in Fig. 3(b) and (c).The collective mode response comes from the poles of U eff (ω), which satisfy 1 + U Π(ω) = 0.For this ω, as pointed out in [58], Q a (ω) is off-resonant, or Q a (ω) does not have a singularity. Note that this formalism does not rigorously decompose the fluctuation of the order parameters into amplitude and phase.Nevertheless, one can see that the collective modes come from phase fluctuation (the Leggett mode).To see this, we shall write the bosonic field ∆ α as (∆ 0α + δ∆ α )e iθα .Assume that δ∆ α and θ α are small.Then we put The first approximation is valid as long as we use the RPA.In the second approximation we neglect the amplitude fluctuation δ∆ α .This approximation is not always valid in general, but we can check that the amplitude fluctuation is small by calculating only the τ xα part and neglecting the τ yα in the linear optical conductivity.In Appendix.C, the linear optical conductivity is decomposed into two parts (τ x and τ y channels) and we can confirm that the amplitude fluctuation is small.Moreover the Higgs mode is forbidden in the linear response because the real part of the constant vector d responsible for the Higgs mode linear response is zero (see Appendix.A).We thus focus on the phase fluctuation.In principle, it is possible to completely decompose the fluctuation into amplitude and phase as is done in [49] for a two-band superconductor.However, the calculation is complicated and we do not go into details here. B. Kagome lattice superconductor As a concrete model for a multiband superconductor showing the linear Leggett mode, we take a Kagome lat-tice model with two kinds of nearest-neighbor hoppings (t ± δt) and an attractive on-site interaction U shown in Fig. 2(e) and (f).The bold lines show the nearestneighbor hopping with strength (t+δt), while the dashed ones with (t − δt).The unit cell has three lattice points as represented by the yellow diamond.We set the lattice constant a = 1.The kinetic parts ξ αα ′ (k) in the momentum space are given as follows: where we denote the chemical potential by µ and the onsite potential by m.Refer to Appendix.D for the details of derivation.In this model, there is no on-site potential and m is set to be zero.In the model of Fig. 2(f), there are two kinds of on-site potentials, one of which is the white square with the potential m and the other one is the black square with the potential (−m).We calculate the poles of the effective interaction U eff and the linear optical conductivities to see the linear Leggett mode in these models in the next section. C. Results Here we shall see the linear response of superconductors with the Lifshitz invariant using the trimerized Kagome model depicted in Fig. 
2(e).The classification in Sec.IV has already confirmed from the point of lattice symmetry that these models are allowed to have the Lifshitz invariant and we should analyze how the Lifshitz invariant and the Leggett modes qualitatively contribute to the linear optical response.We set t = −0.5, δt = −0.01,µ = 0, m = 0 and U = 6.We first see the characteristic frequencies of the Leggett mode, which correspond to the poles of the effective interaction; 1 + U Π(ω) = 0.The absolute values of the eigenvalues λ i of the inverse matrix of the effective interaction U −1 eff (ω) are shown in Fig. 4(a).Fig. 4(b) represents the linear optical conductivity of yy component σ yy (ω) with the vertical dotted line describing the gap value 2∆ 0 .Since there is no on-site potential, all the gap values on each lattice site are the same.The contributions from the quasi-particles are negligible and the collective mode is dominant.We would expect six eigenvalues because the matrix U eff (ω) is 6 × 6.However, only four of them appear in Fig. 4 the Nambu-Goldstone mode, not contributing to the optical conductivity.The "pole-like" structure at ω = 2∆ 0 is not an actual pole but a cusp indicating the suppressed contribution of the Higgs mode, which is confirmed in the optical conductivity σ yy (ω) in Fig. 4(b).The pole of the effective interaction at ω ≈ 0.75•2∆ 0 appears as a peak in the optical conductivity, which comes from the Leggett mode.As we show in Appendix.C, if we neglect the τ y channel contribution in the calculation of the optical conductivity, the peak at ℏω ≈ 0.75 • 2∆ 0 disappears.On the other hand, if we neglect the τ x channel, the peak remains with the same peak height.From these results, we can confirm that the peak in the optical conductivity signals the Leggett mode. Next, we consider the trimerized Kagome model with on-site potentials depicted in Fig. 2(f) to see the effect of on-site potentials.The definitions of bold and dashed lines are the same as in Fig. 2(e).The white squares display the on-site potential m, while the black ones show (−m) with m = 0.3.The absolute values of the eigenvalues of the inverse matrix of the effective interaction with on-site potentials are shown in Fig. 4(c).Fig. 4(d) represents the linear optical conductivity of yy component σ yy (ω) with the vertical dotted lines describing the gap values.The collective mode response is dominant as in Fig. 4(b).Because of the on-site potentials, one of the three gap values is different from the others, which is minimal.We call it ∆ 0,min and normalize the energy ω by 2∆ 0,min .The degeneracies of the eigenvalues are also resolved, and all six eigenvalues become non-degenerate.The eigenvalue that is responsible for the Leggett mode in Fig. 4(b) splits in Fig. 
4(d) and we can see two peaks below 2∆ 0,min , one of which is multiplied by a factor of three for visibility.The physical interpretation of the splitting based on the free energy argument is as follows.The three order parameters live in the identical Mexicanhat potentials without the on-site potentials because all sites are equivalent and the couplings between the two of them are also the same.When we add the on-site potentials, however, one of the three Mexican-hat potentials becomes different from the others in our setting, and the couplings between the order parameters in identical potentials and different potentials are not equal, letting the Leggett mode peak split.The splitting originates from the lowering of the symmetry of the system, resolving the degeneracy of the Leggett mode mass.For the cases of σ xx (ω) and σ xy (ω) with the same parameter values, σ yy (ω) for δt = 0, and σ yy (ω) with µ = −2t (filling the flat band of the Kagome lattice), we refer to Appendix.E. VI. DISCUSSIONS We have studied the Lifshitz invariant in multiband superconductors and its effect on optical conductivities.We first used the macroscopic GL theory to see the linear coupling between the phase of the order parameter and the external field, interpreted the term to be the Lifshitz invariant, and classified all the combinations of irreducible representations of order parameters in crystallographic point groups that allow the Lifshitz invariant to appear by the conditions for the free energy to be invariant under symmetry operations.The Lifshitz invariant in multiband superconductors has been shown to be interpreted as a coupling between the "internal field" and the "current" of the overlaps of the order parameters, which is controlled by the lattice geometry.Because of the "internal field", it was possible for the phase of the order parameter to linearly connect to the external field. We also showed that the wide range of multiband superconductors can have the Lifshitz invariant according to the group theory.The reason that there has not been experimental detection of the Leggett mode in the linear response so far may be that the constant vector d in the GL theory would be practically small in many systems.Another possible explanation is that impurities in the real materials would suppress the Leggett mode in the linear response, whose effect has been neglected in our clean limit model.In previous papers [47,48], the signals of the Leggett mode in a nonlinear response is relatively suppressed by the effect of nonmagnetic impurities as compared to the Higgs-mode and quasiparticle contributions.It is thus interesting to study the impurity effects in the optical conductivity in the presence of the Lifshitz invariant, which we leave as a future problem.About the high harmonic generations, the signals of the Leggett mode were reported to be quite smaller than those of the Higgs mode, and they were hardly affected by nonmagnetic impurities.The similar results are expected in the linear response regime, though the impurity can disturb the coherence between the two phases contributing to the Leggett mode.Thus we leave this issue for future research. 
The condition for the Lifshitz invariant to appear is whether the system follows the nontrivial representation due to the sublattice geometry.As we saw in Sec.IV, the inversion symmetry itself is neither a necessary nor sufficient condition, and both of the cases with and without the inversion symmetry can have the Lifshitz invariant, which is different from the previous studies where the Lifshitz invariant had commonly been related to the broken inversion symmetry [61][62][63][68][69][70].The multiband nature, or the sublattice geometry plays a quite important role for the system to have the Lifshitz invariant.Since the Lifshitz invariant has the first-order spatial derivative (of the linear q term in momentum space), we may expect an instability toward non-uniform spatial modulation of the order parameter with finite q.As already stated in Sec.II A, however, the instability occurs only when |d| is large enough.The detailed analysis about the effect of the vector d to the ground state is given in Appendix.B. We would like to comment on open issues about the treatment of the order parameters in the group theoretical argument.We implicitly assumed that the order parameters are defined on each lattice point.This assumption seems to be well justified because the order parameters reflect the symmetry of the system even though the size of Cooper pairs is much larger than the lattice constant.Nevertheless, this situation may not be valid when we consider the retardation effect of phonons seriously.We also assumed that sublattice and other degrees of freedom form a direct product in the order parameter representation.If the system has a symmetry that intertwines these degrees of freedom (which cannot be represented by the direct product), there will be other interesting situations that are not studied in the present work. We additionally constructed the microscopic threeband superconducting models based on the Kagome lattice to see the linear Leggett mode in the optical conductivity.The degeneracy of the Leggett mode was resolved by adding the on-site potentials and reducing the symmetry. Finally, we list possible experimental observations of the linear Leggett mode.One possible candidate is CsV 3 Sb 5 [37][38][39][40].This material is reported to have a CDW phase above the superconducting transition temperature T c , and the phase is thought to coexist with superconductivity below T c .The superconducting pairing symmetry is predicted to be s-wave [36] and anisotropic [41].The CDW would be responsible for the lattice modulation, causing the two different hopping strengths as in our model of Fig. 2(e) and (f).It has also been reported that the material does not break the timereversal symmetry in the CDW phase for a high-quality sample [42], which is in accordance with our model preserving the time-reversal symmetry.The experiment of the optical Kerr effect has also concluded that it is highly unlikely that the material breaks the time-reversal symmetry [43].Although our model for the numerical calculation does not completely reproduce the CDW pattern or its modulation, the model could indicate the possible experimental confirmation of the Leggett mode in the linear optical conductivity since our model and the actual material (simplified as in Fig. 
2(g)) can have the Lifshitz invariant and both would show the similar property arising from the Lifshitz invariant.Hence, by measuring the linear optical conductivity in the superconducting phase of CsV 3 Sb 5 , there would be at least one peak coming from the Leggett mode and we could obtain information about the phase difference between order parameters. We introduce the Nambu basis Ψ † k and Ψ k to express the action in a concise way: Note that we implicitly assumed that the creation/annihilation operators depend on the imaginary time τ .Then the action S c † , c, ∆ * , ∆ can be written as where is the inverse Green function with I n being the n × n unit matrix.For simplicity we abbreviate I n from now.We used the time-reversal symmetry in the second equality, [∆(τ )] αα ′ = ∆ α (τ )δ αα ′ , ∆ α (τ ) = ∆ 0,α + δ∆ α (τ ), and ∆ 0,α is the gap value at the saddle point (corresponding to the mean-field value).Here we used the following relationship: and where we used the periodicity of the operators.Then we move to the Fourier space by This puts the imaginary time derivative ∂ τ into −iω n , and we obtain the action expressed in Fourier space. where is inverse Green's function in Fourier space.Here, we define δ∆(iω m − iω n ) and A a (iω m − iω n ) by so that both δ∆ and A a have the same dimension before and after the transformation.We omit the δ∆ terms in the first order because they offset with unimportant terms (the L = 1 term in the expression below).Performing the fermionic path integral yields the action S[∆ * , ∆]: Tr ln −G −1 (iω m , iω n ; k) .(A17) Choosing the reference state, and corresponding Green's function G 0 and the self-energy Σ to be G −1 = G −1 0 − Σ, we rewrite the trace term as There are two important ways to choose the reference state.One is normal and the other is the superconducting ground state. The Ginzburg-Landau effective action.We first derive the Ginzburg-Landau effective action S eff in the equilibrium superconducting state.We first decompose Green's function into two parts; reference state Green's function and the self-energy.We neglect the electromagnetic parts to obtain the action in an equilibrium state. We are interested in the linear d term, or the term with ∆ * and ∆.Hence we focus on the term L = 2: We shall consider the simpler form of Green's function.By defining the normal state Green function g(iω n ; k) := (iω n − ξ(k)) −1 , we can write the reference state Green function in the form of Then we put At last, we get the effective action S eff : (1) We use the property of the normal state Green function g(iω; k) to derive the Lifshitz invariant term in the free energy in (1).Since the kinetic part ξ(k) is Hermite, the component of the transposed normal state Green function g α ′ α (iω n ; k) is written as Using this property and defining the vector d αα ′ as we expand the second term of (A24) with respect to k. We pick up the linear q terms to see the Lifshitz invariant terms, paying attention to the fact that q is treated as a canonical momentum in coordinate space.Here we restrict ourselves to the two-band case (α, α ′ = 1, 2).In this case the vectors we have to consider are d = d 12 and d * = d 21 .Taking this into account, F where iω n is replaced by −iω n of the second term in the first equality.F αα ′ similarly puts We move on to the coordinate space (q → D and −q → D * ) and obtain which is exactly the Lifshitz invariant term in the free energy except for the coefficient. 
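The step of "picking up the linear q terms" can be made explicit with the standard expansion of the normal-state Green's function; the sketch below shows only this generic expansion and is not the exact intermediate expression of the appendix.

```latex
% Expanding g(i\omega_n;k) = (i\omega_n-\xi(k))^{-1} to first order in q:
g(i\omega_n;\,k+q) = g(i\omega_n;k)
   + g(i\omega_n;k)\,\big(\nabla_k\xi(k)\cdot q\big)\,g(i\omega_n;k)
   + \mathcal{O}(q^2).
% The O(q) piece is what generates the Lifshitz invariant once q is promoted to the
% covariant derivative D; schematically, d_{\alpha\alpha'} is built from g and
% \nabla_k\xi summed over Matsubara frequencies and momenta.
```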
We can prove that the vector d αα ′ is purely imaginary in usual lattice models.Fourier transformation gives the wavenumber dependence k via the plane wave e ik•r = cos (k • r) + i sin (k • r) and the real (imaginary) components of the kinetic part of the BdG Hamiltonian ξ(k) is even (odd) function of k.Hence ξ(k) has the preperty ξ(−k) = ξ T (k).With this relation we can say that and combining this relation with (A27) we get We thus prove that d αα ′ is purely imaginary: and the Higgs mode does not contribute to the linear response.We can also prove that the vector d αα ′ is produced by the imaginary part of the kinetic term ξ(k).When the system has no hopping difference, ξ(k) is real and satisfy the condition ξ(k) = ξ * (k) = ξ(−k).Because of this property it follows that By the similar procedure of (A36) we can obtain which shows that d αα ′ is real and becomes zero because it is purely imaginary.Therefore the Leggett mode in the linear response is induced by the imaginary part of the kinetic component ξ(k), or the hopping difference in the system. Gap equation.In this case, we are interested in the gap equation for the saddle point (the mean-field) value and the free energy, or the effective action in equilibrium.Thus we choose the reference state as the superconducting ground state and neglect the electromagnetic terms and the fluctuations of the order parameters.This puts inverse Green's function to be The effective action is given by neglecting fluctuations.The gap equation for ∆ 0,α is derived by the minimization of the effective action concerning ∆ * 0,α ; The functional derivative of the trace term proceeds as where the Green function is defined without β.By definition, the functional derivative of the inverse Green function becomes where are the generalized Pauli matrices.Thus we put We finally reach the gap equation for ∆ 0,α : Note that the summation over the frequency index m and the Kronecker delta δ mn gives β.Linear optical conductivities.We are interested in the optical conductivities of quasiparticles and collective modes here.Hence we choose the reference state to be the superconducting ground state and put We expand the action S[∆ * , ∆] at the Gaussian level (L = 2) and use RPA to obtain the effective action.Since we direct our attention to linear optical conductivities, we just keep the vector potential A in the first order.Besides, we decompose the fluctuation δ∆ α into ∆ x,α − i∆ y,α for calculation (we omit δ after the decomposition for simplicity).Note that we should include the effect of the non-trace term where µ specifies the index of the Pauli matrix and the imaginary part vanished by the summation over (iω m ).Now we proceed to the trace term (L = 2).Before calculation, we define the velocity operator v a (k): The trace term is written as and finally With these preparations, we can calculate the first-order current for each direction j b (ω) by taking the functional derivative of the effective action S eff about A b (−ω).For simplicity, we assume that the frequency ω for the functional derivative is positive (the same argument holds for the negative ω).Note that the integrals in S FL are written as Note that the condition m * i > 0 should be satisfied as well.To minimize the free energy, we take a functional derivative in terms of ϕ: δF δϕ = −2ψ 1,0 ψ 2,0 ϵ + ηq 2 sin ϕ + 4ψ 1,0 ψ 2,0 (d I • q) cos ϕ = 0. (B5) We then obtain tan ϕ = 2(d I • q) ϵ + ηq 2 . 
(B6) This yields This expression is valid independent of the sign of ϵ + ηq 2 .For simplicity we put The above condition corresponds to the state that every site is equivalent.For large q, Then the free energy is reduced to This must be stable, suggesting that 1 m * + 2η > 0 (B10) is satisfied.Under this condition, we now focus on the small q case. Here we assume that d I and q point in the same direction.Hence for small q we have We should classify the cases depending on the sign of ϵ.When ϵ < 0, meaning that the Josephson-like coupling between the order parameters is attractive, there are two cases as below. which is still valid even when we substitute the expression with ℏ.Therefore, we can conclude for the system with negative ϵ that if d 2 I is large enough, the order parameters are modulated spatially.If not, there is no spatial modulation of the order parameters.On the flip side, when ϵ > 0, namely that the Josephson-like coupling is repulsive, the coefficient of q 2 in the free energy is always positive.Hence the free energy becomes the lowest at q = 0, or tan ϕ = 0.This has two options ϕ = 0 and ϕ = π.Looking at the expression (B7), the free energy is found to be the lowest for ϕ = π.This indicates that the two order parameters are in the opposite phase, which can be interpreted as the system having a spatially modulated order. Usually, d 2 I is small enough to satisfy the condition of q = 0, and thus we can conclude that in this case the superconducting order parameter is not spatially modulated by the Lifshitz invariant.the nearest neighbor hopping and derive the kinetic part of the Hamiltonian.We impose an on-site potential and chemical potential on each lattice point.We call the point 1, 2, or 3 depending on the vector b specifying the point.Every lattice point r is specified by n 1 , n 2 and i = 1, 2, 3: where and a is the lattice constant.We take a = 1 unit and n 1 , n 2 are integers.Each lattice point has an expression r(n 1 , n 2 ; 1) = 2n Figure 2 . 
Figure2.Several target models to be checked to have the Lifshitz invariant or not.We simply assume the s-wave superconductivity with a single orbital in these models.Unit cells of these models are enclosed by yellow lines and the subscripts of the lattice points label them.(a) The Rice-Mele model without on-site potential (SSH model).The single and double lines connecting sites correspond to hoppings t ̸ = t ′ .The point group of the system is Ci.(b) The Rice-Mele model with the same hopping strength.Two sites are distinguished by on-site potentials (red and blue circles).The point group of the system is Ci.(c) Honeycomb model with two kinds of on-site potentials.Two sites are distinguished by on-site potentials (red and blue circles).The point group of the system is D 3h .(d) The Kagome lattice model with only one hopping strength.The black point represents the center of the symmetry operations.The red lines make it easier to see the star of David shape, which changes in other models (e) and (f).The point group of the system is D 6h .(e) Kagome lattice model after trimerization.The solid lines show the hopping strength (t + δt) and the dashed lines show (t − δt).The red solid and dashed lines show the star of David shape, which helps us see the transformation properties of order parameters under symmetry operations.The point group of the system is D 3h .(f) Trimerized Kagome lattice model with on-site potentials.White (black) squares represent the on-site potential m > 0 (−m < 0).The yellow line on the red star of David shows the axis containing the center of symmetry operation.The point group of the system is C2v.(g) Kagome lattice model with possible CDW pattern.The unit cell has 12 lattice points, constructing the tri-hexagonal pattern (3Q-pattern) with inner (outer) lattice points moving outward (inward).This model incorporates the effect of modified configuration by taking two kinds of hopping strength.The point group of this system is D 6h . Figure 3 . Figure 3. Diagrammatic representations of (a) the effective interaction U eff (ω) within the random phase approximation, and (b,c) the optical conductivities in superconductors.The diagram (b) ((c)) corresponds to the quasi-particle (collective mode) excitation.Here −U is the bare interaction.When the spatial dimension is more than one, the optical conductivities depend on two directions a and b. (a).The reason is that the two out of six eigenvalues are degenerate.The red line corresponding to the Leggett mode is doubly degenerate.The blue line with the biggest value at ω = 0 is also doubly degenerate.The pole ω = 0 coincides with Figure 4 . Figure 4. Linear optical responses in the Kagome lattice model as a three-band superconductor.The parameter values are set to t = −0.5, δt = −0.01,µ = 0, m = 0, and U = 6.(a) Absolute values |λi| of the eigenvalues λi of (1 + U Π(ω)) of the model in Fig. 
2(e).The doubly degenerate red line corresponds to the Leggett mode.The matrix (1 + U Π(ω)) appears in the denominator of the effective interaction U eff in(40).The zeros of the spectrum correspond to the divergence of the U eff , and hence lead to the signal of the collective modes at a specific energy ω.(b) The linear optical conductivity of yy component σ yy (ω) with the vertical dotted line describing the gap value 2∆0.The calculation is performed for the model in Fig.2(e).The "pole-like" structure at ω = 2∆0 is not a pole but a cusp and does not lead to the divergence of U eff , suggesting the absence of Higgs mode contribution as we expect from the consideration in Sec.II.The peak appears at the pole of the effective interaction, indicating that the peak is coming from collective mode.(c) Absolute values |λi| of the eigenvalues λi of (1 + U Π(ω)) of the model in Fig.2(f) that has the on-site potential m = 0.3.Since the gap values are different in the model, the frequency is normalized by twice the minimum gap value 2∆0,min.The red lines correspond to the Leggett modes.(d) The linear optical conductivity of yy component σ yy (ω) with the vertical dotted line describing gap values.The peaks appear at the poles of the effective interaction, indicating that the peaks are coming from collective modes.Since the collective mode contribution in the linear response is largely controlled by the phase fluctuation, the peaks are suggested to be the Leggett modes.The peak in (b) splits in two in (d).In both (b) and (d), the responses are dominated by collective modes. 3 Figure 6 . Figure 6.The Kagome lattice model with two kinds of hopping strength and on-site potentials.The definitions of the bold and dashed lines are the same as in Fig.2(a) in the main text.White squares show the on-site potential m, while red squares do (−m) with m ≥ 0. The vectors a1 and a2 specify the center of the hexagon (n1, n2), and b1, b2, and b3 specify the lattice point. Table I . Pairs of representations of the order parameters (Ψi, Ψj) for each crystallographic point group without inversion symmetry that allow the existence of the Lifshitz invariant Ψ * i ∇Ψj − Ψj∇Ψ * i .Point group Representation of ∇ Allowed representation of the order parameter pairs (Ψi, Ψj)
Impact of Industrial Agglomeration on Regional Economy in a Simulated Intelligent Environment Based on Machine Learning

The Internet of Things is based on a communication network and uses intelligent objects with perception, communication and computing capabilities to automatically collect various kinds of information from the natural world. All independently managed physical objects are connected to each other to achieve integrated perception, reliable transmission and intelligent processing, and to establish intelligent information service systems between people and objects and between objects and objects. This article mainly examines the impact of industrial agglomeration on the regional economy in a simulated intelligent environment based on machine learning. It proposes a method for measuring industrial agglomeration indices and uses it to analyze industrial agglomeration. Through the establishment of the industrial agglomeration index system, the level of integration of the manufacturing industry in our city is objectively analyzed, and the impact of manufacturing integration on the city is tested empirically. Finally, the relationship between industrial integration and regional economic development is tested. The experimental results show that industrial integration in a simulated intelligent environment based on machine learning has a significant positive impact on regional economic development: when the level of cooperation between the manufacturing service industry and the manufacturing industry increases by 1%, the level of regional economic development increases by 0.025%.

I. INTRODUCTION

The development of the times and the improvement of science and technology have promoted economic development [1], which has made economic ties within and between countries increasingly close, while production industries keep changing. These changes in industry have in turn changed the structure of the economy. On the one hand, due to the continuous progress of transportation and information technology, transportation costs continue to drop, communication costs are close to zero, and distance is no longer an obstacle to trade. The international circulation of commodities and technologies has become simple and convenient, and the global market has become integrated. On the other hand, in the context of increasing economic globalization, the trend of regionalization has not weakened but has become stronger than before. In the process of regionalization, industrial systems, a new form of industrial organization formed by the spatial agglomeration of many economically related enterprises, have appeared. There is not yet a consensus on the extent to which logistics industry agglomeration affects economic growth or on the channels through which the logistics industry affects the regional economy. Therefore, exploratory research on the relationship between logistics industry agglomeration and economic growth is of considerable significance both in theory and in practice. In addition, the study of the relationship between logistics industry agglomeration and regional
economic growth has important practical significance and plays an important guiding role in planning the future development direction of the logistics industry.

The Internet of Things (IoT) is a dynamic global information network composed of objects connected to the Internet, which will become an integral part of the future Internet [2]. Perera C surveyed more than one hundred IoT smart solutions on the market and carefully examined them to identify the technologies, functions and applications used. The survey is intended to serve as a guide and conceptual framework for future research on the Internet of Things and to inspire further development. However, due to ambiguities in the understanding of the Internet of Things, the results obtained are not very precise [3]. A survey report by Buczak A provides a key literature review of machine learning (ML) and data mining (DM) methods used in network analysis to support intrusion detection, with a short tutorial description of each ML/DM method. Based on the number of citations or the relevance of an emerging method, papers representing each method were identified, read, and summarized. Since data are very important in ML/DM methods, some well-known network data sets used in ML/DM are also introduced. However, due to the large amount of data, the investigation process took a long time [4]. In order to cope with large fluctuations in commodity demand, manufacturing systems must have rapid response capabilities. This requirement can be supported by performance metrics. Although manufacturing companies already use information systems to manage performance, it is still difficult to capture real-time data that portray the actual situation [5]. The latest developments and applications of the Internet of Things (IoT) address this problem. In order to demonstrate the capabilities of IoT, Hwang G developed an IoT-based performance model that is consistent with the ISA-95 and ISO-22400 standards, which define the manufacturing process and performance index formulas. However, due to unstable factors in the demand for goods, the efficiency of the manufacturing part can be very low [6].

The innovations of this article are: (1) The threshold regression model is used for testing, and an endogenous data partition mechanism is used to find structural change points, achieving a more scientific grouping. (2) Urban heterogeneity is included in the analysis, and the manufacturing service industry is subdivided into a core manufacturing service industry and a pillar-industry manufacturing service industry. In addition, the impact of different types of producer services and manufacturing in cities of different sizes on the regional economy is discussed.

II. DETECTION METHODS FOR INDUSTRIAL CLUSTERS IN THE INTERNET OF THINGS AND SMART ENVIRONMENTS

A. INTERNET OF THINGS TECHNOLOGY

1) RFID TECHNOLOGY
RFID is short for radio frequency identification technology, which identifies a target mainly through the unique identification number stored in its tag. RFID is a simple radio system with only two basic elements [7]. The system is used to control, detect and monitor objects. The card readers used to swipe bus cards and the POS machines used in supermarkets are everyday applications of radio frequency technology [8].
2) SENSOR TECHNOLOGY
Sensors collect basic data and, according to specific rules, convert it into electrical signals or other forms of output information to meet the needs of data transmission, processing, storage, display, recording, and control [9], [10].

3) WIRELESS NETWORK TECHNOLOGY
Used for network communication. In the Internet of Things, communication between objects and between objects and people is inseparable from high-speed wireless networks, which can transmit large amounts of data [11].

4) ARTIFICIAL INTELLIGENCE TECHNOLOGY
Artificial intelligence is the study of how to use computers to simulate certain processes of human thinking and intelligent behavior (such as learning, reasoning, and design) [12]. In the Internet of Things, artificial intelligence technology is mainly responsible for analyzing the content that objects report and starting automatic processing by computers.

5) CLOUD COMPUTING TECHNOLOGY
The cloud in cloud computing actually represents the Internet [13]. Computing capability is obtained through the network: instead of relying on software installed locally or data stored on one's own hard disk, various operations are performed over the network [14].

B. INDUSTRIAL CLUSTER MEASUREMENT INDEX METHOD
According to the basic meaning of industrial agglomeration, this article selects the classic indicators of industrial agglomeration as a basis and constructs measurement indicators suited to this study, examining the level of manufacturing industry agglomeration in our city from the three perspectives of location quotient, industry concentration, and industrial spatial concentration [15].

1) LOCATION QUOTIENT
The location quotient was first proposed by Hargate and used in location analysis to measure the degree of concentration of a specific industry in a certain area. It is a relatively common method for judging the degree of agglomeration [16]. The location quotient indirectly reflects inter-regional economic linkages and structure by comparing the ratio of a given industry's indicator to that of all industries in the region with the corresponding ratio for the whole country. Commonly used measurement indicators include output value, sales income, number of employees, etc. [17]. The calculation formula is:

LQ = (c_x / Σ_i c_i) / (C_x / Σ_i C_i)

Among them, c_x represents the value of the chosen indicator (for example, output value) of industry x in a certain area, and C_x represents the corresponding value of industry x for the entire country [18]. Generally speaking, when LQ is greater than 1, it indicates that the degree of specialization of the industry in the region is higher than the national level. Indirectly, it also indicates that the degree of agglomeration is higher and that the industry has a comparative advantage; the industry or its products can expand outward or be exported, and the larger the LQ value, the higher the degree of agglomeration [19]. When LQ is equal to 1, it indicates that the degree of agglomeration of the industry in the region is the same as that of the whole country.
2) INDUSTRY CONCENTRATION
Among the various methods for measuring the level of industrial agglomeration, the industry concentration index is convenient because the required data are easy to obtain, reliable, and simple to compute [21]. It is therefore an important indicator of the degree of market concentration [22]. Manufacturing concentration refers to the market share of the main business of the largest industries within the manufacturing sector; total output, sales, number of employees, total assets, and similar measures can be used to represent this share of the entire manufacturing industry [23]. It is calculated as
IC_n = Σ_{x=1}^{n} X_x / Σ_{x=1}^{N} X_x,
where IC_n represents the market concentration of the n largest industries in the sector, X_x represents the relevant value of industry x (such as production, sales, number of employees, or total assets), and N is the total number of industries in the manufacturing sector [24]. IC_n can vividly reflect the level of manufacturing concentration and, by computing the concentration of the major manufacturing industries in the market, indirectly indicates the degree of manufacturing agglomeration. However, this indicator only describes the situation of the top industries; because manufacturing concentration is also related to the total number and distribution of manufacturers, it does not reflect the overall situation of the market and has certain limitations [25].
3) REGIONAL SPATIAL CONCENTRATION
The regional industrial spatial agglomeration index is developed on the basis of the industry concentration index and is used to express the output of the major industries in a region as percentages of the corresponding national industries [26]. A high percentage indicates that the region's largest industries have a competitive advantage in the country and reflects the strong overall strength of those industries in the region. A relatively small value means that the competitiveness of the region's most advantageous industries has not yet been cultivated nationally, and the region has to increase its support for key industries [27], [28]. The specific formula for regional industrial spatial concentration is
RSC_n = Σ_{x=1}^{n} g_x / Σ_{x=1}^{n} G_x,
where the numerator is the sum of the output values of the n industries that account for the largest share in the region, and the denominator is the sum of the output values of the corresponding industries in the whole country [29].
Of course, there are many other indicators for measuring industrial agglomeration, such as the H index and the regional Gini coefficient. In recent years, foreign scholars have developed many new methods for calculating industrial agglomeration, such as the agglomeration index of industrial clusters and dynamic agglomeration index methods. However, some of these statistical indicators are not used in China's statistical standards, their calculation is complicated, and data collection is very difficult. This article mainly uses the indicators introduced above to analyze, in a simple way, the level of manufacturing agglomeration in our city [30].
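The two concentration measures just described can be computed along the following lines. This is an illustrative sketch with invented output values rather than the study's data, and it assumes the sub-industries in the two lists are aligned.

```python
# Hypothetical illustration of IC_n and the regional spatial concentration index.

# Output values of manufacturing sub-industries in the region (100M yuan).
regional_outputs = [320.0, 250.0, 180.0, 90.0, 60.0, 40.0, 30.0, 20.0]
# National output values of the same sub-industries, in the same order.
national_outputs = [4200.0, 3900.0, 2500.0, 1800.0, 1500.0, 900.0, 700.0, 500.0]

def industry_concentration(values, n):
    """IC_n: share of the n largest sub-industries in the regional total."""
    ordered = sorted(values, reverse=True)
    return sum(ordered[:n]) / sum(ordered)

def regional_spatial_concentration(regional, national, n):
    """Output of the region's n largest sub-industries relative to the
    national output of the corresponding sub-industries."""
    pairs = sorted(zip(regional, national), key=lambda p: p[0], reverse=True)[:n]
    return sum(r for r, _ in pairs) / sum(c for _, c in pairs)

print(f"IC_4  = {industry_concentration(regional_outputs, 4):.3f}")
print(f"RSC_4 = {regional_spatial_concentration(regional_outputs, national_outputs, 4):.3f}")
```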
C. CALCULATION OF INDUSTRIAL AGGLOMERATION
To find a suitable indicator of the degree of industrial agglomeration, some statistical knowledge is needed [31]. Indicators describing the geographic concentration or agglomeration of industries should meet some basic statistical conditions: (1) they should be comparable across different industries, so that the results show which industries are more concentrated; (2) they should be comparable across different spatial scales, so that, for example, results calculated for a region and for the country are comparable; and (3) the estimated value of the index should be constant or unique. A few of the more commonly used indicators are introduced below [32].
1) HERFINDAHL COEFFICIENT
The Herfindahl coefficient is a comprehensive index for measuring the degree of industrial concentration and is a commonly used indicator in economics and among government regulatory agencies [33], [34]. The specific formula is
H = Σ_{i=1}^{N} (A_i / B)²,
where A_i represents the relevant value (output value or number of employees) of a certain industry in region i, B represents the corresponding value of that industry summed over all regions, and N is the number of regions [35]. It measures the absolute concentration of the industry: if economic activity is concentrated entirely in a single region, then H = 1; if economic activity is evenly distributed across the regions, then H = 1/N.
2) SPATIAL GINI COEFFICIENT
Krugman borrowed the concepts of the Lorenz curve and the Gini coefficient and put forward the spatial Gini coefficient, which is used to measure the degree to which an industry's distribution is concentrated in space. The specific formula is
G = Σ_i (a_i − x_i)²,
where G is the spatial Gini coefficient, a_i is the proportion of the industry's national employment located in region i, and x_i is the proportion of the country's total employment located in region i. Compared with the Herfindahl coefficient, the spatial Gini coefficient measures relative concentration. The value range of G is [0, 1]: the closer G is to 0, the more evenly the industry is distributed in space (if G = 0, the industry is distributed exactly evenly), and the higher the value of G, the more pronounced the concentration of the industry in particular regions.
3) SPATIAL AGGLOMERATION INDEX
In fact, a large spatial Gini coefficient does not necessarily mean that industrial agglomeration is present: if a certain industry in a certain area is dominated by a large-scale monopoly, the spatial Gini coefficient calculated from it will also be very large, even though agglomeration in space does not necessarily occur. To correct this distortion of the Gini coefficient, improvements were made on the basis of the Herfindahl and Gini coefficients, and a new agglomeration index, the EG (Ellison-Glaeser) index, was proposed to measure the degree of industrial agglomeration. It can be written as
y = (G − (1 − Σ_i x_i²) H) / ((1 − Σ_i x_i²)(1 − H)),
where y is the spatial agglomeration index, G is the industry's spatial Gini coefficient, H is the industry's Herfindahl coefficient, and x_i is the proportion of the region's employment in total national employment. The EG index is divided into three ranges: when y < 0.02, the industry has a low degree of agglomeration; when 0.02 ≤ y ≤ 0.05, the industry shows a moderate degree of agglomeration; and when y > 0.05, the industry has a high degree of agglomeration.
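A compact sketch of the three indices in this subsection follows. The regional and firm shares are invented, and the Ellison-Glaeser expression is given in one common published form; it is offered as an illustration rather than as the exact computation used in the article.

```python
# Hypothetical illustration of the Herfindahl coefficient, spatial Gini
# coefficient, and EG spatial agglomeration index described above.

a = [0.40, 0.25, 0.20, 0.10, 0.05]           # industry employment shares by region
x = [0.30, 0.25, 0.20, 0.15, 0.10]           # total employment shares by region
firm_shares = [0.30, 0.25, 0.20, 0.15, 0.10]  # firm-level shares within the industry

def herfindahl(shares):
    """H = sum of squared shares; 1 = fully concentrated, 1/N = evenly spread."""
    return sum(s ** 2 for s in shares)

def spatial_gini(a, x):
    """Krugman's spatial Gini: G = sum_i (a_i - x_i)^2."""
    return sum((ai - xi) ** 2 for ai, xi in zip(a, x))

def eg_index(a, x, firm_shares):
    """One common form of the Ellison-Glaeser index:
    y = (G - (1 - sum x_i^2) * H) / ((1 - sum x_i^2) * (1 - H))."""
    G = spatial_gini(a, x)
    H = herfindahl(firm_shares)
    adj = 1.0 - sum(xi ** 2 for xi in x)
    return (G - adj * H) / (adj * (1.0 - H))

y = eg_index(a, x, firm_shares)
band = "low" if y < 0.02 else ("moderate" if y <= 0.05 else "high")
print(f"H = {herfindahl(firm_shares):.3f}, G = {spatial_gini(a, x):.4f}, "
      f"EG = {y:.4f} ({band} agglomeration)")
```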
III. INDUSTRY CLUSTER EXPERIMENT IN SENSOR ENVIRONMENT
A. ESTABLISHMENT OF INTELLIGENT ENVIRONMENT SYSTEM
The overall structure of the smart sensor system is divided into three parts: the smart sensor terminal, the wireless smart monitor, and the user management system based on the Bootstrap protocol [36]. These three parts exchange data through a wireless network based on ZigBee technology and an RS-232 port based on the Bootstrap protocol. A low-power design philosophy runs through the whole system, from hardware selection and PCB design to the internal programs of the embedded processor. The composition of the intelligent sensor system is shown in Figure 1. The smart sensor terminal is responsible for collecting sensor data [37]; it processes the sensor data, saves them in internal memory, and then transmits the data to the wireless smart monitor over the wireless network. The wireless smart monitor is responsible for setting the monitoring threshold of each smart sensor terminal, regularly collecting the data of the sensor terminals, and saving the information in its data memory. The user management system is mainly responsible for collecting and analyzing the monitoring data of the wireless smart monitor [38] and for the software upgrades of the smart sensor terminals and the wireless smart monitor. According to its functions, the system can be divided into: sensor data collection, sensor data processing, smart terminal data collection, monitor data collection, smart sensor terminal software update, and smart monitor software update. The functional division of the system is shown in Figure 2.
B. ESTABLISHMENT OF THE THRESHOLD MODEL OF INDUSTRIAL AGGLOMERATION
Since the single-threshold model can easily be extended to the case of multiple thresholds with only slight changes [39], only the setting and testing of the single-threshold model are introduced here. The basic single-threshold model is set as
y_it = u_i + b_1 x_it I(Q_it ≤ e) + b_2 x_it I(Q_it > e) + ε_it,   (8)
where i = 1, 2, ..., N indexes individuals (companies, countries, regions), t = 1, 2, ..., T indexes time, Q_it is the threshold variable, e is the threshold value, y_it and x_it are the explained and explanatory variables, respectively, and I(·) is an indicator function, which can be regarded as a form of dummy variable: its value is 1 when the condition in brackets holds and 0 otherwise. The difference between the regimes separated by the threshold value is reflected in the regime-specific coefficients b_1 and b_2. Using piecewise-function notation, the model can be expressed more clearly as
y_it = u_i + b_1 x_it + ε_it if Q_it ≤ e, and y_it = u_i + b_2 x_it + ε_it if Q_it > e.   (9)
Performing ordinary least squares (OLS) regression on formula (9) for a given candidate threshold yields the residual sum of squares as a function of e,
S_1(e) = ê(e)'ê(e),   (10)
where ê(e) is the vector of OLS residuals. The threshold value we are looking for is the e that minimizes S_1(e):
e* = argmin_e S_1(e).   (11)
According to the general form of the threshold model, and in order to investigate the relationship between logistics industry agglomeration and economic growth, this paper uses the level of logistics industry agglomeration as the threshold variable to establish a threshold regression model. Because the number of thresholds is not known in advance, a single-threshold regression model is established first; if the number of thresholds turns out to be greater than one, adjustments are made on the basis of this model.
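The grid search implied by formulas (8)-(11) can be sketched as follows: for each candidate threshold e, the sample is split by the threshold variable, OLS is run within each regime, and the e minimizing the residual sum of squares S_1(e) is kept. The data here are simulated purely for illustration; this is not the authors' code.

```python
# Minimal grid-search sketch of single-threshold estimation on simulated data.

import numpy as np

rng = np.random.default_rng(0)
n = 500
Q = rng.uniform(0, 10, n)            # threshold variable (e.g., agglomeration level)
x = rng.normal(size=n)               # explanatory variable
true_e = 6.0
y = np.where(Q <= true_e, 0.5 * x, 2.0 * x) + rng.normal(scale=0.5, size=n)

def rss_for_threshold(e):
    """Residual sum of squares from regime-wise OLS at candidate threshold e."""
    rss = 0.0
    for mask in (Q <= e, Q > e):
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        resid = y[mask] - X @ beta
        rss += resid @ resid
    return rss

candidates = np.quantile(Q, np.linspace(0.05, 0.95, 91))  # trim the extremes
s1 = np.array([rss_for_threshold(e) for e in candidates])
e_hat = candidates[np.argmin(s1)]
print(f"estimated threshold e_hat = {e_hat:.2f} (true value {true_e})")
```

In practice the candidate set is usually trimmed, as here, so that each regime retains enough observations for stable estimation.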
The empirical single-threshold model is
G_it = u_i + a_1 lq_it I(lq_it ≤ e) + a_2 lq_it I(lq_it > e) + β_1 L_it + β_2 O_it + β_3 I_it + β_4 Lab_it + ε_it,   (12)
where i and t index the province and the year, respectively, G represents the level of economic development, u_i represents the individual effect, lq represents the level of logistics industry agglomeration, L the degree of government intervention, O the degree of economic openness, I the level of capital investment, Lab the level of human capital, and I(·) is the indicator function.
IV. ANALYSIS OF THE IMPACT OF INDUSTRIAL AGGLOMERATION ON THE REGIONAL ECONOMY UNDER THE INTERNET OF THINGS ENVIRONMENT
A. OVERALL ANALYSIS OF INDUSTRIAL ECONOMY
In 2018, the city achieved a regional GDP of 431.8 billion yuan, an increase of 9% over the previous year at comparable prices. The added value of the primary industry was 17.9 billion yuan, an increase of 4%; the secondary industry achieved an added value of 241.2 billion yuan, an increase of 8%, of which industrial added value was 210 billion yuan, an increase of 8.4%; and the added value of the tertiary industry was 177.9 billion yuan, an increase of 10%. The ratio of the three industries is 4:55:41, with industrial added value accounting for 49%. Based on the permanent population, per capita GDP was 61,000 yuan. The proportions of the three major industries in our city are shown in Figure 3. Statistics on the changes in the city's industrial GDP from 2014 to 2019 show that the city's industrial added value accounted for between 48% and 54% of regional GDP, which indicates that industry plays a vital role in the city's economic development. The specific results are shown in Figure 4. According to the data in the figure, the industrial output value of our city has not fallen below 48% of the total, and in 2019 it increased by 1% over the previous year; from this we can see that the development of industry will continue to grow. Over the whole year, the production and sales of industrial enterprises above designated size in Ningbo increased by 35% and 34%, respectively, the highest annual growth rates in the past eight years. The overall industrial economy showed a good development trend of high growth and steady progress. By industry, among the 30 major industries in the city, all except the non-metallic mining and dressing industry, whose output value declined, grew at varying rates, and the production growth rate of 10 industries exceeded the city's average level. Among the top 5 industries, the growth rates of four industries (electrical machinery, petroleum processing, general equipment manufacturing, and chemical raw materials and chemical products) were all above 40%. The growth rate of heavy industry was 12% faster than that of light industry, but the growth of light industry was more stable. The total output value of light industry was 340 billion yuan, a year-on-year increase of 27%, with a cumulative growth rate 3 percentage points higher than in the first quarter. The cumulative growth rate of heavy industry, affected by energy conservation measures and power restrictions, was 15 percentage points lower than in the first quarter.
B. ANALYSIS OF THE AGGLOMERATION OF THE LOGISTICS INDUSTRY UNDER THE INTERNET OF THINGS ENVIRONMENT
Because national statistics do not treat the logistics industry as a separate industry, it is difficult to find relevant statistics. According to incomplete statistics, the output value of the transportation, warehousing, and postal industries accounts for more than 80% of the output value of the logistics industry, so these three sectors can represent the logistics industry to a certain extent. Therefore, in actual practice, the transportation, warehousing, and postal industries are generally used to represent the logistics industry. Table 1 shows the recent trend in the country's total social logistics costs and their proportion of GDP. From the data in the table, we can see that in 2019 the total cost of social logistics in the country reached 1.1 trillion yuan, not much higher than in the previous year: the increase was only 2.01%, and the proportion of GDP fell by 0.7%, so logistics costs have decreased. Observing the trend in the proportion of total social logistics costs in GDP across the country over the years, we find that the value remained at about 19% from 2013 to 2017 and has declined since then. In terms of fixed-asset investment in the logistics industry, the overall investment volume has increased greatly over the years. Although the total amount in the central and western regions is relatively small, their annual growth rates exceeded those of the eastern region in 2014, 2015, 2018, and 2019; the largest increases came in 2015, at 31%, 59%, and 53%, respectively. In summary, the eastern region maintains its economic advantages as the more developed region, and its fixed-asset investment in the logistics industry remains large; compared with the central and western regions, its logistics network system is relatively complete. However, in several years the growth rate of fixed-asset investment in the logistics industry in the central and western regions exceeded that of the eastern region, forming a catching-up trend.
C. ANALYSIS OF THE IMPACT OF INDUSTRIAL CO-AGGLOMERATION ON THE REGIONAL ECONOMY
In order to further study the impact of co-agglomeration on the regional economy against the background of urban heterogeneity, this section adopts a threshold regression model and selects city size as the threshold variable for the empirical test. To ensure that the regression results are scientific and reasonable, we must first check whether a threshold exists. The threshold effect test can effectively determine whether there is a threshold effect and, if so, the number of thresholds; if the test fails, there is no threshold effect. Under the null hypothesis of no threshold, we use the "self-sampling" (bootstrap) method to test for one threshold and for two thresholds. The specific results are shown in Table 2. We use city size as the threshold variable to examine the threshold phenomenon in the co-agglomeration of the producer service industry and the manufacturing industry. The results show that a threshold phenomenon exists, with a triple threshold. To ensure the rationality and sensitivity of the test results, the likelihood ratio test chart is further used to select the thresholds. In order to understand more clearly how the threshold estimates and their confidence intervals are established, we plot the threshold likelihood ratios for regional economic development under different city-size conditions.
The criterion for selecting the threshold estimate is the value at which the likelihood ratio statistic equals zero; see Figure 6 for details. Through the bootstrap threshold test, the likelihood ratio test, and a thorough examination of the confidence intervals of the thresholds, we conclude that, in cities of different sizes, co-agglomeration has a double-threshold effect on regional economic development. The two threshold values of city size are relatively close, at 4 and 5; the corresponding actual values are 550,000 and 1.46 million, respectively.
V. CONCLUSION
This article analyzes the level of industrial agglomeration in our city using indicators such as the location quotient, industry concentration, and regional industrial spatial concentration. The results show that the level of manufacturing agglomeration in our city is relatively high: a double-digit number of industries have average location quotients above the agglomeration benchmark, higher than the national average, and thus enjoy a competitive advantage. At the same time, the use of the Cobb-Douglas production function verifies the positive impact of industrial agglomeration on regional economic development. Based on China's logistics technology and logistics requirements, and combining the basic theories and technologies of the Internet of Things, this article proposes building an intelligent logistics service system in the Internet of Things environment, implementing a large logistics alliance strategy, focusing on intelligent logistics information processing platforms, and integrating intelligent demand management and operation systems, so as to provide integrated solutions for the upgrading of China's logistics industry; it also analyzes logistics industry clusters and regional economies. In this paper, the indicators for measuring the regional economy are relatively simple: only GDP is selected as the indicator of the economic development of the provinces. It is hoped that more indicators, such as total factor productivity (TFP) and local industrial structure indicators, can be examined in future research. In addition, the analysis of the mechanism through which logistics industry agglomeration affects the regional economy is somewhat weak, and more theoretical grounding and actual data are needed as support. A future research direction may be to build on related theoretical models and, combining the characteristics of the logistics industry with the country's actual conditions, demonstrate this influence mechanism.
Prevalence and predictors of aortic dilation as a novel cardiovascular complication in children with end-stage renal disease
Background: Cardiovascular disease is the leading cause of death in children with end-stage renal disease (ESRD). Isolated aortic dilation (AD) is rare in children. We aimed to determine the prevalence of and the risk factors for AD in children with ESRD. Methods and study design: We reviewed the records of all ESRD patients followed at our institution from January 2007 to October 2012. AD was defined as a Z-score > 2 in the dimension of at least one of the following echocardiographic aortic parameters: annulus, root at the sinus, sino-tubular junction, or ascending aorta. Results: The records of 78 patients on dialysis and 19 kidney transplant recipients were available. Thirty patients (30.9%) had AD. Multivariate analysis revealed independent associations of AD with body mass index (BMI) Z-score (OR = 0.52, 95% confidence interval (CI): 0.35-0.78) and ESRD secondary to glomerular disease (OR = 4.58, 95% CI: 1.45-14.46). We developed a classification and regression tree (CART) model to identify patients at low vs. high AD risk. Our model classified 62 patients of the cohort (64%) as high- or low-risk, with a positive predictive value of 89% and a negative predictive value of 100%. Conclusion: Our data suggest that AD, as a possible marker of aortopathy and early aneurysm formation, is a novel and prevalent cardiovascular complication in children with ESRD. Glomerular disease and low BMI Z-score appear to be potent predictors. CART modeling helps identify high-risk children, potentially guiding decisions regarding targeted echocardiographic evaluations.
List of abbreviations: AD = aortic dilation; BMI = body mass index; BP = blood pressure; CART = classification and regression tree; CCHMC = Cincinnati Children's Hospital Medical Center; CKD = chronic kidney disease; CVD = cardiovascular disease; DBP = diastolic blood pressure; EDW = estimated dry weight; ESRD = end-stage renal disease; FSGS = focal segmental glomerulosclerosis; IDWG = interdialytic weight gain; iPTH = intact parathyroid hormone; SBP = systolic blood pressure; PD = peritoneal dialysis; UF = ultrafiltration
Background
Aortopathy, defined as pathological dilation of the aortic root and/or the ascending aorta, is extremely rare in healthy individuals but can have devastating outcomes, such as aortic dissection and aneurysm formation with subsequent rupture [1]. While cardiovascular disease (CVD), manifesting as advanced atherosclerosis, abnormal cardiac remodeling, and impairment of systolic and diastolic function, has been extensively studied in patients with chronic kidney disease (CKD) and end-stage renal disease (ESRD) [2], structural abnormalities like aortic dilation (AD) have rarely been documented in this population. The survival of children with ESRD continues to be undesirably low, and the most likely cause of death remains CVD [3,4]. Our institutional practice is annual echocardiography in all patients with ESRD, and we noted that aortic dilation was a frequently reported finding, especially in patients with glomerular disease. We therefore undertook a systematic, single-center review of the prevalence of and risk factors for AD in our ESRD population and tested the hypothesis that the prevalence of AD is higher among children with ESRD secondary to glomerular disease than among those with ESRD due to nonglomerular disorders.
Study population and methods
With approval from the Cincinnati Children's Hospital Medical Center (CCHMC) institutional review board, we retrospectively and cross-sectionally studied all patients who received ESRD care at our institution between January 2007 and October 2012. ESRD was defined as undergoing chronic hemodialysis (HD) or peritoneal dialysis (PD) or having received a kidney transplant. Additional specific inclusion criteria were age below 23 years, absence of congenital or structural heart disease, including bicuspid aortic valve, and absence of disorders known to be associated with a high incidence of aortic disease, such as Marfan, Turner, Loeys-Dietz, or Ehlers-Danlos syndromes [5].
Predictors
Demographic and clinical data were extracted from each patient's medical records. The clinical parameters of interest were collected as close as possible to the time of echocardiography and are shown in Table 1. Body mass index (BMI) was determined using the Quetelet index: BMI (kg/m²) = weight (kg) / height (m)². Systolic and diastolic blood pressure (BP) readings during the month prior to the date of echocardiography were extracted from medical records when available. BP status was defined on the basis of the Fourth Report on the diagnosis, evaluation, and treatment of high BP in children and adolescents [6]. BP was adjusted for body size for direct comparison across all age groups: the calculated mean BP values were divided by the age-, gender-, and height-specific 95th percentiles for both systolic (SBP) and diastolic BP (DBP) to determine the respective BP indices for each subject. SBP or DBP indices > 1.0 were defined as uncontrolled hypertension [7]. Dyslipidemia was defined on the basis of the Kidney Disease Outcomes Quality Initiative guidelines [8]. The dialysis parameters of interdialytic weight gain (IDWG), ultrafiltration (UF) volume, and Kt/V were determined from the dialysis records for the month prior to echocardiography. For children on HD, the collected data included pre- and posttreatment weights, estimated dry weight (EDW), and average UF for each of 12 consecutive HD sessions. Average IDWG was calculated as the difference between mean pretreatment weights and mean posttreatment weights from the preceding HD sessions. Average UF was calculated as the difference between mean pretreatment weights and mean posttreatment weights. Average excess weight was calculated as the difference between mean posttreatment weight and mean EDW. Normalized values for IDWG, UF, and excess weight were also calculated for each subject by dividing their mean values by the subject's EDW and then multiplying by 100 [7]. Children on PD received nightly treatments of continuously cycling PD. Average UF on PD was divided by body surface area to yield corrected UF for analysis. Data on dialysis adequacy, as estimated by double-pool Kt/V values for the month prior to echocardiography, were also collected.
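The following Python sketch restates the predictor definitions above (Quetelet BMI, BP index relative to the 95th percentile, and IDWG normalized to estimated dry weight). All numeric values are invented for illustration, and the percentile value would in practice come from the Fourth Report tables.

```python
# Hypothetical illustration of the predictor definitions used in this study.

def bmi(weight_kg, height_m):
    """Quetelet index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bp_index(mean_bp, bp_95th_percentile):
    """BP index = mean BP / age-, sex-, and height-specific 95th percentile.
    An index > 1.0 is classified as uncontrolled hypertension."""
    return mean_bp / bp_95th_percentile

def normalized_idwg(pre_weights_kg, post_weights_kg, edw_kg):
    """Average interdialytic weight gain as a percentage of estimated dry weight."""
    mean_pre = sum(pre_weights_kg) / len(pre_weights_kg)
    mean_post = sum(post_weights_kg) / len(post_weights_kg)
    return (mean_pre - mean_post) / edw_kg * 100.0

# Invented values for a single illustrative patient:
print(f"BMI: {bmi(38.0, 1.42):.1f} kg/m^2")
print(f"SBP index: {bp_index(128.0, 122.0):.2f} (uncontrolled if > 1.0)")
print(f"Normalized IDWG: {normalized_idwg([40.1, 40.4, 40.0], [38.9, 39.1, 38.8], 38.5):.1f}% of EDW")
```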
Echocardiograms
Only one echocardiogram study was analyzed for each enrolled patient. When multiple echocardiograms were available, we chose: 1) the most recent study for patients on dialysis who had not received a kidney transplant, 2) the latest pretransplant study on dialysis for patients who subsequently underwent kidney transplantation, and 3) the first posttransplant study for patients with no available pretransplant study. With this strategy, we aimed to capture the largest possible effect of ESRD on aortic pathology while avoiding, as much as possible, any potential improvement in aortic dilation after transplantation. Echocardiograms were retrospectively identified and had been obtained at rest using a Vivid 7 GE ultrasound imaging system (Milwaukee, WI, USA) or a Philips iE33 xMATRIX ultrasound system (Andover, MA, USA). Testing was performed by pediatric registered sonographers. Images were obtained in standard views according to the American Society of Echocardiography's guidelines [9,10]. Echocardiographic data were obtained in the parasternal long-axis view using an inner-edge-to-inner-edge technique during systole and included aortic measurements at the sinus of Valsalva, annulus, sino-tubular junction, and ascending aorta. Inter- and intraobserver variability for these measurements in our laboratory is less than or equal to 5%. The Z-scores of the aortic dimensions were calculated using the regression models of the Boston Children's Hospital echocardiography laboratory [11]. In these models, body surface area (BSA) is the only variable used in the equation and was calculated using the Haycock formula [12]. AD was recognized when the Z-score of at least one of the aortic dimensions was greater than 2 [13]. Study data were collected and managed using REDCap® [14], a secure web-based application designed to support data capture for research studies.
Statistical analysis
Categorical data are presented as counts and percentages and were analyzed with Fisher's exact test. Continuous data are presented as means and standard deviations (SDs) and were analyzed with Student's t-test. p-values less than 0.05 were considered statistically significant. Two multivariate logistic regression models were created to identify predictors of AD. Model 1 includes the variables with p-values ≤ 0.15 in the univariate analysis. Model 2 includes the clinical variables that are known to be associated with CVD in addition to the variables included in model 1. We developed risk prediction models using recursive partitioning (classification and regression tree, CART) for the presence or absence of AD. CART creates nonparametric discriminating trees by dividing the cohort into subgroups representing high or low risk of AD based on available clinical variables. The statistical analyses were conducted using version 2.13.0 of the R statistical package and SAS version 9.3 (SAS Institute Inc., Cary, NC, USA).
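The Z-score logic described in the echocardiography section above can be sketched as follows. The Haycock BSA formula is the standard published one; the aortic-dimension regression coefficients (INTERCEPT, SLOPE, SD), however, are placeholders invented for illustration, since the actual Boston Children's Hospital regression models are not reproduced here and need not be linear in BSA.

```python
# Sketch of BSA-based aortic Z-scores; regression coefficients are placeholders.

def haycock_bsa(weight_kg, height_cm):
    """Body surface area (m^2) by the Haycock formula."""
    return 0.024265 * (weight_kg ** 0.5378) * (height_cm ** 0.3964)

# Placeholder model: predicted aortic dimension (cm) as a function of BSA.
INTERCEPT, SLOPE, SD = 1.0, 1.2, 0.15   # hypothetical values for illustration only

def aortic_z_score(measured_cm, bsa_m2):
    """Z = (measured dimension - dimension predicted for this BSA) / population SD."""
    predicted = INTERCEPT + SLOPE * bsa_m2
    return (measured_cm - predicted) / SD

bsa = haycock_bsa(weight_kg=38.0, height_cm=142.0)
z = aortic_z_score(measured_cm=2.9, bsa_m2=bsa)
print(f"BSA = {bsa:.2f} m^2, aortic root Z-score = {z:.1f} (dilated if > 2)")
```

Because the Z-score is computed against the dimension predicted for a given BSA, a smaller body size does not by itself produce a larger Z-score; this point is taken up in the Discussion.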
Demographics
Of the 133 patients with ESRD during the study period, 130 met the inclusion criteria, although 33 of them had only incomplete echocardiographic records, leaving data from 97 children to be analyzed. The mean age of the population was 11.5 (6.5) years (range 0.2-22.7). The majority of subjects were male (56.7%) and Caucasian (77.3%) and had nonglomerular disease (55.7%). Nineteen patients had no pretransplant echocardiograms available and were therefore studied posttransplant. For the remaining 78 patients, echocardiograms obtained on chronic dialysis were available and were included; 34 of these patients had been on HD only, 41 on PD only, and 3 had switched modalities. Even though 80 subjects (82.4%) were prescribed antihypertensive medications, hypertension (defined as SBP or DBP indices > 1) was still present in 66 (82.5%).
Incidence of aortic dilation and echocardiographic details
AD was found in 30 of 97 (30.9%) patients. The mean age of the AD group was 12.3 (6.1) years. The mean values of the aortic dimensions in the patients with and without AD are shown in Table 1. The aortic dimensions showed significant differences between the two groups at all measured levels, but the difference was most apparent at the sinus. When considering a Z-score > 2 as dilated, 25 of 30 (83.3%) patients had dilation at the sinus, 13 of 30 (43.3%) at the sino-tubular junction, and 22 of 30 (73.3%) at the ascending aorta. Only 1 child had a dilated annulus. Mild aortic regurgitation was observed in 10 patients in the AD group (30%), and all of them except 1 had a dilated ascending aorta (Table 1). Both groups were comparable with regard to age and sex, but children with AD had a lower BMI than children without AD: 18.4 (3.7) vs. 23.3 (7.15) kg/m² (p = 0.001). This difference remained significant when we compared the BMI Z-scores (-0.7 (1.7) vs. 0.9 (1.3), p < 0.0001). Interestingly, 7 (23.3%) children with AD were malnourished, defined as a BMI Z-score < -2, compared to none in the group without AD. While more than half (54/97, 55.7%) of our patients had nonglomerular ESRD, AD was found predominantly in those with glomerular ESRD: 46.5% of children with glomerular disease (20/43) had AD, compared to 18.5% (10/54) of children with nonglomerular disease (p = 0.006). Specifically, focal segmental glomerulosclerosis (FSGS) tended to be more frequent in the AD group than in the group without AD (33.3% vs. 14.9%, p = 0.056).
Univariate Analyses
Uncontrolled hypertension was more common in the AD group on univariate analysis. While children with AD had SBP and DBP similar to those without AD, both the SBP and DBP indices were significantly higher in the AD group. There were no significant differences between the AD and non-AD groups with regard to dialysis modality or other dialysis parameters (Table 1).
Multivariate analyses (Table 2)
Model 1 of our logistic regression analysis shows that both BMI Z-score and the presence of glomerular disease are associated with AD. An increase of the BMI Z-score by 1 is associated with a decrease in AD risk (OR = 0.52, 95% CI: 0.35-0.78). The presence of glomerular disease is a significant independent predictor of AD (OR = 4.58, 95% CI: 1.45-14.46). In contrast to the univariate analysis, the association between AD and the SBP and DBP indices is not significant in model 1. Logistic regression analysis using model 2 shows similar results, as shown in Table 2.
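Before turning to the CART results below, the following sketch shows how such a recursive-partitioning model might be fit. The study's trees were built in R/SAS; this version uses scikit-learn on synthetic data that only loosely mimics the reported pattern, so the printed splits are illustrative rather than a reproduction of Figure 1.

```python
# Illustrative CART fit on synthetic data (not the study's patient data).

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 97
bmi_z = rng.normal(0.0, 1.5, n)
dbp_index = rng.normal(0.95, 0.15, n)
ipth = rng.gamma(2.0, 150.0, n)

# Synthetic outcome loosely mimicking the reported associations: lower BMI
# Z-score and higher iPTH/DBP index raise the probability of AD.
logit = -1.0 - 1.2 * bmi_z + 2.5 * (dbp_index - 1.0) + 0.004 * ipth
ad = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([bmi_z, dbp_index, ipth])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=7, random_state=0)
tree.fit(X, ad)
print(export_text(tree, feature_names=["BMI_Z", "DBP_index", "iPTH"]))
```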
The results of the CART are shown in Figure 1. This model identifies three important clinical variables that can distinguish subpopulations at high versus low risk for AD: BMI Z-score, DBP index, and intact parathyroid hormone (iPTH) level. As described before, 30 of 97 (31%) of our patients with ESRD had AD. CART classified two subgroups within this ESRD cohort with an at least two-fold increased risk for AD (i.e., > 62%) and three low-risk subgroups at no more than half the overall risk for AD (i.e., < 15.5%). The high-risk subgroups include: 1) patients with a BMI Z-score ≤ -2.0, with 7 of 7 (100%) patients having AD, and 2) patients with a BMI Z-score in the range of -2.0 to +0.1 and an iPTH level ≥ 210 µg/dL, with 9 of 11 (82%) patients having AD. The low-risk subgroups include: 1) patients with a BMI Z-score > 0.1, with 9 of 60 (15%) patients affected, 2) patients with a BMI Z-score ≥ 0.1 and a DBP index < 1 (where no AD was found among 34 patients), and 3) patients with a BMI Z-score > 0.1, a DBP index ≥ 1, and an iPTH level < 390 µg/dL (where no AD was found among 10 patients). Among the whole cohort of 97 patients, the CART model could classify 62 (64%) patients as being at high or low risk for AD. For the patients classified as high- or low-risk, the CART model represents an efficient predictive tool with a positive predictive value of 89% (95% CI: 64-98%) and a negative predictive value of 100% (95% CI: 90-100%).
Discussion
Survival in children with ESRD has increased over the last 20 years, but their standardized mortality rate remains very high. CVD is the leading cause of death in adolescent and young adult patients with ESRD, and annual CV mortality rates are elevated several hundred-fold in young adults with long-standing CKD [4,15]. Left ventricular hypertrophy, accelerated ischemic heart disease, premature dilated cardiomyopathy, aortic valve calcification, increased arterial stiffness, and arterial intima and media thickening are the most frequently observed CV alterations in young adult survivors of childhood-onset ESRD [16]. Vascular abnormalities in children develop in parallel with cardiac abnormalities early in the course of CKD and become more severe as ESRD is reached [17]. The literature describing the prevalence of and risk factors for thoracic AD in a young ESRD population is sparse [18]. Based on our findings, advanced kidney disease appears to represent a novel acquired etiology of thoracic aortopathy in children. AD is uncommon in healthy children and adolescents [13,19,20,21]; however, it occurs in 2.8% of children with hypertension [22] and in association with congenital abnormalities like bicuspid aortic valve [23] and syndromic or nonsyndromic genetic disorders like Turner, Ehlers-Danlos, Marfan, Alagille, and Beals syndromes [1,24]. Children with ESRD represent a unique population in which multiple risk factors contributing to the development of aortopathy can coexist. These risk factors include volume overload, chronic anemia, anorexia, hypertension, and the presence of arteriovenous fistulas. Furthermore, ESRD patients on chronic HD experience significant hemodynamic alterations because of abrupt changes in intravascular volume and myocardial stunning [25,26] secondary to frequent myocardial ischemia. These alterations might trigger degenerative remodeling of the myocardium and vasculature. Thoracic aortopathy is often asymptomatic until an acute and catastrophic complication like dissection takes place [1,20]. In the aforementioned conditions known to be associated with aortopathy, however, such complications may be seen at smaller aortic diameters than expected or even with normal aortic dimensions [1]. While it is presently unclear whether patients with ESRD and associated AD are categorically at increased risk for dissection, there are multiple case reports of aortic dissection in patients with autosomal dominant polycystic kidney disease [1,13,27] or cystinosis [28]. These reports raise at least some concern that patients with kidney disease may indeed have associated aortic pathology with a potential for dissection. We found a 30.9% incidence of AD in our center's ESRD population, compared with reference incidences of 2.3% in healthy children and 2.8% in hypertensive children [22].
We also found that low BMI Z-score is the most influential risk factor for AD in all univariate, parametric multivariate, and nonparametric multivariate analyses. This relatively low BMI possibly reflects nutritional deficiencies, even though our patients with AD had mean BMI Z-scores still above the threshold used to define malnutrition, however, all malnourished children in our cohort had AD, and malnutrition is known to negatively impact CVD risk and mortality in both children and adults with CKD. Moreover, the inflammatory state associated with malnutrition is directly linked to a high risk of atherosclerosis known as the malnutrition-inflammation atherosclerosis complex [29,30]. Our findings may therefore suggest that malnutrition with associated microinflammation and oxidative stress could also be an important determinant of aortopathy in pediatric and young adult ESRD. To date, we are not aware of clinical data showing such an association between malnutrition and vascular abnormalities. However, we believe that our observation of such an association between low BMI and AD may be valid, rather than merely reflecting the notion that thin persons tend to have larger aortas because we calculated Z scores using regression equations to determine where a specific aortic dimension lies in the normal distribution relative to BSA for that dimension. This approach in calculating Z scores adjusts for effects of BSA on the size of aorta, so that in normal subjects there is no residual relation between BSA and the size of aorta. Accordingly, lower BMI with subsequent lower BSA does not result in higher Z scores in aortic dimension unless there is an independent association. Any relationship of the Z scores with BMI should therefore independently reflect the impact of BMI on the size of the aorta. Our patients with AD were more likely to have glomerular disease and hypertension. At first glance, this association could be explained by the tendency of patients with glomerular disease to retain more fluid, leading to pre-and postcardiac overload. However, fluid-related variables, such as IDWG and normalized UF percentage, were not significantly higher in patients with AD compared to those without. This might suggest that both glomerulonephritis and AD share a common pathological pathway. Along these lines, Adedoyin et al. [31] also reported that other cardiac complications are relatively common in children with CKD secondary to glomerular disease, which may have immunological and inflammatory impact not only on the kidney but also the cardiovascular system. Furthermore, both the findings of Adedoyin et al. [31] and our findings may specifically implicate underlying FSGS as a risk factor for cardiovascular morbidity (cardiomyopathy and congestive heart failure in the former study and AD in ours). While our analysis only revealed a statistically insignificant trend for FSGS as a risk factor for AD, a definite conclusion cannot be made using our data because the number of patients with FSGS was relatively small. In addition to the possibility that these findings may be related to the fact that FSGS is the most common glomerular disease leading to ESRD at a young age, they also suggest that the immune or genetic mechanisms responsible for the development of FSGS may additionally negatively impact the cardiovascular system. Our data suggest that patients with AD are more likely to have uncontrolled or elevated BP than patients without AD in univariate analysis but not in multivariate analysis. 
Elevated BP has been a well-studied risk factor for AD in hypertensive adults [32,33,34] and children [22]. The most likely reason for hypertension not being a significant factor in multivariable analysis is the presence of more "powerful" predictors in our cohort, specifically malnutrition and the presence of glomerular diseases. In one study, the reported prevalence of AD in hypertensive children was 2.8% (0.5% higher than the reported prevalence of 2.3% in healthy children). Our cohort has an AD prevalence of 30.9%, indicating other factors more significant than elevated BP contributing to AD. Moreover, left ventricular hypertrophy markers, such as left ventricular mass indices, are similar between patients with and without AD in our study. Aortopathy, beginning with dilatation of the aorta and leading to aneurysm formation, is a subclinical condition that can be diagnosed only by using comprehensive aortic interrogation by noninvasive imaging strategies such as echocardiography. Such surveillance is likely not performed routinely in most centers. Our nonparametric CART is a highly predictive model based on clinical risk factors that are very likely available to physicians caring for ESRD patients and might be helpful in the identification of a high-risk population that will benefit from targeted close observation for developing AD. Despite of the increased awareness of the need to establish multidisciplinary care of adults and children with CKD [35], recent publications have highlighted the suboptimal quality of care for children with chronic illness [36] and for adults with CKD [37] or kidney transplant [38]. Similarly, patients in pediatric programs do not routinely receive the optimal quality of screening for cardiovascular complications such as routine echocardiograms [39]. This is one likely reason why our data are somewhat difficult to compare. Another reason is that, even at other pediatric centers where regular echocardiographic assessments are performed in the ESRD population, AD could be under-detected because measuring aortic dimensions is not part of the standard imaging protocol. As such, aortopathy may not be recognized unless the aortic root morphology is already severely abnormal and associated with aortic insufficiency. Our study has some significant limitations. First, this is an observational retrospective single center report. Second, our study describes a clinical observation, i.e., AD, that is not necessarily associated with a pathophysiologic process, i.e., aortopathy. Third, given the cross-sectional design of the study, longitudinal outcomes, and complications of AD cannot be evaluated. Lastly, gold-standard diagnostic measures, such as cardiac MRI and ambulatory blood pressure monitoring, were not available to further validate our data. Despite these limitations, we believe that our observations are significant enough to warrant further research into the occurrence of aortopathy in children with ESRD. Such further research could include prospective noninvasive surveillance with detailed assessment of the aortic root (as lack of such focus led to the exclusion of a number of echocardiograms from our retrospective, cross-sectional study) and concomitant measurements of inflammatory markers. These efforts could supplement and expand the findings presented here, especially because, to our knowledge, no prior research has evaluated the prevalence of AD among children with ESRD. 
Describing this prevalence represents the first significant step towards an improved understanding of this novel manifestation of CVD in young individuals with ESRD and thus at very high risk for CV morbidity and mortality.
On Intergenerational Commitment, Weak Sustainability, and Safety
This article examines sustainability from a policy perspective rooted in environmental economics and environmental ethics. Endorsing the Brundtland Commission stance that each generation should have undiminished opportunity to meet its own needs, I emphasize the foundational status of the intergenerational commitment. The standard concepts of weak and strong sustainability, WS and SS, are sketched and critiqued simply and intuitively, along with the more recent concept of WS-plus. A recently proposed model of a society dependent on a renewable but vulnerable resource (Barfuss et al. 2018) is introduced as an expositional tool, as its authors intended, and used as a platform for thought experiments exploring the role of risk management tools in reducing the need for safety. Key conclusions include: (i) Safety, in this case the elimination of risk in uncertain production systems, comes at an opportunity cost that is often non-trivial. (ii) Welfare shocks can be cushioned by savings and diversification, which are enhanced by scale. Scale increases with geographic area, diversity of production, organizational complexity, and openness to trade and human migration. (iii) Increasing scale enables enhancement of sustainable welfare via local and regional specialization, and the need for safety and its attendant opportunity costs is reduced. (iv) When generational welfare is stochastic, the intergenerational commitment should not be abandoned but may need to be adapted to uncertainty, e.g., by expecting less from hard-luck generations and correspondingly more from more fortunate ones. (v) Intergenerational commitments must be resolved in the context of intragenerational obligations to each other in the here and now, and compensation of those asked to make sacrifices for sustainability has both ethical and pragmatic virtue. (vi) Finally, the normative domains of sustainability and safety can be distinguished: sustainability always, but safety only when facing daunting threats.
The Sustainability Commitment Is an Intergenerational Obligation
Framing sustainability so as to respect its "forever" dimension requires an intergenerational perspective. The Brundtland Commission concluded that sustainability would be achieved if each generation bequeathed to the next an undiminished opportunity to meet its own needs [1], a stance foreshadowed by Solow [2]. Economists have interpreted needs in terms of welfare opportunities, which should be undiminished [3]. If this commitment is honored by each generation in its turn, the welfare of an unending sequence of generations will be sustained. A generation might be tempted to bequeath a little less in order to consume a little more, and the circumstance of its presence on this earth provides the opportunity but not the moral authority to do so. The intergenerational commitment is a moral and ethical obligation. Several notions of the content of the bequest have been suggested [4,5]. In its most direct formulation, known as weak sustainability (WS), bequests of non-diminishing inclusive wealth (IW) would permit non-diminishing welfare. Alternatively, sustainability might be viewed as all, or mostly, about sustaining natural resources per se, a position known as strong sustainability, SS [6]. One SS approach might seek to identify a few truly critical resources to be sustained as essential complements to WS.
At the other end of the SS spectrum, it might be argued that nothing short of a world with its planetary boundaries, PBs, intact is adequate. All of these formulations share a commitment to intergenerational equity: each generation should have equal opportunity to enjoy equal wellbeing-which WS formulations treat as welfare, while SS approaches may honor a variety of value-motivations including respect for nature per se [7,8]-and the ideal future pathway is the one that would provide each with the highest possible wellbeing consistent with intergenerational equity [9]. The foregoing frames the focus of this article: the economic-theory foundations of WS; the tension between WS and SS re the extent to which sustaining particular natural resources is essential to sustainability; the tension within SS as to motivations for sustainability (is there an obligation to nature per se, independent of human concerns?); the ethical foundations of the intergenerational commitment upon which sustainability depends. What Should We Aspire to Sustain? Weak Sustainability-Sustaining Welfare by Bequeathing Non-Diminished Inclusive Wealth WS approaches have in common the idea that what should be sustained is human welfare-technically, a money-metric measure of satisfaction, in which ability to pay has more influence than it perhaps deserves [10]-which can, in principle, be gained from many different combinations of goods, services, and amenities. Sustaining welfare makes intuitive sense when we cannot predict what new processes and products future technology will bring, and what particular goods and services future people will prefer. Welfare per se is not easy to pass forward through a sequence of generations but wealth, broadly defined, can be transmitted. Welfare can be sustained if each generation bequeaths non-diminishing inclusive wealth-including natural, built and manufactured, financial, human, social, and political capital-to its successor generation [11][12][13][14][15]. Interpreting WS as an intergenerational commitment to bequeath non-diminishing IW is consistent with the World Commission's [1] concept of sustainability, which urges endowing future generations with the means to meet their own needs. Wealth accounting frameworks have been developed and implemented. For example, the World Bank's Genuine Savings concept has been implemented in its Adjusted Net Savings and IW accounts, and has informed the United Nations Environmental Program IW accounts, which have been estimated and updated annually for more than 200 countries, in many cases for more than 40 years [16]. IW accounting includes the major forms of capital, implicitly assuming a considerable degree of substitutability among them. WS raises a variety of issues in principle and in practice. In principle, WS depends on strong assumptions about the substitutability of goods and services in consumption and of different kinds of capital and resources in production [2]. The conceptual literature underpinning WS has been abstract from the beginning. Theorems concerning the necessary and sufficient conditions for WS, and theorems re intergenerational equity, tend to be highly mathematical, heavily caveated, and seldom as conclusive as might be desired. 
For example, the Solow-Hartwick rule-that net depletion of exhaustible resources should be compensated by reinvestment of the economic rents generated by depletion [2,[17][18][19]]-establishes only a negative result; under the specified conditions, WS cannot be achieved if reinvestment falls short [20,21] Furthermore, the rules and procedures for IW accounting, a process of reducing different kinds of capital to a common metric, are guided by economic reasoning but, as Asheim [22] demonstrated, ultimately are arbitrary in some respects, and therefore, non-unique (see also [23]). Practical concerns with WS include unease with the maintained assumption that financial capital is and will continue to be readily substitutable for other kinds of capital, especially natural resources; maintained technological optimism; insufficient attention to uncertainty. (i) Substitutability is a technical matter [24] and the WS assumption reflects technological optimism-the belief that scarcity is and will continue to be the mother of invention. (ii) Typically, WS formulations elide issues of technology, which augments production, and population, which increases demand, by noting that if productivity and population grow at the same rate, the algebra of WS works out [25]. (iii) While lack of clairvoyance is palpable in the WS discourse regarding, for example, future technology and the content of future consumption bundles, explicit modeling of uncertainty in future welfare is rare in the foundational literature. This perhaps reflects the origins of the WS discourse, which had its roots in concerns that people are inclined to be greedy, consuming more than is sustainable, thereby condemning distant future people to predictable hardship. Strong Sustainability-Sustaining Particular Resources The essence of SS is in sustaining resources that might be critical to humans thriving, economically, morally, and/or spiritually [26]. With this as its mandate, SS is broadly defined. Motivations may include: • Sustaining the earnings humans can extract from nature as in the safe minimum standard of conservation, SMS [27][28][29][30] and the safe operating space, SOS, concept that emerges from the planetary boundaries, PBs, literature [31,32]; • Avoiding asymmetric risk, in some cases by taking pre-emptive action, as in the precautionary principle, PP [33][34][35][36] • Preserving nature for its amenity value to humans or for its own sake [37]. SS makes intuitive sense too, for resources that are in some way special: essential, i.e., non-substitutable in production; unique and highly valued, i.e., non-substitutable in consumption; and/or subject to asymmetric risks of, e.g., ecosystem collapse. SS raises a considerable variety of issues, which can be summarized in two broad categories, both of which raise concerns about opportunity costs, i.e., the potential rewards foregone by insistence on SS constraints. SS will diminish welfare if (i) assumptions of non-substitutability in production and/or consumption turn out to have been too pessimistic, and/or (ii) future generations have a different view of what is unique and highly valued [38]. At the practical level, while even strong SS proponents such as Daly [39] agree that applying SS to all natural resources would be "absurdly strong sustainability" there is no broadly acceptable unique rule for bounding the set of resources and entities to which SS constraints should be applied. 
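For reference, the Solow-Hartwick reinvestment condition discussed above is often written in the following textbook form; this is a standard rendering under the usual model assumptions, not an equation reproduced from the works cited.

```latex
% Hartwick's rule: reinvest the rents from exhaustible-resource depletion
% in produced capital so that genuine savings remain non-negative.
\dot{K}(t) \;=\; \bigl[\, p(t) - c'\!\bigl(R(t)\bigr) \bigr]\, R(t)
% where K(t) is produced capital, R(t) the extraction rate of the exhaustible
% resource, p(t) its price, and c'(R) the marginal extraction cost; under the
% specified conditions this keeps consumption from declining over time.
```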
In the real world, uncertainties abound, which motivate SS but make its application more challenging. While SS is intended to draw a firm line in the sand, its application in an uncertain world needs to be flexible and open to revision in light of experience ( [40]). Exhaustible and Renewable Resources The distinction between exhaustible and renewable resources is significant to both the WS and SS discussions. Exhaustible resources cannot be replaced in kind, and recycling and exploration can extend the sustainability time-horizon, but not forever. WS into the distant future requires that substitutes in production and/or consumption be found. Renewable resources can regenerate under favorable conditions and the challenge is mostly to manage harvest and regeneration to meet sustainability goals. Yet, it is likely that sustainability in the large will ask more of renewable resources than just sustaining themselves; the ultimate limits on exhaustible resources suggest that renewables will need to compensate in some way for the depletion of exhaustibles. There is a folk theorem to the effect that WS can be assured if harvest of renewable resources can be expanded sustainably to compensate for depletion of exhaustible resources [41]. The implication is that SS might be motivated by concerns about exhaustible and/or renewable resources but, even when exhaustibles provide the motivation, sustainability prescriptions frequently, perhaps typically, involve restraints on the depletion of exhaustibles and compensating enhancements of renewable resources. WS-Plus The opportunity costs of SS are such that it is hard to imagine a viable sustainability policy that requires SS across the board. However, nagging concerns remain that, with even the most optimistic plausible assumptions about substitutability, certain resources truly are critical. Therefore, a sustainability criterion with pragmatic appeal to policy makers and practitioners is WS-plus [42,43]: the intergenerational bequest should maintain non-diminishing IW while including adequate stocks of a few truly critical natural resources. This approach is broadly consistent with the axiological approach of [44] and the Brundtland Commission's observation: "At a minimum, sustainable development must not endanger the natural systems that support life on earth" [1] (p. 42). A Simple Model of a Stylized but Informative Case Barfuss et al. [45], hereafter BDLK, offer a stylized model, which they motivate as not only simplifying but offering "a deeper understanding more complex models might miss" (p. 2). My intention here is to introduce their model as an expositional tool, as its authors intended, and use it as a platform for thought experiments exploring the role of risk management tools in a reassessment of sustainability and safety paradigms. Then, I return to a more detailed treatment of the intergenerational commitment to sustainability. BDLK postulate a renewable resource subject to a tipping point (RRTP)-a case of considerable interest to ecology and ecosystem management [46,47]. In their model: • There is a decision maker (the agent) who seeks a reward on behalf of a group of resource-dependent people. The reward could come in welfare as economists define it or, in more isolated circumstances, perhaps food to store for the winter. • The resource is at any time in one of two possible states, Prosperous or Degraded. The agent chooses a level of exploitation-High-pressure exploitation generates greater rewards, but is possible only in the P state. 
• In any period under H, there is a chance that the resource collapses, i.e., tips to the D state where exploitation yields zero reward.
• Under Low-pressure (L) exploitation in state P, the reward is smaller, but there is no chance of resource collapse.
• In state D, exploitation is impossible, but there is a chance that the resource will revert to the P state.
The notation is as follows:
t - A time period in (0, ..., T), such that r_t, π_t and s_t are r, π and s, respectively, in period t.
d - Probability of collapse from the H action in the P state in a given period (i.e., a measure of system vulnerability).
p - Probability of recovery in the D state in a given period (i.e., a measure of system resilience).
δ - Discount factor expressing the agent's time preference, 0 ≤ δ ≤ 1; future rewards are discounted entirely at 0, while there is no discounting at 1.
E(PV(r)) - Expected present value of the time-stream of rewards.
BDLK consider three paradigms for managing the resource:
• Optimality, O, in which the expected present value of the time-stream of rewards is maximized;
• Sustainability, S, in which the expected present value of the time-stream of rewards is maintained ≥ PV(r_min);
• Safety, Sa, which avoids all risk by maintaining the cautious L policy at all times, so that reward never falls below r_l.
They conclude, as announced in their title, that O guarantees neither S nor Sa, and that "the sweet spot" is the set of solutions OSSa that satisfy all three paradigms.
How Does This Model Relate to the Standard Sustainability Concepts?
Because the objective is reward to humans, the S criterion resembles WS in at least that respect. On the other hand, WS permits the destruction of a particular resource if future welfare nevertheless can be sustained, a situation not addressed by these authors. Early WS formulations addressed exhaustible resources, and sustainability prescriptions focused on disciplining human impatience and greed rather than managing risk [2,19]. For BDLK, the focus is on managing a renewable resource that responds to exploitation with stochastic collapse and recovery. Sa obviously bears a relationship to SS in that it would preserve the resource in the P state. Like the SMS, it is targeted at conserving the resource but motivated by ensuring a continuing stream of rewards for people. Like an inflexible PP, it avoids all risk of collapse, sometimes at substantial opportunity cost. BDLK associate Sa with SOS, the safe operating space as defined in the PBs literature. The set of outcomes that are both S and Sa is interpreted as SAJOS, the safe and just operating space [48]. However, while there is a strong normative case for including sustainability among the criteria, the SAJOS concept of justice goes far beyond S.
How General Is This Model?
RRTP is a special case among sustainability problems, but nevertheless a case of substantial and increasing interest. A little introspection suggests that the model can be applied, with appropriate modification, to a broader range of sustainability issues. It would accommodate the case of an exhaustible resource with uncertain reserves quite readily-reserves might be exhausted with probability d and exploration might augment them with probability p-and, with a little more gymnastics, could address the case where there is a d chance of resource exhaustion and a p probability of discovering a technology that enables the substitution of more plentiful resources.
It might be objected that resource exhaustion is seldom an all-or-none process in a given time-period-and d could be expressed as a probability distribution to resolve that issue-but discoveries, whether of new resource deposits or new technologies, tend to be discrete but uncertain events. The unit of analysis could be a firm, a local or regional forest or fishery, or planet Earth; the agent could be a farmer, forester or fisher, a forest or fishery manager, or a benevolent global manager; the beneficiaries of paradigm and policy are human always, but could be a farm family, a resource-dependent community, a regional or national citizenry, or humanity as a whole; the reward could be denominated in monetary terms as appropriate for a sophisticated economy or in physical units of product as might be appropriate for an isolated subsistence firm or community. There is more than a hint that the authors believe their conclusions apply across this broad range of scales: "Our model is deliberately stylized, thereby applicable across multiple cases and scales" [45] (p. 2). The intergenerational setting of many WS formulations is missing, replaced by an agent serving stakeholders, and all parties, implicitly, are very long-lived. The BDLK model is not presented as a decision tool for managers and policy makers-rather, it is intended to clarify sustainability issues and concepts and communicate them simply to non-specialist readers. For that reason, they do not address issues of model specification, parameterization, calibration, and validation. They assume implicitly that the model contains everything the agent needs to know, and the agent knows it all. This is fine for their purposes, conceptual analysis and communication, but does not address the concerns of Brock and Tan [40], who address the roles of science and optimization methods in policy and management for a very uncertain and sketchily understood world.
Thought Experiments to Elucidate the Implications, Bound the Scope, and Test the Generality of the Simple Model
To dig a little deeper into the implications of the simple model, consider the following thought experiment. First, set δ = 1, i.e., the agent has neutral time-preference, because we know already that positive time-preference undermines intergenerational equity and ultimately, sustainability itself [49]. Note that Asheim et al. endorse discounting future prospects but only as much as would be required for intergenerational equity, because the present generation consumes too little unless it discounts for expected increases in productivity over time (see also [50]). Potential changes in productivity are not addressed in the BDLK model. Then, set r_min at the subsistence level, in order to focus on conditions under which a failure of sustainability would threaten human survival. With these settings, examine the effects of varying r_l, r_h, d, and p. Beginning with this simple structure, I reconsider the role of safety given a sustainability constraint; examine the impact of scale on sustainability prospects; elaborate on the intergenerational commitment; introduce uncertain welfare and consider its implications for sustainability; and explore some interactions between intergenerational and intragenerational equity concerns.
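To make the thought experiment concrete, here is a minimal simulation sketch of the BDLK two-state process under the H and L policies with δ = 1, so that the comparison reduces to mean per-period reward over a finite horizon. The parameter values (R_H, R_L, R_MIN, D, P, the horizon T, and the number of replications) are illustrative assumptions chosen for exposition, not values taken from BDLK, and the sketch is not their model code.

```python
import random

# Illustrative parameter values: assumptions for this sketch, not taken from BDLK.
R_H, R_L, R_MIN = 1.0, 0.4, 0.3   # reward under H, reward under L, subsistence level
D, P = 0.05, 0.20                 # collapse probability under H; recovery probability in D
T, RUNS = 200, 5000               # horizon and number of Monte Carlo replications

def reward_path(policy, rng):
    """One realization of the reward stream; policy is 'H' or 'L', applied whenever the state is P."""
    state, rewards = "P", []
    for _ in range(T):
        if state == "P":
            if policy == "H":
                rewards.append(R_H)
                if rng.random() < D:   # high-pressure exploitation risks tipping the system
                    state = "D"
            else:
                rewards.append(R_L)    # low-pressure exploitation is safe but yields less
        else:
            rewards.append(0.0)        # the Degraded state yields nothing
            if rng.random() < P:       # stochastic recovery back to Prosperous
                state = "P"
    return rewards

def mean_per_period_reward(policy):
    """Average reward per period across replications (delta = 1, so no discounting)."""
    rng = random.Random(42)
    return sum(sum(reward_path(policy, rng)) / T for _ in range(RUNS)) / RUNS

for policy in ("H", "L"):
    avg = mean_per_period_reward(policy)
    verdict = "meets" if avg >= R_MIN else "falls below"
    print(f"policy {policy}: mean per-period reward {avg:.3f} ({verdict} the subsistence level {R_MIN})")
```

With numbers like these, H typically dominates on average even though individual periods under H can yield zero reward; that gap between the average and the worst case is exactly where the risk-management discussion below begins.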
The Role of Safety in the BDLK Model
First, observe that all of the variables in play-r_l, r_h, d, and p-influence E(PV(r)), which matters for optimality and sustainability, but the reward for safety (which requires the L action whenever a chance of harm is present, i.e., whenever d > 0 and p < 1) is influenced only by r_l. BDLK's "sweet spot" is OSSa, the set of solutions that satisfy all three criteria-optimality (O), sustainability (S), and safety (Sa). I argue that, in this stylized context, O is always good, but it is not good enough if it is unsustainable. More formally, the intergenerational commitment requires non-diminishing IW. Therefore, the question is whether O should be constrained by S, Sa, or both. Sa in effect sets reward at r_l forever, eliminating opportunity for greater reward as a direct consequence of eliminating risk. If the reward given the L action is r_l >> r_min, OSSa promises a future where people live well and risk-free. However, this outcome is plausible only in circumstances so well-endowed that living is good even under the cautious policy-it is good to be born rich! If r_l is little more than r_min, mere subsistence, the cautious policy holds the human beneficiaries in a poverty trap, and the H action may be a tempting gamble. Even worse, if r_l < r_min, the situation is safe but unsustainable. With low r_l, Sa promises bare subsistence or worse. Constraining O by S provides sustainability by insisting that E(PV(r)) ≥ PV(r_min). Together, OS maximizes E(PV(r)) subject to E(PV(r)) ≥ PV(r_min), which is all the society needs in the way of sustainability given the BDLK formulation and my thought experiment settings. We need to dig a little deeper to find cases where anything is gained by adding a Sa constraint to OS when time-preference is neutral. Here, I address two issues submerged just beneath the surface in the BDLK analysis: the reliance on an implicit risk management strategy-savings to cushion collapse of the resource-that underpins their S paradigm; and the issue of scale, which I argue is central to the justification or otherwise for adopting the Sa paradigm in a particular case.
Risk Management
Observe immediately that collapse and recovery are stochastic, and E(PV(r)), the expected present value of the time-stream of rewards, plays a major role in the optimality and sustainability criteria. This has important implications. First, E(PV(r)) has the virtue of focusing the agent on a time-stream of future rewards. The discount factor, 0 ≤ δ ≤ 1, expresses the agent's time preference-when δ = 1, time-preference is neutral, i.e., the agent does not discount future rewards. The safety criterion makes no reference to time-preference, but it is effectively neutral-safety forever would sustain reward r_l forever. Second, with imperfect foresight, future rewards obviously are expectations in vernacular terms-what we anticipate as opposed to what we eventually realize. However, expected value has rigorous meaning in statistics, i.e., the probability-weighted average of all the possible values of future rewards. To optimize and/or sustain the expected value of rewards implies risk-neutrality-indifference between an outcome x for sure and a bet with a range of possible outcomes having a probability-weighted average of x-and certainty-equivalence given repeated trials and deep pockets. Suppose that in period t, the system collapses and r_t falls to zero. If current period consumption is limited to current period rewards, the society perishes in period t.
It follows that E(PV(r)) is meaningful for optimality and sustainability only if mechanisms exist to maintain consumption should the system collapse. Implicit in the model, the O and S paradigms use savings for risk management. Optimizing or sustaining, as the case may be, E(PV(r)) is effective in the long run only if consumption is limited to E(r_t) in each period. Therefore, if the H policy is in effect, rewards are r_t^h ≥ E(r_t) ≥ r_t^min-otherwise, the society is bound to perish-and savings are r_t^h − E(r_t) in each period until a collapse occurs. At that point, a society with accumulated savings of n·r_t^min can tighten its belt and survive n consecutive periods without recovery. More generally, O and S as defined here are predicated implicitly on adequate and successful risk management. Risk management provides several mechanisms [51] including:
• Self-protection, i.e., expenditure of effort and resources to reduce the chance of harm by reducing d, and/or increasing p;
• Self-insurance, which may include savings to help maintain consumption ≥ r_t^min in all periods, even if r_t falls below r_t^min, and diversification to reduce dependence on a single vulnerable resource;
• Purchased insurance, i.e., a contract that promises compensation in the event of specified harms.
In the real world, some risk exposure usually remains. In our simple case, there are two obvious possibilities. (i) Risks may extend beyond simple stochasticity, to include asymmetric risk, ambiguity, and/or unknown unknowns, in which case risk neutrality is a hazardous stance [52,53]. (ii) Even with simple stochasticity, the BDLK model assumes certainty-equivalence, which requires many trials and deep-enough pockets. These caveats may be upended by a run of bad luck-in the simple case, a too-long sequence of failures to recover. To summarize, certainty-equivalence is an attribute of the BDLK model but not necessarily the real world. Invoking the real world introduces additional categories of uncertainty, e.g., model specification uncertainty and parameter uncertainty, which unravel the BDLK implicit assumption that the world, the model, and the agent are all on the same page, and motivate questions about the roles of optimization, planning, and adaptive management in policy and management [40]. Where risk management falls seriously short of certainty-equivalence-i.e., when the threat of harm is inordinate relative to the potential benefit-there may be a case for a Sa constraint. Three elements to assessing whether a particular risk reaches the threshold for invoking a safety remedy have been suggested [33]-the evidence of threat, the magnitude of worst-case harm, and the expected efficacy of the best available remedy.
Scale: Within Limits, Increasing Scale Weakens the Case for Safety
BDLK suggest that their reasoning and their case for S and Sa rules are applicable at any scale. To the contrary, I shall argue that scale matters in theory and empirically. Moreover, increasing scale tends, if anything, to reduce the need for Sa, and we have seen that Sa often entails a non-trivial opportunity cost. Scale, in this discussion, has at least three dimensions: size in terms of geographic area; diversity of natural, built, and human capital; and complexity of organization. A fourth dimension-openness to trade in raw materials, goods and services, and mobility of capital and people-substitutes for scale in that it allows smaller jurisdictions to enjoy many of the benefits of scale.
All four dimensions of increasing scale tend to reduce sustainability risk. (i) Larger geographic scale increases the likelihood of greater diversity in resources and human capital. For one simple example, increasing diversity in weather-related exposure reduces overall risk. (ii) A society that is more diverse in natural resources, human and social capital, and product mix has greater ability to manage risks internally-including the ability to cushion harmful outcomes for particular subregions, firms, and people-which encourages specialization and increases welfare. (iii) Increasing complexity encourages emergent responses to challenging conditions and is likely to increase resilience. (iv) Relatively open borders increase scale, dramatically for small nations and regions, by permitting jurisdictions of a given size to operate at a larger scale via trade in raw materials, goods and services; cross-jurisdictional investing and borrowing; relatively unrestricted movement of people across jurisdictional boundaries. Larger scale enhances specialization both within a jurisdiction and, given relatively open borders, among jurisdictions, thereby increasing the level of welfare that is sustainable. In terms of risk management, larger scale increases opportunities for diversification, facilitates stronger savings and credit markets, and increases the feasibility of transfers of money and resources to regions stricken by natural disasters. More generally, increasing scale weakens the case for safety at every level. For the farm, forest, or fishery, the need for safety is diminished when society is sufficiently large and diverse to cushion failures in individual resources, firms, and sectors. For society, there is less need for safety when similar resources elsewhere may substitute for critical domestic resources. The limits suggested in the subheading are of at least two kinds. First, planetary boundaries, PBs, suggest inflexible constraints at the global level. However, that is far from the whole story-it is important to recognize the substantial opportunities for risk management and internal adjustments within the global community [54]. A similar point has been made about global hunger [55]: solutions are not so much about increasing total food production as about needed reorganization throughout the world food system. Second, global willingness to maintain openness to trade and migration matters. Retreat by nations and regions from open-border policies would reduce global capacity to cushion regional and local populations suffering localized resource crises. In summary, increasing scale tends to reduce the need for the Sa paradigm and increase the viability of the S paradigm as a sustainability constraint in OS. Given that S resembles weak sustainability, it is important to note that increasing scale does not diminish the role of S or WS for a jurisdiction that aspires to thrive by (among other things) importing critical resources, because such a strategy works only for jurisdictions that can afford to pay for them. There may well remain a role for Sa, and its near-analogs strong sustainability, the safe minimum standard of conservation, and the precautionary principle. Sa might be invoked to address critical raw materials at the global level, or those that are nationally critical in the event of global retreat from trade-friendly policies; to preserve iconic natural entities; to avoid or mitigate inordinate risks. 
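The scale argument can be given a simple quantitative flavor by treating a larger jurisdiction as a pool of smaller resource systems with imperfectly correlated shocks. The sketch below assumes independent regional failures and purely illustrative numbers (R_GOOD, R_BAD, R_MIN, SHOCK_PROB); it is meant only to show the direction of the effect, not to estimate any real-world probability.

```python
import random

# Purely illustrative numbers; independence of regional shocks is an assumption of this sketch.
R_GOOD, R_BAD, R_MIN = 1.0, 0.0, 0.3   # per-region reward when the resource holds or fails; subsistence
SHOCK_PROB = 0.10                      # chance that any one region's resource fails in a period
RUNS = 20000

def prob_below_subsistence(n_regions, rng):
    """Estimate P(average reward across pooled regions < R_MIN) under independent shocks."""
    failures = 0
    for _ in range(RUNS):
        total = sum(R_BAD if rng.random() < SHOCK_PROB else R_GOOD for _ in range(n_regions))
        if total / n_regions < R_MIN:
            failures += 1
    return failures / RUNS

rng = random.Random(7)
for n in (1, 2, 5, 20, 100):
    print(f"{n:>3} pooled regions: estimated P(average reward < subsistence) = {prob_below_subsistence(n, rng):.4f}")
```

The estimated probability of falling below subsistence drops quickly as regions are pooled because, with independent shocks, most regions rarely fail at once; strongly correlated shocks - a planetary boundary being crossed, or a general retreat from open borders - would blunt precisely this effect, which is the caveat noted above.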
The Intergenerational Commitment with Uncertain Welfare
The intergenerational commitment obligates each generation in its turn to bequeath to the next an undiminished opportunity to meet its own needs, i.e., non-decreasing welfare opportunities or, equivalently, non-diminished IW. The present generation has, by virtue of its presence, the circumstantial power to consume and destroy, and to save, conserve, invest, and build, thus influencing the prospects of future generations for good or ill. That is, each generation has the power to increase its own welfare by reducing its bequest of IW. This fact lies at the core of the sustainability question, which can be framed as: how, and how much, should we restrain and redirect our exercise of our circumstantial power in order to sustain future prospects? Furthermore, the sustainability question is inherently an ethical question. What, if any, moral authority does a transitory generation enjoy, by virtue of its presence, to act in ways that diminish the prospects of future generations [56] or, conversely, what, if any, moral obligations limit a transitory generation's exercise of the power conferred by its presence?
The Illegitimacy of Generational Greed
A major contingency faced by future generations is whether they will exist and have the resources to thrive, and we who presently exist have non-trivial power over that. Our power over the future is asymmetric but only circumstantial, which provides a weak thread on which to hang a claim of moral priority over future generations. It follows that our presence gives us transitory power over the future, but that is a matter more of fact than of value and in no way undermines our obligation to the future. It might be objected that "all (human) lives are precious" endows the present with moral authority, i.e., the fact that we are here and the belief that our lives are precious might endow us with legitimacy, for example, to take care of ourselves first. "All lives are precious" is a non-trivial moral claim-many would elevate it to the status of a principle-and it suggests a justification for self-preservation. However, it implies that future lives also matter, even contingent future lives, and their self-preservation is justified in their turn. Parfit's non-identity problem [56]-that the commitment to future generations might be undermined by concerns that they might be unlike us, perhaps culturally-is also about how we use the circumstantial power conferred by our presence, and provides no moral foundation for stinting on the intergenerational commitment. What, then, are our obligations to provide opportunity for future generations? Perhaps self-preservation of future generations is justified only to the extent allowed by whatever bequest they receive from their forebears (a luck-of-the-draw sort of thing)? We can dismiss this claim on ethical grounds because their inheritance is not entirely a matter of luck, in that the present generation has the power, but not the ethical mandate, to decide self-servingly to stint on the bequest provided. The fact that future generations are contingent in several dimensions does not undermine the intergenerational commitment-instead, I would argue, it enhances its salience.
Uncertain Welfare
Suppose welfare in each period, w_t, is stochastic with non-decreasing E(w). Then, some generations will experience more welfare than E(w) and some will experience less.
If the question of survival arises in the case of an unlucky generation, (how) might the intergenerational commitment be modified? First, consider generational self-interest. The more tenuous the prospects facing a generation, the stronger is its case for prioritizing its survival above its bequest. All subsequent potential generations also have a self-interest in the survival of an embattled present generation, because the first generation to perish ends the game for all subsequent generations. Now, consider the import of the intergenerational commitment. A bedrock principle is that each generation is valued, even if it turns out to be the last. We can say this much without triggering any concerns regarding interpersonal comparison of utilities. If generations are defined in binary terms-the generation exists in its turn, or it does not-no comparison of utilities is involved, because each additional generation is added at the end of the existing sequence. It is only if we argue that generations per se do not matter-what matters are people and their welfare-that Rawls' difficult questions arise [57] along the lines of: How should fewer generations living well, or generations of fewer people living well, be valued relative to more generations, or generations with more people, living more precariously? Therefore, each additional generation is valued, and with neutral time-preference, more generations are preferred to fewer. If w_t approaches the subsistence level, the unlucky generation in t is not merely justified in tending to its own survival first; its obligation to future generations is to survive if at all possible, because if it fails, the game ends there. More generally, suppose that bequeathing a little less to the next generation would materially increase a generation's chances of surviving and producing a successor generation. Is this-an increase in generational self-protection at the expense of bequest-a chance that a penurious generation should take? It is tempting to postulate that intergenerational equity would be attained if each generation had an equal chance of survival-and that is surely plausible if the game starts anew with each generation. Yet, to the contrary, the first generation to perish ends the game for all subsequent generations. Therefore, the ethical argument tilts strongly toward survival of each generation in its turn. However, a generation's decision to increase its chances of survival entails potential costs and potential benefits in terms of viability and welfare for subsequent generations. Therefore, a generation's moral authority to pursue its own survival is not unlimited. A commonsense general form of the sustainability commitment in an uncertain world is that each generation in its turn is obligated to make a good-faith effort to endow the next generation with non-diminishing IW. Why a good-faith effort? In a certain world, there would be no need to deviate from an absolute obligation. However, in an uncertain world, a generation may be forced to choose between an undiminished bequest and its own survival. The good-faith caveat provides guidance in making that choice. A good-faith bequest from an unlucky generation may be smaller than the non-diminishing E(w) benchmark, if justified by increased chances of generational survival and evidence of moral consideration of the trade-offs involved. The good-faith caveat has relevance for lucky generations, too.
Their good fortune brings them more than E(w) and their survival is not at issue, so they have an obligation to use at least a part of the excess, i.e., a part of w_t − E(w_t), to increase their bequest to help get future generations back onto the non-decreasing E(w) path.
What Can We Learn from the BDLK Model re Intergenerational Obligations?
In the BDLK model, uncertainty is at the core of the sustainability question-not a tweak that can be added at the cost of additional complication-so I consider the implications of extending the model to the intergenerational context. The BDLK world already has stochasticity when d > 0 and p < 1; we could add more uncertainty in several ways, e.g., by making the parameter values d and p uncertain, by making rewards stochastic such that E(r_h) = r_h and E(r_l) = r_l, and by leaving the agent to discover these values by trial and error. The BDLK world does not have explicit generations, nor does that world have explicit savings, yet savings are essential if E(PV(r)) is to play a decisive role in policy choice. Therefore, I consider the implications of introducing distinct generations and savings explicitly.
Generations
If a single time-period t represents a generation, BDLK's future-regarding framework would be undermined. Given S, Sa would have strong appeal to the self-interest of the present generation because they would have no opportunity to recover from a collapse, and potential future generations would agree because it would improve their chances of existing. A more interesting formulation would define a generation as lasting for multiple but not unlimited time-periods. This would motivate future-regarding behavior in the present generation. In the BDLK model, concern for the distant future can be motivated by assuming a very long-lived agent with neutral time preference. With explicit generations, we can invoke the intergenerational commitment to sustainability.
Savings
We have seen that in the BDLK model, the O and S paradigms implicitly use savings for risk management. If the H policy is in effect, rewards in t are r_t^h, where E(r_t^h) ≥ E(r_t) ≥ r_t^min-otherwise, the society is bound to perish-and r_t^h − E(r_t) is added to savings in each period until a collapse occurs. If, at any time, the resource is in the D state, savings are exhausted, and the draw from the p distribution is no-recovery, the game is over for the present and potential future generations. How do savings relate to the good-faith bequest? In the BDLK "sweet spot", an ideal time trajectory of outcomes is one in which the resource is maintained in the P state forever. This surely suggests that if distinct generations had been modeled, the undiminished intergenerational bequest would be P always (see also [58]). However, with my amendments to their framework-the multi-period lifespan of each generation and the explicit role of savings, given the stochasticity of the system-the opportunity cost of Sa suggests the possibility that the resource might be bequeathed in the D state. A generation receiving the resource in the D state and carryover savings of at least r_t^min will be able to continue the game for at least one more period, whereas a bequest of n·r_t^min in savings would support n more periods of frugal living in the absence of recovery. Therefore, the value of the bequest is not all about the condition of the resource. Accumulated savings matter, too. Each generation is managing the resource and savings for itself and the future.
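The role of savings can be illustrated by extending the earlier simulation sketch: under the H policy, the society consumes a fixed amount each period, banks the surplus while the resource is in the P state, and draws the buffer down to subsistence during D spells. All parameter values, including the planned consumption level and the initial (bequeathed) savings, are illustrative assumptions; the point is only that the size of the inherited buffer, and not just the state of the resource, determines how long a run of bad luck can be survived.

```python
import random

# Illustrative assumptions, extending the earlier sketch with an explicit savings buffer.
R_H, R_MIN = 1.0, 0.3        # reward under H while Prosperous; subsistence consumption
D, P = 0.05, 0.20            # collapse and recovery probabilities
CONSUME = 0.6                # planned consumption per period while the resource is Prosperous
T, RUNS = 200, 5000

def survives(initial_savings, rng):
    """Run one H-policy path; return False if consumption ever has to fall below subsistence."""
    state, savings = "P", initial_savings
    for _ in range(T):
        if state == "P":
            savings += R_H - CONSUME          # bank the surplus over planned consumption
            if rng.random() < D:
                state = "D"
        else:
            if savings < R_MIN:               # buffer exhausted: subsistence cannot be met
                return False
            savings -= R_MIN                  # tighten belts and live off savings
            if rng.random() < P:
                state = "P"
    return True

rng = random.Random(11)
for bequest in (0.0, 1.0, 3.0):               # savings inherited at the start of the generation
    rate = sum(survives(bequest, rng) for _ in range(RUNS)) / RUNS
    print(f"initial savings {bequest:.1f}: share of runs surviving {T} periods = {rate:.3f}")
```

Read against the argument above, a bequest of the resource in the D state accompanied by ample savings can keep the game going, whereas the same resource state with an empty buffer may not.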
The H policy provides a chance of higher reward that can be used for consumption and/or to build savings. The amount of savings in hand at the beginning of a period is an important consideration in deciding policy-with substantial savings, the penalty for collapse is less daunting, which makes the H policy more attractive. Because BDLK did not explore the implications of scale, there remains a relatively big role for the Sa paradigm, as befits an isolated society dependent on a single resource. Nevertheless, explicit consideration of savings makes a difference: the need for Sa is reduced when the accumulation of savings is larger. What is the good-faith non-diminishing bequest in a BDLK world with stochasticity, savings, and distinct generations with multi-period lifespans? We know that certainty-equivalence alone is not good enough when survival is at stake, and savings are essential to maintain consumption in the event of ecosystem collapse. Yet, without additional value-assumptions addressing risk-attitudes, we cannot know the rate at which savings compensate for the degraded condition of the resource. We are left with only some rough guidelines: accumulated savings and the condition of the resource both count in evaluating a bequest; a hard-luck generation should be given some leeway to enhance its survival prospects; and a fortunate generation has a good-faith obligation to increase its bequest as well as its consumption. To this point, the discussion has emphasized the role of savings in cushioning welfare shocks, which raises a fair question-what about borrowing? With increasing scale, a broader range of horizontal financial transactions becomes feasible, including borrowing from regions and sectors not so hard-hit, and inter-regional and/or intersectoral transfers such as disaster relief. In reality, several generations populate the world at any given time, which permits vertical borrowing and gifting; for example, the young need working capital and may receive gifts and/or loans from their elders, and the elders eventually will need care provided by the young. Overlapping generations models more nearly capture the scope of intergenerational transactions and transfers [59], but I do not pursue that avenue here.
Generalizing Beyond the BDLK Model
With increasing scale come many or all of the following-greater geographic area and variety of natural resources, larger and more diverse human populations, more complex economies and societies, and greater openness to trade of raw materials, goods and services, financial and human capital. Other things equal, increasing scale reduces the need for Sa, for reasons already familiar. The thought experiments with the BDLK model have revealed the inherent difficulty of specifying the non-diminishing bequest when it has two quite different components. In the real world, the intergenerational bequest has several categories of components-natural capital, built and manufactured capital, savings net of depreciation, human capital, social capital, and political capital-each of which is really a vast collection of things that are not quite comparable without invoking rules for comparing. Unfortunately, the standard WS theorems cannot be generalized to cases with multiple categories of capital [22], leaving us to formulate, construct, evaluate, and implement IW accounting rules that are condemned to remain arbitrary to some degree.
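The aggregation problem just noted can be made concrete with a toy inclusive-wealth calculation: each capital category is multiplied by a shadow price and summed, and the sustainability verdict is simply whether the total is non-diminishing between two accounting dates. The capital categories, quantities, and shadow prices below are invented for illustration; the instructive point is that changing the (inevitably somewhat arbitrary) shadow prices can flip the verdict even though the underlying physical changes are identical.

```python
# Toy inclusive-wealth (IW) comparison between two accounting dates.
# All capital stocks and shadow prices are invented for illustration only.

stocks_t0 = {"natural": 100.0, "produced": 80.0, "human": 120.0}
stocks_t1 = {"natural": 85.0, "produced": 95.0, "human": 128.0}   # nature depleted; other capital built up

def inclusive_wealth(stocks, shadow_prices):
    """IW as the shadow-price-weighted sum of capital stocks."""
    return sum(shadow_prices[k] * quantity for k, quantity in stocks.items())

price_sets = {
    "shadow prices weighting nature lightly": {"natural": 1.0, "produced": 1.0, "human": 1.0},
    "shadow prices weighting nature heavily": {"natural": 2.5, "produced": 1.0, "human": 1.0},
}

for label, prices in price_sets.items():
    iw0 = inclusive_wealth(stocks_t0, prices)
    iw1 = inclusive_wealth(stocks_t1, prices)
    verdict = "non-diminishing" if iw1 >= iw0 else "diminishing"
    print(f"{label}: IW {iw0:.1f} -> {iw1:.1f} ({verdict})")
```

This is one reason the accounts described next devote so much effort to estimating defensible shadow prices, and why some residual arbitrariness is unavoidable.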
The weak sustainability concept of inclusive wealth is a well-known example of a conceptual accounting system for comparing and aggregating different kinds of capital, and the adjusted net savings (ANS) and inclusive wealth (IW) accounts developed by the World Bank and UNEP-which began with a core group of countries in 1970 and now are published annually for more than 200 countries-are well-known and well-regarded applications. Yet, the IW accounts are not uncontroversial. They have conceptual limitations consistent with those of mainstream welfare economics-primarily the influence of ability to pay on value-and limitations that reflect incomplete and, in some cases, unobservable data. Examples of the latter include: economic values for environmental damage, which still tend to be implausibly small despite improvements in data and methods over the years; and human, social, and political capital, which are not observable directly, but calculated indirectly from the unexplained residual in estimated equations relating output to measurable inputs. Nevertheless, ANS and IW are among the most credible attempts at measuring and monitoring the evolving sustainability status of most of the world's countries. The need for Sa is reduced in BDLK models when savings and intergenerational bequests are considered, and the role of safety in more general models is reduced at larger scales. Nevertheless, there remains a role for safety policies in support of WS, i.e., what we have called WS-plus, where WS is supported by SS provisions to protect critical resources and iconic natural entities, and to avoid inordinate risks.
Intragenerational Burden-Sharing When Welfare Prospects Are Distributed Unevenly
To this point, the discussion has assumed homogeneity and commonality of interest within generations. To expand the dimensions of ethical sustainability policy, consider the case where these assumptions are challenged.
On Heterogeneous Prospects
Our generation includes rich and poor households, regions, and nations. In this context, "all lives are precious" has clear moral implications beyond self-preservation. At the very least, the well-off are morally obligated to avoid conscious acts that reduce the welfare of the badly-off [60]. Among the implications is compensation for poor people who are asked to bear burdens in service of sustainability that benefits a broader population. Furthermore, it is argued frequently that the well-off are obligated to maintain a safety net, i.e., provide decently for those unable to care for themselves. In addition, perhaps, the well-off are obligated to improve the prospects of the badly-off when feasible and, ideally, the increment would take the form of investments expected to deliver a long time-stream of welfare improvements.
On Commonality of Interests
Many well-off people living in temperate zone conurbations are deeply concerned about the need to conserve critical natural capital, and perhaps treasured tracts of nature, for the future. Globally, land-clearing and cropping for export, as well as to meet local needs, have resulted in continuing losses of wildlife habitat, while wildlife populations have diminished by 60 per cent between 1970 and 2014 [61]. Wilson [62] has argued that "nature needs half", i.e., fifty percent of the earth's land surface should be conserved or restored as necessary, to serve the needs of nature. Much of the critical and threatened natural capital is located in lower-income regions, where it provides sustenance for local residents.
This suggests a conflict of interests, but arrangements leading to mutual gains may be feasible.
On the Intersection of These Concerns
As it happens, many of the richest tracts of nature and many of the poorest people are co-located in the tropics, which implies serious conflict of interest between relatively well-off temperate zone people and poor tropical residents earning meager livings from the bounty of nature. Now, we have a moral case for (over-)compensation of poor tropical residents asked to bear the burden of conserving critical natural capital for the future, and a practical case for incentivizing these people to perform the desired conservation. To elaborate a little, suppose that our generation includes rich and poor, and our resources include IW and nature; and that the future will also need IW and at least some nature. Some groups of poor people have, by virtue of their location and history, effective control of abundant stocks of nature that the world needs now and in the future, and it commonly is proposed that they restrain their exploitation thereof in order to endow the future. Insistence that these people bear the burden of restraint may impose intragenerational unfairness in pursuit of intergenerational fairness. Farmer and Randall suggest three principles to guide resolution of these kinds of conflicts [27]:
(1) The existence and prospects of present humans are valued.
(2) The existence and prospects of future humans are valued.
(3) Moral agents have intragenerational obligations to each other, such that commitments to provide opportunities for the future must be negotiated in the context of intragenerational obligations to each other, here and now.
How might these principles be applied, and what difficulties might arise in application? Imagine a well-funded global conservation authority: a "Common Heritage Fund" taxing world trade at 1% would generate revenues sufficient to fund the preservation of substantial portions of the world's unique ecoregions [63]. Such a fund makes sense for reasons of ability and fairness: local people often lack the capacity and political/governance capital to solve the problem and, in any event, they may not be the ones responsible for creating the problem [64]. How might the authority set conservation priorities? Armstrong [65] points out that it is important to be clear about whether a conservation/restoration proposal seeks first and foremost to defend the intrinsic value of nature, or the nature-related interests of humans. In practice, these two priorities often point in different directions regarding the location and size of tracts to be protected. Given that restoration and conservation are both costly activities aimed at augmenting and securing environmental services by maintaining and increasing the stock of natural assets, it is important also to consider the complementarities and trade-offs among prospective restoration and conservation projects. If the motivations are mostly instrumental and prudential, as implied by the notion of natural capital, it makes sense to consider the potential benefits and costs. Often, both of these considerations-substantial benefits and relatively modest costs-point in the same direction to particular tracts, many of them tropical, in the less developed world. Yet, tilting the burden of conservation toward the tropics would severely constrain the options open to the poor-who are, as it happens, least responsible for the massive loss of biodiversity [64].
Fairness would require compensation for loss of income, which does not seem too difficult; for loss of autonomy, which may be harder; and for relocation of people, and perhaps communities, which would be still more difficult. The form of compensation really matters-ideally, it should provide a foundation for steadily increasing autonomy, social cohesion, and welfare over time. The obligations of temperate zone populations do not end with subsidizing conservation in the tropics. Temperate zone ecosystems are valuable, too, and often degraded, which raises issues of restoration as well as conservation. Wilson [62] argues for a patchwork of strongly protected areas including all of the main distinctive ecoregions of the world. There is an overwhelming case for including viable tracts of the temperate and polar zones in the nature conservation portfolio, despite the higher costs. Most of the issues in prioritization apply here, too, including fairness to those of whom we demand the greatest adjustments.
Some Tentative Steps toward Win-Win Solutions
A considerable variety of governments, NGOs, and international authorities have been developing and field-testing policy instruments designed to incentivize and compensate local and regional resource-dependent populations to protect, restore if necessary, and preserve tracts of nature. Many of these instruments involve payments for ecosystem services (PES), where payment, ideally, is predicated on achieved and observable enhancements in environmental services. The basic idea of payments that incentivize socially beneficial behavior while compensating for lost income from resource exploitation is sound, and policies to implement it have potential. However, the success of PES programs and projects to date has been limited for several reasons [66]: they tend to cover relatively small areas, coordination across projects is limited, incentives often are insufficient, and monitoring the achievements of participants is often suboptimal because effort is more readily observable than performance. The REDD+ program (reducing emissions from deforestation and forest degradation) is global in scope and is incentivized by payments for results. It emerged from the Kyoto Agreement on climate change and is aimed at reducing emissions of atmospheric carbon. It, too, has had its successes, but these have been limited by perhaps excessive bureaucracy as well as the standard PES issues of insufficient incentives and difficulties in monitoring performance [67].
Sustainable Development Goals, SDGs
To this point, I have made no mention of sustainable development goals, which have been promulgated by the United Nations [68], various NGOs, and some national and regional governments. For those who tend instinctively to situate sustainability within the WS and SS frameworks [44,69], the SDGs are a little disorienting because, while they include goals consistent with WS and SS, they also include goals such as reduced inequalities, gender equality, and competent and responsive governance that are worthy on their own merits but do not seem directly related to sustainability [70]. Perhaps this reflects strategic attempts to broaden the coalition supporting sustainability by attracting people who are motivated primarily by those additional goals. However, I think there is a more compelling answer. Consider the goal of reducing inequality. There is evidence that inequality, at both the national and global levels, plays an important role in driving global biodiversity loss [71,72].
This provides a pragmatic reason for proactive relief of the poverty that motivates excessive exploitation of nature. However, Principle 3 situates this pragmatic concern within a broader ethical framework-progress toward intergenerational equity is more likely and its ethical foundations are more coherent when it is linked explicitly with progress toward resolving intragenerational fairness issues. By this reasoning, the pursuit of global justice and the conservation of the natural environment ought to be closely connected goals.
Discussion
Sustainability is framed here so as to honor its essential forever context-". . . without compromising the ability of future generations to meet their own needs" [1]-that is, as an intergenerational commitment. The weak sustainability, WS, concept is very much in the Brundtland tradition, emphasizing intergenerational bequests of non-diminishing inclusive wealth, IW. The essence of strong sustainability, SS, is in sustaining resources that might be critical to human thriving, economically, morally, and/or spiritually. That is, SS is a concession to the fear that the composition of the bequest, which is a concern not taken seriously in WS so long as the value of the bequest is non-diminishing, really matters [26]. WS-plus-a commitment to include a specified set of critical natural resources within the IW package-is WS with a modest concession to the SS motivations. Fundamental to sustainability is the sequential nature of the bequest, i.e., keeping the commitment generation after generation. The BDLK model is introduced because it addresses an important case, a renewable resource subject to a tipping point, is expositionally simple and fruitful, and its authors make a sweeping claim-the ideal policy is optimal, sustainable, and safe-that requires further examination. My enquiry reveals that (i) safety may have a high opportunity cost and, in cases where the safe policy yields rewards that are little greater than subsistence, may hold the resource-dependent society in a poverty trap; (ii) in a BDLK world, where it is the expected present value of a stream of future rewards that is sustained, savings are essential to maintain consumption if the resource collapses; and (iii) scale really matters to sustainability because it enriches opportunities for risk management. The normative domains of sustainability and safety are quite distinct-sustainability always, but safety only when really needed, e.g., to avoid inordinate risks. The intergenerational bequest is only implicit in the BDLK model, since it abstracts from generations, but the implicit bequest is simple-pass on the resource in the P (prosperous) state. Yet, given the introduction of savings, and of explicit generations, the composition of the bequest becomes an issue-how should we assess the value of a bequest consisting of a resource-state and some accumulated savings? This turns out to be an instance of the more general problem of evaluating a bequest consisting of different kinds of capital: it is impossible to generalize Hartwick-type WS theorems when multiple kinds of capital are involved [22]. Nevertheless, the concept of non-diminishing IW is so compelling that it makes sense to continue improving the IW accounting rules, even when we concede that the last vestiges of arbitrariness are unlikely to be eliminated.
Finally, intergenerational commitments in the real world must be negotiated in the context of intragenerational obligations to each other in the here and now, which suggests that intragenerational fairness is an essential component of policy to promote sustainability and helps explain the inclusion of equity in various dimensions among the sustainable development goals.
Acknowledgments: I gratefully acknowledge helpful comments and suggestions from Elena Irwin and colleagues on the abovementioned research projects, audiences at the Ohio State University, National Chung-Hsing University (Taiwan), and the annual conference of the Taiwan Agricultural Economics Society, and this journal's referees.
Conflicts of Interest: The author declares no conflict of interest.
Growth and Endocrine Function in Long-term Adult Survivors of Childhood Stem Cell Transplant
The number of long-term surviving stem cell transplant (SCT) recipients has increased steadily, and attention has now extended to the late complications of this procedure. The objective of this study was to investigate the relationship among growth and endocrine functions in long-term adult survivors of childhood SCT. The inclusion criteria of this study were survival for at least 5 yr after SCT and achievement of adult height. Fifty-four patients (39 males) fulfilled these criteria and were included in this study. Growth was mainly evaluated by height standard deviation score (SDS) and individual longitudinal growth curves. Among the 54 patients, those who received SCT before 10 yr of age showed significantly greater reductions in changes in height SDS (mean -1.75, range -4.80 to -0.10) compared with those who received SCT at or after 10 yr of age (mean -0.50, range -1.74 to 1.20; P<0.001). The mean loss of height for all patients who received SCT during childhood was estimated to be approximately 1 SDS/6.5 yr (r=0.517). Individual longitudinal growth curves indicated that a significant growth spurt was absent during the pubertal period in patients with severe short stature, even in the absence of severe endocrine dysfunctions including GH deficiency. The incidence of growth disorder in long-term adult survivors depends on the age at SCT and whether they received radiation therapy. Life-long follow-up is necessary to detect, prevent and treat late endocrine complications in SCT survivors.
Introduction
Late clinical complications resulting from intensive treatment for an underlying disease or from the conditioning regimen before stem cell transplant (SCT) are a major concern in regard to the quality of life of long-term adult survivors after SCT. Late endocrine complications following SCT in pediatric patients (1) include thyroid dysfunction (2-5), gonadal dysfunction (6-8) and growth disorder (9-14), most likely due to the effect of irradiation on these endocrine organs (15,16). The long-term outcomes of childhood cancer survivors have mainly been investigated by cross-sectional studies (17). Therefore, the necessity of long-term follow-up after SCT has been advocated. However, no longitudinal studies investigating growth disorder have been reported to date, and the mechanisms of these endocrine dysfunctions as late complications have not been completely elucidated. Linear growth is an intricate process affected by several systems including genetic, nutritional and hormonal factors. The intensive treatment related to SCT and these endocrine dysfunctions after SCT may affect all or some of these factors, resulting in decreased growth rate during childhood (18). In particular, irradiation of spinal and long bone cartilage and the epiphyseal growth plate as the target organ may influence growth considerably. In contrast, patients who are conditioned without irradiation regimens, such as with cyclophosphamide (CY) and/or busulfan (Bu), have been reported to grow normally (19,20). Although several previous studies have confirmed the relationship between growth disorder and SCT, the detailed mechanism for the delay of growth is still not fully understood. The aim of the present study was to investigate the relationships among growth and endocrine functions in long-term adult survivors after childhood SCT at a single institution and to elucidate detailed underlying mechanisms.
Patients
We reviewed the clinical records of 215 patients who received allogeneic SCT at Tokai University Hospital between 1982 and 1997. The inclusion criteria of the present study were survival for at least 5 yr after SCT and achievement of adult height. Fifty-nine patients (42 males and 17 females) were eligible for this study, and all were older than 15 yr of age at the time of their last visit. Five patients (three with Fanconi anemia, one with Gaucher disease and another with cancer and delayed bone age before SCT) were excluded from the present study because of pre-existing endocrinological disorders before SCT. The remaining 54 patients (39 males and 15 females) fulfilled the criteria and were included in the present study. At the time of SCT and at the beginning of follow-up, written informed consent was obtained from the patients and/or their parents for the treatment procedure and follow-up after SCT. Patient characteristics are summarized in Table 1.
Transplantation procedure
Conditioning regimens for SCT are shown in Table 1. The conditioning regimens for 48 patients consisted of irradiation combined with/without CY and/or other drugs; 6-12 Gy of total body irradiation (TBI) was given to the malignant disease group in three to six fractions, and 3-10 Gy of thoraco-abdominal irradiation (TAI) was given to the nonmalignant disease group in one to five fractions. The remaining 6 patients received conditioning without irradiation. Prophylaxis against GVHD varied during the time period; methotrexate, cyclosporine or a combination of both drugs were used.
Evaluation of growth and growth hormone secretion
Height measurements were converted to height SDS standardized for age and sex using Japanese reference standards, and the growth data of each patient were evaluated by changes in height SDS. A patient was defined to have achieved adult height when the growth velocity became less than 1 cm/yr for two consecutive years. Short stature was defined as an adult height of <-2.0 SD. Severe short stature was defined as an adult height of <-2.0 SD together with a decrease in height SDS of 2.0 or more. Height and growth velocity data based on the National Survey in Japan in 2000 were used as the reference standard. GH secretion was repeatedly assessed by insulin tolerance test before SCT and annually thereafter. Because of a remarkable variation in peak GH response to the insulin tolerance test in each individual, a median of 8 insulin tolerance tests (range 1-12) were performed during the follow-up period and were necessary to make a diagnosis of poor GH response. Regular insulin (0.1 U/kg) was injected intravenously in the morning after an overnight fast, and blood was obtained every 30 min for 120 min via an indwelling venous catheter. GH deficiency was defined as a GH level of <10 ng/mL in response to stimulation with regular insulin. GH was assayed by either radioimmunoassay or immunoradiometric assay. The plasma IGF-I concentration was determined in extracted plasma by immunoradiometric assay.
Other endocrine functions
In male patients, onset of puberty was defined as a testicular volume of ≥4 mL (21). Testicular volume was determined using an orchidometer, as described by Prader (22). Testicular measurement using the orchidometer was performed by a single investigator (S.K.). Testicular Leydig cell function and germinal epithelium damage were evaluated using the basal serum LH levels, basal serum FSH levels and serum testosterone levels.
The normal basal serum LH and FSH levels at our institute are <5 mIU/mL and <9 mIU/mL, respectively. Partial Leydig cell dysfunction and partial germinal epithelium damage were defined as increased basal LH and basal FSH levels (>15 mIU/mL and >20 mIU/mL, respectively) with normal testosterone levels. In female patients, the time of menarche and recurrence of menstruation after BMT were recorded. Ovarian function was evaluated using the basal serum LH levels, basal serum FSH levels and serum E2 levels after BMT. Primary ovarian failure was defined as an increased basal FSH level (>10 mIU/mL). Thyroid function was evaluated before SCT and annually thereafter by serial measurement of basal serum thyroid stimulating hormone (TSH) levels, serum free triiodothyronine (FT3) levels and free thyroxine (FT4) levels. The normal values at our hospital are: 0.30-4.00 µU/mL for TSH, 2.50-4.50 pg/mL for FT3, and 0.75-1.75 ng/dL for FT4. Subclinical compensated hypothyroidism was defined as elevated TSH levels (4-10 µU/mL) with normal FT4 levels and no clinical symptoms. All measurements were performed in the central laboratory of our hospital. Endocrine tests were basically conducted in the morning fasting state to avoid diurnal variation of hormones.
Bone age
Bone age was assessed for bone maturation by the Tanner and Whitehouse (TW2) method modified for Japanese patients. Seventy-three bone age radiographs obtained at our institution between 1986 and 2002 were evaluated by two pediatric endocrinologists (H.I. and Y.T.). Because bone age evaluation can be influenced by related clinical information and by individual experience, intrapersonal and interpersonal variation was assessed as follows. One pediatric endocrinologist (H.I.) first evaluated bone age in chronological age order after familiarizing himself with the patient information, and then evaluated bone age blindly in random order. The other pediatric endocrinologist (Y.T.) read bone age blindly in random order only. The three sets of results were analyzed statistically for degree of agreement. Completion of bone maturity was defined as a bone age of 17.0 yr in males and 15.3 yr in females according to the TW2 method modified for Japanese patients.
Statistical analysis
Medians and ranges are used throughout the text, tables and figures because the distribution of the data was skewed. The Wilcoxon signed rank test was used to compare the changes in height SDS before and after SCT. The Mann-Whitney U-test was used to compare the differences between groups. The Chi-square and Fisher's exact probability tests were used to assess the association between endocrine dysfunction and particular clinical features. Spearman's rank correlation coefficient was used to examine the relation between two variables. This statistical analysis was carried out using the GraphPad PRISM statistical package. A p value of less than 0.05 was considered statistically significant.
Changes in growth among patients who achieved adult height
We focused on changes in growth according to the type of conditioning regimen for SCT and several other factors associated with SCT. The changes in height SDS of the patients who reached their final heights showed that the adult height SDS was significantly decreased (median -1.15, range -5.43 to 1.20) compared with the height SDS at SCT (median -0.41, range -2.84 to 1.93, p<0.001; Fig. 1A).
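As a concrete illustration of the analysis just reported, the sketch below converts heights to SDS using age- and sex-specific reference means and standard deviations and applies the Wilcoxon signed-rank test to paired SDS values at SCT and at adult height. The reference values and the paired observations are hypothetical, made up solely for illustration; the study itself used the Japanese national reference standards and the GraphPad PRISM package.

```python
from scipy.stats import wilcoxon

def height_sds(height_cm, ref_mean_cm, ref_sd_cm):
    """Height standard deviation score relative to an age- and sex-specific reference."""
    return (height_cm - ref_mean_cm) / ref_sd_cm

# Hypothetical paired observations: (height, reference mean, reference SD) at SCT and at adult height.
at_sct = [(118.0, 120.0, 5.2), (131.0, 133.5, 5.8), (142.0, 145.0, 6.4), (125.0, 126.5, 5.5)]
adult  = [(158.0, 170.5, 5.6), (161.0, 170.5, 5.6), (163.0, 170.5, 5.6), (150.0, 157.0, 5.3)]

sds_at_sct = [height_sds(*obs) for obs in at_sct]
sds_adult  = [height_sds(*obs) for obs in adult]
changes = [a - b for a, b in zip(sds_adult, sds_at_sct)]

stat, p_value = wilcoxon(sds_adult, sds_at_sct)   # paired, non-parametric comparison
print("change in height SDS per patient:", [round(c, 2) for c in changes])
print(f"Wilcoxon signed-rank test: statistic = {stat}, p = {p_value:.3f}")
```

Group contrasts such as the before-10 versus at-or-after-10 comparison reported next would be handled in the same spirit with scipy.stats.mannwhitneyu.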
According to sex, the changes in height SDS also indicated that the adult height SDS was significantly decreased (median -1.23 and range -5.43 to 1.20 for males; median -0.98 and range -2.68 to 0.66 for females) compared with the height SDS at SCT in both sexes (median -0.21 and range -2.48 to 1.27 for males; median -0.51 and range -1.18 to 1.93 for females; p<0.05). Therefore, the SCT procedure was considered to have caused growth disorder in these patients. The difference in growth outcome according to age at SCT indicated that patients who received SCT before 10 yr of age (9 yr of age or younger) showed significantly greater reductions in height SDS (median -1.75, range -4.80 to -0.10) compared with those who received SCT at or after 10 yr of age (median -0.50, range -1.74 to -1.20; p<0.001; Fig. 1B). We also analyzed whether the difference in linear growth depends on the type of conditioning regimen, including irradiation. In regard to the mode of irradiation, patients who received irradiation experienced a significantly greater decrease in adult height SDS (median -1.01, range -4.80 to 0.74) compared with those who received only chemotherapy (median -0.31, range -0.83 to 1.20; p<0.05; Fig. 1C). Although eight patients were treated with cranial irradiation prior to SCT, the changes in height SDS did not indicate any differences in these patients compared with those who did not receive cranial irradiation. Furthermore, some patients were treated with glucocorticoid hormone for treatment of chronic GVHD; however, no statistically significant difference in adult height was observed in regard to glucocorticoid hormone treatment. A 4-yr-old boy with severe aplastic anemia received SCT twice. His conditioning regimens consisted of TAI + cyclophosphamide + antilymphocyte globulin for the first SCT and no conditioning regimen for the second SCT. His height at SCT was 98.0 cm (-0.63 SD), and his adult height was 145 cm (-5.43 SD) with a well-proportioned stature. He did not experience GVHD. His serum IGF-I levels remained in the lower half of the normal range, and an insulin tolerance test indicated a normal pattern of GH secretion throughout the follow-up period. Although his bilateral testicular size was 10 mL, his serum testosterone levels remained in the lower half of the normal range. The changes in height SDS were re-sorted by age at SCT for a more detailed analysis. Performance of SCT at a young age was a strong risk factor for development of growth reduction compared with performance at an older age (Fig. 2). This early age effect was more pronounced in males than in females. As a whole, the mean loss of height in all patients who received childhood SCT was estimated to be approximately 1 SDS/6.5 yr (r=0.517). Age at the time of transplantation has a strong influence on height SDS after SCT.
Changes in GH secretion after SCT
GH secretion was evaluated by insulin tolerance test and serum IGF-I levels in 54 patients before SCT and annually thereafter. Fourteen patients experienced poor GH secretion (GH level of <10 ng/mL) at least twice, although permanent GH deficiency was not observed. Serum IGF-I levels also remained in the lower half of the normal range throughout the follow-up period (Fig. 3), although five patients had transiently low serum IGF-I levels at least once.
There were no statistical differences in the change in height SDS between patients with poor (median -0.95, range -2.81 to 0.74) or appropriate GH responses (median -0.86, range -4.80 to 0.87; p=0.765), or between patients with low (median -2.27, range -2.81 to 0.42) or normal IGF-I levels (median -0.84, range -4.80 to 0.87; p=0.184). GH replacement therapy was not performed in these patients after SCT because the poor GH secretion was transient.

Other endocrine functions

We examined the relationships among growth and endocrine functions during the pubertal period. All patients had adult genitalia (Tanner stage V) at the last evaluation. In the male patients, puberty started spontaneously in all patients in accordance with the increase in testicular volume (≥4 mL). In all but three patients (UPN 1, 18 and 120), the serum testosterone level reached an adult level at some point after SCT (Fig. 4C). All patients, however, experienced increased basal LH and FSH levels with normal serum testosterone levels as they grew older. In female patients who received SCT before 10 years of age, all seven patients who had not manifested menarche before SCT entered puberty spontaneously and experienced menarche after SCT at a median age of 13.5 yr (range 12.8 to 14.5 yr), which is appropriate for healthy Japanese girls. They also had sustained rises in their gonadotropin levels before menarche (Figs. 4F and 4G). Their serum FSH levels, however, decreased towards the normal range after menarche. All patients who received SCT before 10 yr of age had normal E2 levels during the pubertal period without hormone replacement therapy (Fig. 4H), although partial ovarian failure was observed in all patients. On the other hand, 2 of the 8 patients who received SCT at or after 10 yr of age spontaneously manifested menarche after SCT. The remaining six patients were diagnosed as having primary gonadal dysfunction after SCT. Although three patients started hormone replacement therapy due to clinical symptoms of gonadal dysfunction, two other patients did not accept hormone replacement therapy for various reasons. In regard to thyroid function, subclinical compensated hypothyroidism was observed during the post-SCT period in all patients (Figs. 4D and 4E for males and Figs. 4I and 4J for females). Because they had no clinical symptoms, we did not treat them with levothyroxine. Thus, the peripheral hormone concentration acting on the target organs was appropriate for the pubertal age.

Bone age

The bone age scores of the pediatric endocrinologist who evaluated bone age in chronological order after familiarizing himself with the patient information and those of the other pediatric endocrinologist who evaluated bone age in random order were statistically correlated by Spearman correlation (r=1.0). The bone ages determined by the two endocrinologists without the use of patient information were also statistically correlated (r=0.968, p<0.001). Therefore, we utilized the later readings for evaluation of bone age in the present study. We examined bone age in 21 patients (14 males and 7 females) to investigate the causes of growth disorder in patients who received SCT before 10 yr of age (Table 1). Bone age tended to be more delayed in females than in males, although the difference was not statistically significant. No differences regarding type of conditioning regimen, including irradiation, were observed. According to the endocrine function tests, these patients did not exhibit endocrine dysfunctions such as permanent GH deficiency or precocious puberty.
Therefore, bone maturation did not contribute to the growth disorder in this series.

Individual growth curves after SCT

Among the 39 male patients, 8 of the 14 patients who received SCT before 10 yr of age were found to have a severely short stature, while no similar effect was found for the 25 patients who received SCT at or after 10 yr of age (p<0.001). On the other hand, among the 15 female patients, only 1 of the 7 patients transplanted before 10 yr of age was found to have a severely short stature. Therefore, the presence of severely short stature was more pronounced in males than in females. Individual growth curves after SCT were evaluated in order to clarify the details of growth dynamics and growth disorder. According to longitudinal growth curves plotted on cross-sectional growth charts, the male individual growth curves indicated that a significant growth spurt was not observed during the pubertal period in patients who received SCT before 10 yr of age compared with those who received SCT at or after 10 yr of age (Figs. 5A and 5D). Patients who received SCT before 10 yr of age tended to experience extreme decreases in changes in height SDS (median -2.5, range -4.8 to -0.2) compared with those who received SCT at or after 10 yr of age (median -0.8, range -1.5 to 0.9; p<0.01; Figs. 5B and 5E). No statistical differences were observed in growth velocity SDS between patients who received SCT before 10 yr of age (median -1.2, range -6.9 to -2.6) and those who received SCT at or after 10 yr of age (median -3.6, range -6.7 to 3.5; p=0.315). The growth velocities of patients who received SCT before 10 yr of age, however, dramatically decreased in the pubertal period compared with patients who received SCT at or after 10 yr of age (Figs. 5C and 5F), beginning two years after SCT. In female patients, individual growth curves and growth velocity indicated loss of the growth spurt during the pubertal period regardless of age at SCT (Fig. 6).

Discussion

Growth disorder is one of the significant late complications among long-term survivors following SCT. The results of this study show that loss of the growth spurt, which caused short stature after SCT, was observed during the pubertal period, but that it was not associated with endocrine functions, especially GH secretion, or with IGF-I levels and bone maturation. The retardation of spinal and long bone growth caused by irradiation is thought to be closely related to impairment of growth during the pubertal period. The adult height achievements and decreased growth of patients who have received SCT during childhood have previously been reported (19,23,24), and the mean loss of height has been estimated to be approximately 1 height SDS compared with the mean height at the time of SCT (13,14,25). Our results, which showed a difference between the height SDSs before and after SCT of -0.76 SDS, are in agreement with these previously accumulated data. Therefore, SCT has a great influence on growth outcomes in patients who receive SCT during childhood. Sanders (26) also observed a greater impact of age on adult height in 15 children who were transplanted at younger ages. Growth disorder is more pronounced in patients who receive SCT at a younger age and in those who receive irradiation. In contrast, patients who are conditioned with non-TBI regimens usually grow normally. Holm et al. reported an increased tendency of loss of adult height among children who received TBI at younger ages (27).
Their finding is similar to our experience with patients who received TBI; they tended to experience a greater decrease in adult height SDS. The decrease in growth found in the irradiated group cannot be explained by impaired secretion of GH, because TAI spares the hypothalamus and pituitary gland. The reduction in growth rate observed in the irradiated group, therefore, may be explained by the direct effect of irradiation on the spinal and long bone cartilage and epiphyses. TAI has an influence on the spinal epiphyses and the proximal epiphyses of the femoral bones, while TBI affects the whole body, including the spinal and long bone epiphyses. Our data indicated that the changes in height SDS (median age at SCT) were -0.94 SDS (11.6 yr of age), -1.20 (7.9) and -0.31 (11.4) in the patients who received TBI, TAI and chemotherapy alone, respectively. According to our results, which showed a mean loss of height of approximately 1 SDS/6.5 yr in all patients who received SCT during childhood, the age-adjusted change in height SDS in the patients who received TAI may be approximately -0.57, because the patients who received TAI were younger at SCT than those in the other two groups. Patients who received TAI, therefore, tended to experience a smaller reduction in adult height SDS compared with patients who received TBI. According to the growth charts, the individual growth curves indicated that loss of the growth spurt occurred during the pubertal period in patients who received SCT before 10 yr of age; Brauner et al. also reported that continuous growth failure due to irradiation reflected defective longitudinal bone growth (28). Therefore, irradiation strongly affects longitudinal growth in patients who receive SCT. It is well known that growth plate cartilage, located mainly in the spinal and long bones, is sensitive to irradiation. Many studies have suggested a close correlation between irradiation damage to growth plate cartilage and retardation of skeletal growth. Irradiation influences not only cell replication but also organization of the cartilage matrix and the transition from cartilage to bone. Bakker et al. reported an in vivo study in the rat in which radiation resulted in growth delay and disorganization of the columnar structure of the growth plate, which could indicate impaired synchronization of the processes of proliferation and differentiation in the growth plate (29). They suggested that growth delay is persistent and that there is certainly loss of the growth spurt in the rat model. This is in line with the human situation after SCT, where damage to the growth plates also results in persistent growth retardation. If all epiphyses were damaged equally, the relative loss of growth potential should be equal at every epiphysis. A greater number of epiphyses are exposed to irradiation in patients who receive TBI than in patients who receive TAI, and cumulative loss of growth may be a possible explanation in the patients who receive TBI. Although our preliminary data indicate that the sitting height / standing height ratios were not different among the groups, further clinical investigations and animal studies are required to elucidate the underlying mechanism. In the present study, the change in height SDS was greater in boys compared with girls (-1.01 SDS in boys vs -0.47 SDS in girls). In addition, in boys, most of the decrease in height SDS occurred during puberty, whereas in girls, the decrease in height SDS was slightly greater before puberty and much less during puberty.
There are several possible explanations for these differences between boys and girls. First, the time between SCT and attainment of adult height was slightly greater in girls (median 9.3 yr vs 7.8 yr in boys). Furthermore, growth velocity is greater in healthy boys than in girls, so a limitation of growth velocity may have had a greater effect on boys. Finally, ovarian failure frequently occurred in the girls, whereas all but three male patients had serum testosterone levels that reached an adult level at some point after SCT. Delayed introduction of sex hormone replacement therapy in girls may have resulted in a prolonged period of prepubertal growth. Our study indicated that growth disorder after SCT was influenced mainly by age at SCT, by irradiation as a conditioning regimen, and by loss of the growth spurt during the pubertal period in the absence of GH deficiency. We therefore recommend that long-term survivors who received SCT in childhood be given life-long attention to detect, prevent and treat symptoms and disorders of endocrine function.
Effects of black garlic on the pacemaker potentials of interstitial cells of Cajal in murine small intestine in vitro and on gastrointestinal motility in vivo

ABSTRACT

Black garlic (BG) is a newly explored foodstuff obtained via fermentation of raw, healthy garlic, especially in Asian countries. Interstitial cells of Cajal (ICC) are the pacemaker cells of gastrointestinal (GI) motility. The purpose of this study was to investigate the effects of BG extract on the pacemaker potentials of the ICC in the small intestines of mice and the possibility of controlling GI motility. The antioxidant activity of BG extract was also investigated. The whole-cell electrophysiological method was used to measure pacemaker potentials of the ICC in vitro, whereas GI motility was measured using the intestinal transit rate (ITR) in vivo. BG extract depolarized the pacemaker potentials of the ICC. The 5-HT3 and 5-HT4 receptor antagonists Y25130 and RS39604 could not inhibit the effect of BG extract on the pacemaker potentials of the ICC, whereas the 5-HT7 receptor antagonist SB269970 could. Pre-treatment with external Na+ (5 mM) or Ca2+-free solution inhibited the BG extract-induced depolarization of the ICC. With SB203580, PD98059, or c-jun NH2-terminal kinase II inhibitor pre-treatment, BG extract did not induce pacemaker potential depolarization. Moreover, the ITR values were increased by BG extract. The elevation of the ITR due to BG extract was associated with increased protein expression of the 5-HT7 receptors. In addition, BG extract showed antioxidant activity. Collectively, these results highlight the ability of BG extract to regulate GI motility and the possibility of using it to develop GI motility modulators in the future. Moreover, BG showed immense potential as an antioxidant.

Introduction

Garlic (Allium sativum L.) has been used as a spice and traditional medicine for centuries. Several studies have shown that garlic has beneficial effects on human health, such as anti-inflammatory, anti-cancer, and lipid-lowering effects, maintenance of blood pressure, and blood glucose regulation (Kimura et al. 2017). However, unprocessed, raw garlic has a characteristic odor and spicy taste, which can limit its use because of gastrointestinal (GI) problems when consumed (Kodera et al. 2002). Black garlic (BG) is obtained after garlic has been fermented for a certain duration under high humidity and temperature conditions (Kimura et al. 2017). BG does not produce a strong off-flavor because allicin is reduced and converted to antioxidant compounds during processing (Yuan et al. 2016). BG extract has demonstrated several bioactivities, including anti-oxidative, anti-allergic, anti-diabetes, anti-inflammation, anti-carcinogenic, and GI emptying effects (Jeong et al. 2016; Chen et al. 2018). Interstitial cells of Cajal (ICC) are essential pacemaker cells that regulate GI motility (Huizinga et al. 1995; Ward et al. 2000; Kim et al. 2005; Hwang et al. 2020). Thus, research on ICC plays a very important role in understanding GI motility regulation. However, little has been reported on the effects of BG extract on ICC and GI motility. Thus, we investigated the effects of BG extract on the pacemaker potentials of ICC and on GI motility. In addition, we assessed the antioxidant activity of BG extract.

Materials and methods

Preparation of the BG extract

BG was purchased from Taewoo Food Co. (Daejeon, Korea). A total of 2 kg of BG was extracted in 70% ethyl alcohol (20 L) for 6 h at 80°C and filtered through a Whatman No. 4 filter paper.
After the extract was concentrated using rotary evaporation at 50°C, the yield was approximately 19.8% on a dry weight (w/w) basis.

Preparation of ICC

Small intestines of mice were excised and, after removal of the mucous membrane, cut into pieces. The cells were dispersed in a solution containing various enzymes, including collagenase, and were cultured in smooth muscle growth medium (Clonetics, San Diego, CA, U.S.A.) supplemented with murine stem cell factor (Sigma-Aldrich, St. Louis, MO, U.S.A.) in a 95% O2 incubator.

Intestinal transit rate (ITR)

Evans blue (5%, w/v) was administered to healthy ICR mice after administration of BG extract into the stomach. Thirty minutes after Evans blue administration, the ITR of the mice was determined.

GI motility dysfunction (GMD) model mice

We generated the GMD mouse models using acetic acid (AA, 0.6%, w/v, in saline)-induced peritoneal stimulation, as previously described (Lyu and Lee 2013). AA was injected intraperitoneally, and the experiments were carried out as described previously (Wu et al. 2013).

Animals

Forty-nine ICR mice (24 male and 25 female; 3-8 d old) were used for the ICC experiments. In addition, 39 mice (male; 5-6 weeks old) were used for the ITR experiments on healthy mice and mice with GI motility disease, whereas 10 mice (male; 5-6 weeks old) were used for the protein expression experiments. The ICC and ITR experiments were completed within 12 h of culturing. All experimental protocols were approved by the Institutional Animal Care and Use Committee of Pusan National University (approval no. PNU-2019-2462), and animals were handled according to the Guide for the Care and Use of Laboratory Animals.

Reactive oxygen species (ROS) scavenging activity

A 0.2-mM 1,1-diphenyl-2-picrylhydrazyl (DPPH) solution was prepared by dissolving DPPH reagent in EtOH. The prepared solution and BG extract were mixed in a 1:1 ratio and incubated for 30 min in a dark room. Absorbance at 517 nm was measured and DPPH radical scavenging activity was calculated using the following formula: scavenging activity (%) = [(B − A)/B] × 100, where A is the absorbance value of the sample solution and B is the absorbance value of the control solution.

Drugs

PD98059 and SB203580 were purchased from Tocris Bioscience (United Kingdom), whereas the c-jun NH2-terminal kinase (JNK) II inhibitor was purchased from Calbiochem (San Diego, CA, U.S.A.). All other agents were purchased from Sigma-Aldrich.

Statistical analyses

Data are expressed as the mean ± standard error of the mean. Significant differences were evaluated using one-way analysis of variance (ANOVA) or Student's t-test. P values < 0.05 were considered significant.

Importance of extracellular Na+ and Ca2+ in BG extract-induced pacemaker potential depolarization of ICC

Both external Na+ and Ca2+ play a key role in regulating GI motility (Ward et al. 2000). To investigate the importance of external Na+ or Ca2+ in the BG extract-induced responses, we used external Na+ (5 mM) or Ca2+-free conditions. Pre-treatment with the external Na+ (5 mM) or Ca2+-free solution suppressed the pacemaker potentials and inhibited the BG extract-induced responses (Figures 3A and 3B). The average degrees of depolarization were 1.4 ± 0.5 mV (n = 9; P < 0.0001) with the Na+ (5 mM) solution and 3.4 ± 0.6 mV (n = 13; P < 0.0001) with the Ca2+-free solution (Figure 3C). These results indicated that the BG extract-induced response was dependent on external Na+ and Ca2+.
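To make the scavenging-activity formula given in the methods above concrete, here is a minimal sketch of the calculation; it is not the authors' code, and the absorbance readings are invented for illustration.

```python
def dpph_scavenging_activity(a_sample: float, b_control: float) -> float:
    """DPPH radical scavenging activity (%) from absorbance at 517 nm:
    the fractional drop of the sample absorbance relative to the control."""
    return (b_control - a_sample) / b_control * 100.0

# Hypothetical absorbance readings for increasing BG extract concentrations.
control = 0.82
samples = {0.001: 0.80, 0.01: 0.74, 0.1: 0.58, 1.0: 0.31, 10.0: 0.09}  # mg/mL -> A517

for conc, a in samples.items():
    print(f"{conc:>6} mg/mL: {dpph_scavenging_activity(a, control):5.1f} % scavenging")
```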
Importance of mitogen-activated protein kinase (MAPK) in BG extract-induced pacemaker potential depolarization of ICC

It has been reported that MAPK regulates the proliferation and differentiation of the GI tract (Jeong et al. 2012). Therefore, we assessed whether MAPK signaling affects the efficacy of BG extract on pacemaker potentials by treatment with PD98059 (a p42/44 inhibitor), SB203580 (a p38 inhibitor), and a JNK II inhibitor. With PD98059 (n = 11), SB203580 (n = 10), or JNK II inhibitor (n = 10) treatment, BG extract did not induce pacemaker potential depolarization (Figure 4). These results indicated that the BG extract-induced response was dependent on MAPK signaling.

Regulation of BG extract-induced small intestinal 5-HT receptor expression

5-HT is mainly present in the GI tract, and an increase or decrease in the expression of 5-HT directly affects GI motility (Camilleri 2009). Moreover, 5-HT3, 4, and 7 receptors have been found in ICC (Liu et al. 2011; Shahi et al. 2011). When BG extract was used, the expression of 5-HT3 receptors decreased significantly, the expression of 5-HT4 receptors did not change, and the expression of 5-HT7 receptors increased significantly (Figure 6A). The expression of 5-HT3 receptors decreased by 69.3 ± 5.4% (n = 5; P < 0.01), whereas that of 5-HT7 receptors increased by 228.9 ± 12.3% (n = 7; P < 0.0001) after BG extract treatment (Figures 6B and 6D). However, the expression of 5-HT4 receptors was unchanged (n = 6; Figure 6C). These results suggested that the ITR increase by BG extract was mediated by an increase in the expression of the 5-HT7 receptors.

Figure 6. Effects of BG extract on the protein expression of 5-HT3, 4, and 7 receptors in mice. (A) 5-HT7 receptor expression increased considerably, but 5-HT4 receptor expression was unchanged. However, the expression of 5-HT3 receptors decreased. (B-D) Band density is shown relative to CTRL. Mean ± SEs. **P < 0.01. ****P < 0.0001. BG: Black garlic. CTRL: Control. β-Actin was the loading control.

Figure 7. ROS scavenging activity was measured using the DPPH reagent. The DPPH reagent was treated with 0.001, 0.01, 0.1, 1, and 10 mg/mL BG extract, and 100% EtOH was used as a negative control (N.C). The results are from three independent experiments. Mean ± SEs. ****P < 0.0001. BG: Black garlic. DPPH: 1,1-diphenyl-2-picrylhydrazyl.

Discussion

Garlic has been used as a medicinal ingredient for a long time (Jeong et al. 2012). BG is a processed food in which fresh garlic is fermented at high humidity and high temperatures for 60-90 d (Yang et al. 2019). Although BG has various activities, including influencing GI motility (Jeong et al. 2016; Chen et al. 2018), its effect on the regulation of ICC function has not been reported yet. The ingredients of BG extract may change for various reasons, such as the type of solvent used for extraction. In this study, however, all experiments were conducted with the extract of a single batch rather than with extracts from several batches. Although no analysis of the components of the BG extract was conducted, the results of previous studies showed that, as compared to regular garlic, BG contains large amounts of diallyl trisulfide and allyl methyl trisulfide and a small amount of epicatechin (Martínez-Casas et al. 2017). Furthermore, it has been reported that lactic acid is a major organic acid component of BG extract (Lu et al. 2017). Another study also showed that BG contains S-allyl-L-cysteine, S-allylmercaptocysteine, pyruvate, and amino acids (Kim et al.
2015). In this study, we examined the efficacy of BG extract but not the efficacy of its individual ingredients. We plan to conduct a study to reveal the effective ingredients of BG in the future. We found that BG extract modulated the ICC pacemaker potentials. BG extract depolarized the ICC pacemaker potentials (Figure 1). External Na+ (5 mM) or Ca2+-free solution inhibited the BG extract-induced pacemaker depolarization of ICC (Figure 3). BG extract increased the ITR. It also recovered the loperamide-induced decrease in ITR in vivo (Figure 5A). Moreover, BG extract recovered the ITR in AA-induced GMD in mice (Figure 5B). Therefore, it is thought that BG may control GI motility through modulation of the pacemaker potentials of the ICC. ICC generate spontaneously active pacemaker potentials (Huizinga et al. 1995). 5-HT is secreted from enterochromaffin cells present mostly in the gut. Liu et al. (2011) showed that ICC pacemaker activity was controlled through 5-HT3 receptors, but Shahi et al. (2011) suggested that it was controlled through 5-HT3, 4, and 7 receptors. In addition, Wouters et al. (2007) stated that 5-HT2B receptors regulate the growth of ICC. However, in this study, the 5-HT7 receptor antagonist SB269970 inhibited BG extract-induced responses, whereas the 5-HT3 and 5-HT4 receptor antagonists Y25130 and RS39604, respectively, did not. This shows that BG extract modulates pacemaker potentials via the 5-HT7 receptors (Figure 2). In addition, the BG extract-induced ITR increase was mediated by 5-HT7 receptors (Figure 6). Therefore, we hypothesize that 5-HT7 receptors play a vital role in the regulation of GI motility by BG extract. 5-HT7 receptors are present in lymphoid tissues, smooth muscle cells, ICC, and neurons within the gut (Tonini et al. 2005; Kim and Khan 2014). Various studies have demonstrated the relevance of 5-HT7 receptors in GI motility regulation (Tonini et al. 2005). In the future, we will study the mechanisms and roles of 5-HT7 receptors in GI motility and ICC in detail. In addition, MAPK signaling is also a major target for new treatments of GI motility disease (Ihara et al. 2011). In this study, MAPK inhibitors suppressed the effects of BG extract. It was observed that p38, p42/44, and JNK signaling are involved in the BG extract-mediated control of pacemaker potentials. The human body has a variety of complex antioxidant defense mechanisms to counter the harmful effects of free radicals and other oxidants (Alam et al. 2013). Antioxidants are known to be very effective in preventing degenerative diseases and improving the quality of life (Alam et al. 2013). Compared to garlic, BG has approximately 10-fold stronger superoxide dismutase-like activity and antioxidant effects against hydrogen peroxide (Sato et al. 2006). In this study, we demonstrated the potent antioxidative effects of BG using the DPPH radical scavenging assay (Figure 7). Recently, natural herbal medicine has been attracting increasing attention as an alternative medicine with few side effects (Ekor 2014). Since many people have benefited from natural herbal medicine, we hope that research on the development of new treatments for GI diseases will become more active in the future. Collectively, the results from the present study showed that BG extract depolarized the pacemaker potentials of the ICC via the 5-HT7 receptors, regulation by extracellular Na+ and Ca2+ concentrations, and the MAPK pathways (Figure 8).
Furthermore, BG extract increased the ITR in normal and GMD model mice. The BG extract-induced ITR increase was mediated through the 5-HT7 receptors. In addition, BG extract showed significant antioxidative effects. Therefore, BG might be a prokinetic agent that can cure or prevent GMD, and herbal medicine may become a very important strategy for the treatment of GI tract disorders.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Funding

This study was supported in part by a research grant from Kangwon National University in 2018 and by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1I1A3042479).
The Directional Optimal Transport We introduce a constrained optimal transport problem where origins $x$ can only be transported to destinations $y\geq x$. Our statistical motivation is to describe the sharp upper bound for the variance of the treatment effect $Y-X$ given marginals when the effect is monotone, or $Y\geq X$. We thus focus on supermodular costs (or submodular rewards) and introduce a coupling $P_{*}$ that is optimal for all such costs and yields the sharp bound. This coupling admits manifold characterizations -- geometric, order-theoretic, as optimal transport, through the cdf, and via the transport kernel -- that explain its structure and imply useful bounds. When the first marginal is atomless, $P_{*}$ is concentrated on the graphs of two maps which can be described in terms of the marginals, the second map arising due to the binding constraint. Introduction We study a constrained Monge-Kantorovich optimal transport problem between marginal distributions µ and ν on the real line where the couplings are required to be "directional" in the sense that an origin x can only be transported to destinations y with y ≥ x. While one can think of several natural transport or matching problems with such a constraint, our initial motivation comes from the statistical analysis of treatment effects. There, one compares a (treated) experiment group of patients with an (untreated) control group. A fundamental problem is that any potential outcome that treated patients would have received without treatment is not observed, and vice versa. While the marginal distributions µ and ν of the performance evaluations X and Y of the two groups can be estimated from experiment data, the joint distribution cannot, as the two groups are non-overlapping by design-Neyman noted as early as 1923 (cf. [4]) that there are no unbiased or consistent estimators for the covariance. The improvement of the performance measure due to treatment, Y −X, is known as treatment effect. To test the hypothesis of substantial treatment effect, it is important to understand bounds on Var(Y − X) or more generally the joint distribution P of (X, Y ). Crude (yet popular) bounds can be obtained by mapping one group to the extremes of the support of the other. The classical Fréchet-Hoeffding (or Hardy-Littlewood) mechanism gives better bounds and is often used in the literature (see, e.g., [4,14], and [26,27] for mathematical background). The lower bound for Var(Y −X) over all couplings is attained by the comonotone (or Fréchet-Hoeffding) coupling. The upper bound over all couplings leads to the antitone coupling, which may be unrealistic in the context of many treatment effects: this coupling corresponds to the idea that the healthiest untreated subject would have become the least healthy patient if treated, and vice versa, which seems exceedingly pessimistic, e.g., in a study on the impact of physical activity on obesity. As proposed in [22], this issue can be alleviated by the assumption of monotone treatment effect when suitable, postulating that the treatment effect is nonnegative: Y ≥ X means that an untreated individual's performance would not have been worsened by the treatment, and vice versa. Of course, this assumption is only made after verifying that ν stochastically dominates µ in the data. Under the assumption of monotone treatment effect, the sharp upper bound of Var(Y − X) corresponds to a coupling P * that we call optimal directional coupling. 
More generally, P * yields the sharp upper bound for E_P[g(X, Y)] whenever g is submodular. The lower bound remains trivial in that it still corresponds to the comonotone coupling (which satisfies Y ≥ X in view of the necessary stochastic dominance), whence our focus on the upper bound. In the next section we introduce P * for general marginals µ, ν in stochastic order and provide manifold characterizations that resemble familiar properties of the antitone coupling while also taking into account the constraint. Globally, the geometry is significantly richer than in the classical antitone case. At a local level, the interaction between supermodularity and constraint is much more transparent, and each of our characterizations clarifies that interaction from a different angle. The construction of P * is best explained in the simple case µ = (1/n) ∑_{i=1}^{n} δ_{x_i} and ν = (1/n) ∑_{i=1}^{n} δ_{y_i} where both marginals consist of a common number of atoms of equal size at distinct locations, and moreover x_1 > · · · > x_n are numbered from right to left. The transport P * processes these atoms x_i in that order, sending each origin to the minimal (left-most) destination y = T(x_i) that is allowed by the constraint y ≥ x_i and has not been filled yet (Figure 1). That is, starting with the set S_1 = {y_1, . . . , y_n} of all destinations, we iterate for k = 1, . . . , n: (i) T(x_k) := min{y ∈ S_k : y ≥ x_k}, (ii) S_{k+1} := S_k \ {T(x_k)}. A less formal description is to imagine a left parenthesis "(" at each location x_i and a right parenthesis ")" at each y_i. Then T agrees with the usual rule of matching a left with its corresponding right parenthesis in a mathematical statement. The antitone coupling would be obtained by omitting the inequality in (i) above, making apparent how the constraint creates the difference with the classical coupling at the local level. Further properties provided in the next section include a geometric characterization through the support of the coupling and of course the optimality as transport for all supermodular costs (or submodular rewards, including variance of treatment effect); here the notion of cyclical monotonicity plays a key role. In particular, we provide sharp conditions under which P * admits a Monge map. Finally, one can also describe P * through its joint cdf. The constraint is responsible for qualitative differences with the antitone coupling. Assuming that the first marginal is atomless, the latter coupling always admits a Monge map; in other words, it is concentrated on a graph. By contrast, the constrained coupling is concentrated on two graphs. The two maps can be described in detail: one is the identity function and appears when the constraint is locally binding, the other admits a graphical interpretation and a semi-explicit formula based on the difference of the marginal cdf's. The appearance of the identity is clearly reminiscent of the unconstrained transport problem for costs like c(x, y) = |y − x|^p, 0 < p < 1, that combine concavity away from the origin with convexity at the origin, and was first observed in [17] in that context. See also [29, Section 3.3.2] for a discussion. Another difference is the behavior under marginal transformations. The antitone coupling is invariant with respect to arbitrary monotone transformations of the coordinate axes; more precisely, the copula corresponding to the coupling is the same for all marginals. This is no longer true for the constrained version, the reason being that the underlying constraint Y ≥ X is not invariant.
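The discrete construction just described is easy to implement. The following sketch (illustrative, not code from the paper) performs the greedy right-to-left matching for equally weighted atoms and fails exactly when the marginals are not in stochastic order.

```python
def directional_coupling(origins, destinations):
    """Greedy construction of the optimal directional coupling P* for
    mu = (1/n) sum delta_{x_i}, nu = (1/n) sum delta_{y_i}.
    Origins are processed from right to left; each x is matched to the
    smallest still-unused destination y with y >= x."""
    assert len(origins) == len(destinations)
    unused = sorted(destinations)                  # S_k, kept sorted
    pairs = []
    for x in sorted(origins, reverse=True):        # x_1 > x_2 > ... > x_n
        feasible = [y for y in unused if y >= x]   # constraint y >= x
        if not feasible:
            raise ValueError("marginals are not in stochastic order")
        y = feasible[0]                            # minimal admissible destination
        unused.remove(y)
        pairs.append((x, y))
    return pairs

# Example reproducing the parenthesis-matching rule:
print(directional_coupling([0.0, 1.0, 3.0], [2.0, 4.0, 5.0]))
# [(3.0, 4.0), (1.0, 2.0), (0.0, 5.0)]
```

With parentheses "(" at 0, 1, 3 and ")" at 2, 4, 5, the output is exactly the matching of each right parenthesis with its corresponding left one.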
Instead, the copula depends on the marginals and an invariance property holds only when a common transformation is applied to both axes. Several constrained optimal transport problems have been of lively interest in recent years. One related problem is the optimal transport with quadratic cost c(x, y) = |y − x|^2 in R^d studied in [19] (see also [10,11]) under a convex constraint: transports have to satisfy y − x ∈ C for a given convex set C. It is shown that this problem admits an optimal transport map (Monge map) in great generality. The specification y − x ∈ C accommodates our constraint, but minimizing the quadratic cost (rather than maximizing) yields the comonotone coupling in our setting. Indeed, [19] details that the comonotone coupling is the optimal solution for general C in the scalar case: the constraint is not binding as soon as an admissible coupling exists. In our problem, the constraint is typically binding and the optimal coupling typically does not admit a Monge map but instead requires a randomization between two maps. (See also Section 6.3 for a generalization of P * to cone constraints that may simplify the comparison with [19].) A different constrained problem is the martingale optimal transport introduced in [6,16,33], corresponding to the constraint E[Y |X] = X as motivated from financial mathematics (see [1,7,8,9,12,18], among many others). In particular, the Left- and Right-Curtain couplings of [7] correspond to the constrained versions of the comonotone/antitone couplings. It is worth noting that these couplings are also concentrated on the graphs of two maps in typical cases, like P * . (However, the appearance of a randomization is more obvious: only a constant martingale is deterministic.) The supermartingale constraint E[Y |X] ≤ X in [24] resembles the current situation in being an inequality constraint. Compared to all of these examples, the present case yields by far the most explicit and detailed results. In hindsight, the directional transport is arguably the most canonical and simplest nontrivial example of a constrained optimal transport problem. For general transport problems in Polish spaces, cyclical monotonicity and duality theory with constraints (or equivalently cost functions with infinite values) were studied by [2,5,13,21,31], among others. The literature on copulas features several directly related results; these works seem to be mostly unaware of one another and of the results in the optimal transport literature. The earliest related contribution that we are aware of, [32], features a bound on the cdf of any directional coupling (see also Remark 4.4 below). It is not investigated if or when that bound corresponds to a coupling. Almost two decades later, [28] was interested in coupling random walks "fast" and determined a directional coupling which maximizes a cost of the form ϕ(y − x) with ϕ strictly convex, nonnegative and decreasing. It is clear from Theorem 2.2 below that this coupling is P * ; the decrease of ϕ is irrelevant as convexity alone implies submodularity. In [28], the application to random walks is successful only when the difference of the marginal distributions is unimodal, and in that case, P * has a trivial structure as the sum of an identity and an antitone coupling between disjoint intervals (see Example 4.5 below); that may explain why [28] did not investigate the coupling further.
The recent work [3] characterizes all directional dependence structures of marginals in stochastic order and derives several related bounds, in particular one on the cdf which gives exactly the cdf of P * . (In fact, the same cdf was previously stated in [28], in a slightly more implicit form.) The structure of the coupling, and more generally the point of view of optimal transport, are not highlighted in these works. While we hope that this paper is a fairly complete study of the scalar case with inequality constraint (or, more generally, one-dimensional cone constraint; cf. Section 6.3), we mention that the multidimensional case is wide open. To stick with the above motivation, consider a treatment which affects two (or more) separately measured qualities, e.g., the impact of physical exercise on blood pressure and body mass index. Control and experiment groups now give rise to distributions in R^2, and the assumption of monotone treatment effect for both performance measures corresponds to a cone constraint y − x ∈ [0, ∞)^2. It is worth noting that even if a scalar quantity is used to aggregate the two performances, the cone constraint is typically more stringent than what would be obtained by constraining the aggregated performances. The remainder of the paper is organized as follows. Section 2 formalizes the problem and presents the main results. The subsequent Sections 3-5 provide the proofs and some required tools, as well as examples and additional consequences. Section 6 gathers three discussions that we omitted in the main results: another decomposition of P * , optimality properties in unconstrained transport problems, and an extension to general (random) cone constraints.

Main Results

Let µ and ν be probability measures on R and denote by X(x, y) = x, Y(x, y) = y the coordinate projections on R^2. A coupling, or transport, of µ and ν is a probability P on R^2 with marginals P ∘ X^{-1} = µ and P ∘ Y^{-1} = ν. We call a coupling P directional if it is concentrated on the closed halfplane H := {(x, y) ∈ R^2 : y ≥ x} above the diagonal, meaning that µ-almost every origin x is transported to a destination located to the right of x (or to x itself). Denoting by D = D(µ, ν) the set of all directional couplings, we have D ≠ ∅ if and only if µ and ν are in stochastic order, denoted µ st ν, meaning that their cdf's satisfy F_µ ≥ F_ν. Indeed, µ st ν if and only if the comonotone coupling is directional. More generally, we indicate by θ_1 st θ_2 two subprobabilities with common mass θ_1(R) = θ_2(R) and F_{θ_1} ≥ F_{θ_2}. The other notions also have obvious generalizations. The following theorem corresponds to a general version of the discrete construction of P * in the Introduction. We write θ ≤ ν for a subprobability θ with θ(A) ≤ ν(A) for all A ∈ B(R). Theorem 2.1. Let µ st ν. There exists a unique directional coupling P * = P * (µ, ν) which couples µ|_(x,∞) to ν_x for all x ∈ R, where ν_x is the unique minimal element, for the order st, of the set S_x of all subprobabilities θ ≤ ν with µ|_(x,∞) st θ. The coupling P * differs from the antitone coupling except in the trivial case where all couplings are directional; that is, when µ((−∞, x]) = ν([x, ∞)) = 1 for some x ∈ R. Indeed, this is the only case where the antitone coupling is directional. We make µ st ν a standing assumption in all that follows. The above theorem is one of several equivalent characterizations of P * that we detail next.
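Before turning to those characterizations, a quick numerical aside (not from the paper): the existence criterion D(µ, ν) ≠ ∅, i.e. F_µ ≥ F_ν, is easy to check for empirical marginals by comparing the two empirical cdfs at the pooled sample points.

```python
import numpy as np

def empirical_cdf(sample, ts):
    """F(t) = fraction of sample points <= t, evaluated at each t in ts."""
    sample = np.sort(np.asarray(sample))
    return np.searchsorted(sample, ts, side="right") / len(sample)

def stochastically_dominated(x_sample, y_sample):
    """True iff the empirical cdfs satisfy F_mu >= F_nu everywhere, i.e.
    mu st nu, so that directional couplings of the empirical marginals exist.
    Both cdfs are step functions that only change at data points, so checking
    the pooled data points suffices."""
    ts = np.union1d(x_sample, y_sample)
    return bool(np.all(empirical_cdf(x_sample, ts) >= empirical_cdf(y_sample, ts)))

# Example: treated outcomes shifted upward from control outcomes.
control = np.array([0.1, 0.4, 0.7, 1.2])
treated = np.array([0.3, 0.9, 1.1, 1.6])
print(stochastically_dominated(control, treated))   # True
```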
The most important one for our analysis is geometric, describing the support of P * based on the idea that we would like any two trajectories of the transport to cross whenever that is allowed by the constraint. We say that the pair ((x, y), (x′, y′)) ∈ H^2 is improvable if x < x′ ≤ y < y′. This means that (x, y) and (x′, y′) do not cross, but they could be rearranged ("improved") into the configuration ((x, y′), (x′, y)) which forms a cross and remains in H^2 (Figure 1). A set Γ ⊆ H satisfies the constrained crossing property if it contains no improvable pairs. Stated differently, any two trajectories in Γ either cross, or they cannot be rearranged into a cross without exiting H. This property is closely related to a characterization of P * through optimal transport with specific reward functions. A Borel function g : R^2 → R is called submodular if g(x, y) + g(x′, y′) ≤ g(x, y′) + g(x′, y) for all x ≤ x′ and y ≤ y′ (2.1), and strictly submodular if the inequality in (2.1) is strict whenever x < x′ and y < y′; two examples are g(x, y) = (x − y)^2 and g(x, y) = −|x − y|. If g is differentiable, the Spence-Mirrlees condition −g_xy > 0 is a sufficient condition. We say that g is (µ, ν)-integrable if |g(x, y)| ≤ φ(x) + ψ(y) for some φ ∈ L^1(µ) and ψ ∈ L^1(ν). This implies uniform bounds on ∫ g dP for any coupling P and in particular that the optimal transport problem sup_{P∈D} ∫ g dP, or equivalently the minimization inf_{P∈D} ∫ c dP for the cost c := −g, is finite as soon as D ≠ ∅. Finally, P ∈ D is optimal for g if it attains the supremum. To see the connection with the constrained crossing property, observe that for any strictly submodular g, rearranging an improvable pair ((x, y), (x′, y′)) into the crossing configuration ((x, y′), (x′, y)) strictly increases the total reward, since g(x, y′) + g(x′, y) > g(x, y) + g(x′, y′) by (2.1), while the rearranged pair remains in H. The following result also contains a third (straightforward) characterization in terms of the so-called concordance order in (i). (iv) P is supported by a set Γ ⊆ H with the constrained crossing property. The geometric characterization in Theorem 2.2 (iv) implies that the optimal coupling P * is invariant with respect to common, strictly increasing transformations of both coordinate axes (Corollary 2.3): copulas of P * (µ, ν) are precisely those of P * (µ∘φ^{-1}, ν∘φ^{-1}), and thus these copulas are invariant under common, strictly increasing transformations φ of the axes. The strict increase of φ is necessary to retain the constrained crossing property. Similarly, it is clear that the same transformation must be applied to both axes, in contrast to the unconstrained transport problem, as highlighted in the Introduction. Theorem 2.2 (i) yields an implicit description of the optimal cdf which, by a result of [3], implies the following representation. A proof by direct computation will be sketched in Section 4, as well as resulting bounds. Corollary 2.4. The cdf of P * is given by F_*(x, y) = F_ν(y) for y ≤ x and F_*(x, y) = F_µ(x) − inf_{z∈[x,y]} (F_µ(z) − F_ν(z)) for y ≥ x. See also Figure 2 for a graphical representation. As a first consequence, we observe the continuity of P * with respect to weak convergence ( w →) of the marginals. Corollary 2.5. Consider marginals µ_n st ν_n, n ≥ 1 with µ_n w → µ and ν_n w → ν, and suppose that µ and ν are atomless. Then P * (µ_n, ν_n) w → P * (µ, ν). We will see in Example 4.2 that the continuity can fail in the presence of atoms. The subsequent results describe the finer structure of the optimal transport. The common part µ ∧ ν of µ and ν is the maximal measure θ satisfying θ ≤ µ and θ ≤ ν; note that µ, ν are mutually singular if and only if µ ∧ ν = 0. Importantly, P * always transports µ ∧ ν according to the identity coupling, similarly as in [17, Main Theorem 6.4] for unconstrained transport with cost l(|y − x|) and l strictly concave (see Figure 3 for two simple examples). Proposition 2.6.
The optimal coupling P * (µ, ν) satisfies P * (µ, ν) = Id(µ ∧ ν) + P * (µ′, ν′), where µ′ := µ − µ ∧ ν and ν′ := ν − µ ∧ ν. A coupling P is said to be of Monge-type if, P-a.s., Y is a deterministic function T of X, which is then called a Monge map or transport map of P. Equivalently, the stochastic kernel κ in the decomposition P = µ ⊗ κ has the form κ(x, dy) = δ_{T(x)}(dy) µ-a.s. Proposition 2.6 suggests that the constrained nature of our transport problem may render P * randomized (i.e., not of Monge-type) even in the absence of atoms. Example 2.7. Let µ = Unif[0, 1] and ν = Unif[0, 2]. Then µ st ν and there are no atoms, yet P * has the non-deterministic kernel κ(x) = (1/2)(δ_x + δ_{2−x}); cf. Figure 3. This can be seen, e.g., from the constrained crossing property. The next results show that this example is representative: the "coin-flip" randomization into two maps is the only randomization in P * when µ is atomless, and it occurs if and only if µ ∧ ν and µ − µ ∧ ν are not mutually singular. The second transport map can also be analyzed in detail. To that end, suppose first that µ ∧ ν = 0, so that (µ, ν) is already in the reduced form (µ′, ν′) of Proposition 2.6. Moreover, suppose for the moment that the marginals are atomless; we discuss later how to reduce atoms to diffuse measures. With the convention inf ∅ = ∞, we have the following (see Figure 2 for the graphical interpretation). Theorem 2.8. Let µ, ν be atomless and µ ∧ ν = 0. Then P * is of Monge-type with transport map T given by T(x) = inf{y ≥ x : F(y) < F(x)}, where F := F_µ − F_ν. The proof proceeds by showing that T couples µ and ν and that the graph of T satisfies the constrained crossing property. Some of our considerations regarding the local regularity of F may be of independent interest. Combining the last two results and noting that F_µ − F_ν = F_{µ′} − F_{ν′} in Proposition 2.6, we deduce the aforementioned assertion on the coin-flip. Corollary 2.9. Let µ, ν be atomless. Then where µ′ = µ − µ ∧ ν. In particular, P * is of Monge-type if and only if µ′ and µ ∧ ν are mutually singular. This result immediately extends to the case where ν has atoms, essentially by "filling in" vertical lines in the graph of F where there are jumps (cf. Figure 2). Using a simple transformation detailed in Section 5.4, it also generalizes to atoms in both marginals, but then T is replaced by a (possibly randomized) coupling; see Theorem 5.5. We remark that the invariance property in Corollary 2.3 translates immediately to this setting. While we consider the above the main results, three further considerations are presented in Section 6. We discuss when and how P * can be decomposed as a sum of antitone couplings of sub-marginals, remark that P * occurs as an optimizer in specific unconstrained transport problems, and finally offer an extension to cone constraints more general than Y ≥ X.

Equivalent Characterizations of P *

In this section we prove Theorems 2.1-2.2 and Proposition 2.6, the latter being a consequence of the former. The first step is to show that ν_x in Theorem 2.1 is well-defined. We write M for the set of finite measures on R and recall that θ_1, showing that F is right-continuous. As the remaining properties of a cdf are immediate, we can introduce θ * as the measure associated to F. In view of F = sup_{θ∈S} F_θ, we have that µ_0 st θ * and θ * st θ for every θ ∈ S. It remains to see that θ * ≤ ν, or equivalently that F_{ν−θ*} is nondecreasing. Next, we show that the map µ_0 → θ_ν(µ_0) of Lemma 3.1 is "divisible", which is important for its iterated application: mapping µ_0 = µ_1 + µ_2 into ν produces the same cumulative result as first mapping µ_1 and then mapping µ_2 into the remaining part of ν. We can now construct P * .
is clearly nondecreasing and right-continuous in y. Moreover, Lemma 3.2 implies that The total mass of the right-hand side equals µ(x 1 , x 2 ] and thus converges to zero as x 2 ↓ x 1 , showing that x → F (x, y) is right-continuous. Relation (3.2) also implies that F is supermodular (or nondecreasing on R 2 ): for x 1 ≤ x 2 and y 1 ≤ y 2 , As F has the proper normalization, we conclude (e.g., [20, p. 27]) that F induces a unique probability measure P * on B(R 2 ). It remains to observe that P * ∈ D(µ, ν). Indeed, the second marginal of P * is clearly ν. The first marginal is equal to µ as for each x, Finally, P * is directional since ∞) ) by the definition of θ ν (·). We now turn the the equivalent characterizations in Theorem 2.2; here the most important tool is the notion of cyclical monotonicity in optimal transport (e.g., [17,34]). Proof of Theorem 2.2. Given two probability measures P, Q on R 2 with the same marginals, it is known that the concordance order F P ≤ F Q is equivalent to g dP ≥ g dQ for all (suitably integrable) supermodular g; cf. [23, Theorem 3.8.2, p. 108]. The implication (i)⇒(ii) is a direct consequence of that fact, and (ii)⇒(iii) is trivial. Noting that c(x, y) ≥ φ(x) + ψ(y) for some φ ∈ L 1 (µ) and ψ ∈ L 1 (ν), it follows from [5, Theorem 1(a)] that any optimal transport P is concentrated on a Borel set Γ ⊆ R 2 that is c-cyclically monotone. As no transport with finite cost charges the complement H c , we may replace Γ with Γ∩H to ensure that Γ ⊆ H. Cyclical monotonicity then states in particular that 2 Thus, if g is strictly submodular, Γ cannot contain improvable pairs. (iv)⇒(v): Suppose for contradiction that P = P * . In view of Theorem 2.1, there exists x ∈ R such that P maps µ| (x,∞) to a measure ν ′ x = ν x , and ν x st ν ′ x by the minimality property of ν x . It follows from Lemma 3.4 below that there exist z > y ≥ x such that Using also that µ((x, y]) ≥ ν x ((x, y]) due to µ| (x,∞) st ν x , we deduce By the constrained crossing property, this implies P ((−∞, x]) × [y, z)) = 0 and thus contradicting (3.3). (v)⇒(i): Let x, y ∈ R; we show F P * (x, y) ≤ F Q (x, y) for Q ∈ D(µ, ν). As P * and Q have the same second marginal, this is equivalent to Recalling ν x from Theorem 2.1 and denoting by θ the measure that µ| (x,∞) is transported to by Q, the above can be stated as ν x ((−∞, y]) ≥ θ((−∞, y]), and that clearly follows from the formula for F νx in Theorem 2.1. The following was used in the preceding proof of (iv)⇒(v). Remark 3.5. The integrability condition in Theorem 2.2 can we weakened to the positive part g + being (µ, ν)-integrable and the negative part satisfying g − dP < ∞ for some P ∈ D, so that the value function is not trivial. The final task of this section is to deduce the decomposition in Proposition 2.6 from Theorem 2.2. Joint Distribution Function As mentioned in Section 2, the formula for the joint distribution function F * of P * in Corollary 2.4 can be deduced from Theorem 2.2 (i) and [3, Theorem 6] which uses arguments from copula theory. Below, we sketch a direct derivation and some consequences. Proof of Corollary 2.4. As P * is directional, y ≤ x implies so we can focus on y > x. Denote c = inf z∈[x,y] (F µ (z) − F ν (z)) and recall that X, Y are the coordinate projections. We first consider an arbitrary P ∈ D(µ, ν). Then as X ≤ Y P -a.s., we have for z ∈ [x, y] that (4.1) In view of Theorem 2.2 we have F * (x, y) = inf P ∈D(µ,ν) F P (x, y). 
Thus, to complete the proof, it suffices to show that some P ∈ D(µ, ν) attains equality in the above inequality. otherwise. Another consequence are simple bounds on F * . A right-continuous function on R is unimodal if it is nondecreasing on (−∞, x 0 ) and nonincreasing on [x 0 , ∞) for some x 0 ∈ R. for all x < y. This is equivalent to F being unimodal. Turning to (ii), we first recall from Theorem 2.2 (i) that P * has the minimal cdf in D(µ, ν). On the other hand, H ∨ is the cdf of the comonotone coupling, which is the maximal cdf among all couplings and in particular in D(µ, ν). Thus, F * = H ∨ if and only if all directional couplings have the same cdf, showing the first claim. Now let F be continuous and suppose for contradiction that µ = ν. In view of Proposition 2.6, we may assume that µ ∧ ν = 0. By Lemma 5.1, µ(I) > 0 for the set I of strict increase of F . In particular, there exists x ∈ I, which implies that F µ (x) > F ν (x) and µ((x, z]) > 0 for any z > x. As P * is the comonotone coupling, µ| (x,∞) is transported to ν| (y,∞) for some y > x. On the other hand, ν((x, y]) > 0 due to µ((x, ∞)) = ν((y, ∞)) < ν((x, ∞)), which by minimality implies that ν x charges (x, y], contradicting ν x = ν| (y,∞) . Conversely, µ = ν clearly implies that the identity is the only directional coupling. D(µ, ν). The latter result was first obtained in [32]. See also [30] for a lower bound on a different coupling in a similar spirit. Both upper and lower bound were noted in [3], where it was also observed that the lower bound holds in the case of unimodality. The sharpness conditions are novel, to the best of our knowledge. (b) The continuity assumption in (ii) is clearly important for the last conclusion: if µ is a Dirac mass, all couplings of µ and ν coincide and in particular F * = H ∨ , but of course µ and ν need not be equal. The following is a standard example satisfying the condition in Corollary 4.3 (i) and covering, for instance, two normal or exponential marginals in stochastic order. The appearance of an antitone coupling is a particular case of a phenomenon that will be discussed in detail in Section 6.1. Example 4.5 (Single-crossing Densities). Suppose that µ and ν have densities f µ and f ν which cross exactly once; that is, there exists a point Then F is unimodal and hence F * = H ∧ . By Proposition 2.6 and the fact that the measures µ ′ and ν ′ (defined therein) are supported on disjoint sets, we see that P * (µ, ν) is the sum of an identity coupling Id(µ ∧ ν) and an antitone coupling P * (µ ′ , ν ′ ). The Transport Map The aim of this section is to prove Theorem 2.8 on the optimal transport map T . The analysis rests on a specific Hahn decomposition that holds for arbitrary signed, diffuse measures on R and is provided in the first subsection. We then return to our transport problem, showing in Sections 5.2-5.3 that T induces a coupling with the constrained crossing property, and thus is optimal. Section 5.4 explains how marginals with atoms can be reduced to the continuous case by a simple transformation. Sets of Increase and Decrease Let F : R → R be a continuous function of bounded variation. We recall that the signed measure ρ associated to F admits a unique Jordan decomposition ρ = µ−ν into mutually singular nonnegative measures, and then τ = µ+ν is the total variation measure of ρ. (In this section, µ and ν are arbitrary finite measures-not necessarily of the same mass or even µ st ν.) 
Similarly to ρ, the function F can be uniquely decomposed as F = F µ − F ν into continuous nondecreasing functions that are mutually singular; that is, If F is of class C 1 , the sets {∂F > 0} and {∂F < 0} clearly form a Hahn decomposition. Moreover, the two sets are countable unions of intervals where F is monotone. Our purpose is to provide a similar Hahn decomposition for bounded variation functions-here the sets will merely be Borel, as it is well known that a function can be absolutely continuous without being monotone on any interval (e.g., [15, p. 109, Exercise 41]). Consider a function F : R → R and x ∈ R. We call x a point of strict increase if there is a neighborhood of x in which x 0 < x < x 1 implies F (x 0 ) < F (x) < F (x 1 ). The set of all such points is called the set of strict increase of F and denoted I F . Points of strict decrease are defined analogously, and their set is denoted D F . Step 1. Let µ, ν, τ and F µ , F ν , V be as introduced above. Clearly µ, ν admit densities f µ , f ν with respect to τ , and these can be chosen to be indicator functions of complementary sets by the Hahn decomposition theorem. That is, Next, we claim that (with z/0 := 0, say) the limit exists for τ -a.e. x ∈ R and defines a version of the Radon-Nikodym derivative dµ/dτ -existence meaning particular that the limit is the same along any sequence 0 = ε n → 0. Let V −1 be the right-continuous inverse of V . where µ Fµ•V −1 is the Lebesgue-Stieltjes measure of F µ • V −1 and λ is the Lebesgue measure. By Lebesgue's differentiation theorem [15, Theorem 3.21, p. 98], F µ • V −1 is λ-a.e. differentiable and the derivative φ defines a density dµ Fµ•V −1 /dλ. (In fact, F µ • V −1 is even Lipschitz.) That is, there exists a Lebesgue-nullset N λ such that for y / ∈ N λ and y ′ → y, exists and satisfies f (x) = φ(V (x)). By the change-of-variable formula we see that f is a density of µ with respect to τ . It now follows that f = f µ τ -a.e. As a result, for all x outside a τ -nullset and any sequence ε n → 0, Step 2. Let I = I F , D = D F . The set (I ∪ D) c consists of three types of points. First, the strict local minimum and maximum points; this subset is countable and hence a τ -nullset as V is continuous. Second, the points which are contained in an interval of constancy of F . There are countably many such intervals and each one is clearly a τ -nullset. Third, the points of oscillation: If x ∈ (I ∪ D) c is not in an interval of constancy of F and if 0 = ε n → 0, then for all n large we have either . Combining these two properties, for all n large. In particular, In view of Step 1, the set of all such x must be a τ -nullset. This completes the proof that (I ∪ D) c is τ -null. It is easy to see that I and D are disjoint Borel sets. Noting also that {f = 1} ⊆ I and {f = 0} ⊆ D, it follows that I, D form a Hahn decomposition. Basic Properties of T We return to our setting with given marginals µ st ν. Throughout this section we assume that µ ∧ ν = 0, or equivalently, that µ and ν are mutually singular. For simplicity of exposition, we first focus on the case of diffuse marginals µ and ν; the extension to measures with atoms is then simple and carried out in Section 5.4. 
We consider F = F µ − F ν , a nonnegative continuous function of bounded variation with F (−∞) = F (∞) = 0, its graph G and its hypograph H, Recall from Theorem 2.8 that T (x) = inf{y ≥ x : (y, F (x)) / ∈ H} To see that T is bimeasurable-i.e., also satisfies T (B(R)) ⊆ B(R)-it suffices to show that there are at most countably many points y whose preimage T −1 (y) is uncountable; see for instance [25,Main Theorem]. Let y be such that T −1 (y) contains more than one point. The construction of T shows that all elements x ∈ T −1 (y), except possibly one, are local minima of F , and they have the common value F (x) = F (T −1 (y)). Any real function f only has countably many local minimum values f (x) (because each local minimum is minimal within a rational interval, yielding an injection of the minimum values into Q 2 ), so it suffices to show that for fixed y, T −1 (y) contains at most countably many points x which also have the property that T −1 (x) has several elements. If x 0 < x is such that T (x 0 ) = x, it follows that . Thus we can associate with x an interval of positive length in which it is unique with the property in question, and that implies the claim. Proof. We show that µ{T ≤ y} = ν((−∞, y]) for y ∈ R. Define the continuous function Marginals and Geometry of T For x ∈ I with x ≤ y, M (x) > 0 is equivalent to the existence of z ∈ (x, y] such that F (z) < F (x), thus equivalent to T (x) ≤ y. As µ is concentrated on I and T is directional, it follows that In particular, the graph of T has the constrained crossing property. is contained in the hypograph H, and similarly for the rectangle R ′ defined with x ′ instead of x (cf. Figure 4). To see that T (x ′ ) ≥ T (x), it suffices to We now have all the ingredients for the main result on T . Proof of Theorem 2.8. In view of Lemma 5.3 and T (x) ≥ x, we have that P := µ ⊗ δ T ∈ D(µ, ν). Lemma 5.4 shows that P is supported on a set with the constrained crossing property and then Theorem 2.2 yields P = P * . Reduction of Atoms Let µ st ν satisfy µ ∧ ν = 0 as before, but consider the case where µ and ν may have atoms. We still write F = F µ − F ν , now this function is rightcontinuous rather than continuous. The idea is to reduce to the atomless case by a transformation which inserts an interval at the location of each atom, with its length corresponding to the atom's mass. The atom is then replaced by a uniform density (cf. Figure 5). Let τ = µ + ν be the total variation and let be the sum of the identity function and the cdf of the jump part of τ . Clearly j is strictly increasing and right-continuous; we denote its rightcontinuous inverse function by j −1 : j(R) → R. Moreover, let be the interval representing the jump of j at x. In particular, J x is an interval of length τ ({x}) and a singleton {j(x)} if x is not an atom of µ or ν. Define an auxiliary measure µ ′ on R through its cdf as follows: for z ∈ j(R) we set F µ ′ (z) = F µ (j −1 (z)), whereas on the complement of j(R) we define F µ ′ (z) by linearly interpolating from its values on j(R). In other words, µ ′ is defined by the two properties that F µ ′ (j(x)) = F µ (x) for x ∈ R and if τ has an atom at x, then µ ′ is uniform on the interval J x with total mass µ ′ (J x ) = µ({x}). It follows that j is measure-preserving in the sense that µ ′ (j(B)) = µ(B) for any B ∈ B(R). A second measure ν ′ is defined analogously from ν. Proof. If µ({x}) > 0, then κ(x) is well defined by Lemma 5.2 and has the proper normalization as . 
Among the points x with µ({x}) = 0, it suffices to consider those with j(x) ∈ I ′ , the set of points of strict increase of F ′ -indeed, as j is measure-preserving, it follows from Lemma 5.2 that the complementary set is µ-null. For j(x) ∈ I ′ , Lemma 5.2 shows that κ(x) = δ j −1 (T ′ (j(x))) is well defined. As T ′ defines a coupling in D(µ ′ , ν ′ ) and j is strictly monotone and measure-preserving, it follows that κ defines a coupling in D(µ, ν). Moreover, we know that the graph Γ ′ of T ′ has the constrained crossing property (Lemma 5.4). The strictly monotone transform j does not invalidate that property (Corollary 2.3), hence Γ := j −1 (Γ ′ ) has the same property, and Γ carries µ ⊗ κ, as noted above. We conclude by Theorem 2.2. We note that P * can still be of Monge-type when µ has atoms: by Theorem 5.5, that happens precisely if j −1 (T ′ (J x )) is a singleton whenever µ({x}) > 0. This requires very specific atoms in ν, as κ must transport each upward jump point of F to a downward jump point, and moreover the downward jump must have at least the same size as the upward jump. One example of such a match-up is given in (a) below. (a) If the x i are distinct and n µ = n ν =: n, then P * is Monge and the transport map T is as constructed in the introduction: considering the destinations S 1 = {y 1 , . . . , y n } as a multi-set (i.e., distinguishing the y i even if they have the same value), we iterate for k = 1, . . . , n; a sketch of this construction in code is given after Example 6.3 below. (b) The case n µ ≠ n ν is natural when µ and ν are empirical distributions of observed data-in the study of treatment effects, data are often not observed in pairs and hence the two marginals may not have the same number of observations; see Section 1. The above algorithm immediately extends to the case where n µ = m·n ν for an integer m, by redefining the y i . If n µ and n ν are arbitrary, and/or the atoms have possibly different, rational weights, we can still write the marginals in the form µ = (1/n) ∑_{i=1}^n δ_{x_i} and ν = (1/n) ∑_{i=1}^n δ_{y_i} by choosing a suitable n, now with the x i not necessarily distinct. The principle of the above algorithm to find P * still applies, but when several x i are at the same location, it will typically deliver a randomized coupling, since an atom of µ may be mapped to multiple atoms of ν. 6 Further Properties Antitone Decomposition As seen in Example 4.5, P * is the sum of an identity coupling and an antitone coupling when the marginal densities satisfy a single-crossing condition. In this section, we analyze to which extent such a decomposition generalizes to other marginals. The first result (together with Proposition 2.6) shows that P * is always the sum of an identity coupling and countably many antitone couplings. We will see that in certain cases, the marginal measures for those antitone couplings are simply restrictions of µ and ν to specific intervals, as in the aforementioned example. In general, however, the decomposition remains more implicit as the marginal measures do not admit such a simple description. Proof. In view of Theorem 5.5, we may assume that µ, ν are atomless. For any continuous, nonnegative, nonconstant function G of finite variation with G(−∞) = G(∞) = 0, we define x G = min(arg max G) as the smallest global maximum point and set whereas if G ≡ 0, we use x G := −∞ instead. Note that G ′ is continuous, increasing on (−∞, x G ] and decreasing on [x G , ∞), with 0 ≤ G ′ ≤ G and max G ′ = max G.
Thus G ′ can be decomposed as G ′ = F µ ′ − F ν ′ where the singular measures µ ′ and ν ′ can be coupled by a directional antitone coupling. This coupling, while equal to P * (µ ′ , ν ′ ), will be denoted by P (G) for brevity. Moreover, µ ′ ≤ µ and ν ′ ≤ ν. Finally, the total variation Define F 1 := F and Using the above notation, P (F k ) is the directional antitone coupling between the singular measures µ ′ k , ν ′ k forming a decomposition for F ′ k . To see that On the other hand, V (F ′ k ) ≥ 2 max F k , so that max F k → 0; that is, F k uniformly decreases to zero and in particular F = k F ′ k . This shows that k P (F k ) is a coupling of µ and ν. Clearly this coupling is directional, and thus equal to P * (µ, ν) by Theorem 2.2 if it satisfies the constrained crossing property. To verify the latter, let x be a point of strict increase of F k and suppose that the transport map T k of P (F k ) maps x to y. Then F k (x) = F k (y) and F k (z) ≥ F k (x) > 0 for all z ∈ [x, y]. It follows for any j < n that F ′ j (z) < F j (z) for all z ∈ [x, y], which in turn implies that F ′ j is constant over the interval [x, y]. In other words, the couplings P (F j ) for j < k cannot transport any mass into the interval or out of the interval. This shows the constrained crossing property, and in addition that the marginals µ ′ j (resp. ν ′ j ) of P (F j ), j ≤ k are supported on disjoint sets which are finite unions of intervals. In particular cases, we can obtain the antitone couplings in P * explicitly as antitone couplings between disjoint intervals. Example 6.2 (Multiple-crossing Densities). Assume that µ and ν are atomless and that F = F µ − F ν is piecewise monotone (with finitely many pieces). Then by inspecting the proof of Proposition 6.1, we see that P * is the sum of the identical coupling of µ ∧ ν and finitely many antitone couplings between pairs of disjoint intervals. As an important special case extending Example 4.5, suppose that µ and ν have continuous densities that cross finitely many times. Then F = F µ − F ν = F µ−µ∧ν − F ν−µ∧ν is piecewise monotone and the optimal coupling between µ − µ ∧ ν and ν − µ ∧ ν is the sum of finitely many antitone couplings between disjoint intervals. In contrast to the above example, the following shows that a decomposition into antitone couplings between intervals is not possible in general. Example 6.3 (Absence of Antitone Intervals). Let µ be the Cantor distribution on [0, 1] and ν be uniform on [0, 2]. Clearly µ ∧ ν = 0. We first verify that µ st ν, or equivalently D(µ, ν) = ∅. Each element x ∈ C can be represented in base 3 as x = 2 ∞ n=1 x n 3 −n where x n ∈ {0, 1}. The comonotone transport T C given by T C (x) = 2 ∞ n=1 x n 2 −n is directional and transports µ to ν. Hence, µ st ν. Next, we show that P * ∈ D(µ, ν) does not contain any antitone couplings between intervals. Assume for contradiction that there exists an interval [a, b] ⊆ [0, 1] such that µ([a, b]) > 0 and T | [a,b] is the antitone mapping between µ| [a,b] and its image. This implies that there exists c such that µ([a, c]) > 0 and T transports µ| [a,c] to a distribution supported by (c, ∞). However, by Theorem 2.1, T transports µ| (a,∞) to a distribution ν a whose minimality property together with ν([a, c]) > 0 imply that ν a charges [a, c], a contradiction. 
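Referring back to the empirical-marginals construction in (a) and (b) before Section 6, the following is a minimal sketch in code for equally weighted empirical marginals. The matching rule used here is an assumption rather than a quotation of the construction: source atoms are processed from the largest to the smallest, and each one is matched to the smallest destination atom that is still unused and lies weakly to its right. The names directional_matching, xs, and ys are illustrative only.

```python
from typing import List, Sequence, Tuple


def directional_matching(xs: Sequence[float], ys: Sequence[float]) -> List[Tuple[float, float]]:
    """Greedy construction of a directional (y >= x) matching between two
    equally weighted empirical marginals with the same number of atoms.

    Assumption (one plausible reading of the construction, not taken
    verbatim from the text): source atoms are processed from largest to
    smallest, and each is matched to the smallest destination atom that is
    still available and lies weakly to its right.  If no such destination
    exists, no directional coupling exists, i.e. mu is not dominated in
    stochastic order.
    """
    if len(xs) != len(ys):
        raise ValueError("this sketch assumes n_mu == n_nu")
    remaining = sorted(ys)                 # available destinations, ascending
    pairs: List[Tuple[float, float]] = []
    for x in sorted(xs, reverse=True):
        # smallest remaining destination weakly to the right of x
        candidates = [y for y in remaining if y >= x]
        if not candidates:
            raise ValueError("no directional coupling exists for these marginals")
        y = candidates[0]
        remaining.remove(y)
        pairs.append((x, y))
    return pairs


# Example: mu has atoms {0, 1}, nu has atoms {1, 2} (weights 1/2 each).
# The greedy rule keeps the shared atom at 1 in place and sends 0 to 2,
# in line with the identity-plus-antitone structure discussed above.
print(directional_matching([0.0, 1.0], [1.0, 2.0]))   # [(1.0, 1.0), (0.0, 2.0)]
```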
Optimality as Unconstrained Transport The optimal directional coupling P * is also the optimizer for certain classical transport problems (unconstrained and with finite cost function) where the constraint is "not binding," although only for specific marginals. We confine ourselves to giving one example. Consider µ st ν and the transport problem inf P c(|y − x|) P (dx, dy) (6.1) over all couplings P of µ and ν. Suppose that c : R → R + is increasing and concave, so that c(|y − x|) is supermodular on H but (typically) not on R 2 . Proposition 6.4. If F = F µ − F ν is unimodal, then P * (µ, ν) is an optimal coupling for the unconstrained problem (6.1). If c is strictly concave, the optimizer is unique. This follows from the general results stated in [17,Part II]. A direct argument is sketched below. Proof. We know from Theorem 2.2 that P * is optimal among all directional couplings. To rule out that a non-directional coupling has a smaller cost, the key observation is that if P is an optimizer, it is concentrated on a c-cyclically monotone set Γ, which implies that Γ cannot contain pairs (x, y), (x ′ , y ′ ) with y < x and either (i) x ′ ∈ [y, x) and y ′ ≥ y or (ii) y ′ ∈ [x, y) and x ′ ≤ x. Together with the unimodality condition, this can be seen to imply the result. We omit the details in the interest of brevity. Other Constraints The directional constraint Y ≥ X naturally generalizes to Y ≥ X + D for a measurable function D : R → R such that x → x + D(x) is strictly increasing. For instance, if D ≡ d is constant, this means that the transport must travel as least a distance d to the right (or at most distance |d| to the left, if d < 0). While Y ≥ X is equivalent to P (H) = 1, the generalized constraint is expressed as P (D) = 1 for the epigraph D of x → x + D(x). We denote by D D (µ, ν) the set of all such couplings P of µ, ν. The construction of P * naturally extends to this constraint. Indeed, let Z(x) = x + D(x) and consider arbitrary distributions µ and ν on R. We define the transformed marginal µ ′ = µ•Z −1 and define µ D ν to mean that µ ′ st ν. Then µ D ν if and only if D D (µ, ν) = ∅, and more generally, the transformation Z induces a bijection between D D (µ, ν) and the set D(µ ′ , ν) of directional couplings between µ ′ and ν. If we define the analogues of the constrained crossing property, constrained submodularity, etc., for D, this bijection preserves the crossing/optimality properties and we find that P D * (µ, ν) := P * (µ ′ , ν) • (Z, Id) has the properties analogous to the optimal directional coupling for the constraint D. We omit the details in the interest of brevity.
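The reduction via the transformation Z can likewise be made concrete for empirical marginals. The sketch below reuses the same greedy matching rule as the previous sketch and therefore inherits the same caveat that this rule is an assumption; the constant offset D ≡ 0.5 in the usage example is arbitrary, and the names constrained_matching, xs, ys, and D are illustrative only.

```python
from typing import Callable, List, Sequence, Tuple


def constrained_matching(xs: Sequence[float], ys: Sequence[float],
                         D: Callable[[float], float]) -> List[Tuple[float, float]]:
    """Couple empirical marginals under the constraint y >= x + D(x) by the
    transformation Z(x) = x + D(x): greedily match the shifted sources Z(x)
    to the destinations in a directional way, then report the pairs in the
    original coordinates (cf. the bijection induced by Z in Section 6.3)."""
    shifted = sorted(((x + D(x), x) for x in xs), reverse=True)  # largest Z(x) first
    remaining = sorted(ys)
    pairs: List[Tuple[float, float]] = []
    for z, x in shifted:
        candidates = [y for y in remaining if y >= z]
        if not candidates:
            raise ValueError("the constraint cannot be satisfied for these marginals")
        y = candidates[0]
        remaining.remove(y)
        pairs.append((x, y))
    return pairs


# Constraint "travel at least 0.5 to the right": D == 0.5.
print(constrained_matching([0.0, 1.0], [1.0, 2.0], lambda x: 0.5))
# [(1.0, 2.0), (0.0, 1.0)]
```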
2020-02-21T02:01:00.449Z
2020-02-20T00:00:00.000
{ "year": 2022, "sha1": "4750373acfae41869ff361fddc5b51638326c72c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "799cc75521a76ccc7f99025c77f8bb132003f56d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
237544902
pes2o/s2orc
v3-fos-license
Lateral minimal approach to the terrible triad of the elbow: a treatment protocol in Beijing Jishuitan Hospital Background This study aimed to report the surgical techniques and results of treating coronoid process and radial head fractures combined with dislocation of the elbow (terrible triad of the elbow) using a single lateral incision, known as the extensor digitorum communis (EDC) split approach. Methods A retrospective analysis was performed of 109 patients with terrible triad of the elbow who had been treated by the authors from January 2013 to December 2019. The participants included 67 males and 42 females, with a mean age of 42.2 years (14–71 years). All participants were treated via a single lateral approach. The coronoid process was fixated with Kirschner wires combined with anterior capsule suture lasso fixation. For the radial head fracture, 58 cases were fixated by AO headless cannulated screw (AO HCS) and 51 cases by Acumed radial head replacement. In repair of the lateral collateral ligament (LCL) complex and the common extensor tendon, 28 cases used ETHIBOND suture through bone holes at the humeral lateral epicondyle, and the other 81 cases used suture anchors. No medial collateral ligament was repaired. A total of 46 participants were fixated with a Stryker dynamic joint distractor (DJD) II hinged external fixator to protect the bone and soft tissue. Results All participants were followed up from 6 to 60 months (mean, 36.1 months). Their elbow range of flexion and extension averaged 123.4°±20.7°, forearm rotation 151.0°±25.6°, and Mayo elbow performance score (MEPS) 92.3±8.8. There were 22 participants (19.5%) with ulnar nerve symptoms, 16 (14.7%) who had elbow stiffness, and 7 underwent secondary surgery, including 6 removals of internal fixation, 5 arthrolyses of the elbow, and 2 ulnar neurolyses. Conclusions Coronoid fractures, radial head fractures, and LCL injuries of the terrible triad of the elbow can be treated satisfactorily through a lateral minimal incision, combined with a hinged external fixation if necessary. Introduction The terrible triad of the elbow, which was first described by Hotchkiss in 1996, is usually characterized by elbow dislocation combined with fractures of the radial head and coronoid process of the ulna (1). Typically, the elbows display obvious posterolateral rotational instability and associated severe soft tissue injuries, especially the lateral collateral ligament (LCL) complex (2,3). Therefore, the treatment of this complex injury has posed a great challenge for orthopedic surgeons and is often associated with devastating complications including elbow stiffness (10.3%), failure of osteosynthesis (6.7%), and ulnar neuropathy (6.2%) (2,4-6). Recently, with more in-depth studies regarding elbow anatomy and biomechanics as well as the advancement of surgical techniques, the prognosis of such a complex injury has been greatly improved. However, the sample sizes of previous clinical studies have been relatively small and the surgical procedures for treating the terrible triad have remained controversial, especially concerning the specific surgical approaches (7-10). Regarding the surgical approaches, a posterior approach (7,8) or a combined medial and lateral approach (9,10) have been the preferred choices for most surgeons to restore elbow stability.
However, these 2 approaches have been found to increase intraoperative soft tissue disruption to the elbow, and thus, they may increase the risk of developing postoperative elbow stiffness, heterotopic ossification, or ulnar nerve symptoms. In this study, we proposed a treatment protocol for the terrible triad of the elbows using an extensor digitorum communis (EDC) split approach through a single lateral incision, which is generally 4 cm long with very few cases extending to 6 cm. Here, we aimed to introduce this method in detail and further evaluate the clinical outcomes of patients with terrible triad treated via this surgical approach. We present the following article in accordance with the STROBE reporting checklist (available at https:// dx.doi.org/10.21037/atm-21-2542). Methods Patients in Beijing Jishuitan Hospital Department of Orthopedic Trauma with acute terrible triad of elbows who required surgical treatment due to severe instability or displaced fracture fragments after closed reduction between January 2013 and December 2019 were included in this study. Retrospective clinical research was conducted with the following inclusion criteria: (I) treated by single lateral incision and EDC split approach; (II) complete perioperative information and postoperative follow-up data; (III) minimum 1 year follow-up time. The exclusion criteria were as follows: (I) age <14 years old; (II) pathologic fractures; (III) unclosed epiphysis; and (IV) lost to follow-up. A total of 136 patients with the terrible triad of the elbows underwent surgical treatment at our institution. After applying the inclusion and exclusion criteria, 109 patients were included in the study. For the patients who were excluded, there were 2 patients who were <14 years old, and there were 25 patients who were lost to follow-up. The mean age of all participants was 42.2±13.8 . The study was approved by Beijing Jishuitan Hospital Institutional Review Board (No. 201708-04) and written informed consent was obtained from all patients. All procedures performed in this study involving human participants were in accordance with the Declaration of Helsinki (as revised in 2013). All participants underwent closed reduction and immobilization with plaster in the emergency room to maximally restore joint congruence. All participants were treated via a single lateral incision, which was generally 4 cm (with very few cases extending to 6 cm), and EDC split approach to expose the injured LCL complex, the origin of common extensor tendon, radial head fracture, coronal process fracture, and anterior joint capsule, respectively ( Figure 1). Usually, a bare area was to be found on the posterior aspect of the lateral epicondyle of the distal humerus, due to the origin of the LCL complex having been torn apart. Through the "entrance" just distal to the bare area, the common extensor muscle was split open along the lateral epicondyle and the midline of the radial head. In this way, it was easier to avoid further disruption to collateral ligaments and surrounding muscles and preferable for the exposure and fixation of fractures of the radial head and coronoid process. However, the anterior aspect of the lateral epicondyle was usually intact and required additional stripping to enhance exposure. We firstly dealt with fractures of the coronoid process. 
The anterior joint capsule attached to the coronoid fracture fragment was sutured using ETHIBOND#2 (Ethicon Inc., Johnson and Johnson, New Brunswick, NJ, USA) and further reduced using the "lasso technique" without tightening the suture tails. Then, a pair of K-wires (diameter 1.5 mm) were inserted from the dorsal side of the proximal ulna into the center of the base of the coronoid fragment. After adequate reduction of the coronoid process using the lasso, the K-wires were further inserted into the fragment for stabilization. The suture tails were tensioned and tied on the subcutaneous posterior border of the ulna. Then, the tails of the K-wires were bent, shortened, and buried beneath the skin. The sequential order of K-wire fixation and suture tail tightening is crucial for preventing the secondary fragmentation and loosening of the lasso suture. If the fragment was too small or comminuted to be fixed by K-wires, we just performed a "lasso technique" to stabilize the coronoid process. Secondly, we dealt with the radial head fractures. A total of 58 participants were fixed by 2.4 mm headless cannulated screws (HCS), and the remaining 51 participants were treated by radial head replacement due to severe comminution. Thirdly, we dealt with the soft tissue. The LCL complex and the origin of the common extensor tendon were repaired using either suture anchor (81 participants) or transosseous braided suture through the lateral epicondyle using ETHIBOND#2 (28 participants). In all participants, the medial collateral ligaments (MCL) were not surgically treated and left as they were. Lastly, the elbow joint stability must be carefully examined after open reduction and internal fixation. Physical examination revealed a mild "drop sign" on lateral radiographs when flexing the elbow joints in 46 participants, and they were further treated by a Stryker dynamic joint distractor (DJD) II hinged external fixator (Stryker Corp., Kalamazoo, MI, USA) in order to achieve better joint stability and protect the fixation of bony structures and repair of soft tissue (Figure 2). This supplementary intervention was performed to enable patients to achieve full range of motion exercises in early stage rehabilitation. A total of 63 participants achieved adequate stability when flexing and extending the elbow joint during surgery, and they were protected by braces or plaster casts (for less than 1 week) as precautionary measures in order to maintain joint congruence (Figure 3). The humeroulnar joints of all our participants were not fixed by K-wires. Drainage was removed when output was less than 30 mL during the first 24-48 hours after surgery. Glucosamine was routinely administered at 100 mg TID for 6 weeks postoperatively. The K-wires for the fixation of the coronoid process fracture were usually removed after 2-3 months. Regarding postoperative rehabilitation, for participants treated by external fixators, full range of motion (ROM) active and gentle passive exercises were initiated on the second day after surgery. Participants without external fixators were protected by braces or casts for a week without active extension exercises. Moreover, elbow extension was limited to within 30 degrees during the first month of rehabilitation. Violent passive massage or stretching conducted by others was strictly forbidden. Functional outcomes were collected and documented in our database after routine follow-ups in outpatient clinics.
Clinical results of the latest follow-up were extracted from our database and evaluated using the parameters including chief complaints, ROM, Mayo Elbow Performance Score (MEPS) (11,12), visual analogue scale (VAS), complications, and secondary operations. The ROM and MEPS was measured by a doctor who wasn't the surgeon. Statistical analysis The software SPSS 24.0 (IBM Corp. Armonk, NY, USA) was used for statistical analysis of all follow-up data. For the quantitative variables, the descriptive statistics included means, medians, standard deviations, and ranges. Results A total of 109 patients with the terrible triad of the elbows were collected from our database. The average follow-up duration was 36.1±11.1 months (6-60 months). The baseline characteristics were as follows ( Table 1). The average ROM of flexion and extension was 123.8°±20.5°, ROM of rotation was 151.1°±25.1°, MEPS was 92.4±8.8 (68.8% were excellent), and VAS was 0.9±1.5. There were 22 participants (19.5%) who experienced ulnar nerve symptoms, which is defined as local sensory abnormality or weakened muscle strength after surgery. There were 16 participants (14.7%) with elbow stiffness, which is generally defined as an elbow ROM less than 100° either in flexion-extension or pronation-supination. There were 7 participants who underwent secondary surgery, including 6 removals of internal fixation, 5 arthrolyses of the elbow, and 2 ulnar neurolyses. None of the participants experienced infection or bone nonunion ( Table 2). Discussion The terrible triad of the elbow refers to complex fracture and dislocation of radial head fracture, coronoid fracture, and elbow dislocation, which often leads to complications such as elbow stiffness, failure of osteosynthesis, ulnar neuropathy and recurrent instability (2,6). The coronoid process and the radial head are the primary constraints against posterior translation of the forearm. After a fall on the outstretched hand with the elbow extended or slightly bent, the forearm translates posteriorly, which leads to transverse shearing fractures of the coronoid, Figure 3 The postoperative X-ray without an external fixator. and the "anterior rim" of the radial head hits against the capitulum causing a radial head fracture. As a result, the terrible triad injury occurs. With thorough understanding of elbow anatomy and biomechanics and the advancement of surgical techniques, orthopedists have gradually established systematic treatment protocols and rehabilitation strategies for the terrible triad, which has significantly improved its prognosis. The goal of surgical management of the terrible triad is to achieve concentric reduction and stability of the elbow joint, which allows early mobilization to maximally restore elbow function (13). For less severe injuries, closed reduction and hinge external fixation can be used to achieve satisfactory results. Most instances of the terrible triad will lead to bony block and difficulty in maintaining the stability of the elbow, so treatment is usually via surgery. Common surgical approaches to the terrible triad include the posterior approach, combined medial and lateral approach, and anterior approach (7-10). Many surgeons may choose the first 2 approaches to reconstruct the anatomy. Lindenhovius et al. 
(14) followed 18 terrible triad patients through the posterior approach, achieving good functional outcomes with a flexion-extension arc of 119°and a rotational range of 141°, and the average MEPS of these patients was 88 points, after a mean follow-up of 24 months. Zhang et al. (10) followed 21 terrible triad patients undergoing internal fixation through the combined medial and lateral approach for an average of 32 months, achieving good functional outcomes with a flexion-extension arc of 126.0°±4.8° and a rotational range of 139.0°±4.1°. The average MEPS of these patients was 95.2 points. Although these 2 approaches can more satisfactorily restore the stability of the elbow joint, they significantly increase surgical trauma, which adds to the probability of joint stiffness, ectopic ossification, and ulnar nerve symptoms. The anterior approach exposes the coronoid process more optimally, but also damages the anterior joint capsule. Once stiffness occurs, it will be difficult to improve the extension of the elbow. Therefore, the anterior approach is not recommended. Pugh et al. (13) proposed a single lateral approach to the terrible triad, which achieved satisfactory outcomes. A year later, McKee et al. (15) suggested a standard surgical protocol for the terrible triad of the elbow, indicating that a single lateral approach was enough for most cases, and the combined medial approach was needed only when the coronoid process was difficult to expose by the lateral approach, preoperative ulnar nerve injuries existed, and MCL needed to be repaired. At Beijing Jishuitan Hospital, we have treated the terrible triad of the elbow through a lateral minimal approach. Compared with the other abovementioned approaches, the single lateral minimal approach damages less soft tissue, maintaining the stability of the injured elbow, which may lead to better functional outcomes. Through the lateral minimal approach, the injured lateral ulnar collateral ligament (LUCL), radial head fracture, coronoid fracture, and anterior joint capsule were revealed from shallow to deep, and then the coronoid process, anterior joint capsule, radial head, LCL complex, and extensor tendon origin were repaired from deep to shallow. The lateral epicondyle was used as an anatomical landmark. A 4 cm incision was made along the line of the supracondylar crest-lateral epicondyleradial head. Soft tissue avulsion was usually found behind the midline of the lateral epicondyle, forming a bare area. Through the original rupture, the EDC was split along the lateral epicondyle of the humerus and the midline of the radial head. Dissection did extend distally to the radial neck, to avoid damage to the deep branch of the radial nerve below the radial tubercle. Proximal dissection of the brachioradialis and extensor carpi radialis longus and brevis anterior to the midline of the lateral epicondyle was performed, and the anterior joint capsule was retracted forward. Even if the radial head fracture was relatively intact with small fragments, the coronoid process could be well exposed and fixed through this approach. In cases treated with radial head arthroplasty, the coronoid process was more clearly exposed. Anatomical studies (16) also confirmed that this method could better expose the radial head and coronoid process. 
Our team believe that the terrible triad is caused by posterolateral rotational injury; as for other radial head fractures, elbow dislocations combined with anteromedial coronoid process compression fractures caused by varus stress, the articular surface of the coronoid process needs to be reduced and fixed with a buttress plate from the medial side, so they are diagnosed as varus posteromedial instability instead of the terrible triad. The most common coronoid fracture in terrible triad is a tip fracture (17), which is mostly the anterolateral part of the coronoid process, generally not exceeding the sublime tubercle; the ulnar attachments of MCL are often intact, so a medial buttress plate is not usually needed. Due to the coronoid fracture fragments in the terrible triad usually being small and comminuted, the use of screws can easily cause the fragments to break again, increasing the difficulty of the operation and risk of ineffective fixation. The coronoid process can be fixed with 2 Kirschner wires from the dorsal side of the proximal ulna anteriorly and sutured to the anterior joint capsule through 2 bony holes to maintain the anterior stability of the elbow. In cases with small coronoid fragments, a simple fixation with sutures is performed. In patients with severe elbow injuries, the anterior soft tissue is torn from the anterior side of the proximal ulna, resulting in a dislocation tendency of the humeroulnar joint. In these cases, the tension of the anterior joint capsule is more important than the osseous stability of the coronoid process, and this method can achieve satisfactory stability of humeroulnar joint. It is not recommended to resect fragments of radial head in terrible triad, which often leads to postoperative instability. The radial head may be fixed with 2 crossed countersunk screws other than plates after anatomical reduction when possible, so as to reduce implant irritation. If the radial head is severely comminuted or has poor bone density, radial head arthroplasty should be considered. Attention should be given to the height and diameter of the prosthesis in the operation to avoid postoperative instability or "overstuffing" syndrome. Two studies have focused on the clinical results of ORIF versus replacement of the radial head regarding the terrible triad injuries. Watters et al. (18) did not observe any significant differences between groups in terms of ROM and DASH at a minimum of 18 months follow-up. Leigh et al. (19) found that revision surgery was more common in the ORIF group (5/13) than in the radial head replacement group (2/11) after a mean follow-up of 41 months. The repair of the attachments of the LCL complex and the common extensor tendon is critical to postoperative stability (8). They can be repaired by drilling and suturing on the lateral epicondyle or using anchors. It is generally suggested that reconstruction of LUCL should be performed at 40°-50° elbow extension, but when repairing fresh injuries, reconstruction at 90° elbow flexion and a neutral rotational position of forearm is more convenient (20). Whether to repair MCL is still controversial. Most surgeons believe that it is not necessary to repair MCL, because the extra medial incision will further increase surgical trauma to the soft tissues and cause postoperative complications, especially elbow stiffness (3,7,15,21). The elbow is a triangular stable structure composed of medial, lateral, and anterior parts. 
After the anterior joint capsule is repaired, flexion-extension stability is established; after the lateral structures are reconstructed, rotational stability reappears. The MCLs of patients with the terrible triad often suffer from incomplete injuries, and valgus is generally forbidden during the rehabilitation process, which allows MCLs to heal gradually over 2 months. Therefore, it is usually not necessary to deliberately add a medial incision for repair. After the repair of fractures and soft tissues, the stability of the elbow joint must be verified. An elbow with a full flexion-extension arc under anesthesia without dislocation is considered stable and no stress test is required (22,23). If the elbow is stable intraoperatively, short-term plaster immobilization for within 1 week is sufficient. When the elbow is still unstable after reconstructing the osseous structures and ligaments, a hinged external fixator is applied to protect the repaired bones and soft tissues, maintain the stability of the joint, and enable the patient to mobilize early. Pugh et al. (13) followed 36 terrible triad patients undergoing internal fixation through the single lateral approach for an average of 34 months, achieving good functional outcomes with a flexion-extension arc of 112°±11° and a rotational range of 136°±16°. The average MEPS of these patients was 88 points. Gong et al. (24) compared the outcomes of the single lateral approach with the combined medial and lateral approach for terrible triad of the elbow, and found that the single lateral approach provided better functional results and a lower incidence of postoperative heterotopic ossification. From 2013 to 2019, 109 patients with terrible triad were treated through a lateral minimal approach at Beijing Jishuitan Hospital. At a mean time of 36.1±11.1 months postoperatively, the flexion-extension ROM of the elbow averaged 123.4°±20.7° degrees and forearm rotation averaged 151.0°±25.6° degrees. The mean MEPS was 92.4±8.8 points, with a low reoperation rate of 6.4%, demonstrating satisfactory short-term functional outcomes of the lateral minimal approach. This study has the following limitations: (I) as a retrospective study, the result is prone to have selection bias; (II) the size of this study, though comparable to or even larger than similar studies, may not be large enough to show the functional outcomes of the lateral minimal approach; (III) the measurement of ROM of elbows was performed by the same doctor, and there may have been favour detection bias. Conclusions Coronoid fractures, radial head fractures, and LCL injuries in the terrible triad of the elbow can be treated satisfactorily through a lateral minimal incision, combined with a hinged external fixation if necessary. The method described above can restore the normal anatomy of the elbow and provide sufficient stability, which may lead to a reduction in the incidence of elbow stiffness by reducing surgical trauma and promoting early mobilization.
2021-09-09T20:38:58.369Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "cbe114a52d09e9bc574095f74702b31bcb9f4c20", "oa_license": null, "oa_url": "https://doi.org/10.21037/atm-21-2542", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "37f75a046b3affc3f192ffcb1185d42118df9cd5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256546553
pes2o/s2orc
v3-fos-license
MULTIPLE ORGAN DYSFUNCTION SYNDROME AND DISSEMINATED INTRAVASCULAR COAGULATION CAUSING EXTENSIVE MULTIPLE AMALRIC TRIANGULAR CHOROIDAL INFARCTIONS Multiple organ dysfunction syndrome and disseminated intravascular coagulation may lead to extensive bilateral choroidal infarctions. From the *West Virginia University Eye Institute, Morgantown, West Virginia; and †Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania. [4][5][6] In this report, we present a case of multiple organ dysfunction syndrome (MODS) and disseminated intravascular coagulation (DIC), leading to bilateral multiple and extensive choroidal infarcts. Case Report A 49-year-old nonpregnant white woman with hypertension and otherwise unremarkable medical and surgical history presented to the emergency department after a suicidal attempt; the patient had ingested a whole bottle of oxycodone-acetaminophen, 20 pills of oxycontin of unknown dosage, six pills of 12.5-mg zolpidem, two pills of hydrochlorothiazide, and a large amount of vodka over a 24-hour period. At presentation, the patient was febrile, tachycardic, and hypotensive (82/42 mmHg). She was subsequently intubated for acute hypoxic respiratory failure with presumed septic shock caused by aspiration pneumonia in the setting of acute opioid intake. The patient further developed MODS, including liver and kidney failure, as well as a high troponin level because of demand ischemia. An elevated international normalized ratio (INR, 2.5), a raised fibrin degradation product D-dimer (>30.0 mg/L, reference <0.59), a low platelet count (43,000), and a slightly reduced fibrinogen level (121) indicated DIC secondary to sepsis and organ failure. The patient survived, but 1 week after extubation, she complained of blurred vision in both eyes. Visual acuity was 20/200 in the right eye and 20/100 in the left eye and improved with pinhole to 20/60 and 20/40, respectively. The initial fundus examination revealed multiple dot-blot hemorrhages and cotton wool spots. These resolved, and subsequently, she was noted to have multiple areas of triangular granular pigmentation consistent with choroidal infarcts (Figure 1). Optical coherence tomography (OCT, Heidelberg Engineering, Heidelberg, Germany) demonstrated diffuse thinning of the choroid with overlying retinal pigment epithelium atrophy and scattered pigment deposits (Figure 2). Fluorescein angiography (FA, Heidelberg Engineering) indicated delayed patchy filling followed by late staining (Figure 3). Goldmann visual field (GVF, Octopus perimetry-Haag-Streit, Koeniz, Switzerland) testing displayed discrete areas of visual field deficits in both eyes (Figure 4). The best-corrected final visual acuity at the most recent visit remained at 20/100 in the right eye and 20/30 in the left eye. Discussion The choroid is one of the most highly perfused tissues in the human body. Its vascular supply is subserved by the short PCA and the long PCA supplying the posterior and anterior portions of the choroid, respectively. Both short and long PCA are terminal arterioles supplying the end tissue of the choroid in a wedge-shaped pattern whose apex points toward the posterior pole while the base points toward the periphery.3 Animal model research revealed that triangular grayish pigmentation appeared within 18-24 hours after initial PCA occlusion.7
On fluorescein angiography, delayed choroidal filling was noted in the early phase, followed by late hyperfluorescence,4 likely from transmission defects because of overlying retinal pigment epithelium atrophy as demonstrated in our patient (Figure 3). Optical coherence tomography demonstrated outer retinal necrosis corresponding to the Amalric triangular area of choroidal infarction along with subsequent retinal pigment epithelium reconstitution and clumping (Figure 2).3 Goldmann visual field testing revealed wedge-shaped patchy areas of visual field defect corresponding to the Amalric triangles in both fundi (Figure 4). Although the Amalric sign has been previously reported in patients with vaso-occlusive diseases, including giant cell arteritis,1,4 Raynaud disease,1 polyarteritis nodosa,6 globe trauma,2 retrobulbar hemorrhage,4 sickle cell hemoglobinopathy,8 and cocaine use,5 Amalric triangle choroidal infarction in the setting of DIC is rare. Disseminated intravascular coagulation presents as an acquired systemic activation of the coagulation cascade from either infectious or noninfectious insults, including trauma, hypotension, and sepsis.9 Vascular endothelial dysfunction and microvascular thrombosis from excessive cytokine-initiated coagulation may lead to multiorgan hypoperfusion and dysfunction, including the choroid. Ocular involvement with DIC has been reported; Cogan10 reported a series of seven DIC patients with ophthalmic manifestations including serous retinal detachment, choroidal hemorrhages, and pigment epithelium atrophy over the foci of choriocapillary occlusion. Another case reported a 27-year-old woman with DIC of unknown etiology with scattered yellow-grayish plaques in the fundi in the posterior choroid. Ocular histopathologic analysis obtained at autopsy using a phosphotungstic acid-hematoxylin stain revealed fibrin clots in both the larger choroidal vessels and the choriocapillaris.9 In each of those cases, the extent of the choroidal infarcts was not as strikingly large, triangular, and impressive as in our patient, suggesting that DIC most likely involves microinfarcts at the level of the choriocapillaris but rarely of the larger choroidal vessels.10 In this report, our patient had multiple hypogenic insults to the choroid including DIC, MODS, and severe hypotension requiring exogenous adrenergic support, as well as hypertensive crisis that followed from acute kidney injury, all of which contributed to choroidal infarction, not only at a microvascular but also at a macrovascular level, including the larger choroidal vessels. In our case, the DIC was initiated by the MODS and presumed septic shock. Once DIC is initiated, it is self-propagating and cascades into a systemic dysregulation that further worsens the underlying MODS. The multiple large infarcts in our patient were a manifestation of widely deranged systemic hemostasis. The close association between DIC and severe hypertension or hypotension, as evidenced in cases such as pre/eclampsia, amniotic fluid embolism, renal failure, or septic shock, likely led to the Amalric choroidal triangular sign in our patient. No direct association between choroidal infarction and excessive ingestion of opioids or gamma-aminobutyric acid (GABA) agonists, such as alcohol and zolpidem, has been reported. In conclusion, we present a case of extensive, large, and bilateral Amalric triangular choroidal infarctions occurring as a result of multiple hypogenic insults causing MODS and DIC.
Fig. 1. A color fundus photograph at the most recent visit illustrating triangular choroidal infarction bilaterally with areas of overlying retinal pigment epithelium atrophy.
Fig. 2. An optical coherence tomography at 1 month after initial assessment (top row) and at 10 years (bottom row). Pigment excrescence and outer retinal atrophy overlie the area of choroidal infarction.
Fig. 3. The early phase (top row) of fluorescein angiography demonstrates delayed choroidal filling in the areas of choroidal infarction, followed by late hyperfluorescence (bottom row) from window defects through the areas of retinal pigment epithelium atrophy.
2023-02-04T06:17:21.656Z
2022-10-29T00:00:00.000
{ "year": 2022, "sha1": "7ade943d788b9cd2072e1269222b2ada6d3a6d03", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/icb.0000000000001360", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b0d2356ebfced3462f5c2f6caa73365431b74d29", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248211032
pes2o/s2orc
v3-fos-license
Development and Functional Characterization of a Versatile Radio-/Immunotheranostic Tool for Prostate Cancer Management Simple Summary In previous studies, we described a modular Chimeric Antigen Receptor (CAR) T cell platform which we termed UniCAR. In contrast to conventional CARs, the interaction of UniCAR T cells does not occur directly between the CAR T cell and the tumor cell but is mediated via bispecific adaptor molecules so-called target modules (TMs). Here we present the development and functional characterization of a novel IgG4-based TM, directed to the tumor-associated antigen (TAA) prostate stem cell antigen (PSCA), which is overexpressed in prostate cancer (PCa). We show that this anti-PSCA IgG4-TM cannot only be used for (i) redirection of UniCAR T cells to PCa cells but also for (ii) positron emission tomography (PET) imaging, and (iii) alpha particle-based endoradiotherapy. For radiolabeling, the anti-PSCA IgG4-TM was conjugated with the chelator DOTAGA. PET imaging was performed using the 64Cu-labeled anti-PSCA IgG4-TM. According to PET imaging, the anti-PSCA IgG4-TM accumulates with high contrast in the PSCA-positive tumors of experimental mice without visible uptake in other organs. For endoradiotherapy the anti-PSCA IgG4-TM-DOTAGA conjugate was labeled with 225Ac3+. Targeted alpha therapy resulted in tumor control over 60 days after a single injection of the 225Ac-labeled TM. The favorable pharmacological profile of the anti-PSCA IgG4-TM, and its usage for (i) imaging, (ii) targeted alpha therapy, and (iii) UniCAR T cell immunotherapy underlines the promising radio-/immunotheranostic capabilities for the diagnostic imaging and treatment of PCa. Abstract Due to its overexpression on the surface of prostate cancer (PCa) cells, the prostate stem cell antigen (PSCA) is a potential target for PCa diagnosis and therapy. Here we describe the development and functional characterization of a novel IgG4-based anti-PSCA antibody (Ab) derivative (anti-PSCA IgG4-TM) that is conjugated with the chelator DOTAGA. The anti-PSCA IgG4-TM represents a multimodal immunotheranostic compound that can be used (i) as a target module (TM) for UniCAR T cell-based immunotherapy, (ii) for diagnostic positron emission tomography (PET) imaging, and (iii) targeted alpha therapy. Cross-linkage of UniCAR T cells and PSCA-positive tumor cells via the anti-PSCA IgG4-TM results in efficient tumor cell lysis both in vitro and in vivo. After radiolabeling with 64Cu2+, the anti-PSCA IgG4-TM was successfully applied for high contrast PET imaging. In a PCa mouse model, it showed specific accumulation in PSCA-expressing tumors, while no uptake in other organs was observed. Additionally, the DOTAGA-conjugated anti-PSCA IgG4-TM was radiolabeled with 225Ac3+ and applied for targeted alpha therapy. A single injection of the 225Ac-labeled anti-PSCA IgG4-TM was able to significantly control tumor growth in experimental mice. Overall, the novel anti-PSCA IgG4-TM represents an attractive first member of a novel group of radio-/immunotheranostics that allows diagnostic imaging, endoradiotherapy, and CAR T cell immunotherapy. Introduction While localized prostate cancer (PCa) is well manageable, treatment options for metastatic disease are still limited [1]. In the last decade, significant progress has been especially made towards PCa theranostics aiming to design antibody (Ab)-or small moleculebased radiopharmaceuticals for both tumor imaging and targeted endoradiotherapy [2]. 
Despite advances in imaging with [ 68 Ga]Ga-PSMA-11 or [ 18 F]F-DCFPyl for PCa detection/staging [3][4][5][6], and encouraging results achieved with novel radiotherapeutic agents, such as the bone-targeting radionuclide radium-223 [7] or [ 177 Lu]Lu-PSMA-617 [8], eventually all patients with advanced disease will progress to metastatic castration-resistant prostate cancer (mCRPC). The low five-year survival rate of 30%, together with a median survival of only 10 to 21.7 months [1,9,10], underline that new treatment options for highrisk mCRPC patients are urgently needed. An emerging therapeutic modality comprises targeted immunotherapies with chimeric antigen receptors (CARs) T cells that revolutionized in particular the therapeutic landscape of hematologic malignancies [11][12][13], and thus hold also great promise for solid tumor treatment. However, successful clinical translation towards solid cancers faces some obstacles, as reflected by clinical results with both prostate stem cell antigen (PSCA)specific (NCT03873805, NCT02744287, NCT03198052) and prostate-specific membraneantigen (PSMA)-specific CAR T cells (e.g., NCT01140373, NCT01140373, NCT03089203, NCT04053062), which are rather suboptimal in efficacy. In particular, CAR T cells have to overcome an immunologically cold tumor microenvironment (TME) [14] and efficiently traffic to and infiltrate distinct tumor sites that are located in the bones in >80% of cases [15]. Moreover, all PCa-specific tumor-associated antigens (TAAs) described so far are also expressed on healthy tissues, increasing the risk for on-target/off-tumor effects. Taken together, CAR T cell therapies must adapt to the challenging nature of PCa to augment their efficacy and safety in patients. One possibility to achieve this goal is the use of adaptor CAR platforms [16], which provide the required flexibility. In our group, we have established the switchable CAR platform "UniCAR" [17][18][19][20][21][22]. In contrast to conventional CAR T cells, UniCAR T cells do not recognize a surface molecule but bind a short peptide epitope (UniCAR epitope) [23][24][25][26][27]. Therefore, cross-linkage of UniCAR T cells with target cells and thus anti-tumor activity is mediated via the target module (TM) carrying the UniCAR epitope and a tumor-specific binding domain [28][29][30]. With regard to management of adverse reactions including on-target/off-tumor effects, the UniCAR system offers the possibility of therapy control via the separated TMs. As shown by several preclinical and a first clinical study (NCT04230265) [31], permanent infusion of TMs in a tumor patient turns anti-tumor activity of UniCAR T cells "ON". After elimination of the TM from the body, UniCAR T cells automatically return to a "switch OFF" mode [32][33][34][35][36][37]. Besides the problem of steering, the UniCAR platform may also overcome further challenges in PCa therapy by repurposing TMs as theranostic compounds. In this regard, TMs can be labeled with radionuclides suitable for positron emission tomography (PET) or single photon emission computed tomography (SPECT) and used for diagnostic imaging of the primary tumor and metastases prior and during therapy [20]. This would allow not only the staging of the disease but also the assessment of the response to therapy. Based on the acquired information, more accurate treatment decisions can be taken, e.g., whether the treatment must and can be extended. 
Alternatively, TMs could be radiolabeled with a therapeutic radionuclide such as lutetium-177 or actinium-225 for targeted radioimmunotherapy. Besides their own therapeutic effects, such TMs should increase local inflammation, which might help to attract and activate immune effector cells, including CAR T cells. Altogether, the application of one molecule for diagnosis, endoradiotherapy, and CAR T cell therapy could lead to a novel, beneficial, combinatorial cancer therapy option for PCa patients. To show first proof-of-concept for this idea, we developed a TM for the UniCAR system with pharmacological features that allow its specific accumulation at the tumor site for both diagnostic PET imaging and radioimmunotherapy. For this purpose, a novel recombinant IgG4-based TM directed against the PSCA was constructed. The novel anti-PSCA IgG4-TM was functionalized with the chelator DOTAGA for radiolabeling with either the PET radionuclide 64 Cu or the therapeutic radionuclide 225 Ac. In this study, we evaluated its use for UniCAR T cell therapy, after radiolabeling for diagnostic imaging, as well as targeted alpha therapy (TAT). Cell Lines All cell lines were obtained from American Type Culture Collection (ATCC, Manassas, VA, USA). The PCa cell lines PC3 and LNCaP were genetically modified to overexpress human PSCA according to a previously published protocol [34]. PC3 wildtype (wt), LNCaP-PSCA, and PC3-PSCA cells were used for in vitro experiments. For in vivo stud-ies, PC3-PSCA tumor cells were further genetically engineered via lentiviral transduction to overexpress the genes encoding firefly luciferase and PSMA (see [34]). In all in vivo experiments, the resulting PC3-PSCA/PSMA Luc+ cell line was transplanted. Besides the original PCa cell line, all genetically modified PCa cell lines were authenticated by genetic genotyping (ATCC). Tumor cell lines or Ab-producing 3T3 cell lines were cultured in RPMI complete media [38] or DMEM complete media [38], respectively. All cell lines were incubated at 37 • C with 5% CO 2 . Generation and Cultivation of UniCAR T Cells T cells were isolated from buffy coats of healthy donors (German Red Cross, Dresden, Germany) by density gradient centrifugation and subsequent magnetic isolation via the Pan T Cell Isolation Kit (Miltenyi Biotech GmbH, Bergisch Gladbach, Germany). The study received approval by the institutional review board of the Faculty of Medicine of the TU Dresden (EK138042014). T cells were maintained in RPMI complete medium with 50 U/mL human IL-2 (Miltenyi Biotec GmbH) until UniCAR T cell production. Lentiviral transduction of human T cells was performed as described previously [39,40]. Following activation with T Cell TransAct™ (Miltenyi Biotec GmbH), T cells were infected 2-3 times with lentiviral particles encoding for the UniCAR. During genetic modification and subsequent expansion, T cells were kept in TexMACS™ medium (Miltenyi Biotec GmbH) supplemented with human IL-2, human IL-7, and human IL-15 (all Miltenyi Biotec GmbH). Transduction efficiency was monitored by expression of the co-translated EGFP marker protein using flow cytometry. Experiments were conducted with unsorted UniCAR T cells that were cultured without cytokines for 24 h. Cloning of Recombinant Antibodies All recombinant PSCA-specific antibodies (Abs) were generated based on the sequences of the fully human anti-PSCA IgG1 monoclonal antibody (mAb) Ha1-4.121 given in the patents EP 2,428,522 A1 and US 8,013,128 B2. 
To clone the single-chain fragment variables (scFvs) anti-PSCA Ha1-4.121c.5 and anti-PSCA Ha1-4.121c.26, V L c.5 or V L c.26, respectively, were fused in silico to V H via a flexible peptide linker ((G 4 S) 2 -GASAA-(G 4 S) 2 ) in V L -V H orientation. For secretion into the cell culture supernatant, the scFvs were Nterminally linked to the murine Ig kappa leader sequence. At the C-terminus of the scFvs, we fused the UniCAR epitope (E5B9). Downstream of the UniCAR epitope, we added a myc-and hexahistidine (His)-tag for Ab purification and detection. The scFv sequences were ordered from Eurofins Genomics (Ebersbach, Germany) and subsequently cloned into the lentiviral expression vector p6NST50 via the restriction sites XbaI and KspAI. After verification of the anti-PSCA reactivity, the functional anti-PSCA scFv sequence was fused to an IgG4-Ab backbone according to previously published constructs [36,41]. For this purpose, the anti-PSCA scFv Ha1-4.121c.26 was amplified by PCR using the primers scFvHa1-4.121c.26-SfiI for (5 -GGCCCAGCCGGCCGGTTCCGACATCGTCATG-3 ) and scFvHa1-4.121c.26-MreI rev (5 -CGCCGGCGCAGAGCTCACTGTCACGAG-3 ) to insert the restriction sites SfiI and MreI. The PCR product was subcloned in pGEMTeasy vector (Promega GmbH, Mannheim, Germany) and subsequently inserted into the lentiviral expression vector p6NST50_huIgG4-Fc via SfiI/MreI restriction sites. Finally, the UniCAR epitope E5B9 and a His-tag were introduced at the 3 end of the PSCA-IgG4-Fc sequence. Therefore, a double-stranded DNA sequence with 5 and 3 overhangs compatible with XbaI and KspAI restriction sites was prepared by annealing the oligos E5B9-His for (5 -ctagaGGCGGCGGAGGGTCTGCAGCTGCCAAACCTCTGCCCGAAGTGACAGACGA-GTATGGGCCAGGCGGTGGTGGAAGCCACCATCATCACCACCATTGA-3 ) and E5 B9-His rev (5 -TCAATGGTGGTGATGATGGTGGCTTCCACCACCGCCTGGCCCAT-ACTCGTCTGTCACTTCGGGCAGAGGTTTGGCAGCTGCAGACCCTCCGCCGCCT 3 ). Using Thermo Scientific™ Buffer R (Thermo Fisher Scientific GmbH, Schwerte, Germany) as reaction buffer, 200 pmol of each of the oligos were incubated at 95 • C for 5 min. After cooling the reaction mixture for 15 min at room temperature, a 1:10 dilution of the annealed oligos was used for ligation into the vector p6NST50_PSCA-huIgG4-Fc via XbaI/KspAI restriction sites. All restriction enzymes and buffers were purchased from Thermo Fisher Scientific GmbH (Schwerte, Germany). Expression, Purification, and Biochemical Characterization of Ab Constructs To generate permanent Ab-producing cell lines, murine 3T3 cells were modified via lentiviral transduction [38,42] with the lentiviral vectors encoding the anti-PSCA scFv Ha1-4.121c.5 (Mw = 31 kDa), anti-PSCA scFv Ha1-4.121c.26 (Mw = 32 kDa) or anti-PSCA IgG4-TM (Mw = 112 kDa). Purification of recombinant Abs from cell culture supernatants was performed according to already described procedures: The scFvs were purified by Ni-NTA affinity chromatography [38], whereas the purification of the anti-PSCA IgG4-TM was achieved via protein A affinity chromatography [41]. After dialysis against 1 × PBS, purity and concentration of recombinant proteins were determined in the elution fractions by SDS-PAGE under reducing conditions and subsequent staining with Coomassie Brilliant Blue G250 [43] or Quick Coomassie ® Stain (Serva, Heidelberg, Germany). For immunoblotting, proteins were transferred to nitrocellulose membranes and detected via the C-terminal His-tag as previously described [38,42]. 
Dimerization of the anti-PSCA IgG4-TM was further analyzed via size exclusion high-performance liquid chromatography (SE-HPLC) as detailed in Albert et al. [44].

Flow Cytometric Binding Analysis
Binding properties and affinities of the novel PSCA-specific recombinant Abs were assessed by flow cytometry according to previously published protocols [45]. Briefly, PC3-PSCA tumor cells were incubated with recombinant Abs at the indicated concentrations for 1 h at 4 °C. Ab binding was detected via either the His- or E5B9-tag, using a PE-labeled anti-His Ab (Miltenyi Biotec GmbH) or 10 µg/mL of the anti-La (5B9) mAb [46] in combination with the Goat anti-mouse IgG Fc Cross-Adsorbed Secondary Ab, PE (Thermo Fisher Scientific GmbH) or the PE Goat anti-mouse IgG (minimal x-reactivity) Ab (Biolegend, San Diego, CA, USA), respectively. Stained cells were analyzed with the MACSQuant Analyzer 10 and MACSQuantify Software (Miltenyi Biotec GmbH).

Activation Assay and Enzyme-Linked Immunosorbent Assay (ELISA)
5 × 10^4 UniCAR T cells were incubated alone or in the presence of 1 × 10^4 tumor cells in a 96-well U-bottom plate. Recombinant Abs were added at a concentration of 5 nM. After 24 h, cell-free supernatant was harvested and analyzed for secreted cytokines by ELISA as previously described [45]. The Human TNF ELISA Set, Human IFN-Gamma ELISA Set, Human IL-2 ELISA Set, and BD OptEIA Reagent Set B were purchased from BD Biosciences (Becton Dickinson GmbH, Heidelberg, Germany) and used according to the manufacturer's instructions. After 24 h of co-cultivation, UniCAR T cells were further examined with respect to their activation status. For this purpose, cells of one triplicate were pooled and stained with anti-human CD3-PE-Vio 770 or CD3-VioBlue and anti-human CD69-APC Abs (all Miltenyi Biotec GmbH) for 15 min at 4 °C. To allow exclusion of dead cells during analysis, counterstaining with 1 µg/mL propidium iodide/PBS solution was performed. Flow cytometric data were acquired with the MACSQuant Analyzer 10 (Miltenyi Biotec GmbH).

Chromium Release Assay
Standard chromium release assays were conducted as described in detail by Feldmann and colleagues [38]. In brief, 5 × 10^3 chromium-labeled tumor cells were incubated with UniCAR T cells at an effector-to-target cell (E:T) ratio of 5:1 in the presence or absence of 5 nM or decreasing concentrations of recombinant Ab. After 24 h, radioactivity in co-culture supernatants was measured with a MicroBeta Microplate Counter (PerkinElmer LAS GmbH, Rodgau, Germany).

Radiolabeling of the TM with 64Cu and 225Ac
All solvents were purchased from Sigma-Aldrich, Merck KGaA (Darmstadt, Germany). The anti-PSCA IgG4-TM was conjugated with a 10-fold molar excess of p-NCS-Bz-DOTAGA (C27H38N6O9S; Mw = 622.69 g/mol) in 0.1 M borate buffer (pH 8.5) at 4 °C for 12 h, resulting in the anti-PSCA IgG4-TM conjugated with DOTAGA (DOTAGA-TM). The conjugate was then washed four times with PBS via spin filtration at 4 °C using Amicon Ultra-0.5 centrifugal filter devices with a molecular weight cutoff of 50 K (Amicon Ultra 3 K 1 device, Merck-Millipore, Merck KGaA, Darmstadt, Germany). The no-carrier-added (n.c.a.) [64Cu]Cu2+ was produced at the Helmholtz-Zentrum Dresden-Rossendorf on a TR-Flex cyclotron (Advanced Cyclotron Systems Inc., Richmond, BC, Canada) by the 64Ni(p,n)64Cu nuclear reaction and prepared as reported previously [47]. The DOTAGA-TM was subsequently radiolabeled with [64Cu]Cu2+ and purified via spin filtration as described above.
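Although not part of the protocol above, the relation between measured activity and the molar activities reported later (GBq per µmol of TM) follows from standard decay and unit arithmetic; the short Python sketch below illustrates it, using the known 64Cu half-life of 12.7 h and the 112 kDa TM mass, with purely hypothetical batch numbers.

```python
import math

CU64_HALF_LIFE_H = 12.7             # physical half-life of 64Cu in hours
TM_MOLAR_MASS_G_PER_MOL = 112_000   # ~112 kDa anti-PSCA IgG4-TM homodimer

def decay_corrected_activity(a0_mbq, hours):
    """Remaining activity after `hours`: A = A0 * exp(-ln2 * t / T1/2)."""
    return a0_mbq * math.exp(-math.log(2) * hours / CU64_HALF_LIFE_H)

def molar_activity_gbq_per_umol(activity_mbq, protein_ug):
    """Molar activity = activity / amount of TM (illustrative units)."""
    umol_tm = protein_ug * 1e-6 / TM_MOLAR_MASS_G_PER_MOL * 1e6  # µg -> g -> mol -> µmol
    return (activity_mbq / 1000.0) / umol_tm

# Hypothetical batch: 50 MBq bound to 100 µg TM, imaged 31 h after labeling
print(decay_corrected_activity(50.0, 31.0))      # MBq remaining at the 31 h scan
print(molar_activity_gbq_per_umol(50.0, 100.0))  # GBq/µmol at labeling time
```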
Labeling yield and radiochemical purity were determined using radio instant thin-layer chromatography (radio-ITLC). The radiolabeled conjugate was retained at the origin (Rf = 0.0), while unbound radioactivity moved with the solvent (10 mM sodium citrate and 0.1 mM EDTA, Rf = 0.8-1.0). The developed chromatograms were analyzed by autoradiography using an in vivo multispectral imaging system (Bruker Daltonik GmbH, Bremen, Germany) [48]. For the therapeutic radionuclide, the DOTAGA-TM was radiolabeled with 225Ac (1 MBq). After development, the chromatography strip was stored for at least 1 h until radiochemical equilibrium was obtained between 225Ac (half-life [T1/2] 9.9 d) and its daughter nuclide 221Fr (T1/2 4.8 min). The percentage of complexed 225Ac was determined by ITLC on silica gel-impregnated glass fiber sheets (Agilent Technologies Deutschland GmbH, Waldbronn, Germany) using 0.1 M citrate buffer at pH 4.0. The radiolabeled conjugate was retained at the origin (Rf = 0.0), while unbound radioactivity and [225Ac]Ac-DOTA moved with the solvent (Rf = 0.8-1.0).

Radiosynthesis of 18F-JK-PSMA-7
Briefly, the radiosynthesis was carried out using a cassette-based AllInOne synthesizer with built-in radio-HPLC, following a synthesis sequence developed by Trasis S.A. (Ans, Belgium). The cassettes and reagent kits necessary for the radiosynthesis of 18F-JK-PSMA-7 were likewise purchased ready-to-use from Trasis S.A. The irradiated 18O water was passed through an ion exchange cartridge, where the 18F- was trapped; it was then eluted with a water/acetonitrile solution of tetrabutylammonium bicarbonate. Afterwards, the solution was evaporated to dryness, and the trimethylammonium salt precursor (1) dissolved in acetonitrile was added. The fluoride anion displaced the trimethylammonium triflate leaving group in an SNAr reaction, resulting in the fluorinated protected intermediate (2). The crude compound of 18F-JK-PSMA-7 was obtained by acidic hydrolysis of the intermediate at elevated temperature, carried out in the same reactor vessel by the addition of ortho-phosphoric acid. To obtain a nearly 100% radiochemically pure product, semi-preparative HPLC purification was performed by injecting the whole volume of the reaction mixture onto an XBridge BEH Shield RP18 5 µm 10 × 250 mm HPLC column (Waters Corporation, Milford, MA, USA). The pure radiopharmaceutical (3) was eluted with a diluted phosphoric acid/acetonitrile mixture at a ratio of 80:20. The HPLC eluent was removed by solid-phase extraction of 18F-JK-PSMA-7 on a C18 SepPak cartridge, followed by a saline wash and elution with a small amount of ethanol. The residual radioactivity in the tubing was collected by rinsing with isotonic saline into a vial containing sodium ascorbate. After filtration through a 0.2 µm sterile Supor membrane filter (Pall GmbH, Dreieich, Germany), the 18F-JK-PSMA-7 solution was ready for i.v. injection.

Animals, Feeding, Husbandry, Preparation, and Animal Experiments
Animals were allowed free access to food and water and were maintained under temperature-, humidity-, and light-controlled conditions. Tumor size and body weight were measured regularly. To analyze the immunotherapeutic potential of the anti-PSCA IgG4-TM, eight-week-old, male, NXG-immunodeficient mice (NOD-Prkdc scid-IL2rg Tm1/Rj, JANVIER LABS, Le Genest-Saint-Isle, France) were used. Mice were divided into three groups of five animals each. Animals of Group 1 were subcutaneously injected into the right thigh with 1 × 10^6 PC3-PSCA/PSMA cells.
Group 2 received a mixture of 1 × 10^6 PC3-PSCA/PSMA cells and 1 × 10^6 UniCAR T cells, while the treatment group (Group 3) additionally received 100 pmol TM/mouse, also applied subcutaneously. All mixtures were adjusted to a total volume of 100 µL per mouse (in PBS). Tumor growth was monitored over two days using the reporter luciferase overexpressed by PC3-PSCA/PSMA Luc+ cells. Ten minutes prior to optical imaging, mice were anesthetized [34,44] and injected intraperitoneally with 150 µL XenoLight D-Luciferin Potassium Salt (15 mg/mL) (PerkinElmer LAS GmbH, Rodgau, Germany). Detection of the luminescence signal was performed using the In Vivo Xtreme Imaging System (Bruker, Bremen, Germany). For luminescence imaging, exposure times were set to 10 min. Data were analyzed with the MI 5.3 and MS 1.3 software (Bruker, Bremen, Germany). For TAT and PET imaging, five-week-old, male, Rj:NMRI-Foxn1 nu/nu mice (mutant outbred mice with thymic aplasia causing a T cell deficiency; JANVIER LABS, Le Genest-Saint-Isle, France) were subcutaneously injected into the right flank with 2 × 10^6 PC3-PSCA/PSMA Luc+ tumor cells. Four weeks after inoculation of the tumor cells, TAT experiments were started. At this time, the tumors had reached a volume of 100-400 mm^3. The xenografted mice were divided into two groups of five mice each. The five control mice were injected i.v. via the tail vein with 0.04 nmol DOTAGA-TM (4.5 µg per 100 µL PBS). In the treatment group, mice were injected with 5 kBq (135 nCi) of 225Ac-TM (0.04 nmol, 4.5 µg DOTAGA-TM). Animals were sacrificed at day 60, or earlier if they showed signs of suffering, in accordance with animal welfare regulations. The effect of the TAT on tumor response was assessed by comparing the specific growth rates (SGR). Tumor diameters were measured twice a week using a caliper. The tumor volume was calculated for each time point as V = (π/6)·a·b^2, where a is the longest and b is the perpendicular shorter tumor diameter. Additionally, after the PET measurements, the animal bed with the anesthetized mice was translated to the CT and a whole-body CT was acquired. From these data sets, the tumors were delineated with the software package ROVER (ABX GmbH, Dresden, Germany) and the volumes were calculated. Tumor growth kinetics were evaluated from the growth curves of individual tumors. The starting point was the time of injection of DOTAGA-TM (control) or 225Ac-TM. The tumor growth kinetics were evaluated with an equation that describes growth with a constant doubling time (DT): Y = Y0·exp(k·t), where Y0 is the value of Y when t (time) is zero and is expressed in the same units as Y, and k is the rate constant, expressed in reciprocal units of the time axis. If t is in days, then k is expressed in inverse days. k is equal to the SGR. The DT is calculated as DT = ln(2)/SGR [50]. The tumor SGR values were compared by an unpaired t-test. Statistical significance was determined using the Holm-Sidak method, with alpha = 0.05, without assuming a consistent SD. The specific tumor volumes, calculated as Y/Y0, were also compared for each time point with a t-test. The statistical calculations were performed using GraphPad Prism version 7.00 for Mac OS X (GraphPad Software, La Jolla, CA, USA).

64Cu-TM Positron Emission Tomography
Anesthetized, spontaneously breathing animals were allowed to stabilize for 10 min after preparation. The animals were positioned on a heated bed to maintain the body temperature at 37 °C.
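As a concrete illustration of the growth-kinetics analysis described above (ellipsoid tumor volume from caliper readings, exponential fit Y = Y0·exp(k·t), SGR, and doubling time), the following minimal Python sketch performs the same calculation; the original analysis used GraphPad Prism, and the measurement values shown here are hypothetical.

```python
import numpy as np

def tumor_volume(a_mm, b_mm):
    """Ellipsoid approximation V = (pi/6) * a * b^2 in mm^3."""
    return (np.pi / 6.0) * a_mm * b_mm ** 2

def specific_growth_rate(days, volumes):
    """Fit Y = Y0 * exp(k * t) by linear regression on log(Y).
    Returns (k, doubling_time); k is the SGR in 1/day."""
    k, _log_y0 = np.polyfit(days, np.log(volumes), 1)
    return k, np.log(2.0) / k

# Hypothetical caliper measurements: (day, longest diameter a, shorter diameter b) in mm
measurements = [(0, 7.0, 5.0), (7, 8.5, 6.0), (14, 10.5, 7.5), (21, 13.0, 9.0)]
days = np.array([m[0] for m in measurements], dtype=float)
vols = np.array([tumor_volume(m[1], m[2]) for m in measurements])

sgr, dt = specific_growth_rate(days, vols)
print(f"SGR = {sgr:.4f} 1/day, doubling time = {dt:.1f} days")
```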
The PET studies were carried out with a microPET P4® scanner (Siemens Healthcare GmbH, Erlangen, Germany). The activity of the injection solution was measured in a well counter (Isomed 2000, Dresden, Germany) cross-calibrated to the PET scanners. A 120 min PET emission scan was started, and the infusion of 15 MBq (0.4 mCi) 64Cu-TM was initiated with a delay of 10 s. A volume of 0.1 mL of the 64Cu-TM solution was infused over 1 min (with a Harvard Apparatus 44 syringe pump) into a lateral tail vein. Additional one-hour PET scans were carried out after 31 h and 44 h.

X-ray CT Imaging
The animals undergoing 18F-JK-PSMA-7 PET imaging were also imaged in a Mediso nanoX-CT (Mediso Medical Imaging Systems Ltd., Budapest, Hungary), a dedicated small-animal X-ray system, with helical whole-body scanning of 360 projections and a source voltage of 45 kV. Animals were positioned in the same anesthesia and temperature-maintaining bed during the same imaging session as PET. CT images were reconstructed using the built-in filtered back-projection algorithm of the device and co-registered to PET using ROVER.

Statistical Analysis
Values are expressed as mean ± SEM and were compared using ANOVA or the unpaired Student's t-test with Welch's correction and an F-test to compare the variances (GraphPad Prism 7.0, GraphPad Software, La Jolla, CA, USA). The Kruskal-Wallis test was performed to identify differences between groups. All statistical testing was performed using GraphPad Prism 7.0 software. Significant differences were set at * p < 0.05; ** p < 0.01; *** p < 0.001.

Results
The two scFv variants, anti-PSCA scFv Ha1-4.121c.5 and anti-PSCA scFv Ha1-4.121c.26, were cloned as described above (Figure 1A). Subsequently, both scFvs were eukaryotically expressed (data not shown) and compared regarding their binding activity towards PSCA (Figure 1B,C). As shown by flow cytometry analysis (Figure 1B,C), the anti-PSCA scFv Ha1-4.121c.26 was able to specifically bind PC3-PSCA cells with high affinity (KD = 45.9 nM), while the anti-PSCA scFv Ha1-4.121c.5 showed no binding towards PSCA-expressing target cells. According to these data, the VL c.26 variant was selected for construction of the TM. To generate the multimodal anti-PSCA IgG4-TM, the VL c.26 and VH of the human anti-PSCA Ab Ha1-4.121 were connected to the hinge and Fc domain (CH2 and CH3) of human IgG4 molecules, respectively. For immunotherapeutic application and detection, the Ab construct was additionally equipped with the UniCAR epitope E5B9 [51] and a His-tag (Figure 2A,B). For expression of the resulting anti-PSCA IgG4-TM, a permanent 3T3 production cell line was established. The recombinant PSCA-specific TM was successfully purified from cell culture supernatants of this production cell line via protein A affinity chromatography with high yield and purity (Figure 2C-E). Due to the presence of cysteine residues in the hinge region, the protein should form a dimer stabilized by disulfide bonds, resulting in homodimers with a theoretical molecular weight of 112 kDa. Under reducing conditions, the homodimer should separate into single polypeptide chains with an expected size of 56 kDa. Indeed, comparison of the native molecular weight obtained by SE-HPLC (Figure 2C) with the molecular weight obtained by SDS-PAGE under reducing conditions (Figure 2D,E) confirms that the native anti-PSCA IgG4-TM exists as a homodimer. In a next step, binding properties were analyzed by flow cytometry using PC3-PSCA cells. As shown in Figure 2F, the anti-PSCA IgG4-TM specifically binds to PSCA-overexpressing PCa cells. Binding was confirmed via both the E5B9- and His-tag.
With an estimated apparent K D value of 4.4 nM, the bivalent anti-PSCA IgG4-TM had a 10-fold increased affinity/avidity ( Figure 2G) towards PSCA when compared to the monovalent anti-PSCA scFv Ha1-4.121c.26 ( Figure 1C). UniCAR T Cell Immunotherapy with the Anti-PSCA IgG4-TM Due to incorporation of the E5B9-tag, the anti-PSCA IgG4-TM carries all typical features of a TM that can be used in the well-established UniCAR approach [51]. As summarized in Figure 3A, the UniCAR system comprises two main components: (i) T cells modified to express UniCARs and (ii) tumor-specific TMs (here directed against PSCA). As UniCAR T cells specifically recognize and bind to the E5B9 peptide epitope [51], E5B9tagged TMs can function as bridging molecules between tumor and T cells. This TMmediated cross-linkage finally results in efficient tumor cell lysis. To prove that the novel anti-PSCA IgG4-TM can be utilized as a TM in the UniCAR system, various co-cultivation assays were conducted. UniCAR T cells were incubated with PSCA-positive or PSCA-negative tumor cells in the presence or absence of the novel TM. As exemplified by the upregulation of the early activation marker CD69, UniCAR T cells were activated in a target-specific and strictly TM-dependent manner ( Figure 3B and Figure S2A). Upon anti-PSCA IgG4-TM mediated cross-linkage with PC3-PSCA tumor cells, UniCAR T cells were engaged for significant secretion of the pro-inflammatory cytokines TNF and IFN-γ, as well as the growth-promoting cytokine IL-2 ( Figure 3C and Figure S2B ). As shown in Figure 4A, the TM-mediated cross-linkage of tumor and UniCAR T cells finally resulted in specific tumor cell lysis. Neither in absence of tumor cells nor in the presence of PSCA-negative PC3 wt cells, an unspecific release of cytokines or tumor cell killing could be observed. As depicted in Figure 4B, the anti-PSCA IgG4-TM successfully engaged UniCAR T cells for efficient tumor cell killing with an EC 50 (half-maximal effective concentration) value of 7.5 or 30.3 pM for PC3-PSCA or LNCaP-PSCA cells, respectively. According to previously published data, the anti-PSCA IgG4-TM showed comparable killing efficiency to other PCa-specific or IgG-based TMs [34,36,41,52]. In vivo functionality of the anti-PSCA IgG4-TM within the UniCAR system was analyzed in an immunodeficient mouse model. For this purpose, animals were subcutaneously injected with mixtures of (i) PC3-PSCA/PSMA Luc+ tumor cells, (ii) PC3-PSCA/PSMA Luc+ tumor cells plus UniCAR T cells, or (iii) PC3-PSCA/PSMA Luc+ tumor cells, UniCAR T cells, and TM. As summarized in Figure 5, the anti-PSCA IgG4-TM efficiently activated UniCAR T cells for tumor cell killing in mice. According to the luciferase signal, tumors significantly regressed when compared to the control groups and were not detectable in three out of five mice already after two days. Differences in tumor growth (not statistically significant) between control Group 1 (tumor) and control Group 2 (tumor + UniCAR T cells), can be most likely attributed to T cell-dependent rejection reactions against alloantigens expressed on tumor cells. Such effects are strongly donor-dependent and can be even more pronounced in mice due to introduction of human (genetically modified) T cells into a murine environment. For each mouse, net intensities (P/s/mm 2 ) were normalized to net intensities measured at day 0. 
Mean luminescence net intensities ± SEM for five mice per group are shown (* p < 0.05, ** p < 0.01, compared to the control group "Tumor + UniCAR T", Student's t-test). Radiolabeling of the Anti-PSCA IgG4-TM with Cu-64 and Ac-225 Having proven the efficacy of the novel anti-PSCA IgG4-TM for UniCAR T cell immunotherapy, its suitability for diagnostic tumor imaging and endoradiotherapy was evaluated. For this purpose, the TM was modified with DOTAGA and subsequently labeled with either 64 Cu as a diagnostic radionuclide or 225 Ac as a therapeutic radionuclide. Both the diagnostic 64 Cu-TM and the therapeutic 225 Ac-TM could be synthetized with comparable radiochemical purity (>97%) according to the ITLC. The 64 Cu-TM had a radiochemical purity of 97.1% with a molar activity of 2.7 GBq/µmol (9 TBq/g). The 225 Ac-TM showed a radiochemical purity of 97.6% (one experiment) and a molar activity of 0.019 GBq/µmol (0.17 GBq/g). Imaging with 64 Cu-TM in Xenotransplanted Mice First, we tested the ability of the TM-DOTAGA conjugate to target PSCA-expressing tumors in vivo. Therefore, PC3-PSCA/PSMA Luc+ tumor-bearing mice were injected with 15 MBq 64 Cu-TM, which is equivalent to 3 MBq of a pure positron-emitting isotope. This relatively large amount allowed PET imaging over two days with good statistics. As shown in Figure 6, the 64 Cu-labeled anti-PSCA IgG4-TM is localized to and is extensively retained in the PC3-PSCA/PSMA-Luc+ tumors. The activity distribution was evaluated as a function of time (0-2, 31, 44 h) (see Figure 6). In the distribution phase, the 64 Cu-TM cleared from the blood with a biological half-life of 6.0 h converting to a calculated plateau of 32.5% of the starting activity concentration. The maximal activity concentrations in the tumors were reached after 31 h. At this time, the tumors could be clearly delineated from the background in the PET images. This can be attributed to the high number of PC3-PSCA/PSMA-Luc+ cells being targeted in the tumors. In the first phase (0-2 h), the heart and the venous blood vessel system were clearly visible. The high activity at this time in the neck region was caused from the venae jugularis. At later time points, only the circulation and highly perfused organs such as heart, liver, kidneys, and the tumor were visible. No evidence of any 64 Cu-TM accumulation in the salivary glands was found. Tumor Growth Delay after 225 Ac-TM Treatment In a next step, we examined whether the novel anti-PSCA IgG4-TM could be easily repurposed as a tool for TAT. Therefore, NMRI-Foxn1 nu/nu mice were subcutaneously injected with PC3-PSCA/PSMA Luc+ tumor cells. When tumors reached 100-400 mm 3 in size, mice were treated with either DOTAGA-TM (control) or 225 Ac-TM. In comparison to the control group, the 225 Ac-TM showed significant potency against PC3-PSCA/PSMA-Luc+ tumors in a xenograft mouse model (Figure 7). At day 43 post-treatment, tumor growth was significantly reduced (p = 0.05) in the therapy group ( Figure 7A,C). SGR values for tumors of the control (0.0558 ± 0.0025, n = 4) and treatment group (0.0217 ± 0.0052, n = 5) correspond to doubling times of 12.4 and 32.0 days, respectively. Data are presented as means ± SEM. For each time point, the SGR of the single animals was analyzed with an unpaired t-test without assumption of consistent SD; the calculations were corrected for multiple comparisons using Holm-Sidak method; alpha of 0.05 was defined as 'statistically significant'. 
The SGR of the relative tumor volumes calculated with the exponential growth equation were significantly (p = 0.0009) different with 0.0558 ± 0.0025 (n = 4) for the control animals and 0.0217 ± 0.0052 (n = 5) for the treatment group, corresponding to doubling times of 12.4 and 29 days, respectively. The body weights of the animals of both groups ( Figure 7B,D) didn't differ over the observation time. As mice were injected with a low amount of activity (5 kBq 225 Ac-TM/animal corresponding to 0.15 µg TM/mouse or 1.3 pmol TM/mouse), the visualization of 225 Ac-TM gamma-emitting isotopes in the mice by SPECT was not possible [51]. However, the biodistribution and kinetics of the 225 Ac-TM is expected to be very close to the biodistribution of the 64 Cu-TM. Imaging of the Tumors with 18 F-JK-PSMA-7 To (i) identify living tumor cells, (ii) to get an independent metabolic response parameter of the 225 Ac-TM-based alpha therapy, and (iii) to show the radiotracer distribution in the tumors, animals of the TAT study (see Figure 6) were additionally imaged with the PSMA targeting compound 18 F-JK-PSMA-7. 18 F-JK-PSMA-7 is a fluorine-18 labeled PSMA ligand [49]. Thus, it can bind to PSMA, which is also overexpressed on the selected model cell line PC3-PSCA/PSMA Luc+. 18 F-JK-PSMA-7 was synthetized using a simplified one-step automated method by a direct radiofluorination of (S)-2-[3-((S)1-Carboxy-5-((6trimethylammonium-2-methoxypyridine-3-carbonyl)-amino)-pentyl)-ureido]-pentanedioic acid trifluoroacetate salt (1) as a precursor compound (Figure 8). The non-decay-corrected radiochemical yield of 18 F-JK-PSMA-7 was 20% after isolation by HPLC. The 18 F-JK-PSMA-7 accumulation in the PC3-PSCA/PSMA-Luc+ tumors was evaluated on day 43 after treatment start in the control and 225 Ac-TM-treated animals (Figure 9). The tumors were clearly delineated. Time-activity curves were exemplarily analyzed for two animals and are shown for peripheral and central tumor areas. The peripheral tumor part accumulated approximately five times more 18 F-JK-PSMA-7 than the central part. The other activity localizations in the mice correlate to the normal distribution of this imaging agent known from humans, except for the accumulation in the kidneys. In one animal of each group, a lymph node or small metastasis became visible. The quantitative comparison of the 18 F-activity distribution in the control and 225 Ac-TM group showed a significant decrease of the activity amount ( Figure 10A) in the total tumor, and a significant decrease of the activity concentration in the peripheral tumor ( Figure 10B). The activity concentrations (SUV) in the central, low perfused, and necrotic part of the tumors ( Figure 10C) did not differ between both groups. Discussion Despite multiple treatment options encompassing a variety of disciplines, late-stage mCRPC remains a difficult to treat disease. Along with advances of theranostic radiopharmaceuticals, the therapeutic landscape is currently being shaped by emerging immunotherapies. Non-invasive imaging techniques that accompany therapy play an important role in providing a patient-centered, timely optimized treatment regimen. Therefore, in this study, we attempted to combine the three main pillars of PCa management into one approach by using the flexible adaptor CAR platform UniCAR and described a novel IgG4-based TM for radio-/immunotheranostics of PCa. 
Considering that PSMA radiophar-maceuticals currently dominate the theranostic landscape but also have some limitations, particularly in patients with low or heterogeneous PSMA expression profiles [52,53], we developed a multimodal TM directed against an alternative PCa-specific target, PSCA, that could complement existing PSMA-directed theranostic strategies. PSCA is overexpressed in more than 80% of PCa tissues and in the majority of bone metastasis [54][55][56][57][58][59], while showing restricted expression in normal tissues [38,42,[54][55][56][57]60]. Elevated PSCA levels further correlate with increased tumor stage, grade, and progression to androgen independence [38,42,[54][55][56][57]60] making it interesting and suitable for Ab-based immunotherapy and imaging of late-stage prostate cancer. The here-described novel anti-PSCA IgG4-TM represents the molecular basis for novel multimodal PCa applications. It can be used (i) for PSCA-directed UniCAR T cell immunotherapy, and after radiolabeling, (ii) for diagnostic imaging or (iii) targeted endoradiotherapy. Considering that the TM is administered multiple times for distinct applications in patients, a key criterion for Ab engineering was to minimize its immunogenicity. Monoclonal Abs or Ab fragments from foreign species were shown to trigger undesired anti-drug immune reactions in humans that may cause loss of therapeutic efficiency, altered pharmacokinetics, or even serious adverse reactions [61][62][63]. To circumvent this risk and allow repeated applications, the novel anti-PSCA IgG4-TM was designed based on the V L and V H of the fully human anti-PSCA mAb Ha1-4.121 and the Fc-domain of human IgG4 molecules. As IgG4 molecules show little or no binding to classical Fcγ receptors or complement C1q [64][65][66], unwanted activation of complement and antibody-dependent cellular cytotoxicity can be further limited. Owing to its high affinity for the neonatal Fc receptor [67], IgG4 retains the characteristic prolonged in vivo kinetics of IgG molecules [66]. With respect to UniCAR immunotherapy, the novel anti-PSCA IgG4-TM shows functionality and efficiency. UniCAR T cells were activated for significant secretion of proinflammatory cytokines and tumor cell killing both in vitro and in vivo in a highly TMdependent and antigen-specific manner. As published for conventional CAR design, optimal synapse distances are critical for proper signaling and cytolytic activity of CAR T cells [68]. Altering the extracellular spacer length, and thus the space between tumor and target cells, can considerably influence CAR T cell effectiveness [68]. Despite its larger size and increased affinity, the anti-PSCA IgG4-TM allows the formation of a functional immune synapse and induces highly effective tumor cell lysis with an EC 50 value of 7.5 or 30.3 pM. The TM-efficacy is thus comparable to previously described, smaller, PCa-specific TMs (EC 50 = 12 pM), underlining the plasticity and flexibility of the UniCAR system for different TM formats [16,34]. The same observations could not only be made with previously developed GD2-or STn-specific, IgG4-based UniCAR TMs [18,32,36,41], but are also consistent with other adaptor CAR platforms successfully applying adaptor molecules of different size and pharmacokinetics (summarized in [16]). Moreover, we could demonstrate that simultaneous targeting of PSCA and PSMA is feasible by applying the anti-PSCA IgG4-TM together with the previously established PSMA-L TM [69] (see also Figure S3). 
The application of dual-targeting strategies in PCa patients might be important for future clinical translation, as it might help to overcome problems such as inter-and intra-patient heterogeneities and tumor escape due to antigen loss. In terms of imaging, a combination of both TMs might further improve PCa detection and monitoring of molecular responses to therapy as discussed above. Theranostics in PCa commonly involves the consecutive application of different small molecule-based radiotracers for diagnostic imaging and endoradiotherapy [3]. If molecules with different coordination chemistry are used, pharmacokinetics may differ, possibly hampering the theranostic purpose [3]. To achieve comparable in vivo kinetics, small molecule inhibitors using one chelating moiety for both applications, such as MIP-1095 [70], PSMA I&T [71], or PSMA-617 [72], were developed. Accordingly, functionalization of the novel, multimodal anti-PSCA IgG4-TM also required the conjugation of one appropriate chelator that allows the complexation with both diagnostic and therapeutic radionuclides such as 68 Ga, 64 Cu, 111 In, 177 Lu, and 225 Ac [73][74][75][76][77][78][79]. For PET-based imaging of PCa, we selected 64 Cu. Due to its low positron-energy and relative long physical half-life, it seems to be a promising radionuclide for diagnostic imaging. In this study, the alpha-emitting radionuclide 225 Ac was employed for TAT. Alpha-particles are potent therapeutic effectors when directed to cancer targets [80][81][82]. The short effective range (several cell diameters) and high particle energies (5-8 MeV) make 225 Ac effective for targeting of small tumor lesions, including bone metastases commonly found in mCRPC patients. When compared to beta-emitters, alpha-particles show higher efficacy at lower off-target toxicity [82,83]. To enable labeling of the anti-PSCA IgG4-TM with both 64 Cu and 225 Ac, we selected the chelator DOTAGA. One difficulty in the application of this chelator is the high temperature necessary for radiolabeling. To circumvent this problem, a temperature of 38 • C was used in combination with extended incubation times for TM labeling. After removal of free and non-specifically bound radionuclides, the achieved molar activities for 64 Cu and 225 Ac were comparable to literature data [77,[84][85][86]. Conjugation of DOTAGA to the anti-PSCA IgG4-TM does not compromise its PSCA binding, diagnostic imaging, or TAT effect. PET experiments showed a high contrast tumor accumulation of the 64 Cu-TM. The intact 64 Cu-TM increasingly accumulated in the tumor with maximum enrichment after 31 h. Activity slowly cleared from the blood and after two days was only found in the blood and the tumor resulting in optimal target-to-background ratios that did not significantly change over time. The absence of activity accumulation in other organs, including excretory organs and the reticuloendothelial system, proves TM stability, absence of TM aggregates, and excretion of radioactive metabolites. Although the physical half-life of the 64 Cu only allows imaging over two days, our data clearly underline suitability of the TM for high contrast PET imaging of PSCA-expressing PCa. Pharmacokinetic properties of the anti-PSCA IgG4-TM also play an important role for UniCAR immunotherapy. When compared to previously described PCa-specific TMs [33,38,39,45,61,87], the anti-PSCA IgG4-TM has a considerably prolonged serum halflife. 
In clinical practice, this would allow reduction of TM infusions and could intensify elimination of residual PCa cells, which is particularly important in highly aggressive and rapidly progressing mCRPC. Although TM elimination from the blood is delayed, switchability of the UniCAR system remains as an important control mechanism, which is especially relevant to avoid long-term destruction of PSCA-expressing healthy tissues. The extended serum half-life of the anti-PSCA IgG4-TM in combination with its stable tumor accumulation could enhance local tumor radiation, and thus its therapeutic effectiveness in the context of radioimmunotherapy. Half-life extension proved to be also advantageous for small molecule inhibitors, showing increased local irradiation [88]. The 225 Ac-TM has demonstrated therapeutic efficiency in a PCa xenograft mouse model. Single injection of 5 kBq 225 Ac-TM in tumor-bearing mice resulted in significant tumor control after 23 days until the end of the study. Complete tumor elimination was probably not achieved, as tumors were potentially too large at therapy initiation, limiting the effectiveness of alpha-emitters that possess uneven micro-distribution and a short effective range. Due to low 225 Ac activity concentrations applied (5 kBq/animal), therapy accompanying SPECT imaging of the daughter radionuclide was not possible. Nonetheless, the exchange of 64 Cu for 225 Ac should not significantly affect the biodistribution, as this is dominated by the large size of the DOTAGA-TM (112 kDa). In addition to determination of the relative tumor volumes, 18 F-JK-PSMA-7 imaging was performed at day 43 to assess tumor viability and to confirm therapeutic effects. 18 F-JK-PSMA-7 PET analysis revealed a typical distribution of the co-expressed marker PSMA on tumor cells. In line with tumor volume measurements, significantly larger total PSMA levels and thus tumor cell amounts were found in the control group when compared to the 225 Ac-TM-treated group. The generally observed differences in 18 F-JK-PSMA-7 activities between peripheral and internal tumor areas can be explained by an uneven perfusion of the tumors, whereby necrotic areas and internal interstitial pressure decrease radiotracer uptake [89]. The body weight change is an important parameter for safety assessment of endoradiotherapy. During 225 Ac-TM TAT, no acute effects on the body weight were visible, underlining that the applied amount of activity (5 kBq per animal; approximately 200 kBq/kg) was safe and caused no considerable toxicities. According to biodistribution data with the 64 Cu-TM, which showed no unspecific radiotracer accumulation, adverse reactions should be limited to the hematopoietic system. Consequently, one would expect that the 225 Ac-TM should have less side effects than the 225 Ac-conjugated PSMA ligand, which shows longterm toxicities and late radiation nephropathy as a dose-limiting toxicity [82,[90][91][92]. Favorable safety profiles were also observed with other Ab-based, theranostic compounds. 117 Lu-J591 [93,94] or 225 Ac-J591 mAbs [95,96] show fewer side effects on kidneys, small intestine, and salivary glands but higher hematologic toxicity, which was manageable by fractionated drug administration [94]. 
Conclusions Overall, our data show that the novel anti-PSCA IgG4-TM can be used multimodally for diagnostic and therapeutic purposes: (i) highly specific and efficient UniCAR T cell immunotherapy, (ii) in vivo molecular imaging with the 64 Cu-TM, and (iii) PSCA-targeted radioimmunotherapy using the 225 Ac-TM. Considering future perspectives, it is an excellent candidate for an image-guided, combinatorial treatment approach that is expected to have synergistic effects. Radiation is known to induce a pro-inflammatory TME, remodel the tumor vasculature, and enhance expression of chemoattractant targets, MHCI, and adhesion molecules [97][98][99]. This in turn improves tumor infiltration with both endogenous and genetically engineered immune effector cells [100], as well as soluble factors (e.g., theranostic TMs). Thus, local tumor irradiation by the 225 Ac-TM could potentially counteract the immunosuppressive TME found in PCa and promote UniCAR T cell effectiveness. This assumption is supported by first preclinical studies demonstrating synergistic effects of radiotherapy and (CAR T cell) immunotherapy [100][101][102][103]. In terms of future clinical implementation, treatment-emergent imaging with the 64 Cu-TM could enable more accurate treatment decisions leading to an individually tailored PCa management for each patient. Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of the Faculty of Medicine of the TU Dresden. All in vivo procedures were conducted in accordance with the ARRIVE guidelines, the guidelines set by the European Communities Council Directive (86/609 EEC) and approved by the Animal Care and Use Committee of the IEM and the Semmelweis University (XIV-I-001/29-7/2012) and in accordance with the guidelines of German Regulations for Animal Welfare and approved by the Landesdirektion Sachsen (DD24.1-5131/449/67). Informed Consent Statement: Not applicable. Data Availability Statement: The authors confirm that the data supporting the findings of this study are available within the article and its Supplementary Materials.
FMADM SYSTEM FOR MANET ENVIRONMENT

ABSTRACT
The MANET environment was represented by a combination of node position, mobility speed, node type, and number of nodes. In this paper, a novel system for MANET environment evaluation is proposed, involving a fuzzy multi-criteria decision maker (FMCDM) to reflect the importance of the MANET environment on overall protocol performance. The proposed system is combined with another system previously suggested for MANET protocol evaluation. The outputs of these systems are merged to produce one single crisp value in the interval [0 1]. A case study for an office is then implemented using the OPNET 14.5 simulator to test the proposed system. The results prove that the MANET environment can be used to enhance the QoS of a protocol. In other words, these factors, along with the inherent characteristics of Ad-hoc networks, may result in unpredictable variations in the overall network performance.

INTRODUCTION
The QoS level of each Ad-hoc routing protocol is based on three main issues: the parameters obtained from the network, the type of application applied in the network, and other potential (implied) factors that cannot be controlled by the protocol designer, such as node mobility speed, keeping in mind that as the mobility speed increases, the probability of hand-off and packet loss increases too. The number of nodes is also an important factor that affects network protocol performance. Due to the importance of these factors, many works have tried to predict or model node movement, such as RWP in [1] and mobility prediction in [2], or have proposed algorithms to reduce the number of hops between source and destination in order to provide better QoS. Generally, most algorithms that determine the least number of hops and the best available bandwidth have a negative effect on network protocol performance, especially in large networks or networks that support multimedia (real-time applications), since they increase the average time delay in the network. However, the type and number of MANET nodes, the manner of node distribution, and the type of mobility have a great effect on the QoS parameters (throughput, delay, jitter, and PDR), which in turn affect the protocol's QoS performance. Dmitri D. Perkins et al. [3] concentrated their study on investigating and quantifying the effects of various factors on the overall performance of Ad-hoc networks, while whether node cooperation is sufficient to achieve performance improvement is discussed in [4]. In addition to the QoSHFS for MANET application evaluation detailed in [5], a MANET environment evaluation system is designed in this paper, as a result of its effective role on overall protocol performance. The environment evaluation is done by FMADM, due to the MANET characteristics explained previously. Fuzzification is the process of changing a real scalar value into a fuzzy value. This is achieved by different types of fuzzifiers. Fuzzification of a real-valued variable is done with intuition, experience, rule-set analysis, and conditions associated with the input data variables. There is no fixed set of procedures for fuzzification. Fuzzification of the MANET environment factors will be based on discrete membership functions. As Zadeh has stated, a fuzzy set induces a possibility distribution on the universe, meaning one can interpret the membership values as possibilities. How, then, are possibilities related to probabilities? First of all, probabilities must add up to one, or the area under a density curve must be one.
Memberships may add up to anything (discrete case), or the area under the membership function may be anything (continuous case). Secondly, a probability distribution concerns the likelihood of an event occurring, based on observations, whereas a possibility distribution (membership function) is subjective. The word 'probably' is synonymous with presumably, assumably, doubtless, likely, presumptively. The word 'possible' is synonymous with doable, feasible, practicable, viable, workable. Their relationship is best described in the sentence: what is probable is always possible, but not vice versa [6].

MANET ENVIRONMENT FACTORS
The following factors have a great effect on protocol QoS performance; combined, they represent the environment where the protocol, and consequently the applications, are used. This section contains the description and the fuzzification process of each factor.

Number of Nodes
The number of nodes has a considerable impact on network scalability [3]; it is expected that if node density increases, the throughput of the network will increase. But beyond a certain level of node density, some protocols will face degradation in performance [7]. The authors in [8] specified how the number-of-nodes parameter influences Ad-hoc protocol performance. Byungjin Cho [9] proves that for sparse networks, various distribution models tend to be similar to the result of a Poisson Point Process (PPP); for the dense case, various point processes lead to different interference distributions. M. Hammoshi [10] classifies networks according to their number of nodes, with the aim of measuring the performance of some Ad-hoc routing protocols, considering a network with 120 nodes to be the largest Ad-hoc network. The number-of-nodes fuzzification process assigns a membership value of 1 to networks containing 120 or more nodes, while networks with fewer than 120 nodes have a value in [0 1], as shown in Figure 1.

Nodes Position
One of the most important factors affecting protocol performance is the node location relative to the other nodes. The distribution of nodes has strongly noticeable effects on analytical and simulation-based work. In all the works mentioned so far, the common underlying assumption is that nodes in the network are uniformly randomly distributed. However, this assumption does not hold for real networks and can be considered only as an approximation for conducting simplified studies. Jakob Hoydis et al. [11] study the effects of different node distributions on throughput and show that non-uniform random node distributions have a strong impact on the local throughput, which is related to network capacity and performance. Guleria, A., & Singh, K. [12] indicate that the underlying point distributions of nodes have clear effects on wireless network properties. This means that the node topology should be taken into account in more detailed analyses and simulations of Ad-hoc, wireless sensor, and mesh networks. The fuzzification of opinions can use Linguistic Variables (LV). In this approach, due to considerations regarding the numerical efficiency of the computational process, the LV terms were assumed as a discrete fuzzy set [25]. In order to fuzzify the effects of the nodes' spatial distribution in wireless Ad-hoc networks, an ideal perfect point process is assumed in addition to the most widely used point process models. The following node distribution models are arranged according to their compliance with QoS.
Ideal Point Process (IPP)
This model contains a small number of nodes with a uniform distribution. IPP represents an ideal case where nodes move at a very slow and uniform speed in a small area. This model always gives the best result for any Ad-hoc protocol applied. According to this assumption, the fuzzification of this model has the largest membership value, which is equal to 1.

Poisson Point Process (PPP)
Generally, this is the most commonly used point process. In particular, it has been used extensively in the modelling of Ad-hoc networks [14]. The homogeneous PPP is one of the most fundamental point process models because it reflects complete spatial randomness with no regularity or density trends; its defining characteristic is statistical independence. Furthermore, the PPP is often used as a basis for comparison with other point processes, and it is also used as a general 'building block' for other point process models. In the inhomogeneous Poisson point process, as the name indicates, the mean number of points in a given area depends on the location of this area. Figure 2 shows the homogeneous Poisson point process (left) with intensity equal to 100 and the inhomogeneous Poisson point process (right) with linearly varying intensity.

Matérn Point Process (MPP)
The PPP is not always suitable for modelling all cases; in some situations, the nodes are not independently distributed, and the MPP is a point process that captures this phenomenon. It belongs to the family of hard-core point processes, where the points are forbidden to be closer than a certain minimum distance. Hard-core processes are widely used in the wireless communication domain, since it makes sense to define a certain minimum distance between any two communicating devices. The Matérn hard-core point process is obtained by removing overlapping spheres from a Poisson point process through a procedure called dependent thinning, as shown in Figure 3.

Thomas Point Process (TPP)
The TPP is in general constructed by using a PPP as a distribution of clusters and then generating a center for each cluster. In the case of the Thomas point process, the number of points in each cluster has a Poisson distribution with parameter µ (giving the average number of points (nodes) in each cluster).

Fuzzification of the nodes position depends on the results in [14], where the MPP showed results almost the same as the PPP, whereas the TPP shows a different result (due to the clustered nature of the distribution). The values in the interval [0 1] are divided into four levels: the best distribution, IPP (which is the virtual state), has a membership value of 1, followed by PPP (0.75), then MPP with membership value (0.5), TPP with (0.25), and finally (0), which means no network connection, as shown in Figure 4. To make the fuzzification more general, any new point process, or a modification of one of the processes discussed above, can be assigned a membership value based on the result obtained after comparing its performance with the PPP. For example, the inhomogeneous PPP in [14] is a special case of the PPP; it is also suitable for modeling all the nodes of an Ad-hoc network and can be used to evaluate the capacity, connectivity, and performance of routing protocols via the intensity measure. Therefore, it could be assigned a membership value close to (0.75).
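To illustrate the two membership functions defined so far, the sketch below encodes the number-of-nodes factor (Figure 1) and the discrete node-position levels (Figure 4) in Python. The linear ramp up to 120 nodes is an assumption about the shape of Figure 1; the position values are taken directly from the text.

```python
def nodes_count_membership(n_nodes, saturation=120):
    """Number-of-nodes factor: 1.0 at 120 nodes or more; a linear rise
    from 0 below that is assumed (the exact shape of Figure 1 is not given here)."""
    return min(n_nodes / saturation, 1.0)

# Discrete node-position memberships from Figure 4
POSITION_MEMBERSHIP = {
    "IPP": 1.0,    # ideal (virtual) case
    "PPP": 0.75,   # homogeneous Poisson point process
    "MPP": 0.5,    # Matérn hard-core point process
    "TPP": 0.25,   # Thomas (clustered) point process
    "none": 0.0,   # no network connection
}

print(nodes_count_membership(41))   # e.g. a 41-node network -> ~0.34
print(POSITION_MEMBERSHIP["PPP"])   # 0.75
```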
Mobility Speed and Models
One of the most important factors in an Ad-hoc network, representing a vital aspect of QoS support, is the mobility of the nodes, since node mobility may cause link failures, which negatively impact routing and QoS support. The main reason for degradation in network performance as a result of node mobility is the traffic control overhead required for maintaining an accurate routing table in the case of table-driven protocols, and for maintaining routes in the case of on-demand protocols [3]. M. Benzaid et al. [15] define four classes of mobile node, each class defined by its maximum speed, as shown in Table 1. Bhavyesh Divecha et al. [7] study the impact of MANET node mobility on the performance of Ad-hoc wireless networks. The obtained results were based on different ranges of speed: the first is the average pedestrian speed of 1 m/s, and the second represents the vehicular speed of 30 m/s. They also showed that when nodes are moving at a higher speed, the probability that they move apart from each other is larger, which leads to a degradation of the end-to-end parameters. Since all nodes act as routers for the other nodes, a node's movement away from its vital position in the network will degrade the overall network performance or cause temporary service loss. Said El Kafhali et al. [16] and Tracy Camp et al. [17] studied the effects of various mobility models on the performance of different routing protocols and illustrated that the mobility of the nodes affects the average number of connected paths, which in turn affects the performance of the routing algorithm, as shown in Figure 5. They also explained the four most widely used mobility models, with detailed explanations of how they emulate real-world scenarios. Figure 6 shows topography examples of node movement in these four mobility models.
Figure 6. Types of mobility model [7].
In order to fuzzify the membership function of the mobility speed, two limiting values are considered: 0 m/sec for the fixed node, which has a membership value of zero, and 30 m/sec for the very fast node, which has a membership value of 1. Figure 7 shows the membership function of the mobility speed. When a mobility model has different speeds, the average value is taken as the membership degree. For instance, nodes moving in the range [5, 10] m/sec will have an average mobility speed of 7.5 m/sec. If there is a pause time, assumed to be 5 sec in this model, then the average speed will be (7.5/5) m/sec. The velocity (the rate at which an object changes its position) of nodes in a MANET also affects routing protocol performance; for example, delay will vary according to the direction and speed of a node with respect to the destination node. D. Agarwal [18] analyzes the effect of node velocity on the performance of various routing protocols in MANET. Speed, velocity, and acceleration have a great effect on the MANET environment and the network QoS parameters (PDR, end-to-end delay, energy consumption, etc.), which could cause large changes in the QoS of Ad-hoc protocols [16].
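The mobility-speed membership just described (0 m/s mapping to 0, 30 m/s mapping to 1, with the average speed used when a model mixes speeds) can be sketched as follows. The linear shape between the two limits is an assumption, and the pause-time adjustment is reproduced here exactly as stated in the text.

```python
def mobility_speed_membership(speed_mps, v_max=30.0):
    """Mobility-speed factor (Figure 7): 0 m/s -> 0, 30 m/s -> 1.
    A linear ramp between the two limits is assumed."""
    return max(0.0, min(speed_mps / v_max, 1.0))

def average_speed(v_low, v_high, pause_time_s=None):
    """Average speed of the range [v_low, v_high]; the paper divides by the pause
    time when one is defined (e.g. 7.5 m/s with a 5 s pause -> 1.5 m/s)."""
    avg = (v_low + v_high) / 2.0
    return avg / pause_time_s if pause_time_s else avg

print(mobility_speed_membership(average_speed(5, 10)))   # range [5, 10] m/s -> 0.25
print(mobility_speed_membership(average_speed(0, 5)))    # range [0, 5] m/s -> ~0.083
```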
Type of Nodes
The type of nodes also has a great effect on protocol performance, through processor speed, buffer size, transmission range, battery lifetime, and physical size. For example, [19] uses a jitter buffer (an adaptive jitter buffer) in nodes to smooth out changes in the arrival times of voice data packets. Heterogeneous networks are substantially impacted by terrain and environmental effects; these two issues may cause dramatic changes in link capacities and, consequently, in end-to-end measures of performance, throughput, and other QoS parameters. Another issue that must be considered is that low-power nodes can receive transmissions from higher-power nodes, but not vice versa. This poses challenges at the routing layer and increases the number of collisions at the MAC layer [20]. Node types are categorized, according to their performance capabilities, into cell phones or PDAs with low capability, PCs or laptops with better capability, workstations, and finally servers with the best capability. According to these categories, membership functions were formulated as shown in Figure 8 (where larger capability has a larger membership value). Most networks may contain more than one type of node; in this case, the network is represented by a Mix, and its membership value is represented by the average between these types. Notice that a server could act as a workstation, PC, or mobile node, but with a very small mobility probability. For example, if a network contains 25% of each type of node, its membership value will be:

Membership-degree = (0.2 × 0.25 + 0.4 × 0.25 + 0.8 × 0.25 + 1 × 0.25) = 0.6    (2)

Figure 8. Type of nodes MFs.

FMADM MANET ENVIRONMENT EVALUATIONS
The fuzzy approach allows the use of fuzzy operators to numerically aggregate the different fuzzy attributes that characterize the criteria of a rule and to assess its degree of truth. For instance, fuzzy AND or OR operators can be formulated using different intersection or union operators, according to the desired aggregation behavior (t-norm and t-conorm). The MANET environment is represented by a combination of (node position, mobility speed, node type, and number of nodes), because each of these factors alone does not represent the environment by itself; rather, the factors potentiate each other. In other words, each factor is represented as an attribute obtained by the fuzzification process described above, and these attributes are used by the FMADM. The problem of making decisions in a dynamic environment has been the object of study in many different fields. In the case of the MANET environment, the Fuzzy Operator Tree (FOT) is used in the aggregation process because it is completely general, widely applicable [21], and very well suited to MANET environment behaviour. Since node position and node type affect the overall QoS positively (as indicated by their membership functions), whereas mobility speed and number of nodes have a negative effect, the node-position and node-type factors are aggregated separately from the mobility-speed and number-of-nodes factors. In the literature, several parameterized families of t-norms (t-conorms) have been proposed for factor aggregation purposes. Dubois-Prade has been used in the proposed system mainly because of its simplicity and the fact that a bounded parameter range [0 1] is quite convenient when it comes to calibrating the model. This operator is widely used in previous evaluation works such as [25], [22], and [21]. The Dubois and Prade union operator is an operator with compensation, controlled by the α parameter (a parameterized operator). The use of this operator allows the simulation of the synergistic effect resulting from the simultaneous presence of several potentiating factors. In its standard form it is defined by the following expression:

μ_U = [μ_A + μ_B − μ_A·μ_B − min(μ_A, μ_B, 1 − α)] / max(1 − μ_A, 1 − μ_B, α)

Since this is a parametric operator, specific behaviours can be simulated by tuning the value of α; in the proposed system α = 1, to allow the aggregation of the various attribute values. The best QoS can be achieved from an Ad-hoc protocol when the (node position and node type) attribute values are equal to 1 (in the virtual case) or near 1 in perfect cases, while the (mobility speed and number of nodes) attributes are zero (in the virtual case), as illustrated by their fuzzification processes.
This property means that the protocol designer should aim to maximize the node-position and node-type values as much as possible while, at the same time, reducing the mobility-speed and number-of-nodes values as much as possible. Due to this trade-off between the attributes, the complement of the result of the aggregation between mobility speed and number of nodes is taken before it is aggregated (using the average operator) with the result obtained from the aggregation between node position and node type, as shown in Figure 9. It should be noted that the IPP case is a result of all three other factors (attributes): IPP represents a MANET with a very small number of nodes (the number-of-nodes attribute is triggered with a very small value) whose nodes move very slowly with a uniform speed (the mobility-speed attribute is triggered with a very small value), which makes the MANET environment very suitable for Ad-hoc protocol performance. It is now obvious why that model has a membership value equal to 1.
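Putting the pieces together, the sketch below aggregates the four fuzzified attributes along the lines of the operator tree of Figure 9: a Dubois-Prade union of the positive pair (node position, node type), a complemented Dubois-Prade union of the negative pair (mobility speed, number of nodes), and an average of the two branches. The Dubois-Prade union is written in its standard parametric form, and since Figure 9 itself is not reproduced here, the resulting numbers are illustrative rather than the paper's.

```python
def dubois_prade_union(a, b, alpha=1.0):
    """Standard Dubois-Prade t-conorm; with alpha = 1 it reduces to a + b - a*b."""
    num = a + b - a * b - min(a, b, 1.0 - alpha)
    den = max(1.0 - a, 1.0 - b, alpha)
    return num / den

def node_type_membership(fractions):
    """Weighted mix of node-type memberships (Figure 8):
    PDA/cell phone 0.2, PC/laptop 0.4, workstation 0.8, server 1.0."""
    levels = {"pda": 0.2, "pc": 0.4, "workstation": 0.8, "server": 1.0}
    return sum(levels[k] * f for k, f in fractions.items())

def environment_score(position, node_type, mobility, node_count, alpha=1.0):
    """Union of the positive pair, complemented union of the negative pair,
    then the average of both branches (assumed reading of Figure 9)."""
    positive = dubois_prade_union(position, node_type, alpha)
    negative = 1.0 - dubois_prade_union(mobility, node_count, alpha)
    return (positive + negative) / 2.0

# Paper's node-type example: 25% of each type -> 0.6
print(node_type_membership({"pda": 0.25, "pc": 0.25, "workstation": 0.25, "server": 0.25}))

# Illustrative scenario: PPP-like layout, workstation-class nodes,
# average speed 2.5 m/s (membership ~0.083), 41 nodes (membership ~0.34)
print(environment_score(position=0.75, node_type=0.8,
                        mobility=2.5 / 30, node_count=41 / 120))
```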
There are many factors that affect the transmission distance, particularly the combination of transmission power and antenna gain. There is a large variety of voice coding schemes; the Global System for Mobile Communications (GSM) scheme is used in this network. For all considered simulations, the worst case was that all 33 E-Mail-capable nodes use E-Mail and the remaining nodes use the voice application simultaneously; according to Table 3, the movement speed of the mobile nodes was assumed to be (0-5) m/sec. The node positions in the network are shown in Figure 10. Simulation Results The environment factors (attributes) are provided by the network designer manually, according to the user requirements. The environment evaluation for this network, as obtained from the FMADM system, is shown in Table 4. The output value should be the same for all four scenarios. All nodes in the simulation were assumed to have the same capability (workstation), plus two servers, in a hybrid of static and mobile nodes with speeds in the interval (0-5) m/sec; according to this movement speed, the mobility speed attribute is 2.5, following section 2.3. The MANET environment and the frequency of application use have a great effect on Ad-hoc protocol performance. The network designer may be able to improve the QoS of some of the protocols applied to the network based on the results obtained from the proposed system. For this particular case study, a network was designed for a camp; according to its users' requirements, the E-Mail application is used more frequently than the voice application. The simulation results are shown in Table 5; the protocol evaluation values obtained by the QoSHFS system from the work in [5] were based on the worst case, in which all 33 nodes use E-Mail and 6 nodes use voice (3 links) simultaneously. The results show that DSR performance could be enhanced if the number of nodes using E-Mail at any one time is reduced. The system shows that the DSR QoS improves markedly when the number of nodes using E-Mail is reduced to 29. Table 6 lists the names of the idle nodes, and Figure 10 shows the positions of these nodes, marked with a red cross; these nodes were selected randomly. The QoS of DSR jumps from 0.5180 to 0.806 when the number of nodes using the E-Mail application is reduced from 33 to 29. The reduction in the number of nodes also enhances the environment value from 0.7925 to 0.8191. CONCLUSIONS In this paper, a FMADM system for MANET environment evaluation has been designed and implemented as part of an overall Ad-hoc protocol performance evaluation system. The environment is of great benefit in getting a close look at MANET protocol performance, in order to decide which protocol is best for a specific network pattern. On the other hand, the environment can be used to increase the MANET QoS by coordinating the protocol or application requirements with the environment factors. It has been shown how the MANET environment can be used to enhance the QoS of the DSR protocol by matching the environment to the protocol's algorithmic limitations. DSR packets typically carry complete route information, so if the packet header overhead decreases, the QoS performance of DSR increases. Based on this fact, the number of nodes in this MANET was reduced to enhance the environment. Because of this environment enhancement, the performance of DSR improved rapidly.
For future work, the proposed system can be further extended to support network QoS enhancement by matching the network resources to the limitations of the protocol algorithm applied to the network, or developed to include the security level of the Ad-hoc protocols in the performance evaluation process.
2019-04-16T13:28:06.054Z
2018-04-30T00:00:00.000
{ "year": 2018, "sha1": "3e2e5e59b6867212f7662c4d144716061f53945e", "oa_license": "CCBY", "oa_url": "https://zenodo.org/record/1243024/files/8218ijans01.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "31ed32463839b787fee19ad3f8d109ec603b7b42", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
257281336
pes2o/s2orc
v3-fos-license
Risk factors of SARS-CoV-2 infection and complications from COVID-19 in lung cancer patients Background Identifying lung cancer patients at an increased risk of SARS-CoV-2-related complications will facilitate tailored therapy to maximize the benefit of anti-cancer therapy, while decreasing the likelihood of COVID-19 complications. This analysis aimed to identify the characteristics of lung cancer patients that predict an increased risk of death or serious SARS-CoV-2 infection. Patients and methods This was a retrospective cohort study of patients with lung cancer diagnosed between October 1, 2015, and December 1, 2020, and a diagnosis of COVID-19 between February 2, 2020, and December 1, 2020, within the Veterans Health Administration. Serious SARS-CoV-2 infection was defined as hospitalization, ICU admission, or mechanical ventilation or intubation within 2 weeks of COVID-19 diagnosis. For categorical variables, differences were assessed using χ2 tests, while the Kruskal-Wallis rank-sum test was used for continuous variables. Multivariable logistic regression models were fit relative to onset of serious SARS-CoV-2 infection and death from SARS-CoV-2 infection. Results COVID-19 infection was diagnosed in 352 lung cancer patients. Of these, 61 patients (17.3%) died within four weeks of diagnosis with COVID-19, and 42 others (11.9%) experienced a severe infection. Patients who had fatal or severe infection were older and had lower hemoglobin levels than those with mild or moderate infection. Factors associated with death from SARS-CoV-2 infection included increasing age, immune checkpoint inhibitor therapy, and a low hemoglobin level. Conclusions The mortality of lung cancer patients from COVID-19 disease in the present cohort was less than previously reported in the literature. The identification of risk factors associated with severe or fatal outcomes informs management of patients with lung cancer who develop COVID-19 disease. Supplementary Information The online version contains supplementary material available at 10.1007/s10147-023-02311-3. Introduction Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a novel beta coronavirus that originated in China and has affected the entire world with coronavirus disease 2019 (COVID-19). While the case fatality rate of COVID-19 varies based on factors including the availability of testing and methods of mortality attribution, certain populations are at an increased risk of developing complications from COVID-19 [1][2][3]. Patients with cancer are one such group, with mortality estimates ranging from 13 to 29% [4][5][6][7][8][9], much higher than the typical mortality in COVID-19 patients. Factors such as older age, advanced tumor stage, recent adjuvant chemotherapy, multiple comorbidities, need for ICU support, elevated D-dimer and LDH, and elevated lactate blood levels have been associated with severe COVID-19 outcomes [10,11]. Only a few studies have examined outcomes of SARS-CoV-2 infection specifically in patients with lung cancer; these have reported a mortality rate of approximately 30% [12][13][14]. The risk factors for developing complications from COVID-19 disease in patients with lung cancer are not clear, as most of the aforementioned studies have not separated risk factors specifically by cancer type.
Hence, there is an urgent need for identifying those lung cancer patients at an increased risk of getting SARS-CoV-2 infection and subsequently developing complications. To that end, we hypothesized that baseline clinical and laboratory characteristics are associated with risk of developing serious COVID-19 infection and developed multivariable models that (1) identify the clinical and laboratory characteristics of cancer patients that predict for increased risk of serious SARS-CoV-2 infection, and (2) identify the clinical and laboratory characteristics of cancer patients that predict for increased risk of death from SARS-CoV-2 infection. Identification of patients at an increased risk of SARS-CoV-2 infection will allow their oncology team to alter or delay treatment of the cancer to maximize the benefit of cancer therapy, while minimizing the risk of COVID-19-related complications. Data sources and patient population We conducted a retrospective cohort study using data from the VA COVID-19 Shared Data Resource, the VA Cancer Registry, and the VA Corporate Data Warehouse (CDW), which centralizes electronic health record data for patients seen at VA facilities nationwide [15]. We included patients with a diagnosis of lung cancer in the VA Cancer Registry (ICDOSite = 'LUNG/BRONCHUS') between October 1, 2015 and December 1, 2020, and a diagnosis of COVID-19 as recorded in the VA COVID-19 Shared Data Resource between February 2, 2020 and December 1, 2020. We considered data in the electronic health record collected between October 1, 2015 and December 1, 2020. Ethics The study was approved by the VA Boston Institutional Review Board (IRB) as an exempt human research study prior to data collection and analysis. Due to the sensitive nature of the data collected for this study, requests to access the data set are limited to qualified VA-affiliated researchers. Study variables and outcome We considered two outcomes: onset of serious SARS-CoV-2 infection and death from SARS-CoV-2 infection. Onset of serious SARS-CoV-2 infection was defined as the first occurrence of any of the following within 2 weeks after COVID-19 diagnosis: (a) hospitalization, (b) ICU admission, or (c) utilization of respiratory support (mechanical ventilation or intubation). Unplanned hospitalizations were determined using a previously defined algorithm [16]. ICU admission was defined as a subset of hospitalizations where the specialty ward is either "Surgical ICU" or "Medical ICU". Respiratory support was determined based on the presence of a current procedural terminology (CPT) or ICD-10 procedure code for intubation or mechanical ventilation (Supplementary Table 1). Death from SARS-CoV-2 infection was defined as death occurring within four weeks after COVID-19 diagnosis. We determined age at COVID-19 diagnosis, gender, race, and ethnicity from structured data in the CDW, as well as urban status (urban, rural, and highly rural) of each patient's home address and the region of the VA facility where each patient was tested for SARS-CoV-2. Smoking status was defined using health factors [17]. We extracted date of diagnosis with lung cancer, histology, and stage from the VA Cancer Registry; for patients with multiple records of lung cancer in the registry, we used the first record with a date of diagnosis within our study period. 
We determined utilization of systemic therapy (chemotherapy, hormone therapy, immunotherapy, or targeted therapy) based on pharmacy records in the CDW, utilization of radiation therapy for lung cancer based on procedure codes (Supplementary Table 1) within 1 week of an ICD-10 diagnosis code related to lung cancer (Supplementary Table 1), and utilization of surgical resection for lung cancer by identifying surgical records with a principal associated diagnosis of lung cancer. We tabulated the presence or absence of each type of therapy in the 6 months prior to COVID-19 diagnosis and determined the most recent type of therapy prior to COVID-19 diagnosis. The presence of individual comorbidities, including acute myocardial infarction (AMI), chronic obstructive pulmonary disease (COPD), diabetes, hypertension, and stroke, was derived from diagnosis and procedure codes in the year prior to each patient's first COVID-19 test using Centers for Medicare and Medicaid Services Chronic Conditions Warehouse algorithms adapted for use in the VA [17] (also available at Centers for Medicare and Medicaid Services Chronic Conditions Data Warehouse: condition categories. https://www2.ccwdata.org/web/guest/about-ccw Accessed October 01, 2020). Finally, we identified laboratory test results from the specimen taken most recently prior to COVID-19 diagnosis for complete blood count (white cell count, platelets, hemoglobin); absolute neutrophil and absolute lymphocyte counts; neutrophil-to-lymphocyte ratio (NLR); electrolytes (sodium, potassium, and chloride); liver function (AST, ALT, and alkaline phosphatase); and kidney function (creatinine). Laboratory test results were categorized using pre-specified cutoffs detailed in Supplementary Table 2. Statistical analysis Baseline characteristics were summarized in the full cohort, and each patient was assigned to an outcome group: those without serious SARS-CoV-2 infection, those with serious SARS-CoV-2 infection but no death, or those who died following SARS-CoV-2 infection. For categorical variables, differences in proportion across these groups were assessed using chi-squared tests. For continuous variables, between-group differences were assessed using the Kruskal-Wallis rank-sum test. Multivariable logistic regression models were fit relative to the two outcomes of interest (i.e., onset of serious SARS-CoV-2 infection and death from SARS-CoV-2 infection). The index date was the date of diagnosis of COVID-19. For onset of serious SARS-CoV-2 infection, follow-up time was censored by death (if it occurred prior to onset of SARS-CoV-2 complications) or the end of the study period (December 1, 2020). For death from SARS-CoV-2 infection, follow-up time was censored by the end of the study period. All variables described above were included in each model. The demographics of this cohort are described in Table 1. The majority of the patients were male (n = 337; 95.7%), with a median age of 72.6 years (IQR: 69-77.3 years). Whites made up almost two-thirds of this cohort (n = 234; 66.5%), while Blacks comprised 29% (n = 102). Hispanic or Latino ethnicity was noted in 3.4% of this cohort (n = 12). Nearly half of the cohort consisted of current smokers (n = 158; 44.9%), while another 42.6% of the cohort were former smokers (n = 150). Adenocarcinoma was the most common histology seen (n = 142; 40.3%), and the most recent treatment received in almost 80% of the cohort was an immune checkpoint inhibitor.
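As context for the risk-factor results that follow, the statistical workflow described in the Statistical analysis subsection above (assignment of the composite outcomes, chi-squared and Kruskal-Wallis comparisons, and multivariable logistic regression with odds ratios and 95% confidence intervals) can be sketched as below. This is an illustrative outline only, not the study's actual code; the file name and all column names (e.g. hosp_2wk, hemoglobin_low, recent_ici) are hypothetical placeholders.

```python
# Illustrative sketch (not the study's code) of the analysis pipeline
# described in the Methods. The file name and all column names are
# hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency, kruskal

df = pd.read_csv("lung_cancer_covid_cohort.csv")  # hypothetical extract

# Composite outcomes: serious infection within 2 weeks, death within 4 weeks
df["serious_infection"] = df[["hosp_2wk", "icu_2wk", "vent_2wk"]].any(axis=1).astype(int)
df["death_4wk"] = (df["days_to_death"] <= 28).astype(int)

# Group comparisons: chi-squared (categorical), Kruskal-Wallis (continuous)
_, p_diabetes, _, _ = chi2_contingency(pd.crosstab(df["diabetes"], df["outcome_group"]))
_, p_age = kruskal(*[g["age"].dropna() for _, g in df.groupby("outcome_group")])

# Multivariable logistic regression for death within 4 weeks
X = sm.add_constant(df[["age", "hemoglobin_low", "recent_ici", "diabetes"]])
fit = sm.Logit(df["death_4wk"], X).fit()

# Odds ratios and 95% CIs are the exponentiated coefficients and CI bounds
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```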
Twenty-two patients (6.2%) received radiation therapy, while 8 patients (2.3%) underwent surgery in the 6 months preceding their COVID-19 diagnosis. Patients who had fatal or severe infection were older than those with mild/moderate infection (median age: 76.1 and 73.0 years, vs. 72.0 years; p < 0.001). Patients who suffered a fatal or severe infection were also more likely than patients with mild/moderate infection to exhibit low hemoglobin levels (62.3% and 52.4%, vs. 37.8%, p = 0.001). In addition, 57.4% of patients who died within four weeks of SARS-CoV-2 infection suffered from diabetes, compared to 40.6% of patients with mild/moderate infection (p = 0.01). We used multivariable logistic regression to evaluate the relationship between these baseline characteristics and the odds of experiencing serious SARS-CoV-2 infection, or death within four weeks of infection (Table 2). Older patients had increased odds of experiencing severe infection or death (OR: 1.09; 95% CI 1.04-1.14; p < 0.001). Patients who had anemia (OR: 2.8; 95% CI 1.47-5.33; p = 0.002) also had increased odds of severe infection or death. Gender, race, ethnicity, histology, recent surgery or radiation, chronic obstructive pulmonary disease, hypertension, chronic kidney disease, acute myocardial infarction, and other laboratory values were not associated with an increased risk of severe/fatal infection. In addition, we prepared a multivariable logistic regression model to indicate the odds of death within four weeks of infection (Table 3). Older patients had an increased risk of death (OR: 1.11; 95% CI 1.05-1.18; p < 0.001) from SARS-CoV-2 infection. Patients who most recently underwent checkpoint inhibitor therapy also had increased odds of fatal infection (OR: 6.43; 95% CI 1.76-23.44; p = 0.005). The presence of anemia was also associated with increased odds of fatal infection (OR: 2.4; 95% CI 1.02-5.61; p = 0.044). Factors associated with lower odds of death following infection included elevated alkaline phosphatase levels (OR: 0.19; 95% CI 0.04-1; p = 0.05). Gender, race, ethnicity, histology, stage at diagnosis, previous surgery or radiation therapy, diabetes, hypertension, history of stroke, and other laboratory parameters were not associated with mortality from SARS-CoV-2 infection. Discussion Almost 30% of lung cancer patients with COVID-19 infection either developed complications or died. The case fatality rate for lung cancer patients in the present cohort was 17.3%. In contrast to previous studies, the case fatality rate in our series was significantly lower. As discussed earlier, previous analyses suggested that the case fatality rate in lung cancer patients was greater than 30% [12][13][14]. The first analysis by Tagliamento et al. was a meta-analysis of all previously published studies [12]. Interestingly, they found that the case fatality rate in studies that included more than 100 patients was lower than that seen in smaller series. They found that the case fatality rate was 32.4% (95% CI 26.5-39.6%) when they included all studies, but this decreased to 22.7% (95% CI 11.8-43.8%) when they excluded studies with < 100 patients, suggesting a possible reporting bias. The latter estimate is similar to the findings from our cohort, which included 352 patients. When compared to the patients in the aforementioned TERAVOLT cohort, the patients in the present cohort were older (IQR: 69-77.3 years vs. 61.8-75 years), more likely to be male (95.7% vs. 70%), to have been exposed to tobacco (87% vs.
81%), and to have been diagnosed with stage I lung cancer (34.7% vs. 8%) [18]. Similarly, when compared to the patient demographics in the Spanish GRAVID study, the patients in the present cohort were older (mean age 72.6 vs. 67.1 years), more likely to be male (95.7% vs. 74.3%), and more likely to be diagnosed with stage I disease (34.7% vs. 10%) [14]. Our analysis has the advantage of including all the patients within a large US database and is, therefore, less likely to be susceptible to reporting bias. Another possible reason for this discrepancy could have been the stage distribution of patients in the present cohort, as a larger proportion were diagnosed with stage I disease. A third possibility could be that our cohort included patients diagnosed throughout 2020. It is possible that these patients may have had the benefit of being treated after more information was known about the pathophysiology of COVID-19 and therefore received more appropriate care. In the present cohort, increasing age, prior immune checkpoint inhibitor therapy, and a low hemoglobin level were associated with severe/fatal SARS-CoV-2 infection. When we evaluated factors associated with mortality in our cohort, these same factors were associated with increased mortality, while elevated alkaline phosphatase was associated with a decreased risk of mortality. A prospective French study of 1230 cancer patients, of whom 475 had confirmed COVID-19 disease, showed that male gender, metastatic disease, a history of inflammatory or autoimmune disease receiving immunosuppressive treatment, and lymphopenia were independent predictive factors for death on multivariate analysis [19]. The TERAVOLT study, a global consortium studying the effect of SARS-CoV-2 in patients with thoracic cancers, has presented data on 400 patients so far [13]. In their updated analysis, age > 65, ECOG PS ≥ 1, and use of steroids and anticoagulation prior to COVID-19 diagnosis were associated with an increased risk of death [20]. The GRAVID study, a prospective, observational study of patients with lung cancer and PCR-confirmed COVID-19 diagnosis across 65 hospitals in Spain, demonstrated a mortality rate of 32.7% [14]. They observed a higher mortality rate among patients treated with corticosteroids during their hospitalization, while anticancer therapy, including immunotherapy, was not associated with an increased risk of hospitalization or death. However, only 20% of their patients received an immune checkpoint inhibitor. They also found that patients with lymphocytopenia and high LDH had an increased risk of death, but the lymphocytopenia was most likely related to corticosteroid use, while an elevated LDH was most likely a marker of disease severity. A small report from Indonesia found that an elevated neutrophil-to-lymphocyte ratio and an elevated platelet-to-lymphocyte ratio were significantly associated with an increased risk of death from COVID-19 infection in patients with non-small cell lung cancer [21]. In contrast, in our analysis, there was no association between laboratory parameters other than hemoglobin and either severe infection or mortality from COVID-19 infection. Given the inherent nature of our population, which was overwhelmingly male, we were not able to detect a gender difference in mortality or risk of severe disease. There has been a lot of debate on the influence of immune checkpoint inhibitor therapy on outcomes from COVID-19 infection.
On the one hand, immune checkpoint inhibitors can increase the robustness of the T-cell response and improve the ability to resist the impact of SARS-CoV-2 infection, resulting in a good clinical outcome; on the other hand, they can also stimulate the development of a hyperimmune response and a severe systemic inflammatory state, which has been the hallmark of severe COVID-19 disease [22]. Clinical evidence on the effect of checkpoint inhibitors on COVID-19 outcomes has been mixed. A single-center, retrospective study of 423 patients with symptomatic COVID-19 found that treatment with immune checkpoint inhibitors (ICIs) predicted hospitalization and respiratory illness [23]. In contrast, in another study of 69 patients with non-small cell lung cancer, Luo and colleagues did not find an association between PD-1 inhibition and an increased risk of severe COVID-19, defined as a composite rate of intensive care unit stay, intubation, transition to do-not-intubate status, and death [24]. In our cohort, the use of immune checkpoint inhibitors was associated with an increased risk of severe SARS-CoV-2 infection and death from COVID-19 disease. One interesting observation in our cohort was that elevated alkaline phosphatase levels were associated with a decreased risk of mortality. Previous studies that have examined the association between elevated liver-associated enzymes and the severity of COVID-19 infection have noted that elevated liver enzymes were associated with poor outcomes. Chela et al. studied 14,138 patients from the Cerner Real-World Data™ de-identified COVID-19 patient cohort [25]. They found that elevated ALT, AST, and total bilirubin levels, but not elevated alkaline phosphatase levels, were associated with COVID-19 mortality or risk of intubation. In another study from Poland involving 2184 patients, only an elevated AST level was associated with COVID-19-related mortality [26]. In this analysis too, alkaline phosphatase was not associated with COVID-19 outcomes. The main difference between these studies and ours is that the present study was focused only on patients with lung cancer. However, a clinical explanation for the observation that an elevated alkaline phosphatase protects against COVID-19 mortality in lung cancer patients is unclear, and this finding may simply be due to chance. We did not find any association between race, ethnicity, place of residence, histology, the presence of chronic obstructive pulmonary disease, hypertension, history of acute myocardial infarction or stroke, or thoracic radiation therapy and outcomes from SARS-CoV-2 infection. While unexpected, this is not different from what has been reported earlier [13]. Our analysis has limitations. This was a retrospective cohort of patients assembled using the VA electronic medical record system, and it therefore suffers from the drawbacks of any retrospective analysis; a major drawback is the absence of stage information in a large proportion of patients. This is most likely due to the nature of our database, which uses the cancer registry to identify the stage. Cancer registries usually lag actual diagnosis by at least 6 months and hence are unlikely to have staging information on more recently diagnosed patients. Unlike some of the prospective studies discussed above, we were not able to capture various clinical factors that could potentially have been relevant for identifying factors affecting outcomes.
Additionally, the cohort only includes patients diagnosed with COVID-19 on or before December 1, 2020, prior to the emergency use authorization of COVID-19 vaccinations or COVID-19-specific treatments. As a result, we are unable to measure the effect of vaccination or COVID-19-specific treatments on outcomes for lung cancer patients. Despite this, the present analysis is one of the largest data sets on lung cancer patients affected by COVID-19 disease. Since it includes the entire VA population, the odds of reporting bias are significantly reduced. Second, since this is a cohort of predominantly male veterans from the United States, the generalizability of these findings to female patients is limited, even though approximately 15% of this cohort was made up of women. In conclusion, we found that the mortality of lung cancer patients from COVID-19 disease was less than what has been previously reported in the literature. Older age, anemia, and use of immune checkpoint inhibitors were associated with increased mortality in this cohort. These data will help guide treatment of lung cancer patients with COVID-19 disease during the current pandemic. Funding This work was supported by the VA Office of Research and Development, Cooperative Studies Program. The views represented are those of the study authors, and they do not necessarily reflect those of the United States Department of Veterans Affairs or the United States federal government. Data availability statement Due to the sensitive nature of the data collected for this study, requests to access the data set are limited to qualified VA-affiliated researchers. Declarations Conflict of interest AKG: Consultant: Genentech, AstraZeneca, G1 Therapeutics, Jazz Pharmaceuticals, Flagship Biosciences; Research Support: Takeda; DSMC: Y-mAbs Therapeutics. MK: Executive Program Director of Oncology, Specialty Care Services, VHA, Department of Veterans Affairs. The other authors do not have any conflicts of interest.
2023-03-03T06:17:03.344Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "1d640f6d7eb17ca2846f70228a8c38c89861ea2b", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s10147-023-02311-3.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "abe5075a4cf158e395e3045cedd322a91e7adf74", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
214228281
pes2o/s2orc
v3-fos-license
Thermal stability and some thermodynamics analysis of heat treated quaternary CuAlNiTa shape memory alloy This study presents the effect of heat treatment on a quaternary Cu79Al13Ni4Ta4 (wt%) shape memory alloy. The induction technique was used for melting the alloy, and four pieces of the alloy were then heat-treated at 673 K, 873 K, 1073 K, and 1273 K for one hour. Some physical parameters were characterized using DSC, XRD, and SEM-EDX. The as-cast and heat-treated samples were studied in terms of phase transformation temperatures, enthalpy change, entropy change, Gibbs free energy, and elastic energy. The transformation temperatures increased after heat treatment. No martensitic phase transformation was observed at a heat-treatment temperature of 1273 K. In addition, with increasing heat-treatment temperature, phases such as γ′1 and β′1 were identified in the XRD patterns and SEM images. Generally, for the heat-treated samples, the transformation temperatures remain almost constant after the 3rd cycle. However, the thermal stability of the as-cast alloy was not affected by thermal cycling. Introduction Recently, shape memory alloys have been widely used in modern technological applications such as automotive, aerospace, robotics, and biomedical applications [1,2], because of their well-known properties (superelasticity and the shape memory effect), which give them the ability to return to their original form after deformation [3,4]. In all applications, the effect of temperature on material characteristics is one of the most important factors, especially for the production of primary materials. Although Low-Temperature Shape Memory Alloys (LTSMAs) have transformation temperatures below 100°C, High-Temperature Shape Memory Alloys (HTSMAs) can operate at temperatures above 100°C [5,6]. After the NiTi family, Cu-based SMAs have attracted the most attention in technology and industry, because they can be handled easily and their production costs less than that of other types of SMAs [7]. CuAlNi is the best type of Cu-based HTSMA because it has a high transformation temperature and good thermal stability. CuAlNi SMAs can be used at temperatures around 200°C, while NiTi and CuZnAl-based alloys can be used at temperatures of about 100°C [8,9]. On the other hand, Cu-based SMAs have poorer thermal stability, thermal conductivity, and some mechanical properties compared to other types of SMAs; however, these properties can be improved by adding third and fourth chemical elements [7,[10][11][12]. Saud et al observed that the porosity density and grain size were decreased by adding 2 wt% of Ta to Cu-Al-Ni, while its transformation temperature and corrosion resistance were increased [11]. Dagdelen and colleagues reported that the grain size and precipitations were decreased by doping a CuAlCr alloy with Ni [13]. In addition, it has been reported that microhardness and precipitate particles are increased by substituting Cu with Cr in a CuAl-based SMA [14]. In this study, 4 wt% of tantalum was added to a ternary Cu83−xAl13Ni4 SMA, and the effect of heat treatment on the transformation temperature, microstructure, and some other thermodynamic properties has been investigated. The stability of the alloy was studied by applying 10 complete thermal cycles. Experimental procedure The CuAlNiTa SMA was produced using high-purity powders of the primary metal elements, in the composition 79 wt% Cu-13 wt% Al-4 wt% Ni-4 wt% Ta.
The powders were mixed, pelletized, and pressed with a mechanical hydraulic compressor (SPACAC). The pelleted specimens were melted using an induction melting furnace. The production process was completed by quenching the sample into ice-water. To study the effect of heat treatment on the CuAlNiTa SMA, the ingot was cut into small pieces, which were kept at 673 K (sample A), 873 K (sample B), 1073 K (sample C) and 1273 K (sample D) for one hour. Then, the effect of heat treatment on the phase transformation temperatures (PTTs) and some thermodynamic parameters, such as entropy change, enthalpy change, Gibbs free energy, and elastic energy, was investigated using a Perkin Elmer Sapphire Differential Scanning Calorimeter (DSC) under an argon atmosphere at a 10°C min−1 heating-cooling rate. XRD was performed at room temperature to analyze the crystal structure and the different phases of the CuAlNiTa SMA for the as-cast and heat-treated samples. In addition, the microstructure of the treated alloys was investigated using a scanning electron microscope (SEM) with energy dispersive x-ray spectroscopy (EDX), model EVO 40XVP; to obtain clear microimages, the specimens were first polished mechanically and then etched with a 20 ml HCl-96 ml methanol-5 g FeCl3-H2O solution. Results and discussions 3.1. Phase transformation temperature and thermodynamic properties DSC measurements were performed for the thermal analysis of all the samples. Figure 1(a) shows the typical DSC curve of the as-cast CuAlNiTa alloy. Since the austenite and martensite transformation temperatures are above 400 K, the alloy can be classified as a high-temperature shape memory alloy (HTSMA). Table 1 summarizes the PTTs, including the austenite start (As), austenite peak (Ap), austenite finish (Af), martensite start (Ms), martensite peak (Mp), and martensite finish (Mf) temperatures, and the enthalpy change of the phase transformation in both the heating and cooling processes. The obtained results showed that all parameters were affected by the heat treatment process. To show the thermal cycling behavior, the as-cast sample was subjected to 10 cycles for both the exothermic and endothermic processes at a 10 K min−1 heating/cooling rate (figure 1(b)). The width of the peaks stays constant, while both the austenite and martensite peaks start to shift to higher temperatures as the number of cycles increases. Also, the fluctuations in the heating peak disappeared after the first cycle was completed. In addition, sample D lost its shape memory characteristics; thus, the DSC showed no peak during either the heating or the cooling process (figure 1(c)). There can be several reasons why the martensitic transformation does not occur after heat treatment at 1273 K. First of all, this temperature (1273 K) is close to the melting zone and there is a phase transition in this zone, as can be seen in figure 1(c). This phase transition causes the CuAlNiTa alloy to lose its shape memory property. In addition, the precipitation phase within the martensite phase affects the transformation, and thus the composition around the phase is altered. Also, there are interface phases between the martensite variants [15,16]. Figure 2(a) illustrates that the PTTs generally increased after heat treatment (except for sample D). The alloy that was aged at 673 K has the largest temperature hysteresis; however, its value decreased with increasing heat-treatment temperature (figure 2(b)).
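As an aside, the way peak transformation temperatures such as Ap and Mp are read from the DSC heating and cooling curves can be illustrated with the short sketch below. This is not the authors' analysis routine; the file names, column layout, and the sign convention for endothermic/exothermic peaks are assumptions.

```python
# Illustrative sketch (not the authors' routine) of reading the austenite and
# martensite peak temperatures (Ap, Mp) from digitized DSC curves.
# File names, column layout, and peak sign conventions are assumptions.
import numpy as np

heating = np.loadtxt("dsc_heating.csv", delimiter=",", skiprows=1)  # T [K], heat flow [mW]
cooling = np.loadtxt("dsc_cooling.csv", delimiter=",", skiprows=1)

A_p = heating[np.argmax(heating[:, 1]), 0]  # endothermic peak on heating (sign may be flipped)
M_p = cooling[np.argmin(cooling[:, 1]), 0]  # exothermic peak on cooling
print(f"Ap = {A_p:.1f} K, Mp = {M_p:.1f} K, hysteresis Ap - Mp = {A_p - M_p:.1f} K")
```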
In this study, only the lowest heat-treatment temperature (673 K) lies below the eutectoid phase decomposition; the eutectoid decomposition occurs at 838 K [17]. Thus, it can be concluded that the significant decrease in the hysteresis value occurred once the eutectoid transformation had taken place. In addition, while the Ap (figure 3(a)) and Mp (figure 3(b)) values of the as-cast alloy increase gradually with thermal cycling, in all heat-treated samples these values diminished. It is clearly observed that after 3 to 4 thermal cycles all cases have stabilized. Also, the Mp values converge as the thermal cycling process is repeated. For the first cycle, sample C recorded the highest Mp value, while after three completed cycles its value had decreased by about 100 K. In general, the TT values of the as-cast sample increased while the TT values of the heat-treated alloys decreased; in addition, after four complete cycles the TTs were almost stabilized. According to the obtained results, it can be said that thermal cycling in the DSC could regulate the TT values in all cases. The heat exchanged during the heating process (the enthalpy change ΔH(M→A)) was obtained using the DSC software, based on integration from the austenite start to the austenite finish temperature. The area under the DSC peak represents the enthalpy change [10]: ΔH(M→A) = ∫ from As to Af of (dq/dt) dt, where dq/dt is the derivative of the instantaneous heat absorbed by the sample, and T and t are the absolute temperature and time, respectively. In addition, another extensive property is the entropy change (ΔS), which reflects the disorder of the microstructure in the alloy. For the austenite transformation, ΔS can be found using the following formula [18]: ΔS(M→A) = S_A − S_M = (H_A − H_M)/T0 = ΔH(M→A)/T0, where H_A and S_A are the enthalpy and entropy in austenite, H_M and S_M are the enthalpy and entropy in the martensite phase, and T0 is the equilibrium temperature between the two phases. The driving (pushing) force for transforming austenite to martensite can be given as [22]: ΔG(A→M) = ΔS(M→A) (T0 − Ms). In addition, the elastic energy can be obtained from the difference between the Gibbs free energy at the beginning of the martensite phase transformation and that when the transformation is completed [23]: G_e = ΔG(Mf) − ΔG(Ms) = ΔS(M→A) (Ms − Mf). All calculated parameters are listed in table 2. All of the calculated parameters were obtained using the DSC measurements and the aforementioned equations. In figure 4 it can be seen that all parameters follow the same pattern, since they are direct functions of the enthalpy change. Generally, the sample heat-treated at 673 K recorded the highest enthalpy/entropy change, with comparably large ΔG(A→M) and G_e values. Crystal and microstructural analysis Figure 5 shows the XRD patterns of all specimens. The peaks were indexed according to the literature [3,[24][25][26]. The main peaks are β′1 (thin-plate martensite) and γ′1 (thick-plate martensite). There are also some austenite phases, including γ1 and α. The DSC results support this finding. The matrix of all the alloys showed martensite phases with some trapped austenite precipitation phases. Furthermore, the pattern of the alloy was affected by heat treatment, i.e. some peaks were strengthened. Moreover, to obtain the grain size of the alloys, the Scherrer equation can be used, which is mostly limited to metallic and ceramic microstructures with grains in the nanoscale range. Paul Scherrer proposed his equation, which is based on the wavelength of the incident x-ray (λ = 1.5406 Å), the Bragg angle (θ), a shape factor (K = 0.9), and the width at half maximum (B) of the XRD peak.
Thus, the equation is as follows [27,28]: D = Kλ/(B cos θ), where D is the crystallite size. Figure 6 shows the effect of the thermal treatment on the grain size of the alloy. It can clearly be seen that heat treatment increased the size of the grains. It has been shown that grain size influences the mechanical behavior of all materials [29,30]. In this study, three separate tests were carried out for the Vickers microhardness measurements. The standard deviation shows that different results are obtained from the various microstructures of the alloys (figure 7). Table 2. Some calculated thermodynamic parameters for both treated and non-treated samples. Generally, it is found that the value of the microhardness decreased with increasing crystallite size. Also, the microhardness depends on the heat-treatment temperature. Sugimoto et al showed that the microhardness of a Cu-Al-Ni-Ti alloy starts to increase for heat treatment at 773 K; however, it falls for higher temperatures [31]. The same results were obtained for the Cu-Al-Ni-Ta alloy. On the other hand, Sampath [32] tried to enhance the shape memory characteristics and ductility of CuAlNi-based SMAs through grain refinement by adding fourth elements. He found that the microhardness was increased by decreasing the grain size of the alloys [32]. The microstructures of the as-cast and heat-treated samples are shown in figure 8. Two different martensite phases can be seen in the SEM images: the first is coarse γ′1 and the second is fine β′1. There are also some precipitation phases, such as sphere-like Ta, Al, and Ta(Cu,Al)2 phases. Since the SEM images were taken at room temperature, they show γ′1 and β′1 phases in the matrix of all specimens, so the results support the DSC measurements, which indicate that the samples are in the martensite phase at room temperature. It has been shown that γ′1 (2H) has a higher Al content than the β′1 (18R) phase [33]. In addition, the amounts of these types of martensite phases can influence transformation characteristics such as the PTTs [34,35]. Although the microstructure of the CuAlNiTa alloy heat-treated at 1273 K shows martensite phases, there is no sign of a phase transformation in its DSC measurement. In figure 8, grain boundaries cannot be seen on the given scale, while Sari [33] found that as-cast CuAlNi and CuAlNiMn alloys showed microscopic grains of about 1400 and 350 μm, respectively. The production technique is an important parameter that affects the grain size and hence all the mechanical properties. Conclusions In summary, the main outcomes of this study are as follows: • Martensitic phase transformation was observed at all heat-treatment temperatures except 1273 K, where the sample gave no transformation. The PTTs were increased by applying heat treatment. • It was found that thermal cycling diminished the Mp temperature in all heat-treated samples, while its value gradually increased in the as-cast alloy. The transformation temperatures remain almost constant after the 3rd cycle, i.e., the thermal stability of the alloys increases. • The XRD patterns changed after the heat-treatment process was carried out; as the crystallization of the alloy increased, the number of diffraction peaks also increased. • The grain sizes calculated using the Scherrer formula are in the nanoscale range, and the grain boundaries are not visible in the SEM images at magnifications of 2.50 kX and 500 kX.
However, the crystal size generally increased with increasing heat-treatment temperature. • The grain size grew with the application of the heat-treatment process. It was found that the quantitative hardness value decreased with increasing crystal size. • From the SEM micrographs and EDX analysis, it was found that there are two martensite phases in the alloys: the thin β′1 phase (the matrix) and the thick γ′1 phase. In connection with the production method, tantalum-rich regions were found.
2019-11-28T12:26:08.415Z
2019-12-09T00:00:00.000
{ "year": 2019, "sha1": "aeaf7ee148789abb0fdd640f524ba04580498a7a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2053-1591/ab5bef", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "9247ebd61aefde0272e1d78be7dc1dcbf88d756e", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
236858003
pes2o/s2orc
v3-fos-license
A Quick Laboratory Method for Assessment of Blood Penetration and Splash Resistance of PPE Fabrics During the COVID-19 Pandemic Situation In the current outbreak of COVID-19, healthcare facilities have been hit by a shortage of supply of Personal Protective Equipment (PPE) owing to extensive local and global demand and restrictions on its import or export. To circumvent this, trials of several indigenous materials that may qualify for PPEs, and of sterilization techniques for their reuse, are being carried out. Prior to their commercialisation, it is imperative to evaluate the resistance of the PPE fabrics against penetration of synthetic blood under applied pressures of 40-300 mmHg as per the test standards. Generally, two types of tests are recommended, the Penetration Test and the Splash Resistance Test, the former being more stringent. While the final certification of PPEs is carried out by authorised agencies, a first-impression quick estimate of the choice of fabric can be made using a simple laboratory set-up. This study describes set-ups developed in the laboratory to carry out these tests. Evaluation of the fabrics post-gamma irradiation was also carried out. Microscopic examinations were performed to investigate radiation-induced structural changes in fabrics showing degraded performance. This set-up is useful for the selection of fabrics and to assess the feasibility of reuse of PPEs, which is the need of the hour in this pandemic situation. Introduction In the current scenario of the COVID-19 pandemic, a large number of Health Care Professionals (HCPs) are being engaged for medical, clinical, and logistics management. The chances of accidental exposure have increased, with the possibility of contact with pathogenic organisms. Major routes of pathogen entry are inhalation and/or ingestion of aerosols, open skin lesions, subcutaneous injection through needles, and scratches with contaminated objects (Coelho and García-Díez 2015; Wurtz et al. 2016). Some crucial viral infections, such as the recent outbreaks of Ebola virus, norovirus, and Middle East Respiratory Syndrome (MERS) coronavirus, have already been reported for their association with occupational infections (Kilmarx et al. 2014; Saegeman et al. 2015; Hsieh 2015). To avoid/minimize the risk of accidental exposure to pathogenic organisms, it is mandatory that HCPs wear suitable Personal Protective Equipment (PPE) at workplaces when dealing with infected patients. The current scenario demands quality-controlled, rapid production of PPEs such as face masks, aprons, gloves, etc. PPEs act as a protective barrier between the patients and the HCPs by preventing penetration of body fluids (blood, saliva, plasma, serum, urine, spit, etc.), which may ooze or spurt out from the patient's body at a pressure ranging from 60 to 300 mmHg (Jones et al. 2020). The spurted body fluids may carry various kinds of contagious microorganisms, such as viruses, bacteria, fungal spores, and other parasites. Among these, the viruses are critical and require special attention, as they are small in size (nanometre range) and thus become the limiting criterion for the selection of an appropriate fabric for the PPEs (Nikiforuk et al. 2017). Selection of a suitable PPE material is a crucial step for PPE manufacturing industries. ISO 16603 and ASTM F1670 are the most widely followed standards for categorising these materials, as per their performance, into different classes, which can be used in various medical/clinical scenarios (ISO 16603 2004; ASTM F1670/F1670M-17a 2017).
There are various methods available for testing the resistance of the fabrics used for PPE kit preparation. Two standard methods, (1) the Synthetic Blood Penetration Resistance Test and (2) the Splash Resistance Test (with synthetic blood), are widely accepted and practiced in the domain of PPE fabric testing (Rengasamy et al. 2015; Shimasaki et al. 2017). The Synthetic Blood Penetration Resistance Test (SBPRT) is the standard, most crucial, and most widely used criterion for qualification of the fabric. It involves assessing the resistance of the fabric against an applied fluid (synthetic blood) in the pressure range of 60-160 mmHg for varying time durations (5-30 min or more) (JIS T 8060 2007; JIS T 8122 2007). Detection of penetrated fluid downstream of the test sample acts as the qualifying indicator of the test. These tests demand high-end, sophisticated, established infrastructure involving high-cost instruments/equipment. Besides, these facilities were non-functional or inaccessible during the lockdown, when PPEs were in high demand. The proposed method is a simple laboratory set-up, which can be assembled from existing regular laboratory instruments/equipment in any biological laboratory with tissue-culture facilities. Most of the components/equipment used in this set-up, such as syringes, a manometer, and a filtration assembly, are commonly available in the market at low cost. In this pandemic crisis, small laboratory-scale test set-ups can help to assess the choice of fabrics having the potential to qualify for PPE manufacturing, and can be effectively used for performance evaluation of PPEs made from new materials, or of those subjected to sterilization methods for their possible reuse. Extensive use of PPE kits during the pandemic demands high availability and local manufacturing. Therefore, locally and globally, there has been an impetus for exploring the feasibility of reuse of PPE kits after complete sterilization (World Health Organization 2020). Among all sterilization methods (dry heat, vaporized H2O2, plasma gas, etc.), radiation sterilization has its own distinct advantages, as it can be carried out in a sealed package and without raising the temperature. 60Co gamma-ray-induced inactivation of SARS-CoV family viruses has been extensively reported. A dose of 30 kGy (60Co gamma rays) is recommended for complete inactivation of RNA viruses (SARS-CoV, MERS-CoV), based on cell-culture or tissue-culture assays (Kumar et al. 2015). This study includes the development of a test set-up from existing laboratory equipment to evaluate the performance of 11 PPE fabrics against penetration of a Synthetic Blood equivalent (SB) at applied fluid pressures ranging from 40 to 300 mmHg, both before and after radiation sterilization (30 kGy, gamma radiation). Of the 11 fabrics, three were subjected to the splash resistance test. As a mechanistic approach, radiation-induced structural changes in the fabrics were examined under fluorescence and bright-field microscopes. Test Materials/Fabrics Limited commercial information about the composition of the materials used in these fabrics was available to us; the largest fraction of most fabrics is polyethylene/polystyrene/polypropylene. All the fabrics were of non-woven type with multiple internal layers and hot-spot press patterns. However, the fractions and densities of polyethylene/polystyrene/polypropylene were not the same in all fabrics. Some samples of fabrics irradiated with gamma radiation to a dose of 30 kGy were also received, to study the effect of gamma sterilization on the PPE material.
Simulation of Test Set-Up for Synthetic Blood Penetration Resistance Test We have put together a working set-up from existing laboratory instruments while following the guidelines of ISO 16603, ASTM F1670, and JIS T 8060 and 8122, with minor modifications but without compromising on the physical parameters of the test standards. The set-up was built around a conventional biological media filtration assembly, as shown in Fig. 1. It included two vented cylindrical chambers, which could be sealed airtight to sustain the planned pressure levels. These two chambers were connected at the middle region of the assembly (which was designed to hold the filter) through a gauge. A circular piece of the test fabric, of diameter ~5 cm, was placed over this filter holder. The upper surface of the fabric was layered with 20 ml of SB, and this chamber was connected to the pressure unit of a sphygmomanometer to create and measure the air pressure in the chamber. It was ensured that the fluid column height of the SB was very small (~1 cm), so that its gravitational pressure is negligible compared to the applied pressure. For the production of PPEs, commercially available blood-repellent fabrics are chosen; the performance of these fabrics is evaluated by employing a liquid resembling the physical characteristics of blood. The surface tension of blood and other body fluids ranges from 42 to 60 mN/m (except saliva). To simulate similar wetting characteristics, synthetic blood was prepared with polysucrose and ionic salts of sodium and calcium disodium in water to obtain a surface tension of 50 ± 10 mN/m (Portnoff et al. 2021). A colouring agent was added for better detection of the SB penetrating through the fabrics. The whole set-up was kept in a 2.5 L beaker to support it and to collect the dripping fluid in the vertical set-up, as shown in Fig. 4. The pressure was exerted manually from chamber A with the help of a rubber bulb. The set-up was first tested with a non-porous nitrile glove material for its pressure-holding ability and for any leakages. It was confirmed that the pressure, once exerted, persists without any leakage up to the tested pressure of 300 mmHg. Our set-up thus qualified on the criteria of precise pressure creation and holding without any leakage. Simulation of Test Set-Up for Splash Resistance Test This test set-up comprised a circular fabric holder and an injector placed at a distance of 30 cm, as shown in Fig. 2. The fabric holder was mounted on a lead block for stability and placed in a beaker to collect the dripping liquid. The injector was placed at an appropriate height with the help of a height-adjustable stand. In one run, 8 ml of SB fluid was injected onto the fabric sample, and any penetrated fluid was detected on the other side of the fabric. Detection of Synthetic Blood Penetrating Through the Fabric For both of the above-mentioned tests, the penetrated SB was detected at 3 levels: (1) visual detection; (2) detection by absorbent paper: after applying the planned fluid pressure, absorbent paper was swiped onto the downstream surface, and the SB spot created, if any, was examined visually; and (3) detection by magnifying lens: the third level of detection was carried out with a magnifying hand-lens (for both penetrated droplets and spots created on the absorbent paper) (Fig. 3). Radiation Sterilization of PPE Fabrics For experimentation, a piece of 6 × 6 inch of fabric was cut from the PPEs and irradiated with a dose of 30 kGy at a dose rate of 6.2 kGy/h in the gamma chamber (GC-5000) at BARC.
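Two small arithmetic checks related to the set-up described above can be made explicit in the sketch below; the synthetic-blood density used is an assumed value close to that of water/blood, not a measured figure from this study.

```python
# Two quick arithmetic checks (illustrative) for the set-up described above.
# 1) Hydrostatic head of the ~1 cm synthetic-blood column (density assumed
#    close to water/blood), to confirm it is negligible versus 40-300 mmHg.
# 2) Irradiation time implied by a 30 kGy dose at 6.2 kGy/h.
rho, g, h = 1050.0, 9.81, 0.01          # kg/m^3 (assumed), m/s^2, m
p_mmHg = rho * g * h / 133.322          # 1 mmHg = 133.322 Pa
print(f"SB column pressure ~ {p_mmHg:.2f} mmHg")       # ~0.8 mmHg, negligible

dose, dose_rate = 30.0, 6.2             # kGy, kGy/h
print(f"Irradiation time ~ {dose / dose_rate:.1f} h")  # ~4.8 h
```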
Microscopic Examinations of Radiation-Induced Structural Changes in Fabrics The fabrics were cut into pieces of 2 cm × 1 cm (from the 6 × 6 inch piece), sandwiched between a glass slide and a cover glass, and the ends were sealed with rubber cement (Fig. 4) to prepare them for examination under a microscope. In total, 6 samples were observed and analysed, 3 before and 3 after radiation sterilization (30 kGy). Structural changes and examination of voids were carried out at 100× and 400× magnifications (Figs. 11 and 12). Performance Evaluations by Synthetic Blood Penetration Resistance Test The fabric samples A-K were subjected to the SBPR test over a pressure range of 0-300 mmHg, or up to the breaking/leaking pressure point. The fabrics were first exposed to the fluid for 5 min at atmospheric pressure. Pressure was then applied in increasing order, in steps of 20 mmHg, for 5 min at each pressure point. The Japanese Industrial Standard (JIS) has classified protective clothing/fabrics into different classes based on the performance of the fabrics against contact with blood (synthetic blood) and other body fluids applied under pressure; JIS T 8122 ranks the penetration resistance of the fabric material into 7 classes (from < class 1 to class 6), as described in Table 1 (Shimasaki et al. 2016). The results of the test with respect to the sustained pressure, the corresponding JIS T 8122 classification, and their utility classes are given in Table 2. The effect of radiation in degrading the performance of Fabrics A and B is also evident from Table 2. For Fabric A, the pressure-holding ability decreased from 80 to 60 mmHg, and Fabric B lost its ability to hold a 40 mmHg fluid pressure post-irradiation. Fabrics C, I, J and K did not reach the breaking point even up to a pressure of 300 mmHg, before or after radiation sterilization. It was observed that Fabrics D, E, F, G and H were able to sustain atmospheric pressure but could not resist a 40 mmHg fluid pressure and started leaking heavily, both before and after radiation sterilization. For all fabrics, the test was repeated three times to confirm the results. For Fabrics I and J, the resistance was also tested at the fabric joints (sewn and taped). It was observed that, at improperly taped joints, some creases trapped inside the taping line acted as a passage for the fluid to reach the sewn line and penetrate through it (Fig. 5). However, in the case of samples with properly taped joints, the highest pressure was sustained by the fabric. It is emphasized that the quality of taping plays a crucial role, irrespective of the performance of the fabric as a whole. Method of Detection of Fluid Penetration Through the Fabric Fluid penetration through the fabric was detected at three levels. The first level of detection was visual, with the naked eye, following the ISO 16603 protocols. To enhance the sensitivity of detection, a swipe test was performed, wherein the leaked surface was swiped with absorbent paper and the spot created on it, if any, was observed to confirm leakage (adapted from Shimasaki et al. 2017). The swipe test was added on the assumption that a micro amount of fluid penetrating through the fabric may not be visible to the naked eye but can be detected by the swipe test. The presence or absence of spots on the absorbent paper was further confirmed by observation under a magnifying hand-lens. These levels of detection helped us to define the cut-off pressure-holding point for each tested fabric.
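The stepping protocol described above (exposure at atmospheric pressure, then 20 mmHg increments held for 5 min each, up to 300 mmHg or the breaking point) can be summarized in the following sketch. It is purely illustrative: the leak check is a placeholder for the manual three-level detection, not an automated measurement.

```python
# Illustrative sketch (not the authors' procedure code) of the SBPRT stepping
# protocol described above. `fabric_leaks_at` is a hypothetical placeholder
# for the manual three-level leak check (visual, swipe, hand-lens).
import time

def fabric_leaks_at(pressure_mmHg: int) -> bool:
    """Placeholder for the operator's leak observation at this pressure step."""
    return False  # replace with the actual observation

def run_sbprt(hold_seconds: int = 5 * 60, max_pressure: int = 300):
    """Return the breaking pressure in mmHg, or None if the fabric holds 300 mmHg."""
    for pressure in range(0, max_pressure + 1, 20):   # 0 = atmospheric exposure
        time.sleep(hold_seconds)                      # hold 5 min at each step
        if fabric_leaks_at(pressure):
            return pressure
    return None

print(run_sbprt(hold_seconds=1))   # shortened hold time for a dry run
```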
Figure 6 shows visual observations of the samples, showing the leakages through the fabric at varying pressure points, both before and after irradiation. On the other hand, Fig. 7 shows the three levels of detection adopted to confirm the leakage at the breaking-point pressure. Fabric Evaluations by Splash Resistance Test The performance of Fabrics A, B and C was further evaluated by the Synthetic Blood Splash Resistance Test using the test set-up developed (Fig. 8). A fabric sample of ~2 cm diameter was placed in the fabric holder, and 8 ml of SB was splashed onto it at a pressure in the range of 280-300 mmHg from a distance of 30 cm. This was repeated three times on each sample, so that effectively 24 ml of SB was splashed onto each tested fabric sample. As per the ISO reference protocol, 2 ml of SB should be splashed onto the test fabric at pressures in the range of 60-160 mmHg from a distance of 30 cm (ISO 16603 2004). No fluid penetration was detected for any of the three fabrics (Fig. 9). Since the fabrics did not leak at the higher pressure ranges, with larger volumes, and over a longer duration, there was no need to evaluate their performance at lower pressure ranges. All three levels of detection were followed as described earlier and are depicted in Fig. 10. Microscopic Examination of Fabrics To observe the structural changes induced in the fabrics by radiation sterilization, microscopic examinations were carried out. It was observed that, post-irradiation, the number and size of voids increased in the mesh regions of Fabrics A and B; however, no substantial changes were observed in the pressed regions (the image of Fabric B is shown in Fig. 11; the image of Fabric A was similar and hence is not shown). This increased number and size of voids was responsible for the decreased fluid-pressure-holding ability. No structural changes were observed (in either the mesh or the pressed regions) in Fabric C, before or after radiation sterilization, as illustrated in Fig. 12. The information gathered by microscopic examination of Fabrics A, B and C was in accordance with our SBPRT findings, and it provided supplementary structural and mechanistic support for our observations. Conclusion Simple test set-ups have been put together using existing laboratory instruments to evaluate the blood penetration resistance of various PPE fabrics. These can help in quick screening and selection of fabric materials without compromising on the parameters of the tests. These set-ups can be adopted in any research/medical facility for rapid performance evaluation of PPE fabrics during any emergency situation like the current COVID-19 pandemic. Such a set-up will help local manufacturers to produce PPEs in adverse situations without compromising on the safety parameters. The pressure-holding ability of the fabric is an intrinsic property of the material that cannot be changed. However, when the PPEs are tailored, the quality of the sewing and taping at the joints is of utmost importance.
As observed in this study, the pressure holding ability of the poor joints was considerably reduced, much below the holding ability of the same fabric. An appropriate tailoring and taping to achieve proper sealing of the joints can improve the performances of these PPEs.
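To make the severity of the splash test explicit, the short calculation below compares the conditions actually applied (three splashes of 8 ml at 280-300 mmHg) with the ISO 16603 reference conditions quoted above; all numbers are taken from the description in the text.

```python
# Bookkeeping for the splash test described above: conditions actually applied
# versus the ISO 16603 reference conditions. All values are from the text.
splash_volume_ml, splashes = 8, 3
applied_pressure_mmhg = (280, 300)
iso_volume_ml, iso_pressure_mmhg = 2, (60, 160)

total_ml = splash_volume_ml * splashes                              # 24 mL per fabric
volume_ratio = total_ml / iso_volume_ml                             # 12x the reference volume
pressure_ratio = applied_pressure_mmhg[0] / iso_pressure_mmhg[1]    # ~1.8x the upper ISO bound
print(total_ml, volume_ratio, round(pressure_ratio, 2))
```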
2020-05-28T09:13:39.772Z
2020-05-27T00:00:00.000
{ "year": 2022, "sha1": "0298b7a5b9f185a6a882b195ffdb2e96511658a5", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s41403-021-00318-8.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "eb39029d437c1d7afb0eb589eb9f2e2388920dd0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Environmental Science" ] }
226025761
pes2o/s2orc
v3-fos-license
Outbreak of Tularemia in a Group of Hunters in Germany in 2018—Kinetics of Antibody and Cytokine Responses In November 2018, an outbreak of tularemia occurred among hare hunters in Bavaria, Germany. At least one infected hare was confirmed as the source of infection. A number of hunting dogs showed elevated antibody titers to Francisella tularensis, but the absence of titer increases in subsequent samples did not point to acute infections in dogs. Altogether, 12 persons associated with this hare hunt could be diagnosed with acute tularemia by detection of specific antibodies. In nine patients, the antibody and cytokine responses could be monitored over time. Eight out of these nine patients had developed detectable antibodies three weeks after exposure; in one individual the antibody response was delayed. All patients showed an increase in various cytokines and chemokines with a peak for most mediators in the first week after exposure. Cytokine levels showed individual variations, with high and low responders. The kinetics of seroconversion has implications on serological diagnoses of tularemia. Introduction Francisella tularensis is the causative agent of the zoonosis tularemia. The bacteria infect a broad range of animals and can be transmitted to humans by different routes: direct contact with infected or contaminated animals (ulceroglandular form), alimentary ingestion (oropharyngeal form), inhalation (respiratory form), or by smear infection (oculoglandular form) [1,2] (WHO Guidelines on Tularemia, 2007 (http://apps.who.int/iris/bitstream/10665/43793/1/9789241547376_eng.pdf)). Initial clinical symptoms are often fever and enlarged lymph nodes, followed by more specific signs depending on the route of infection. Hare hunters in endemic regions for tularemia are at special risk of the disease, typically due to underestimation or lack of awareness. In countries with a low prevalence of the disease, such as Germany [3], physicians might consider tularemia as a differential diagnosis quite late in the course of the disease, leading to a delay in specific diagnostics and treatment [4][5][6]. In November 2018, an outbreak of tularemia occurred among hare hunters in Bavaria, Germany. At least, 42 persons and 11 hunting dogs came into contact with one or more infected hares or were indirectly involved in the hunting event. Altogether, 12/41 tested persons associated to at least one infected hare could be confirmed serologically as having acquired tularemia infection. In addition, the cytokine response was determined in these patients. The epidemiological analyses with determination of risk factors were conducted elsewhere [7]. Hunters who were directly involved in the processing of hares were more at risk and were 10 times more likely to be infected with F. tularensis than hunting participants who were not directly involved. Furthermore, hunting dogs involved were tested for specific antibodies to investigate transmission of the pathogen from dog to human that was suspected in at least one case. Here, we describe the laboratory investigation of this outbreak, facilitating the early laboratory confirmation of clinical diagnoses of tularemia in patients with early and successful antibiotic treatment. Hare Material The Bavarian Health and Food Safety Authority (LGL, Oberschleissheim, Germany) conducted the collection and primary investigation of the hare samples. Four out of eight hunted hares were accessible for an investigation. 
Organ parts were tested in the microbiology department of animal bacteriology. In addition to testing for Francisella tularensis, PCR was also performed for leptospires and Brucella. DNA from tissue was extracted with the MagNA Pure 24 system (Roche, Mannheim, Germany) as recommended by the manufacturer. For real-time PCR, the LightMix Kit Francisella tularensis 16S (Tib Molbiol, Berlin, Germany) and LightCycler Fast Start Set Hybridization Probes (Roche) were used. The subspecies was determined by RD1 PCR [8] after cooking bacterial cells from isolated strains. At the Robert Koch Institute (RKI, Berlin, Germany), 16 cultures were investigated which were grown with suspected colonies for F. tularensis subsp. holarctica on Martin Lewis agar plates (BD Diagnostics, Heidelberg, Germany) or with turbidity in brain heart infusion enriched with isovitalex inoculated from different sample materials (muscle tissue, lymph node, and bone marrow) from four hares. DNA was isolated from lymph node material (A-1299/6) and bone marrow material (A-1299/7) from one hare, and the Francisella isolate (A-1338) was isolated from the lymph node from the same hare. The DNA isolation was performed by using the MagNA Pure 24 system (Roche), see above. A draft genome sequence was performed from Francisella strain A-1338 [9]. Samples from Hunting Dogs Serum from 10 hunting dogs involved in the hunting event was obtained. Nine dogs were tested twice for antibodies against the lipopolysaccharide (LPS) of F. tularensis. Nine throat swabs and EDTA blood samples were tested for F. tularensis by inoculation on culture media and by specific real-time PCR. Clinical Material from Humans: Blood Cultures, Throat Swabs, Serum Altogether, 42 persons were involved in this investigation: 35 persons participated in the hunting event, two were family members, one was a veterinary assistant, and four persons were butchery employees. Three of the latter four persons stated having had contact (e.g., touched, washed, or disassembled) to the hunted hares, while one person was not sure. However, all individuals stated the days when they had contact with the hunted hares and when the activities were carried out. A manuscript on the detailed outbreak description is in progress. All clinical samples (blood cultures, swabs, and sera) of patients admitted to the Klinikum St. Marien, Amberg, Germany, were tested for pathogens at the internal Institute of Laboratory Medicine and Microbiology, Klinikum St. Marien, and tested in parallel for F. tularensis and Brucella spp. at the Specialised Laboratory for Highly Pathogenic Bacteria (ZBS 2), RKI, Germany. Fifty-six blood cultures, ten throat swabs, and one eye swab were tested for F. tularensis and Brucella spp. by inoculation on culture media and by specific real-time PCR. Overall, 69 human sera were analyzed for antibodies against the LPS of F. tularensis. A signed consensus by patients for serum donation for late diagnostic investigation was obtained. All data were collected in the framework of the curative diagnostic approaches. Thus, the responsible Bavarian Ethical Committee confirmed upon request that an additional approval was not required. 
Cultivation of Sample Material from Humans and Dogs All throat swabs, the eye swab, and 50 µL of blood culture sample or EDTA blood were each streaked onto CHAB agar plates (CHA (Difco, Bestbion, Cologne, Germany), 1% brain heart infusion broth, 1% proteose-peptone, 1% D-glucose, 0.5% NaCl, 0.1% L-cystine, 1.5% agar, 9% sheep blood), commercial Neisseria selective medium Plus, and chocolate agar plates (both Oxoid, Wesel, Germany) at 37 • C with 5% CO 2 and incubated for up to 3 days. For differential diagnoses, the samples were plated onto commercial Columbia blood agar plates (Oxoid). For enrichment we used the liquid medium T described by Becker et al. [10]. Genomic DNA for Molecular Analysis DNA extraction was performed from all throat swabs, one eye swab, and all EDTA and whole blood samples. In addition, DNA extraction was conducted out of bacterial colony material (A-1338) or lymph node material (A-1299/6) and bone marrow material (A-1299/7) from hare using the QIAGEN DNeasy Blood and Tissue kit (Qiagen, Hilden, Germany) following the manufacturer's instructions as described recently [5]. DNA elution was performed in 100 µL of QIAGEN Elution Buffer (Qiagen). 2.6. PCR Detection fopA, tul4, DD brucellosis mazG, IS711, Singleplex and Multiplex Real-Time PCRs, RD1-PCR Multiplex real-time PCR (5 nuclease assay, TaqMan technology) targeting fopA and tul4 specific for F. tularensis in combination with the extraction and amplification control targeting KoMa2 were performed with oligonucleotides and probes as described recently [5]. A singleplex real-time PCR assay was performed from the clinical human sample for the detection of c-myc as an internal extraction control. In brief: Both real-time PCR assays were run in a total volume of 25 µL, including 5 µL of DNA samples to be analyzed. Samples were analyzed in duplicate in each run. Amplification was performed in an Applied Biosystems 7500 Real-Time PCR System (ThermoFisher Scientific, Langenselbold, Germany), each run including 40 cycles. The block PCR of the region of difference 1 (RD1-PCR) was used for the subspecies differentiation of F. tularensis as described recently [5]. The PCR was carried out using 15-100 ng of template DNA according to the protocol described by Broekhuijsen et al. [8]. Whole Genome Sequencing Whole genome sequencing (WGS) from DNA of sample A-1338, one Francisella isolate from a hare belonging to the outbreak, was performed and analyzed regarding the biovar and genetic clade of the respective strain [9]. Library pool sequencing was performed in paired-end mode on a MiSeq Instrument (Illumina, San Diego, CA, USA); for the next generation sequencing of sample A-1338-1 Illumina sequencing in combination with Nextera XT library generation was used (Illumina), as recently described [5]. Enzyme Linked Immunosorbent Assay (ELISA) and Western blot (WB) An ELISA was used for screening and WB for confirmation of antibodies against F. tularensis LPS. Both in-house assays are accredited by DIN EN ISO/IEC 17025:2005 and DIN EN ISO 15189:2014 and have been described elsewhere [11]. Briefly, a 96-well microtiter plate Nunc-Polysorb (Thermofisher Scientific, Berlin, Germany) was coated with purified LPS from the live vaccine strain as antigen (Micromun, Greifswald, Germany). Bound human antibodies to F. tularensis LPS were detected by polyvalent or monovalent goat anti-human IgA, IgM, and IgG horseradish peroxidase-conjugated secondary antibody (Dianova, Hamburg, Germany) and subsequent substrate reaction. 
Serum dilutions starting with 1:500 that revealed an optical density above the validated cut-off were counted as positive. The dog sera were tested by the same ELISA approach but an anti-dog IgG horseradish peroxidase-conjugated secondary antibody (Dianova, Hamburg, Germany) was used (this assay was not yet fully validated, but a number of unrelated dog sera showed negative results only). For the WB, the soluble fraction of formalin-inactivated live vaccine strain was separated using sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to polyvinylidene difluoride (PVDF) ImmobilonP-Millipore membranes (Roth, Karlsruhe, Germany). Using polyvalent horseradish peroxidase-conjugated secondary antibodies, the typical LPS ladder revealed the presence of specific anti-F. tularensis antibodies. The final results were obtained after confirmation of the ELISA results by WB: "positive" denoted strong bands, "negative" almost no bands, and "borderline" weak but clearly visible bands. Cytokine and Chemokine Measurements Serum cytokine and chemokine responses were analyzed by LegendPlex assay (BioLegend, Fell, Germany). See Supplementary Table S1 for the analyzed mediators and their respective detection limits. For time sampling points 1-3 (1, 2, and 3 weeks after exposure), sera from all nine patients for whom antibody testing was performed were available. For time sampling point 4 (21 weeks after exposure), only sera from patients 1, 3, 5, 6, 7, and 9 were available. As controls, 16 sera from healthy blood donors who were seronegative for tularemia were employed. These 16 sera were randomly picked from a larger group of negative sera. Patients were grouped according to their respective time sampling points. Statistical analysis was performed with GraphPad Prism 7 software (GraphPad Software Inc., La Jolla, CA, USA). For comparison between the analyzed groups, the Kruskal-Wallis test with subsequent Dunn's multiple comparisons test was used. Pathogen Detection in Hares, Humans, and Hunting Dogs At the Institute of Laboratory Medicine and Microbiology in Amberg, all blood cultures of patients admitted to the hospital remained negative until day 10 of incubation. In the framework of the initial laboratory diagnostic attempts, all throat samples were negative for Influenza A/B and Respiratory Syncytial Virus (RSV). Cultivation of throat swabs showed growth of normal bacterial flora of the oral cavity but no evidence of pathogenic bacteria or fungi. All sera were tested negative for Leptospira IgM and IgG. At the Robert Koch Institute, one hare out of four investigated animals was confirmed to be infected with F. tularensis subsp. holarctica by PCR detection. DNA from Francisella strain (A-1338) isolated from this hare and DNA isolated from the lymph node of the same hare (A-1299/6) has been sequenced, and phylogenetic analysis indicated that the F. tularensis ssp. holarctica strain belonged to biovar II, clade B.12 and subclade B.33 [9]. All throat swabs and blood cultures from the patients investigated one week after exposure remained negative in PCR testing for F. tularensis and Brucella spp. Neither subsequent cultures from blood cultures and throat swabs nor the eye swab showed growth of these bacteria. All investigated samples from hunting dogs remained negative for F. tularensis in both culture approaches and PCR. Antibody Detection in Hunting Dogs Ten of the 11 dogs involved were sampled close to the beginning of the outbreak. 
Several dog sera were positive but did not show dynamics for acute or new infections when testing paired sera within 8 days (Table 1). Serology and Antibody Kinetics in Humans As all blood cultures and throat swabs remained negative in PCR and culture, serological confirmation was the only approach to confirm an infection with F. tularensis. Altogether, 12/41 persons related to this outbreak could be serologically confirmed by ELISA and WB (data on WB not shown). In total, 10/35 hunting participants and 2/4 butchery employees were tested positive. Because tularemia was early suspected, the development of antibodies could be monitored in nine affected hunters, starting one week after exposure to confirm the clinical diagnoses with laboratory methods (Table 2). All nine patients were negative for antibodies against the pathogen one week after exposure (Figure 1A-D). Thus, all subsequent detection of antibodies could be taken as confirmation of the diagnosis. It can be seen that some patients started to develop specific antibodies one week after exposure and in parallel to the development of clinical symptoms. Interestingly, the patients reacting earliest (patients 3, 5, and 8) showed a predominant IgG response combined with IgA (3 and 5) rather than an IgM response at that early stage of the infection. The overall antibody response detected by a polyvalent anti-Ig conjugate was mainly determined by the IgG response. Three weeks after exposure and two weeks after clinical onset of the disease, all but one patient had developed specific antibodies. In one exposed patient [9], antibodies could not be detected during three weeks after exposure, but 21 weeks after the infection, when we were able to obtain another blood sample to check for seroconversion again, antibodies were detected. Cytokine and Chemokine Changes During Infection Serum levels of eotaxin, G-CSF, IFNα, IFNγ, IL-1β, IL-2, IL-6, IL-8, IL-10, IP-10, MCP-1, MIP-1α, MIP-1β, and PDGF were significantly increased in tularemia patients when compared to healthy controls and/or between different time points of sampling. Marked changes, however without statistical significance, were found for IL-4, IL-9, IL-12, IL-13, and IL-22 (Figure 2). Generally, most cytokine and chemokine levels peaked in the first week after exposure, followed by normalization over the next weeks. Of note, some cytokines and chemokines peaked at blood sampling time point 4 instead (eotaxin, IL-4, PDGF), or showed a second peak at time point 4 (IL-1β, IP-10, MCP-1, MIP-1α, MIP-1β). This second peak seen in the patient cohort 21 weeks after exposure was mainly due to elevations of these cytokines in patients 3, 6, and 7. Patients 4 and 7 showed the highest concentrations for most cytokines and chemokines tested at the first three sampling points (high responders). In contrast, patient 9 who seroconverted later than three weeks post exposure showed generally the lowest cytokine and chemokine concentrations throughout the observation period (low responder). Discussion With regard to tularemia in humans, Germany represents a region of low incidence. F. tularensis subsp.
holarctica is endemic and the only species known so far to cause tularemia in Germany [3,[11][12][13]. Up to 50 cases of tularemia are reported per year but the number increases, indicating that tularemia is a re-emerging disease [3,14]. Studies on Francisella isolates from humans and wild animals revealed a high genetic diversity of F. tularensis subsp. holarctica in Germany [9,12,13,15]. Phylogenetic analysis demonstrated that F. tularensis subsp. holarctica of biovar I (erythromycin-susceptible isolates) are mainly found in Western Europe and isolates of biovar II (erythromycin-resistant strains) occur in Northern and Eastern Europe. A similar North-West pattern is seen in Germany [1,9,[16][17][18][19][20]. In Germany, most human isolates belong to the phylogenetic clade B.12 (biovar II) or clade B.6 (biovar I). However, it seems that isolates of clade B.6 are more often isolated from tularemia patients with pneumonia than from individuals with other forms of the disease [9]. Biovar I (clade B.6) is more commonly found in human isolates from Bavaria [9]. In the outbreak described here, a Francisella isolate belonging to clade B.12/B.33 could be isolated from one of the hunted hares. The same was true for an uncommon tularemia outbreak associated with contaminated fresh grape must, which revealed F. tularensis subsp. holarctica belonging to the B.12 (B.34) phylogenetic clade as the causative agent. This outbreak occurred in Rhineland-Palatinate and involved six cases. Generally, strains of biovar I are dominant in this region [4,5,9]. It was shown that infected or contaminated dogs can infect humans [21]. One of our patients was not involved in the processing of hunted hares but had only contact to participating hunting dogs that were fed the remains of the processed hares. The serological investigation showed that probably 4/10 dogs had been exposed to F. tularensis prior to the outbreak described here, indicating a high prevalence of the pathogen in wildlife of this region. Dogs that tested positive did not show an increase in the antibody titer in paired sera taken with a time shift of eight days. However, it is not excluded that contaminated or infected dogs contributed to the human infections in this outbreak. Further investigation of hunting dogs with well validated assays is required to confirm this conclusion. The diagnosis of tularemia is often delayed due to the low prevalence of this zoonosis and the lack of awareness. After starting treatment with effective antibiotics like ciprofloxacin, it can be difficult to detect the pathogen. Recently, we have seen an increased detection rate conducting long-term blood cultures (up to 10 days) only when blood was taken before antibiotic treatment of the patients. Therefore, the diagnosis is often based on antibody detection and highly specific and sensitive assays are available [11,22]. In this outbreak, tularemia was suspected already one week after the exposure when the very first clinical flu-like symptoms like fever and malaise became apparent. This was very unusual and was based on the awareness of one of the hunters. The affected persons had admitted themselves to the hospital and, based on the patients' history and clinical symptoms, the treatment with first choice antibiotics (ciprofloxacin 2 × 400 mg intravenously. 
or 2 × 500 mg orally) according to the recommendation of the Robert Koch Institute (https://www.rki.de/DE/Content/Kommissionen/Stakob/ Stellungnahmen/Stellungnahme_Tularaemie.pdf?__blob=publicationFile (in German language)) [23] had been started immediately without laboratory confirmation of tularemia. However, the laboratory diagnostics started at the same time in parallel. As no visible manifestations of the disease such as skin ulcera or swollen lymph nodes were present at that time, blood cultures and throat swabs for detection of the pathogen as well as serum for detection of specific antibodies were the only clinical samples with the potential to confirm the disease. All samples taken from the patients remained negative for Francisella cultivation and genome detection as well as for other pathogens. Due to the effective antibiotic treatment, the patients recovered quickly and were discharged after one week while continuing the antibiotic treatment at home. In spite of the obvious epidemiological context and in order to verify the correct antibiotic treatment, appropriate measures were undertaken to confirm the clinical diagnosis by laboratory evidence. Sera taken one week after exposure were also tested negative for specific anti-Francisella-LPS antibodies. Thus, RKI's laboratory was requested by the treating physicians to confirm the clinical diagnosis as early as possible. Sera were repeatedly investigated to confirm or exclude tularemia. Eight out of nine clearly exposed individuals who all took part in the handling of the hunted hares developed antibodies three weeks after exposure (two weeks after onset of the disease). Interestingly, the diagnosis of tularemia in one patient could be confirmed only with the sample taken 21 weeks after exposure. This patient had no relevant previous illnesses and no home medication. He presented clinically with cephalgia and small wounds on his hands and small painful lymph nodes in the left axilla and on his upper arm. In the first blood count, he had a subtle bicytopenia (thrombopenia and leukopenia). We hypothesize that this could be due to a low bacterial load or due to a delayed or impaired immune response as reflected by the comparably low cytokine and chemokine levels in this patient. The patient did not report other risks of exposure until the detected seroconversion. Our data show that all exposed and hospitalized individuals became clinically ill and developed specific antibodies against the pathogen. The dynamics of antibody formation is individually different and the time span to develop antibodies detectable by standard ELISA might be longer than three weeks. There was no dominant immunoglobulin isotype involved. This confirms earlier observations that in tularemia not only IgM indicates an acute infection, but that rather all isotypes might be elevated at an early stage of infection [24]. It can be assumed that specific antibodies can be detected earlier when using lower initial dilutions of the serum (in our assay 1:500 was evaluated for best specificity and sensitivity), but in this case any lower specificity of the assay must be considered accordingly. In our study, we tested antibody kinetics in parallel with cytokine kinetics. While antibodies were detectable three weeks after the exposure to the pathogen, significant changes of cytokine and chemokine levels already occurred in the first week after the exposure. 
The highest elevations for most mediators in our patients were seen in week 1 after exposure, with normalization of most cytokines and chemokines during the following weeks. Of note, the patients received antibacterial therapy from the eighth day after the hunt (day of inpatient admission) which might have influenced the cytokine and chemokine concentrations afterwards. Some of our patients showed high cytokine and chemokine responses in general, whereas others were classified as low responders. The differences in the inflammatory response may indicate an individual feature or may depend on the bacterial load (infective dose). It remains unclear whether the late cytokine response (second peak) of patients, especially of individuals 3, 6, and 7, was still caused by the tularemia infection or by other effects. It could be of interest to study long-term kinetics of cytokines in tularemia patients, as prolonged courses of tularemia infection have been described before, while the pathogenesis in these cases is unclear [25,26]. In our patients, a hypercytokinemia was observed which is generally regarded to be characteristic of the immune dysregulation during an acute infection. Mixed Th1 (IFNγ, IL-2, IL-12) and Th2 (IL-4, IL-13) responses were seen with elevations of both pro-inflammatory and anti-inflammatory cytokines as well as regulatory cytokines and chemokines. The role of cytokines and chemokines in host responses during tularemia is only beginning to be elucidated [27]. There are several animal models, but only few studies involving human subjects are described. The rapid production of pro-inflammatory and Th1-type cytokines, especially IFNγ, is critical for the initial control of Francisella infection; however, little is known about the role of Th2 cytokines [27]. Chemokines, such as IL-8, IP-10, MCP-1, MIP-1α, and MIP-1β, were found to be upregulated during the acute phase of infection in our study; they are known to attract neutrophils and macrophages into inflamed tissue, promoting leukocyte-endothelial cell interaction. The elevation of eotaxin and PDGF might possibly reflect convalescence, as these concentration peaks were seen at time sampling point 4. Conclusions Seroconversion occurred after individual time intervals within a time frame of three weeks. Serological diagnosis of tularemia could be confirmed most reliably aroundthree weeks after exposure and two weeks after onset of clinical symptoms. However, the development of antibodies may be considerably delayed in some individuals, and multiple serial blood samples have to be analyzed in order to diagnose tularemia when the disease is clinically suspected. Thus, although Francisella tularensis was not directly detectable in patients due to the early administration of antibiotics, the clinical signs and epidemiological context allowed us to suspect an infection with F. tularensis. The seroconversion within three weeks after exposure in most of the patients confirmed an infection with F. tularensis.
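As an illustration of the group comparison described in the Methods (a Kruskal-Wallis test followed by Dunn's multiple comparisons), the sketch below contrasts one cytokine measured in seronegative controls with patient samples grouped by sampling time point. The concentration values are simulated for illustration only and are not data from this outbreak.

```python
# Minimal sketch of the cytokine group comparison: patients grouped by sampling
# time point versus seronegative controls, tested with Kruskal-Wallis. All
# concentration values below are made up for illustration.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# hypothetical IL-6 concentrations (pg/mL) per group
groups = {
    "controls": rng.lognormal(mean=1.0, sigma=0.3, size=16),
    "week 1":   rng.lognormal(mean=2.5, sigma=0.5, size=9),
    "week 2":   rng.lognormal(mean=1.8, sigma=0.4, size=9),
    "week 3":   rng.lognormal(mean=1.2, sigma=0.4, size=9),
}

h_stat, p_value = kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, pairwise post-hoc comparisons (e.g. Dunn's test) would identify
# which time points differ from the controls.
```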
2020-10-29T09:07:47.788Z
2020-10-23T00:00:00.000
{ "year": 2020, "sha1": "773d0da77e1c5e9894b8d96253aed4be38c8271f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/8/11/1645/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "6ab838a802c5d810f8977cbc6274a18799a64338", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247030461
pes2o/s2orc
v3-fos-license
Dependence of click-SELEX performance on the nature and average number of modified nucleotides The click-SELEX procedure enables the identification of nucleobase-modified aptamers in which chemical entities are introduced by a copper(i)-catalysed alkyne-azide ‘click’ reaction. Here we report on the impact of modified nucleobases on PCR conditions and the average amount of modified nucleobases on click-SELEX performance. We demonstrate click-SELEX being strongly dependent on which and on how many modifications are used. However, when using C3-GFP the number of modifications did not impact the overall success of the selection procedure. Click Reaction Click reaction was done according to Pfeiffer et al. 1 Briefly, freshly prepared sodium ascorbate (25 mM), THPTA (4 mM) and CuSO 4 (1 mM) and in 100µL ddH 2 O were incubated for 10 min (catalyst solution). EdU containing DNA, was clicked in a solution containing 1 mM azide in DMSO (10 % v/v final), 1x phosphate buffer, and 1x catalyst solution in a total volume of 100 µL ddH 2 O. The mixture was incubated 15-60 min at 37°C and 650 rpm. Samples were purified using Nucleospin Gel and PCR clean-up kit (Macherey-Nagel) according to the manufacturer's instructions. qPCR 5 pmol of the respective FT2 library (DNA, FT2, FT2-0.35 and FT2-7) were clicked in a total volume of 50 µL (final 0.1 µM) as described. After click reaction for 15 min 450 µL dH 2 O (final 0.01µM) was added and the samples were stored on ice. As controls, the DNA library was incubated in click reaction but without azide (cs). Instead, the respective amount of DMSO was used. Furthermore, DNA library was diluted to 0.01 µM in dH 2 O without prior click reaction. Some parameters were changed in single experiments: Elongation time (step 4 of qPCR method) was increased to 5 and 10 min in the first two cycles. Elongation temperature (step 4 of qPCR method) was changed to 68 and 74°C. Polymerase was changed to ReproHot and Vent exo according to manufacturer's instruction. MgSO 4 concentration was increased by the addition of the respective amount of 100 mM MgSO 4 for 2.5, 3 and 3.5 mM final MgSO 4 concentration. 5 % DMSO was added to the master mix. NGS NGS samples were prepared according to Tolle et al. 2 and are measured on an Illumina HiSeq1500 platform. Briefly, PCR with index primers was performed using canonical nucleotides, and thus the EdU was replaced with T. PCR products were purified using Nucleospin Gel and PCR clean-up kit (Macherey-Nagel) according to the manufacturer's instructions. Purified DNA were pooled, and adapter sequences were added by enzymatic ligation using TruSeq DNA PCR-Free Sample Preparation Kit LT (Illumina). After DNA agarose purification and Nucleospin Gel and PCR clean-up kit (Macherey-Nagel), the DNA was eluted in resuspension buffer (TruSeq DNA PCR-Free Sample Preparation Kit LT (Illumina)). The DNA was validated and quantified using KAPA library quantification kit (Sigma-Aldrich) prior sequencing. Illumina sequencing was performed with 75 bp single end sequencing. Analysis of raw data was done using an in-house bioinformatic analysis program (AptaNext, Laura Lledo Bryant). FACS The binding interaction of Cy5 labelled DNA to C3-GFP immobilized on magnetic beads was investigated with a FACSCanto II (BD Bioscience). In total a minimum of 30'000 events was recorded and the Cy5-fluorescence in the APC-A channel was analyzed. 
3 μl of the magnetic bead solution were incubated for 30 min at 37°C, 1200 rpm with 300 nM Cy-5 labelled DNA in SB in a total volume of 10 μl. The beads were washed two times with 120 μl SB and resuspended in 100 µl SB and measured directly. Click-SELEX The selections were performed following the protocol in the previous work. 3 Briefly, after the incubation of the click functionalized library with C3-GFP immobilized on magnetic beads for 30 min at 37°C, 800 rpm in SELEX buffer (SB, 138 mM NaCl, 2.6 mM KCl, 1.5 mM KH 2 PO 4 , 8.1 mM Na 2 HPO 4 , 0.5 mM MgCl 2 , 0.9 mM CaCl 2 , pH 5.3, 1 mg/ml BSA, 0.1 % Tween20, 0.1 mg/ml salmon sperm DNA), the beads were washed three times with 200 µL SB1. The bound sequences were eluted by incubation of 100 µL of 300 mM imidazole solution for 15 min at 37°C, 800 rpm. The supernatant was used as a template for PCR reaction. After DNA purification via NucleoSpin® Clean-Up kit (Macherey-Nagel) according to the manufacturer's recommendation, lambda exonuclease digestion was performed. Purified PCR product was mixed with 10x λ-exonuclease reaction buffer and 3 µL λ-exonuclease (5000 U/mL) and incubatied for 20 min at 37°C, 1000 rpm. After, the samples were purified with the NucleoSpin® Clean-Up kit, according to the manufacturer's recommendation. The click reaction of the corresponding azide was performed as described, and the functionalized library was used for the next selection cycle. Selection details are listed in Supporting Table 1. C3-GFP click SELEX with different libraries C3-bead preparation 3 mg of Dynabeads His-Tag Isolation were washed three times with 1500 µl 1x SB2 and resuspended in 1500 µl 1x SB2. 750 µl were used as empty beads and 500 µl of 3.5 µM C3-GFP (Sino Biological) was added to the remaining beads. After incubation at 25°C and 800 rpm the supernatant was discarded, and the beads were washed three times with 750 µL 1x SB2. Beads are resuspended with 750 µL 1x SB2 and stored at 4°C. Click SELEX 500 pmol of indole clicked DNA library was incubated with C3-GFP immobilised magnetic beads for 30 min at 37°C, 800 rpm in SB. The beads were washed three times with 200 μl SB and the enriched library was recovered from the beads by addition of 100 μl of a 300 mM imidazole solution after 5 min at 37°C and 800 rpm. Recovered DNA was PCR amplified and purified by NucleoSpin® Clean-Up kit (Macherey-Nagel) according to the manufacturer's recommendation. Purified DNA was incubated with 3.5 µL λ-exonuclease (5000 U/mL) in 1x λ-exonuclease reaction buffer for 20 min at 37°C and 800 rpm and afterwards purified by NucleoSpin® Clean-Up kit (Macherey-Nagel). The purified single stranded DNA was click functionalized with indole-azide and applied in the next selection cycle. To increase the selection pressure, washing time was increased and the amount of magnetic beads was reduced during the SELEX. Starting from round four, a click competitor was used. To prevent unspecific binding to the bead matrix, the library was pre-incubated with 50 µL empty beads for 30 min at 37°C, 800 rpm from the third round on, the supernatant was recovered and incubated subsequently with C3-GFP beads. Selection details are listed in supporting table 2.
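As a small worked check of the concentrations quoted for the qPCR sample preparation, the snippet below reproduces the dilution arithmetic (5 pmol in 50 µL gives 0.1 µM; adding 450 µL of water gives 0.01 µM) and the azide/DMSO bookkeeping of the click reaction. The 10 mM azide stock concentration is inferred from the stated 1 mM final concentration at 10% v/v DMSO and is therefore an assumption, not a value given in the protocol.

```python
# Worked check of the sample-preparation arithmetic described above.
# The 10 mM azide stock is an inferred value (1 mM final at 10 % v/v DMSO),
# not a concentration stated explicitly in the protocol.

def conc_uM(amount_pmol: float, volume_uL: float) -> float:
    """pmol per microlitre equals micromolar."""
    return amount_pmol / volume_uL

library_click = conc_uM(5.0, 50.0)          # 0.1 uM in the 50 uL click reaction
library_qpcr  = conc_uM(5.0, 50.0 + 450.0)  # 0.01 uM after adding 450 uL water

azide_stock_mM = 10.0                             # assumed stock in DMSO
azide_volume_uL = 1.0 / azide_stock_mM * 100.0    # volume giving 1 mM in 100 uL
dmso_percent = 100.0 * azide_volume_uL / 100.0    # -> 10 % v/v, as stated

print(library_click, library_qpcr, azide_volume_uL, dmso_percent)
```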
2022-02-23T16:22:29.640Z
2022-02-21T00:00:00.000
{ "year": 2022, "sha1": "0682e8a517ffaf67d562d0d87e82c47f710c3035", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/cb/d2cb00012a", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b6b430bbfb5b135b3877f5cee02e365c70a86d6d", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
104419685
pes2o/s2orc
v3-fos-license
Release Behavior of Benzimidazole-Intercalated α -Zirconium Phosphate as a Latent Thermal Initiator in the Reaction of Epoxy Resin : The intercalation compound of benzimidazole with α -zirconium phosphate ( α -ZrP) was evaluated as a latent thermal initiator in the reaction of glycidyl phenyl ether (GPE) and hexahydro-4-methylphthalic anhydride (MHHPA). No reaction occurred at 60 ◦ C after 1 h. Upon increasing the temperature to 140 ◦ C, the conversion reached 97% after 1 h. The deintercalation ratio of Bim from the intercalation compound of benzimidazole with α -zirconium phosphate ( α -ZrP · Bim) was measured in the reaction of the GPE-MHHPA system. The deintercalation ratio increased upon increasing the temperature, reaching 97% at 120 ◦ C after 1 h. The storage stability at 25 ◦ C and 40 ◦ C in the reaction of GPE-MHHPA was tested and was found to be maintained for 14 days at 25 ◦ C. The intercalation compound of α -ZrP · Bim can effectively serve as a latent thermal initiator in the reaction of GPE-MHHPA. 20.3 Å (2 θ = 4.4 ◦ ) to 19.3 Å (2 θ = 4.6 ◦ ), showing that deintercalation of Bim from the interlayers of α -ZrP · Bim occurred. The ratio of C, H, and N of the product was C: 42.67, H: 4.61, and N: 3.87 and the composition was Zr(HPO 4 ) 2 · (C 9 H 12 O 3 ) 2.3 · (C 7 H 6 N 2 ) 1.1 as determined by elemental analysis. 31% of intercalated Bim was deintercalated from α -ZrP · Bim and the molar ratio of 2.3 of MHHPA to Zr was detected. The interlayer distance after the treatment of α -ZrP · Bim with MHHPA slightly decreased, indicating that MHHPA was immobilized on the surface of α -ZrP. Imidazoles are widely used in industry as curing agents for epoxy resins found in electric devices, laminated plates, semiconductor sealing agents, etc. The equivalents of imidazoles in intercalation compounds of α-ZrP are 0.78 (Im), 0.96 (2MIm), and 0.65 (2E4MIm). However, after reaction with the GPE-MHHPA system, 35% (α-ZrP·Im), 48% (α-ZrP·2MIm), and 37% (α-ZrP·2E4MIm) of the intercalated imidazoles were deintercalated from the layers of zirconium phosphate [14]. Essentially, less than half of the imidazole was available for the reaction of GPE-MHHPA. 2 of 13 Benzimidazoles are known as good curing agents for epoxy-anhydride systems [16]. Using benzimidazole as an intercalation compound, Costantino et al. [17] reported a molar ratio of benzimidazole/zirconium phosphate of up to 1.9. Therefore, we anticipated that benzimidazole-intercalated α-ZrP would have better efficiency as a curing agent because it had a higher loading in α-ZrP. We prepared the intercalation compound of benzimidazole (Bim) and examined the capabilities of α-ZrP·Bim as a latent thermal initiator in the reaction of GPE with MHHPA. The release behavior of Bim from the interlayer of α-ZrP was studied in detail. Results and Discussion The intercalation of benzimidazole (Bim) into the layers of α-zirconium phosphate (α-ZrP) was carried out by slightly modifying a previously reported method [14]. The α-ZrP was added to a solution of Bim in 1:1 water:methanol. The reaction mixture was stirred at 60 • C for 24 h. After the reaction, the intercalation compound was recovered by suction filtration. The ratio of C, H, and N of the product was C: 27.94%, H: 2.64%, and N: 9.28% and the composition was Zr(HPO 4 ) 2 (C 7 H 6 N 2 ) 1.60 ·0.50H 2 O as determined by elemental analysis. 
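A short numerical cross-check of the compositions quoted above is sketched below: it computes the theoretical C/H/N weight percentages for Zr(HPO4)2(C7H6N2)1.60·0.50H2O using standard atomic weights and, from the fitted Bim-per-Zr ratios before and after the MHHPA treatment, the fraction of benzimidazole released. Both calculations only restate numbers already given in the text.

```python
# Cross-check of the elemental analysis and of the 31 % deintercalation figure.
ATOMIC_WEIGHT = {"Zr": 91.224, "P": 30.974, "O": 15.999,
                 "H": 1.008, "C": 12.011, "N": 14.007}

# Element counts for Zr(HPO4)2 . (C7H6N2)1.60 . 0.50 H2O
counts = {
    "Zr": 1.0,
    "P": 2.0,
    "O": 4 * 2 + 0.50,                # phosphate oxygens + water oxygen
    "H": 2 + 6 * 1.60 + 2 * 0.50,     # acidic HPO4 protons + Bim + water
    "C": 7 * 1.60,
    "N": 2 * 1.60,
}
total_mass = sum(ATOMIC_WEIGHT[e] * n for e, n in counts.items())
for element in ("C", "H", "N"):
    wt = 100 * ATOMIC_WEIGHT[element] * counts[element] / total_mass
    print(f"{element}: {wt:.2f} wt%")   # ~27.96 % C, 2.64 % H, 9.31 % N,
                                        # close to the measured 27.94 / 2.64 / 9.28

# Fraction of Bim released after treatment with MHHPA:
# Zr(HPO4)2(C7H6N2)1.60  ->  Zr(HPO4)2(C9H12O3)2.3(C7H6N2)1.1
bim_before, bim_after = 1.60, 1.1
print(f"deintercalated: {100 * (1 - bim_after / bim_before):.0f}%")  # ~31 %
```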
The interlayer distance of α-ZrP was calculated from the XRD patterns, which showed that the interlayer distance of pristine α-ZrP, 7.6 Å (2θ = 11.7°), was expanded to 20.3 Å (2θ = 4.4°), as seen in Figure 1a. Thus, the intercalation of Bim into the layers of α-ZrP (α-ZrP·Bim) was confirmed. To evaluate the catalytic activity of α-ZrP·Bim, the copolymerization of glycidyl phenyl ether (GPE) and hexahydro-4-methylphthalic anhydride (MHHPA) was carried out. The conversion of GPE was 97% at 140 °C for 1 h as determined by 1H NMR analysis. The calculation of the conversion by 1H-NMR analysis is described in ref. [14]. The intercalation compound α-ZrP·Bim showed good reactivity under heating conditions. To evaluate the change of the layer structure after the reaction of α-ZrP·Bim, the product was washed with THF to remove the GPE-MHHPA products and the residue of α-ZrP (α-ZrP·Bim-RXN) was recovered. The interlayer distance of α-ZrP·Bim-RXN was increased to 22.9 Å (2θ = 3.8°) and the peak intensity was decreased, as shown in Figure 1b. This might be caused by the intercalation of the reaction products into the layers, and the crystallinity of α-ZrP·Bim was decreased after the reaction. The 31P MAS NMR spectra of α-ZrP·Bim and α-ZrP·Bim-RXN are shown in Figure 2a,b. The peak of pristine α-ZrP is observed at δ −20.1. As shown in Figure 2a, the peak of α-ZrP·Bim is observed at δ −20.6. This chemical shift suggests that interactions between Bim and the HPO4 group were not strong compared with those between alkylamines and the HPO4 group [18,19]. In Figure 2b, for α-ZrP·Bim-RXN, signals at δ −21.2 and −23.6 were observed. This shift of the signal from δ −20.6 to δ −21.2 and −23.6 might be due to the separation of Bim from the phosphate groups [11]. The 13C NMR spectra of α-ZrP·Bim and α-ZrP·Bim-RXN are shown in Figure 3a,b. The signal of the 2-position of the imidazole ring at δ 144.5 (N=CH-NH) in Figure 3a (assigned to 1) completely disappeared after the reaction (α-ZrP·Bim-RXN) and the corresponding peaks of the products of GPE-MHHPA appeared [Figure 3b]. Therefore, Bim was completely deintercalated from the interlayers. This demonstrates that all of the intercalated Bim could be used to initiate the reaction of GPE-MHHPA. We have previously reported the intercalation compounds of α-ZrP·Im, α-ZrP·2MIm and α-ZrP·2E4MIm, showing that the deintercalation ratios after the reaction with GPE-MHHPA were 35%, 48%, and 37%, respectively [14]. Moreover, the copolymer of GPE-MHHPA was not formed after the reaction of α-ZrP·Im-RXN, α-ZrP·2MIm-RXN, and α-ZrP·2E4MIm-RXN. Substances derived from GPE were present in the interlayer of these three intercalation compounds of α-ZrP. In the case of α-ZrP·Bim, the copolymer was confirmed by the presence of ester groups at δ 173.2 in Figure 3b (assigned to 1). Therefore, the products of GPE-MHHPA can be intercalated in the layers of α-ZrP·Bim. The intercalation compound of Bim (α-ZrP·Bim) was efficiently utilized in the reaction of GPE-MHHPA. FT-IR spectra of α-ZrP·Bim and α-ZrP·Bim-RXN are shown in Figure 4a,b; signals of aromatics (ν C-C at 1600, 1497 and 1458 cm−1) and ether groups (ν C-O-C at 1249 cm−1) from the products of GPE-MHHPA were clearly observed after the reaction. The capabilities of Bim as a latent thermal initiator were examined in the reaction of GPE and MHHPA. The conversion of GPE and the deintercalation ratio of Bim from the layers of α-ZrP containing 3 mol% of Bim were measured at varying temperatures for 1 h, as shown in Figure 5.
The deintercalation ratio of Bim from α-ZrP·Bim was calculated from the decrease in the N content of α-ZrP·Bim determined by elemental analysis. No conversion occurred at 60 °C after 1 h. Upon increasing the reaction temperature to 120 °C, the conversion improved to 97%. The deintercalation ratio increased with increasing reaction temperature. At 120 °C, the deintercalation ratio became quantitative (i.e., all of the Bim in the interlayer of α-ZrP was deintercalated). At 60 °C, the deintercalation ratio was 38% and the reaction did not proceed within 1 h. To study the reaction behavior of GPE-MHHPA, the effect of GPE and MHHPA on the layer distances of α-ZrP at 100 °C for 1 h was investigated; in the case of GPE, the XRD pattern is shown in Figure 7a. It is important to maintain stability under storage conditions. The stabilities were examined at 25 °C and 40 °C in the GPE-MHHPA system. The conversion of GPE was 51% for Bim and 22% for α-ZrP·Bim at 25 °C after 14 days, as shown in Figure 8. At 40 °C, the conversion was 50% for Bim and 21% for α-ZrP·Bim after 7 days, as shown in Figure 9. The storage stability for α-ZrP·Bim was maintained for 14 days (2 weeks) at 25 °C. Accordingly, α-ZrP·Bim can serve as a latent thermal initiator in the reaction of epoxy-acid anhydride systems. In the reaction of GPE-MHHPA with α-ZrP·Bim, the conversion reached 97% at 140 °C in 1 h, and the storage stability was maintained for 2 weeks at 25 °C. All of the intercalated Bim could be deintercalated at 120 °C within 1 h. Measurements X-ray diffraction (XRD) patterns were obtained using a Rigaku RINT2200 (Tokyo, Japan) with Cu Kα radiation over a scan range of 3-40° at a rate of 2° min−1. NMR spectra in solution were recorded on a Varian Unity-300 spectrometer (Palo Alto, CA, USA) and a JEOL JNM-ECZS (400 MHz) spectrometer (Tokyo, Japan) using tetramethylsilane (TMS) as an internal standard.
The 31 P MAS NMR and 13 C CPMAS NMR spectra were recorded on a JEOL ECA-600 NMR spectrometer (Tokyo, Japan). The contents of benzimidazole and water in the intercalation compounds of α-ZrP were measured using a PerkinElmer 2400II (Waltham, MA, USA). Gel permeation chromatography (GPC) was carried out on a Shodex GPC-101 (LF804*3 and KF-800RF*3, THF as eluent) (Showa Denko Co. Ltd., Tokyo, Japan) using polystyrene standards. The Fourier transform infrared spectroscopy (FT-IR) measurements were carried out with an ALPHA spectrometer (Billerica, MA, USA). Typical Polymerization Procedure A mixture of GPE (150 mg, 1.0 mmol), MHHPA (168 mg, 1.0 mmol), and benzimidazole intercalation compound with α-ZrP (α-ZrP·Bim) (9.0 mg, 0.019 mmol, content of benzimidazole: 0.030 mmol) was heated at 120 • C for 1 h. A small aliquot of the reaction mixture was dissolved in CDCl 3 , and its 1 H-NMR spectrum was acquired to determine the extent of the conversion of GPE and MHHPA. At 40 • C, a small aliquot of the sample was collected at determined times. Conclusions The intercalation compound of α-ZrP·Bim can effectively serve as a latent thermal initiator in the reaction of GPE-MHHPA under heating conditions. All of the Bim intercalated in the layers of α-ZrP was effectively deintercalated for 1 h. At 140 • C, the conversion reached 97% in 1 h. The storage stability was maintained up to 14 days at 25 • C. This investigation of the deintercalation behavior can be applied to other intercalation compounds of α-ZrP as latent thermal initiators. Author Contributions: O.S. conceived, designed and wrote the article; S.S., K.K., S.K. and M.S. performed the experiments; A.O. and R.N. contributed to a helpful discussion. Funding: This research received no external funding.
2019-04-10T13:13:04.353Z
2019-01-10T00:00:00.000
{ "year": 2019, "sha1": "ba432987582780803dd822409107c0ffa42d65db", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4344/9/1/69/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b1293be3370fa0aeaf254f10d0ac2a0c0f1aa56c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
215827713
pes2o/s2orc
v3-fos-license
A new tool to derive chemical abundances in Type-2 Active Galactic Nuclei We present a new tool for the analysis of the optical emission lines of the gas in the Narrow Line Region (NLR) around Active Galactic Nuclei (AGNs). This new tool can be used in large samples of objects in a consistent way using different sets of optical emission-lines taking into the account possible variations from the O/H - N/O relation. The code compares certain observed emission-line ratios with the predictions from a large grid of photoionization models calculated under the most usual conditions in the NLR of AGNs to calculate the total oxygen abundance, nitrogen-to-oxygen ratio and ionization parameter. We applied our method to a sample of Seyfert 2 galaxies with optical emission-line fluxes from the literature. Our results confirm the high metallicity of the objects of the sample and provide consistent values with the direct method. The usage of models to calculate precise ICFs is mandatory when only optical emission lines are available to derive chemical abundances using the direct method in NLRs of AGN. Introduction The energetic radiation coming from the central black holes in galaxies is partially reemitted by the surrounding gas as very bright emission lines which in turn can be used to derived the physical conditions in these extreme regions. Since they can be observed up to very high redshifts, Active galactic Nuclei (AGNs) are thus a powerful source for the study of cosmic evolution of galaxies. It is widely accepted (Ferland & Netzer 1983) that the main mechanism of the narrowline region (NLR) in AGNs is photoionization. However, it is also known that the total metallicity derived using the direct method (i.e. the T e method) gives sub-solar metallicities in AGNs, as compared to the predictions from photoionization models. Using a sample of NLRs of AGNs, Dors et al. (2015) found that the T e -method using the optical lines underestimated the oxygen abundances by an averaged value of ∼0.8 dex as compared to calibrations based on photoionization models. Models are, therefore, a powerful tool to interpret the observed lines and provide valuable information to study chemical abundances. In this work, we describe a new code based on photoionization models to derive chemical abundances in the NLR in AGNs. The code In Pérez-Montero et al. (2019) we present a full description of a new code to derive the total oxygen abundance, nitrogen-to-oxygen ratio (N/O), and the ionization parameter (U) from the analysis of optical emission lines in the NLR of type-2 AGNs. The code is based on the well proven HII-CHI-MISTRY † code (hereafter HCM, Pérez-Montero 2014) originally developed for the analysis of star-forming regions. The advantages of the code are: a) it can be applied to a large number of objects in an automatic way; b) all objects are analyzed in a consistent way regardless of the set of input emission lines; c) it provides uncertainties for all the estimated quantities; d) it provides an independent estimation of the N/O ratio; and e) it is consistent with the direct method. The code uses a grid of 5 865 photoionization models run with the code Ferland et al. (2017) v.17.01. The models cover a wide range of the parameters space with typical NLRs conditions (see Pérez-Montero et al. 2019 for further details). The spectral energy distribution (SED) is composed by two components: the Big Blue Bump at 1 Ryd and a power law with spectral index α X = −1. 
The continuum between 2KeV and 2500Å is modeled by a power law with spectral index α OX = −0.8. All models were calculated using a spherical geometry with a filling factor of 0.1, a standard dust-to-gas ratio and a constant density of 500 particles per cm −3 . In addition we checked the effect of changing in the models the α(ox) down to -1.2 and enhancing the electron density up to 2 000 cm −3 but no noticeable changes were found in the calculation of the chemical abundances using the method described here. For more details on the results of these comparison see Pérez-Montero et al. (2019). The models cover the range of 12 + log(O/H) from 6.9 to 9.1 in bins of 0.1 dex. The N/O range goes from -2.0 to 0.0 in bins of 0.125 dex and log U from -4.0 to -0.5 in bins of 0.25 dex. The code uses as input the reddening-corrected relative-to-Hβ emission line intensities with their corresponding errors. However, the code is adapted to provide also a solution in case one or several of these lines are not given. In short, the work-flow of the code is as follows. First, the code constrain the parameter space searching for N/O as a weighted mean over all models, using optical emission lines for similar excitation, such as the ratio Dors et al. (2017) and the values derived by HCM using different input lines as shown in Figure 1. In the (depending on the availability of the observed lines) are used in a second iteration to sample a subset of models constrained to the N/O values previously calculated to obtain the oxygen abundance and the ionization parameter. The control sample No empirical derivation of chemical abundances (i.e. no abundances using the direct method) in the NLR of AGNs using optical emission lines are available in the literature. Therefore, we use as a control sample the abundance estimations by Dors et al. (2017) obtained from detailed tailored photoionization models using the cloudy code. They compiled a sample of 47 Seyfert 1.9 and 2 galaxies at a redshift z 0.1 providing the most prominent optical emission lines, including the auroral line [OIII] 4363Å. They do not provide an error estimation of the oxygen abundances obtained from their models. Comparisons In Fig. 1 we compare the total oxygen abundance derived for the control sample by Dors et al. (2017) with those obtained by HCM when all or only some of the input lines are used †. The option of restricting the number of input lines simulates common observing conditions when only limited sensitivity or spectral coverage of the detector is available. The left upper panel shows the best case scenario when all possible emission lines are provided. There is a good agreement between both sets with a dispersion of 0.21 dex and a residual of -0.01 dex. The upper right panel displays the relation when lines [O iii] λ 4363Å and [O ii] λ 3727Å are not included. This is common case when the blue part of the spectrum is not available (e.g. in the Sloan Digital Sky Survey at very low redshifts) and the [O iii] λ 4363Å is to faint to be observed. In this case, the dispersion is nearly the same but the residual increases by 0.1 dex. Even when only a couple of lines or only [NII] λ 6583Åis available the agreement is good, with deviations from the abundances lower than the usual uncertainties. Table 3.2 shows the mean and standard deviation of the residuals of the comparison cases presented in Fig. 1. 
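To make the size of the model grid concrete, the snippet below reproduces the quoted parameter ranges and bin widths and confirms the 5865-model count. The chi-square weighting at the end is only a schematic illustration of the "weighted mean over all models" step, not the actual HII-CHI-MISTRY implementation, and the line-ratio arrays it expects would come from the observations and the model grid.

```python
# Sketch of the model grid described above and of a generic weighted-mean step.
import numpy as np

oh   = np.arange(6.9, 9.1 + 1e-6, 0.1)       # 12 + log(O/H), 23 values
no   = np.arange(-2.0, 0.0 + 1e-6, 0.125)    # log(N/O), 17 values
logu = np.arange(-4.0, -0.5 + 1e-6, 0.25)    # log U, 15 values
print(len(oh) * len(no) * len(logu))          # -> 5865 models

def weighted_estimate(observed, predicted, params):
    """Weight each model by exp(-chi2/2) between observed and predicted line
    ratios and return the weighted mean of the model parameter values.
    observed: (n_lines,), predicted: (n_models, n_lines), params: (n_models,)."""
    chi2 = np.sum((predicted - observed) ** 2, axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))
    return np.sum(w * params) / np.sum(w)
```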
Consistency with the direct method There is a known discrepancy between the chemical abundances derived using the T e method in NLRs of type-2 AGNs, leading to very low values when compared to those obtained from some photoionization models (e.g. Dors et al. 2015). The code HCM has proved to be in accordance with the T e method in star-forming regions (Pérez-Montero 2014). Thus, we can use HCM to investigate the possible origin of the discrepancies in AGNs. In Fig. 2 we show the total oxygen abundance derived by Dors et al. (2017) for their sample of Sy2 galaxies, compared to the addition of the abundances of the most prominent oxygen ionic species in the optical part of the spectrum, i.e. O + and O 2+ , calculated using the T e method. The addition of the relative ionic abundances of oxygen is 0.7 dex lower than the one derived by the models. Figure 2 also shows predictions from the grid of models for different ionization parameter values. The difference is well explained as an important dependence on the total metallicity and ionization parameter. This result highlights the importance of using models to derive the total oxygen abundance in NLRs of AGNs when only optical lines are available, as ionization correction factors (ICFs) are far from negligible, contrary to star-forming regions.
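The role of the ionization correction factor can be illustrated with a one-line calculation: the total abundance exceeds the sum of the optical ionic abundances by log10(ICF) dex, so the ~0.7 dex offset seen in Fig. 2 corresponds to an ICF of roughly 5. The ionic abundances used below are hypothetical placeholders.

```python
# Numerical illustration of the ICF offset; the ionic abundances are hypothetical.
import numpy as np

o_plus_h  = 2.0e-5    # O+/H+   (hypothetical)
o_2plus_h = 8.0e-5    # O2+/H+  (hypothetical)

ionic_sum = o_plus_h + o_2plus_h
for icf in (1.0, 5.0):
    total = 12 + np.log10(icf * ionic_sum)
    print(f"ICF = {icf:>3}: 12 + log(O/H) = {total:.2f}")
# The ICF = 5 case lies log10(5) ~ 0.7 dex above the ICF = 1 (optical-only) case.
```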
2020-04-21T01:01:14.956Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "2959d5293b0ea23a84d0b5094157a964ac7e8ef6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2004.08405", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2959d5293b0ea23a84d0b5094157a964ac7e8ef6", "s2fieldsofstudy": [ "Physics", "Chemistry" ], "extfieldsofstudy": [ "Physics" ] }
86674094
pes2o/s2orc
v3-fos-license
Impact on the Performance of Search and Rescue Team by Cloud-Based Services: A Case Study of TransAsia Flight GE235 The purpose of this study is to determine the significant gap between the suitable objective environmental conditions and the real performance outcomes when Taipei fire fighter officers and volunteer fire fighters handled the TransAsia GE235 incident. This study employs the “Unified Theory of Acceptance and Use of Technology (UTAUT)” model to investigate the search and rescue (SAR) team members’ adoption of cloud-based services to improve their SAR performance. This study uses the Partial Least Square (PLS) validation for the research hypotheses. The results show that the Taipei City Fire Department’s SAR team and the Volunteer Fire Fighter team have certain gaps in applying the cloud-based service to improve the incident SAR performance. It is revealed that resources, especially in training/education of the government official team, are significant to performance improvement. Finally, several management implications are presented to improve the SAR operation performance. Introduction In early 2015, people in Taiwan were preparing for the coming lunar New Year holiday when TransAsia Airways Flight 235 (GE235/TNA235, a domestic flight) crashed into the Keelung River at 10:50 am (T hour), on 4 February 2015, shortly after takeoff from Taipei Song Shan Airport, which was 5.4 km to the west. In this major incident, there was a very significant problem: government authorities had invested a huge amount of the budget to the cloud service for SAR operations. Bureaucrats had proper training for the cloud-based service, and most of the people involved in this incident had the devices required to run the cloud-based applications. The responders were well prepared to use the cloud-based service to improve the performance of SAR operations; however, this incident indicates otherwise. The chaotic situation from the incident T -0 hours until days later remained almost the same. The past, research related to aviation safety, regardless of the human factor approach ( [1], [12], [29]), SAR operation approach ( [21]), aviation incident prediction approach ( [4], [15]) and relevant incident management approach ( [14], [30]), all focused mainly on safety precautions or incident predictions. Even the cloud-based technology, which has been in service for a decade, and how to apply this new technology in incident SAR operations have only been studied at the theoretical level ([31]). There is no research on an actual case in which cloud services were adopted to improve the performance of SAR operations when an incident occurs. The primary motivation for this paper is the observed full utilization of cloud-based services by the team members who were involved in the SAR operation of an incident, so the purpose of this study is to explore the gap between the technology and the humans who were involved in this SAR operation by using the Unified Theory of Acceptance and Use of Technology (UTAUT) model. The differences between this study and past studies lie in three areas: volunteer fire fighters, information-communication-technology (ICT), and the actual SAR operation involved in this major urban aviation incident. The cloud-based services include a mixture of hardware (smartphone), software (e.g., LINE, Google) and telecommunication (Wi-Fi, e-mail, cellular phones, and mobile internet). 
Cloud-based services have become part of the public safety (PS) domain. The management of crises and disasters aims to reduce the impact on and injury to individuals, assets, and society, and it requires a set of capabilities, including communication, resource management, supply chain management, and access to relevant data sources. Communication is an essential element in various operational scenarios and at different levels of the hierarchy of PS organizations. First responders should be able to exchange information (i.e., voice and data) in a timely manner to coordinate relief efforts and improve situational awareness of the environment (Baldini et al., 2011).

Currently, many user acceptance models with different determinants exist to measure user acceptance of information systems, which is an important indicator of system success or failure ([17]). Each theory or model has been widely tested to predict user acceptance ([28]). However, no comprehensive instrument to measure the variety of perceptions of information technology innovations existed until Venkatesh et al. ([28]) attempted to review and compare the existing user acceptance models, with the ultimate goal of developing a unified theory of technology acceptance by integrating every major parallel aspect of user acceptance determinants from those models. The results from the UTAUT model explained seventy percent (70%) of the variation in users' intention to accept technology ([28]). The UTAUT model synthesizes eight original models and theories of individual acceptance ([28]). Based on the above discussion, this study attempts to apply the UTAUT theory and the relevant factors that influence behavioral intention when an incident SAR operation is executed.

The rest of this paper is organized as follows: In Section 2, the hypotheses and statistical results are discussed. Section 3 describes and analyzes the results used to validate the hypotheses. Section 4 gives the conclusions and an outline of future work.

The hypotheses and statistical results

This study uses a research framework (see Figure 1) that includes the four main dimensions of UTAUT: Performance Expectancy (PE), Effort Expectancy (EE), Social Influence (SI), and Facilitating Conditions (FC). The major differences between this research framework and the original UTAUT model lie in the temporal dimension and in the determinants covered by the "external variables". Regarding the temporal dimension, Venkatesh et al. ([28]) used specific application software to train the same participants, which required three tests in three periods before and after training, whereas this study collects responses at a single point in time after the incident. Additionally, this paper only discusses the relationships among the external variables, behavioral intention, and use behavior; control variables are not discussed. According to the UTAUT model, the research framework of this study was modified, and the research hypotheses were proposed as follows:

H1 - A participant who uses the cloud-based application/service expects that "performance expectancy" will increase the "behavioral intention" of cloud-based application/service adoption in a SAR operation.

H2 - A participant who uses the cloud-based application/service expects that "effort expectancy" will increase the "behavioral intention" of cloud-based application/service adoption in a SAR operation.

H3 - A participant who uses the cloud-based application/service expects that "social influence" will increase the "behavioral intention" of cloud-based application/service adoption in a SAR operation.
H4 -A participant who uses the Cloud-Based Application/Service expects that "facilitating conditions" will increase the "behavioral intention" of Cloud-Based Application/Service adoption in a SAR operation. H5 -A participant who uses the Cloud-Based Application/Service expects that "behavioral intention" will increase the "use behavior" of Cloud-Based Application/Service adoption in SAR operation. The on-site survey approach was used to complete the resulting questionnaire. The demographic data showed that 57 subjects (76.0%) were Taipei fire fighter officers, indicating that the percentage of Taipei fire fighter officers was much higher than the percentage of volunteer fire fighters among those responding to the survey. The sample included 23 subjects between the ages of 26 and 35 (30.7%) and 20 subjects between the ages of 36 and 45 (20.0%). Half of the subjects (50.7%) were between the ages of 26 and 45. The data revealed that 36 subjects (48.0%) had a college background and that 25 subjects (33.3%) had a university background. Most of the subjects were not aware of government cloud services or were not very aware of government cloud services, numbering 33 (44.0%) and 17 (22.7%), respectively. Table 1 provides a summary of the source of each UTAUT construct; the study uses the definition from Venkatesh et al. ( [28]) to consider the characteristics of the aviation SAR operation and selects proper items to measure search and rescue team members' awareness of each construct measure. These relationships include Effort Expectancy (EE), Performance Expectancy (PE), Social Influence (SI), and Facilitating Conditions (FC) predicting Behavioral Intention (BI), which influence Use Behavior (UB). The results This study employs partial least squares (PLS), which performed by the SmartPLS software, a technique for analyzing or constructing predictive models, especially for a causal model analysis of potential variables over LISREL. Since there are few samples in this study, PLS is not limited by the number of variables assigned or the number of samples, and it has a good ability of prediction and interpretation. Because the sample size is not large, the bootstrap resampling method is repeatedly used to extract 1,000 samples for parameter estimation and inference. ( [20], [22]) When using partial least squares (PLS) data analysis, the first step is to test the reliability and validity of the measurement model, and the second step is to detect the structural model of the path coefficient significance and its ability to predict. According to Table 2, the reliability of each construct reached an acceptable level of 0.7 or greater, indicating that the questionnaire items in this study have good reliability. The CR values of all constructs in this study ranged from 0.821 to 0.948, all of which were in the standard range of 0.6 or above, indicating that the scale of this study has a good composite reliability. The AVE values of all constructs in this study range from 0.607 to 0.820, all of which are above the standard AVE value of 0.5; therefore, they have good convergence validity. ( [9], [11], [13], [16], [19]). In this study, the Bootstrap method was used to repeatedly extract 1,000 samples to verify the relationship and significance of path coefficients in the structural model. 
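As a minimal illustration of the measurement-model checks and the bootstrap significance test described above (not the study's own computation, which was done in SmartPLS), the sketch below computes composite reliability and AVE from standardized loadings and a bootstrap t-value for a path coefficient. The loadings and the data are assumed, purely illustrative values, and a simple OLS slope stands in for the PLS path estimate.

import numpy as np

def composite_reliability(loadings):
    # CR = (sum lambda)^2 / [ (sum lambda)^2 + sum(1 - lambda^2) ]
    lam = np.asarray(loadings, dtype=float)
    s = lam.sum() ** 2
    return s / (s + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# illustrative (assumed) standardized loadings for one construct
pe_loadings = [0.84, 0.88, 0.81, 0.79]
print(round(composite_reliability(pe_loadings), 3))       # ~0.90, above the 0.6 threshold
print(round(average_variance_extracted(pe_loadings), 3))  # ~0.69, above the 0.5 threshold

# bootstrap t-value for a path coefficient (simple OLS slope used as a stand-in
# for the PLS path estimate; synthetic data, not the survey responses)
rng = np.random.default_rng(0)
n = 75
pe = rng.normal(size=n)                          # predictor scores
bi = 0.68 * pe + rng.normal(scale=0.7, size=n)   # outcome scores

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

beta_hat = slope(pe, bi)
boot = np.empty(1000)
for b in range(1000):
    idx = rng.integers(0, n, n)                  # resample respondents with replacement
    boot[b] = slope(pe[idx], bi[idx])
t_value = beta_hat / boot.std(ddof=1)            # estimate divided by bootstrap standard error
print(round(beta_hat, 3), round(t_value, 2))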
In terms of the influence on Behavioral Intention, from the results of the path analysis in Figure 2, Performance Expectancy (β=0.684, t-value=4.427), Facilitating Conditions (β=0.466, t-value= 2.453), and Social Influence (β=0.345, t-value=3.177) all had a significant positive influence on Behavioral Intention (BI). Hypotheses H1, H3 and H4 are thus supported. Effort Expectancy (β=0.079, t-value=0.282) has no significant positive influence on Behavioral Intention, so H2 is not supported. In addition, Behavioral Intention (β = 0.781, t-value = 6.059) has a significant positive influence on Use Behavior; therefore, H5 is supported. From Figure 2, the R 2 values for Behavioral Intention and Use Behavior are, respectively, 68.0%, 74.7%, indicating that the study model has good explanatory power. The solid line indicates that the p-value is significant, and the dotted line indicates that the p-value is not significant. To further compare the difference of demographic variables between Behavioral Intention and Use Behavior (See Table 3). Performance expectancy has a significant positive influence on behavioral intention, indicating that users' impact on job performance will directly affect their behavioral intentions toward the new system after the importation of the cloud-based service technologies that are currently used. Therefore, the management unit should, in addition to training uses on a system during the training period, further enhance the explanation of the performance and other benefits that can be gained after the system itself is used to improve the user's perception of performance expectancy, thereby enhancing their behavioral intention. Facilitating conditions have a significant positive influence on behavioral intention, indicating that users perceive the extent of their own resources (including those provided by the supervisor and their own) that affect their behavioral intentions. Therefore, in addition to providing users with proper education and training so they have their own capabilities, the management unit should combine all necessary resources and advocate for the user to understand and design a simpler and more convenient and easy-to-use system so users have no doubts about the use of the cloud-based service system and increase their perception of the system's convenience, thereby enhancing their behavioral intention. Social influence has a significant positive influence on behavioral intention, indicating that users' perception of the use of cloud-based services by their peers, bosses, and friends affects their behavioral intentions. Therefore, after the introduction of the system, the management unit should conduct a simple follow-up observation of the user's use behavior. If there is any resistance or other negative attitudes toward the cloud-based service, the management unit should communicate with the individual to reduce the negative attitude toward the system expansion, which can enhance the overall behavioral intention. To improve the user's actual use behavior of the new system and further improve the system's efficiency, the supervisory unit should strengthen the quality and quantity of education and training during the process of introducing the new system. 
In addition to providing complete and clear training for use, users should be made aware of the system's benefits and increase their chances of practicing prior to actual use, thus increasing the users' awareness of performance expectancy and facilitating the conditions and social influence of the system. Additionally, supervisors should increase their use of flexible or individual user guidance or use learning methods to share their experiences with peers to enhance use behavior. Conclusion Most modern SAR operations have incorporated cloudbased services into their activities, with the aim of achieving higher efficiency and improving productivity, which in turn leads to higher satisfaction (Attuquayefio, 2014). Every participant must be willing and ready to apply cloud-based services in major incidents to improve the SAR operation performance. However, the results show that actual attitudes toward adapting cloud-based services is not equal between government officers and volunteers. This is obviously because the Taipei fire department is the official unit that can access the cloudbased services and has access to the most resources to adapt to any kind of incident. This finding shows that the government must invest more promotion, education and training into the cloudbased service that already exists. To cover the digital gap between these groups, the government must pay for more training-related resources for the volunteer section. The government needs volunteers as a resource to supplement its direct work force; therefore, the volunteer section requires government funding that includes training. The major contribution of this study is providing an understanding of the gaps in new technology adaption and the differences between different identities and education. The results show that the incident SAR operations do occur occasionally, especially in Taiwan, whose residents are forced to deal with extreme natural disasters, including typhoons and earthquakes. Over the past decade, global warming has caused stronger and longer typhoon seasons, stronger winter seasonal wind, and temperature drops. The correct approach must be applied to this type of objective situation, including intensive training/exercise. In other words, by using a real government cloud service, the related unit/individual can share different user experiences to improve the system optimization from the user interface to the core database/application system. The TransAsia GE235 incident in particular provides a field case to test the new cloud-based technology by applying the UTAUT model in a new scenario. The research objective of this study is only aimed to examine the actual participation of disaster victims in this crash incident. There are some limitations to the availability and source of samples. We suggest that future research should use a case-based interview or a focus group interview to discuss the incident in greater depth. Although the UTAUT Model is used in this study for practical analysis, it is not included in the discussion of control variables. It is suggested that future research discuss individual control variables; other models and potential construct variables may also be considered to help identify the key factors and the causal relationship in the model. Practical use behavior, although a directly observable variable, is often affected by other external variables, which increases the difficulty of measurement. 
It is suggested that future research consider as many external factors as possible to measure actual use behavior and design suitable moderating factors to capture the actual behavior that is influenced by the potential constructs.
2019-03-28T13:14:16.761Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "4f5d7e8c354c20414416d245fb1f59f503a77940", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2019/16/matecconf_isc2018_02004.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bdbeb35e5cb362ec77d84f4c2a16d7b4d6e77837", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
126302383
pes2o/s2orc
v3-fos-license
Multiplicity of chaotic attractors in a model of lasers with variable feedback delay

We demonstrate the coexistence of chaotic attractors induced by periodic modulation of the delay time in the feedback circuit which controls the pumping rate in a laser. Depending on the number n of pulses in the delay interval, the dynamics of the pulse regime is described by a (2n + 2)-dimensional map. It is possible to switch between chaotic regimes of different structures by a single impulse perturbation of the pumping in a suitable phase of the oscillations.

Introduction

Delayed feedback (FB) is widely used to control the dynamics of nonlinear optical and electro-optical systems [1]. Recently, modifications of standard FB schemes have been actively studied; in particular, high-frequency periodic modulation of the delay time in the FB scheme has been discussed as a way to stabilize an unstable equilibrium state [2,3]. On the other hand, variation of the delay time with a frequency comparable to the relaxation frequency can be used to obtain oscillating regimes with given characteristics. This may be of interest for methods of optical processing and information coding, optical vibrometry, and other applications; see, for example, [5]. In this paper, we show the coexistence of irregular pulse regimes caused by periodic modulation of the delay time in the FB circuit. Such regimes can be classified as slowly oscillating, with inter-spike intervals longer than the delay time, or fast oscillating, with inter-spike intervals shorter than the delay. Of particular importance can be the task of purposeful switching between coexisting attractors. Previously, quick switching between periodic cycles by an external impulse was suggested for a loss-driven CO2 laser [6]. With a suitable choice of the switching pulse characteristics, the system may be placed on stable periodic cycles as well as on unstable cycles. It was found that there is an optimal time for applying the impulse, which corresponds to the minimum duration of the transition process to the target cycle. The theoretical explanation was given on the basis of a three-dimensional non-autonomous model [7]. In the present paper, we study coexisting chaotic attractors with basins that may have fractal boundaries and are embedded in the infinite-dimensional phase space of a system with retarded argument. Thus, the possibility of a single-cycle transition is not obvious. Nevertheless, we note the conditions that can ensure successful switching by a quick change of the system's position within the phase space. To this end we derive asymptotically the finite-dimensional maps responsible for the dynamics of spikes. With an impulse perturbation of the pumping rate in a suitable phase of the oscillation, it is possible to switch between chaotic regimes of different structures.

Model of laser dynamics

Relaxation oscillations in a semiconductor laser with an optoelectronic FB can be studied on the basis of the model proposed in [8], which we supplement by periodic modulation of the delay time; the result is the delay-differential system (1) for the normalized radiation intensity u(t) and the inversion y(t). Here q is the pumping rate; v is the ratio of the rate of decay of photons in the cavity to the rate of relaxation of the populations; γ is the feedback factor (FB level); and t is the current time normalized to the relaxation time of the population inversion.
The optoelectronic FB is represented by the term γu(g(t)), where γ is the feedback coefficient, the delayed argument has the form g(t) = t − (τ 0 + B cos Φ(t)), the modulator phase Φ(t) = ωt + ϕ, τ 0 is the constant time of the radiation transformation in the FB circuit, B and ω is the amplitude and frequency of the delay modulation, respectively, ϕ is the initial phase of the modulating signal. Here we will consider positive FB with γ > 0. The delay modulation amplitude and frequency should be limited by the inequalities which ensures positive values of the delay τ (t) > 0 and positive derivation g (t) > 0 (ensures pulsed structure of the solution). Further we study the system with the assumptions B < τ 0 , Bω < 1. ( For class B lasers including semiconductor lasers, some solid-state lasers, CO 2 gas lasers typical values v ∼ 10 3 are large while the other parameters are of the order of unit. Considering v 1 as a large parameter, an asymptotic approach for spiking can be done. The method of investigation is following. The phase space of system (1) is the direct product of the Banach space C [−τ,0] of continues functions by the number line R 1 , i.e., the values of the functions from C [−τ,0] and the value y(0) ∈ R 1 are given as initial conditions. In this space, we shall distinguish a (fairly wide) set S(ξ) dependent on the vector parameter ξ and consider the solutions with initial conditions from this set. It is possible to construct uniform asymptotic approximations of all such solutions and show that after a certain time these solutions again fall within S(ξ). Thus, the operator of the shifting along the trajectories, which makes a function from S corresponds to a function also from S, is naturally determined. The properties of this operator are mainly determined by the finite-dimensional mapξ = f (ξ). To a fixed point of the map there corresponds the fixed point of the operator, and to the later point there corresponds a periodic solution of the same stability in the original system. Examples of the investigation of similar systems were given in our previous works [9]. Solutions in form of spikes can be classified as slowly oscillating (SO) solutions with interspikes intervals longer than the delay time, fast oscillating (FO n ) solutions with inter-spikes intervals shorter than the delay, mixed slowly and fast (MSF) oscillating solutions. For each type of solutions we will find asymptotically (at v → ∞) the maps responsible for the dynamics of spikes. Slowly oscillating solution For SO solutions any interval between spikes greater than the time delay. Fix a time point when the radiation spike begins as the starting point, so that radiation intensity u(0) = 1 whereas before that, in the delay interval, the radiation intensity is of noise level. The set of initial conditions can be define as follows, where c ∈ (1, q], ϕ ∈ [0, 2π], and the function ψ(s) ∈ S 0 is chosen from the set S 0 of the functions with the properties, Note, the parameter c > 1 that providesu(0) > 0. The form of the functions ψ(s) is not specified, hence, the set S(c, ϕ) is wide enough. Now we integrate the system step-by-step dividing the evolution path into segments [t i , t i+1 ], where the asymptotic estimates (under v → ∞) for functions u(t − τ ) and u(t) are known. The solution values at the end of the interval are used as the initial conditions for the next interval. In this way we obtain an analytical estimate of the SO solution up to the time t = t 2 , when the next radiation pulse begins. 
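As a quick numerical check of the delay-modulation constraints introduced above (positive delay requires B < τ0, and a monotone delayed argument requires Bω < 1), the following sketch evaluates τ(t) = τ0 + B cos(ωt + φ) and g′(t) = 1 + Bω sin(ωt + φ) on a grid; the parameter values are those used later in the paper's numerical-simulation section, and the check itself is only illustrative.

import numpy as np

tau0, B, omega, phi = 0.6, 0.032, 29.814, 3.6   # values from the numerical-simulation section

t = np.linspace(0.0, 2.0, 20001)
tau = tau0 + B * np.cos(omega * t + phi)               # instantaneous delay tau(t)
g = t - tau                                            # delayed argument g(t)
g_prime = 1.0 + B * omega * np.sin(omega * t + phi)    # derivative of g(t)

print(tau.min() > 0.0, B < tau0)        # positive delay holds because B < tau0
print(g_prime.min() > 0.0, B * omega)   # monotone g(t) holds because B*omega ~ 0.95 < 1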
The main result is that the problem of further constructing the solution for t > t 2 has returned to the original problem with the initial conditions wheret 0 (ϕ) is the positive root of the equation τ 0 + B cos(ωt 0 + ϕ) =t 0 ; p(c) is the positive root of the equation c − p = ce −p ; and T (c, ϕ) is the root of the equation Note, the value of p characterizes the pulse energy and the value of T characterizes the interpulse interval. Integrating the system, we have supposed that inequality , which associates (by means of solutions) any element (c, ϕ) from the set S with another element (c,φ) from the same set S. Evolution of the operator and, in turn, evolution of slowly oscillating pulsed solution is determined by the iterations of the two-dimensional map (5). In particular, to the stable fixed point (c 0 , ϕ 0 ) there corresponds stable periodic slowly oscillating pulsations of the period T 0 = T (c 0 , ϕ 0 ) in the original system and T 0 > τ 0 + B. If map (5) has another attractor and inequality (6) is fulfilled for each iteration of the map, then the attractor corresponds to pulsed solution of SO structure in system (1). To bifurcations of the mapping attractor there correspond bifurcations of pulsed regime. Inequality (6) provides SO structure of such solutions and, in this way, restricts the domain in parameters space and in phase space where SO solutions can realized. By computing map (5) with q = 1.5, γ = 0.3, τ 0 = 0.3 and ω = 29.8 we find that at sufficiently small modulation amplitudes, 0 < B < 0.012, there is a stable mapping point which corresponds to a stable cycle with a period longer than the delay. At 0.012 < B < 0.04, cycles and chaotic attractors (quasiperiodic and chaotic SO oscillations) are observed. At B > 0.04 that corresponds to a violation of the condition Bω < 1, a transition takes place to oscillations a with interpulse intervals longer and shorter than the delay time. Thus, there are sufficiently wide domains of modulation parameters at which chaotic spikes of special SO-sructure may be observed. Fast oscillating solutions Inequality (6) determines the parameters with which the intervals between neighboring pulses are greater than the delay. Violation of this condition leads to occurrence of n ≥ 1 pulses on any interval of delay length. We call such regimes fast oscillating (FO n ) modes, since the time intervals between pulses are shorter than the delay time in the FB circuit. Consider FO 1 regimes with one pulse on the delay interval. The set of initial conditions for Eqs.(1) can be given as follows, where c ∈ (1, q], ϕ ∈ [0, 2π], ψ 1 (s) ∈ S 1 (ξ, p 1 ), and The parameter ξ determines the moment of pulse onset on the delay interval so that the value T = (τ 0 +B cos ϕ−ξ) > 0 is the interval between pulses, the parameter p 1 > 0 is the energy of the pulse in the delay interval. The ψ 1 (s) values are asymptotically small in the intervals between pulses, the pulse shape is not specified, for example, a pulse can be square (non-smooth). It is only necessary that the pulse width δ 1 should be sufficiently short, δ 1 → 0, with v → ∞. Note also, the condition c > 1 ensures the positive derivative u (0) > 0 at the initial moment, which we determine at the moment of a new pulse onset. Thus the set S(c, ϕ, ξ, p 1 ), depending on four parameters, is rather wide. 
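The pulse energy p(c) used above is defined implicitly as the positive root of c − p = c e^(−p) for c > 1 (p = 0 is always a trivial root). A minimal root-finding sketch, with illustrative values of c only, is:

import math

def pulse_energy(c, tol=1e-12):
    # positive root p of  c - p = c * exp(-p)  for c > 1
    f = lambda p: c - p - c * math.exp(-p)
    lo, hi = 1e-9, 10.0 * c          # f(lo) > 0 and f(hi) < 0 whenever c > 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for c in (1.1, 1.25, 1.5):
    print(c, round(pulse_energy(c), 4))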
The main result is that in time t 2 = T + o(1) we find the system in the state analogous to the initial one with replacing (c, ξ, p 1 , ϕ) by (c,ξ,p 1 ,φ), wherē Numerical simulations show that the conditions can be valid for map's attractors only in the case of γ > 0 (positive feedback). With modulation amplitude B increases, chaotic attractor can form, hence, one can expect chaotic spiking of special FO 1 -structure in the original infinitedimensional delayed system. Inequalities (9) restrict the domain in the parameter space of the system, where solutions of this type can be realized. It appears that the parameter domain for FO 1 -solution may intersect with the domain of SO-solutions, hence the multiplicity of different periodic spike regimes takes place. It is possible to construct analogously (2n+2)-dimensional map responsible for FO n solutions representing n > 1 spikes within the delay interval and find the parameter domain where such solutions coexist. Below we numerically demonstrate the coexistence of chaotic spike regimes in the case of the modulated delay. Numerical simulation In order to verify the above theoretical conclusions we provide numerical simulations of the original delayed system with v = 10 3 and q = 1.5 that are typical for a semiconductor laser. Note, v takes sufficiently large value. Other parameters were chosen so that both maps (5) and (8) have the chaotic attractor: FB level γ = 0.3, the delay τ 0 = 0.6, modulation amplitude of the delay B = 0.032, modulation frequency ω = 29.814 was chosen comparable with natural relaxation frequency. Preliminary calculation of maps is really necessary, because the parameter regions where various solutions are realized are relatively narrow, and also depend on the initial conditions. In accordance with set (3) we choose the initial function u(s) in the form u(s) = 0, s ∈ [−(τ 0 + B), 0), u(0) = 1. The initial value of the modulation phase is Φ(0) = 3.6. The initial value of the inversion y(0) = 1.25 was chosen so that map (3) has the SO chaotic attractor. At y(0) = 1.15 map (6) has the FO 1 chaotic attractor and at y(0) = 1.08 there is FO 2 chaotic attractor. In order to specify structure of irregular spiking, we show the dependence of inter- pulse intervals T i+1 on T i that correlates with a projection of the maps obtained. In Fig.2 coexistence of three chaotic attractors of different structure is demonstrated. One can see that all points of the map corresponding to SO spikes are located in domain T i > τ 0 . In domain τ 0 /2 < T i < τ 0 the map corresponds to FO 1 chaotic attractor, and in domain τ 0 /3 < T i < τ 0 /3 the map corresponds to FO 2 chaotic attractor with two pulses on any delay interval. Switching to the desired attractor Here we study the effects of an additional short-time square-shape control signal in the pumping circuit . In Eqs. (1) we set where ρ is the force amplitude, t x is the moment of the signal application, Θ is its duration of the order or shorter than radiation pulse width, η = ρΘ is the pulse energy which assumed to be not an asymptotically small value. In order to switch on the target attractor one has to move the phase trajectory into its attractive basin. The phase space of the delayed system (1) is infinite-dimensional. Therefore, we can not fully describe the attractive basins of coexisting solutions. But, using asymptotic conditions (6) and (9) for existence of the solution of each type, it is possible to propose perturbations guaranteeing access to the target attractor. 
In order to move from an SO solution to a FO solution, it is necessary to violate condition (6). To do this, we must apply a perturbation of the corresponding energy η at the time t x in the interval less than τ 0 , following the radiation pulse, as shown in Fig. 2a. Figure 2. Switching induced by pump-perturbation signal: a) from irregular SO spiking to FO 1 spiking, b) from irregular FO 1 spiking to SO spiking, c) from irregular FO 1 spiking to FO 2 spiking. The moment of application of the control signal is marked with a black triangle. In order to move from an FO 1 -solution to a SO-solution, it is necessary to violate condition (9), namely to get T i > τ 0 /2. To do this, we must apply a perturbation of the corresponding energy η at the time t x synchronized with the radiation pulse, as shown in Fig. 2b. At last, if the perturbation restricts condition (9), namely to get T i < τ 0 /3, FO 1 -solution can switch to FO 2 -solution, as shown in Fig. 2c. Conclusion In conclusion, we have analytically described pulsed solutions of various structure in the laser diode with variable delayed FB. Doing so, the problem on dynamics of the original infinitedimensional system has been reduced to the problem on the dynamics of nonlinear finite- dimensional maps. The advantages of the proposed asymptotic method are as follows: 1) the initial conditions (attractive basin) are determined for the desired regime; 2) the domain of parameters can be found for the desired regime; 3) the dependencies of characteristics (amplitude and inter-spike interval) can be obtained from the control parameters; 4) bifurcations of spikes can be followed. A hierarchy of spiking regimes has been proposed on the base of pattern complexity, i.e. on the number n of the pulses within the delay interval. We distinguish between slowly oscillating solutions determined by the 2D-map, fast oscillating solutions determined by the (2n+2)D-maps and mixed solutions determined by the 2D-map. Since the dynamics of such a map completely determines pulse dynamics, including chaotic pulsing, we expect that the dimension of the corresponding attractor in the infinite-dimensional phase space would be respectively limited. With increasing amplitude of modulation of the delay time, we observed intermittency of slowly and fast oscillating solutions and merge of attractors. Such a scenario leading to annihilation one of the coexisting states can be proposed for controlled monostability of a chaotic attractor. The obtained maps can be used for finding the parameters of the system at which chaotic pulsing with special characteristics are realized. In the case of negative FB, we demonstrate a chaotic spikes following in strictly alternating intervals of less than and more than the delay time. In the case of positive FB, we demonstrate possible coexistence of slowly and fast oscillating solutions (multistability of spiking) at the same parameters. These results are promising for developing methods for dynamic control, in particular, for the fast switching of attractors by a suitable external shock.
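The hierarchy of regimes summarized in the conclusion can be expressed as a simple classification of the inter-pulse interval T relative to the delay τ0, following the domains quoted for Fig. 2 (here the FO2 band is taken as τ0/3 < T < τ0/2, which is what two pulses per delay interval implies). This sketch is only an illustration of that bookkeeping, not part of the original analysis.

def classify_interval(T, tau0):
    # bands separating the coexisting regimes (cf. the Fig. 2 discussion)
    if T > tau0:
        return "SO"              # slowly oscillating: interval longer than the delay
    if T > tau0 / 2.0:
        return "FO1"             # one pulse per delay interval
    if T > tau0 / 3.0:
        return "FO2"             # two pulses per delay interval
    return "FOn, n > 2"

tau0 = 0.6                       # delay used in the numerical simulations
for T in (0.75, 0.45, 0.25):
    print(T, classify_interval(T, tau0))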
2019-04-22T13:08:47.628Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "23d473525afda480f94b876c079722591069a1ef", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/937/1/012016", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "074ff98484e7d4e02464938b83e1454af8dd73df", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
100939854
pes2o/s2orc
v3-fos-license
Effect of Growth Pressure on Structural Properties of SiC Film Grown on Insulator by Utilizing Graphene as a Buffer Layer s Heteroepitaxial growth of silicon carbide (SiC) on graphene/SiO 2 /Si substrates was carried out using a home-made hot-mesh chemical vapor deposition (HM-CVD) apparatus. Monomethylsilane (MMS) was used as single source gas while hydrogen (H 2 ) as carrier gas. The substrate temperature, tungsten mesh temperature, H 2 flow rate and distance between mesh and substrate were fixed at 750 °C, 1700 °C, 100 sccm and 30 mm, respectively. The growth pressures were set to 1.2, 1.8 and 2.4 Torr. The growth of 3C-SiC (111) on graphene/SiO 2 /Si were confirmed by the observation of θ-2θ diffraction peak at 35.68  . The diffraction peak of thin film on graphene/SiO 2 /Si substrate at pressure growth is 1.8 Torr is relatively more intense and sharper than thin film grown at pressure growth 1.2 and 2.4 Torr, thus indicates that the quality of grown film at 1.8 Torr is better. The sharp and strong peak at 33  was observed on the all film grown, that peak was attributed Si(200) nanocrystal. The reason why Si (200) nanocrystal layer is formed is not understood. In principle, it can’t be denied that the low quality of the grown thin film is influenced by the capability of our home-made apparatus. However, we believe that the quality can be further increased by the improvement of apparatus design. As a conclusion, the growth pressures around 1.8 Torr seems to be the best pressures for the growth of heteroepitaxial 3C-SiC thin film. Introduction Growth of SiC/c-Si heteroepitaxial are interesting topic on the point of view the potential application of SiC material on the production of heterobipolar transistor [1] or optoelectronic devices [2].Hydrogenated amorphous silicon carbide is preferred to use in many studies in comparison with monocrystalline material, since the growth temperature is relatively low and which guarantees a large compatibility among the fabrication processes of the silicon carbide layer with current silicon technology.In order to improve the performance of the devices, carrier mobility should be enhanced and this is possible with the crystallization of SiC layers.Hence, it is important to grow heteroepitaxial crystalline SiC on different substrates. Growth of the Single crystalline SiC films is usually realized only at high growth temperature greater than 1000 C [3].owing to the lattice mismatch (20%) and the difference in the thermal expansion coefficient (8%) between SiC and Si, it is quite likely generating a residual stress and a high density of interface defects when the processing temperatures are very high [4].However, several applications can benefit from heteroepitaxial system.Heteroepitaxial SiC/c-Si with optimized properties possesses potential application such as on switching devices [5], sensors [6], detectors [7] and micro-electrical mechanical system (MEMS) [8]. 
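The 2θ ≈ 35.68° assignment of the 3C-SiC(111) reflection quoted in the abstract can be cross-checked with Bragg's law. The sketch below assumes a Cu Kα source (the diffractometer wavelength is not stated in the text) and a literature lattice constant for 3C-SiC, so it is only an order-of-consistency check.

import math

lam = 1.5406      # assumed Cu K-alpha wavelength, angstrom
a = 4.3596        # assumed literature lattice constant of 3C-SiC, angstrom

d111 = a / math.sqrt(3.0)                              # (111) interplanar spacing
two_theta = 2.0 * math.degrees(math.asin(lam / (2.0 * d111)))
print(round(d111, 3), round(two_theta, 2))             # ~2.517 angstrom and ~35.6-35.7 degrees,
                                                       # consistent with the observed 35.68 deg peak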
Graphene is one monolayer of carbon atoms packed into a two-dimensional (2D) honeycomb crystal lattice. The electron in an ideal graphene sheet behaves like a massless Dirac fermion. Graphene possesses high carrier mobility, up to 200,000 cm2/Vs, even at room temperature (RT), and this mobility in turn results in a long mean free path of 1.2 μm at a carrier concentration of 2 × 10^11 cm−2 [9]. The physical structure of graphene is also fascinating, as it behaves like a 2D crystal in which electrons travel up to micrometer distances without scattering. This makes it superior in transport properties, and the material itself is very robust, highly elastic, and chemically inert, offering high potential for technological applications [10]. In order to realize high-quality SiC thin films on silicon, we have studied the growth of SiC on Si substrates by using a carbonization layer to reduce the lattice mismatch at the SiC/Si interface [11]. The latter work triggered the idea of employing graphene, a 2D carbon material only one atomic layer thick, as a buffer layer for epitaxial growth of SiC thin films on an insulator substrate. To our knowledge, growth of materials on insulator substrates is not a new topic, for example Ge-on-insulator [12], graphene-on-insulator [13], GaAs-on-insulator [14] and SiC-on-insulator [15]. However, the growth of SiC on an insulator by introducing graphene as a buffer layer has not been reported yet. In this pioneering work, we report the effect of the growth pressure when graphene is used as a buffer layer for the heteroepitaxial growth of SiC thin films.

Experimental Procedure

Heteroepitaxial SiC films were deposited on polycrystalline graphene/SiO2/Si substrates by a home-made hot-mesh chemical vapor deposition (hot-mesh CVD) technique. Fig. 1 shows the schematic of the home-made hot-mesh CVD apparatus. A schematic of the graphene/SiO2/Si(100) substrate is shown in Fig. 2. After the traditional cleaning, consisting of acetone, ethanol and deionized (DI) water, was applied prior to the growth, the substrate was loaded directly into the chamber. The distance between the tungsten mesh wire (diameter of 0.1 mm, 300 mesh/in.) and the substrate was set to 30 mm. Monomethylsilane (MMS) gas was used as the single source gas and hydrogen (H2) as the carrier gas with a constant flow rate of 100 sccm. This method utilizes heated tungsten wire arranged in a mesh, which promotes a high decomposition efficiency of the H2 gas. The substrate temperature and tungsten mesh temperature were fixed at 750 °C and 1700 °C, respectively. The growth pressures were set to 1.2, 1.8 and 2.4 Torr. X-ray diffraction (XRD) spectra were measured using an X-ray diffractometer (RAD IIIA, Rigaku) over the range 2θ = 20°–40°. Mean crystallite sizes were determined from the full width at half maximum (FWHM) of the XRD peaks using Scherrer's formula. Cross-sectional images of the SiC films were observed using a field-emission scanning electron microscope (FESEM). Raman scattering spectra were measured using micro-Raman equipment (Jobin Yvon, Horiba) with an Ar+ laser at 514.5 nm wavelength.

Results and Discussion

The FESEM image of the as-received graphene/SiO2/Si(100) substrate from Graphene Laboratories Inc., USA, is shown in Fig. 3. It is clear that the graphene grains are several tens of micrometers in diameter. The polycrystalline graphene grains were grown by a CVD technique with a single-layer graphene coverage of 90%. The thickness of the graphene was also confirmed by Raman measurement, as shown in Fig. 4. Based on Fig. 4, sharp peaks were clearly observed at 1580 cm−1 and 2670 cm−1, attributed to the G and 2D bands, respectively. The intensity ratio of the 2D and G bands (I2D/IG) is about 1.6, which indicates that the graphene is a single layer [16]. In general, the 2D band changes its shape, width, and position as the number of layers increases. At the same time, the G band peak position also shifts down to a lower wavenumber due to the chemical bonding between graphene layers [17]. The effect of graphene thickness on the shift of the 2D and G bands could be studied further if the graphene substrate were prepared by a mechanical exfoliation technique. The layer number affects not only the 2D and G band shifts in the Raman measurement but also the image contrast of the graphene layer. Graphene is a 2D hexagonal network of carbon atoms formed by strong triangular σ-bonds of the sp2 hybridized orbitals. This bonding structure is similar to the C plane of the hexagonal crystalline structure and the (111) plane of the zincblende structure. In this regard, the growth of (111)-oriented SiC on graphene in the <111> direction is feasible. Fig. 5 shows the X-ray diffraction (XRD) spectra of the samples deposited on the graphene/SiO2/Si(100) substrate at various gas pressures. Two diffraction peaks were found in the range from 20° to 40°. The peak at 33° is attributed to the crystalline Si(200) peak [18]. Komura et al. [19] reported that films grown below 1.5 Torr contain Si nanocrystallites, where the Si-nanocrystallite-embedded amorphous SiC (a-SiC) films were prepared by HWCVD using CH4 as the carbon source [20], and our result under the low-pressure condition below 1.5 Torr is consistent with it. On the other hand, for films prepared at 1.8 Torr, XRD peaks due to 3C-SiC(111) were observed at 35.68°. This means that 3C-SiC growth occurred at a pressure of 1.8 Torr and that the gas pressure is a key parameter for preparing 3C-SiC films [21]. For the film prepared at 1.8 Torr, the diffraction peak intensity is higher than for the films grown at 1.2 and 2.4 Torr. It can be assumed that a growth pressure of 1.8 Torr is the optimum condition for obtaining a 3C-SiC film on an insulator by introducing graphene as a buffer layer. According to this result, we speculate that the growth of 3C-SiC on the graphene/SiO2/Si(100) substrate has been enhanced in the (111) domain. In this work, the grown SiC films were also polycrystalline, the same as the reported SiC films on SiO2 [22-24], since polycrystalline single-layer graphene flakes were used. However, if the technology to form large-area single-crystalline graphene is realized and such single-crystalline graphene structures are applied, it should lead to the realization of highly oriented, single-crystalline, continuous 3C-SiC(111) thin films. This suggests that graphene is a promising buffer layer for growing single-crystalline material structures on amorphous materials. Recently, Takahashi et al. reported the graphitization process, i.e., the formation of epitaxial graphene on the 3C-SiC(111) surface, by an annealing process under ultrahigh vacuum conditions [25]. This supports the feasibility of forming a 3C-SiC film on graphene, since the two processes are simply the reverse of each other, and we assume that the same bonding structures should be formed.

The FESEM images of the 3C-SiC films grown on the insulator substrate with graphene as a buffer layer show film structures with no voids formed at the interface between SiC and SiO2. Fig. 6 shows the cross-sectional FESEM image of the 3C-SiC film on the graphene/SiO2/Si(100) substrate. It is clearly shown that the 3C-SiC film can be grown on the surface of the insulator when the polycrystalline graphene buffer layer is incorporated in the SiC film formation. The thickness of the 3C-SiC film is about 2 µm, and the film shows a grain-like structure in which the grain size is similar to the graphene grain size. The Raman scattering spectra of the 3C-SiC films on the graphene/SiO2/Si(100) substrate prepared at various growth pressures are shown in Fig. 7. It is reported that SiC gives Raman scattering from a transverse optic (TO) phonon at approximately 796 cm−1 and a longitudinal optic (LO) phonon at 973 cm−1 for bulk crystalline 3C-SiC [26]. In this work, the TO and LO peaks of the SiC crystallites are shifted toward a lower wavenumber region (redshift) and broadened with respect to those of bulk 3C-SiC. The broad, low-intensity TO and LO phonon mode peaks of SiC indicate that the grown film is polycrystalline with a small crystallite size [27-29]. The shift of the TO and LO peaks is caused by the small crystallite size (quantum confinement effect) [30] and by structural defects and the strain associated with them [31]. Accordingly, we concluded that the peaks at around 789 cm−1 and 955 cm−1 are due to the TO and LO modes of 3C-SiC, respectively. As shown in Fig. 7, the intensities of the Raman peaks due to the LO mode of 3C-SiC increased with growth pressure, indicating that the crystallinity of the 3C-SiC was improved.

Conclusion

In conclusion, these preliminary results suggest that graphene can be used as a promising buffer layer for the growth of 3C-SiC thin films on an insulator at relatively low temperature. Based on the XRD and Raman results, the obtained 3C-SiC film is polycrystalline 3C-SiC with (111) domain orientation. A growth pressure of around 1.8 Torr seems to be the best pressure for the growth of heteroepitaxial 3C-SiC thin films.

Figure 1. Schematic of the home-made hot-mesh CVD apparatus.
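The XRD analysis above determined mean crystallite sizes from the peak FWHM via Scherrer's formula, D = Kλ/(β cos θ). A minimal sketch of that calculation follows; the FWHM values are assumed, illustrative numbers (the paper does not list them), and a Cu Kα wavelength and shape factor K ≈ 0.9 are assumed.

import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    # D = K * lambda / (beta * cos(theta)), with beta the FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return K * wavelength_nm / (beta * math.cos(theta))

# illustrative FWHM values in degrees for the 3C-SiC(111) peak at 2-theta = 35.68 deg
for fwhm in (0.3, 0.6, 1.0):
    print(fwhm, round(scherrer_size_nm(35.68, fwhm), 1), "nm")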
2019-01-02T15:07:33.831Z
2015-06-28T00:00:00.000
{ "year": 2015, "sha1": "3f7a66084932633e9615c1af5eed503686658986", "oa_license": "CCBYSA", "oa_url": "http://journal.walisongo.ac.id/index.php/JNSMR/article/download/476/429", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "3f7a66084932633e9615c1af5eed503686658986", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
257796927
pes2o/s2orc
v3-fos-license
Quantifying strategies to minimize aerosol dispersion in dental clinics Many dental procedures are aerosol-generating and pose a risk for the spread of airborne diseases, including COVID-19. Several aerosol mitigation strategies are available to reduce aerosol dispersion in dental clinics, such as increasing room ventilation and using extra-oral suction devices and high-efficiency particulate air (HEPA) filtration units. However, many questions remain unanswered, including what the optimal device flow rate is and how long after a patient exits the room it is safe to start treatment of the next patient. This study used computational fluid dynamics (CFD) to quantify the effectiveness of room ventilation, an HEPA filtration unit, and two extra-oral suction devices to reduce aerosols in a dental clinic. Aerosol concentration was quantified as the particulate matter under 10 µm (PM10) using the particle size distribution generated during dental drilling. The simulations considered a 15 min procedure followed by a 30 min resting period. The efficiency of aerosol mitigation strategies was quantified by the scrubbing time, defined as the amount of time required to remove 95% of the aerosol released during the dental procedure. When no aerosol mitigation strategy was applied, PM10 reached 30 µg/m3 after 15 min of dental drilling, and then declined gradually to 0.2 µg/m3 at the end of the resting period. The scrubbing time decreased from 20 to 5 min when the room ventilation increased from 6.3 to 18 air changes per hour (ACH), and decreased from 10 to 1 min when the flow rate of the HEPA filtration unit increased from 8 to 20 ACH. The CFD simulations also predicted that the extra-oral suction devices would capture 100% of the particles emanating from the patient’s mouth for device flow rates above 400 L/min. In summary, this study demonstrates that aerosol mitigation strategies can effectively reduce aerosol concentrations in dental clinics, which is expected to reduce the risk of spreading COVID-19 and other airborne diseases. deposit near the patient's mouth due to gravitational settling. The term "aerosol" is used for particles smaller than about 10 μm that can stay airborne for an extended period of time. Both splatter and aerosols represent a risk to transmit infections, but aerosols are invisible to the naked eye and have the greatest potential of disease transmission to people more than six feet away from the infected patient (Harrel and Molinari, 2004;Klompas et al., 2020;Kumar and Subramanian, 2020). Many studies have demonstrated that aerosol concentration increases in dental clinics during patient procedures (Grenier, 1995;Sotiriou et al., 2008;Pasquarella et al., 2012;Polednik, 2021;Dudding et al., 2022). Since the coronavirus that causes COVID-19 is found in saliva , dental healthcare workers are at an increased risk of contracting COVID-19. However, it remains unclear if the high aerosol levels in dental clinics are associated with a higher incidence of COVID-19 among dental healthcare workers. One study analyzed data from the United States Department of Labor and reported that dental healthcare workers have the highest occupational risk of contracting COVID-19 among different professions (Zhang, 2021). In contrast, a survey of dental healthcare workers in France reported that the prevalence of laboratory-confirmed COVID-19 among dentists was similar to the general population (Jungo et al., 2021). 
Furthermore, a study from Israel reported that the incidence of COVID-19 among dental healthcare workers was lower than in the general population (Natapov et al., 2021). Thus, the available evidence suggests that in most cases standard infection control measures are effective at reducing the risk of transmission of airborne pathogens in dental clinics (Petti, 2016;Meethil et al., 2021). While transmission of airborne pathogens in dental clinics is rare, some reported cases illustrate the risk. One documented case of disease transmission occurred in a medical office where the measles virus was spread through the ventilation system to multiple people (Harrel and Molinari, 2004). The source patient was a 12-year-old boy who was coughing. Of the seven people who acquired measles at this clinic, one entered the office one hour after the source patient had left. Another example that illustrates the risk of disease transmission by airborne pathogens comes from the SARS outbreak in China. An outbreak in a Hong Kong apartment complex was likely facilitated by the ventilation system spreading the SARS coronavirus between different apartment units (Harrel and Molinari, 2004). Several strategies are used to minimize the risk of infection in dental clinics. Dental healthcare workers use personal protective equipment (PPE), such as gloves, gowns, and N-95 masks, to protect themselves. However, aerosols that remain suspended in air for an extended period of time represent a risk of contamination to consecutive patients treated at the same clinic and to clerical staff at the reception desk who often do not wear protective masks. The gold-standard strategy to reduce aerosols in dental clinics is the use of a high-volume evacuator (HVE), which has been shown to reduce the contamination arising from the operative site by more than 90% (Harrel and Molinari, 2004). However, HVE utilization requires a human assistant ("four-handed dentistry"). This is a limitation because many treatments are performed by dental hygienists without an assistant. Another strategy to minimize aerosol dispersion is the rubber dam, which virtually eliminates contamination with saliva, reducing the source of contamination to the tooth being treated. However, it is not feasible to use the rubber dam in many dental procedures, such as routine prophylaxis, periodontal surgery, and subgingival restoration (Harrel and Molinari, 2004). Strategies such as the HVE, rubber dam, and saliva ejector are aimed at preventing aerosols from escaping from the patient's mouth. Another category of aerosol mitigation strategies are engineering controls, which are aimed at capturing the aerosols after they have been released from the patient's mouth. Engineering controls include increasing the ventilation rate of the dental clinic, using portable high-efficiency particulate air (HEPA) filtration units, and using extra-oral suction devices (also known as local exhaust ventilation). However, these engineering controls have not been universally adopted yet, in part because their effectiveness is poorly characterized. One major question that dentists faced during the COVID-19 pandemic was when it was safe for a new patient to enter a room after the treatment of the previous patient. Research studies suggest that pathogens can remain airborne for at least 20 min after the dental treatment is completed (Chuang et al., 2014). Another major question is the determination of the optimal flow rate of HEPA filtration units and extra-oral suction devices. 
Higher flow rates are more effective at scrubbing aerosols, but higher flow rates also generate louder noises that can be a nuisance for patients and dental healthcare workers. Currently, it is difficult to predict the optimal flow rate that is effective at removing aerosols while minimizing noise. In this study, we apply computational fluid dynamics (CFD) to compare the aerosol removal efficiency of four engineering controls, namely (1) increasing room ventilation, (2) a portable HEPA filtration unit, (3) a circular extra-oral suction device, and (4) an elliptical extra-oral suction device. The CFD simulations are designed to represent 15 min of dental drilling followed by 30 min of a resting period. We estimate the minimal flow rate required to remove all particles emanating from the patient's mouth using extra-oral suction devices. We also estimate the amount of time required for aerosol concentration to return to background levels as a function of the room ventilation and the HEPA filtration unit flow rate. Geometry and flow rates The dimensions of the dental clinic (height = 2.86 m, width = 3.31 m, depth = 3.58 m, volume = 33.9 m 3 ) in this study are based on a single-chair dental clinic at the School of Dentistry at Marquette University. A 3D reconstruction of a human head was positioned on the dental chair (Fig. 1). We investigated the aerosol removal efficiency of three devices, namely two extra-oral suction devices and a portable HEPA filtration unit. The geometries of the two extra-oral suction devices were inspired by the Treedental dental suction unit (model TR-YP606D4, TREE USA Inc., Valley Cottage, NY, USA) and the Xuction HVE Dental Aerosol Reducer (Xuction Dental, Midlothian, VA, USA), respectively. The extra-oral suction device #1 had a circular inlet (diameter = 10 cm) positioned lateral to the patient's mouth ( Fig. 1(B)). The extra-oral suction device #2 had an elliptical inlet (major axis diameter = 42.5 mm, minor axis diameter = 12.5 mm, area = 417 mm 2 ) positioned inferior to the patient's mouth ( Fig. 1(C)). The portable HEPA filtration unit was located at the corner of the room (Figs. 1(A) and 1(D)). It pulls air from its side and returns the filtered air through its top. Its dimensions and flow rates are based on the JADE air purification system (model SCA5000C, Surgically Clean Air, Toronto, Ontario, Canada). The geometry of the inlet vent on the ceiling was developed by Komperda et al. (2021) and kindly shared with us for this study. The dental clinic has a ventilation of 3540 L/min (corresponding to 6.3 ACH (air changes per hour)) through inlet and outlet vents on the ceiling (Fig. 1). This ventilation rate was used in all simulations with the extra-oral suction devices and the HEPA filtration unit. To investigate the effect of increasing room ventilation, simulations were Quantifying strategies to minimize aerosol dispersion in dental clinics 293 performed for ventilation rates of 6.3, 9, 12, 15, and 18 ACH. Simulations were also performed to investigate the four speeds of the portable HEPA filtration unit, namely 4333 L/min (low speed), 6513 L/min (medium speed), 8835 L/min (high speed), and 11,497 L/min (turbo speed). The flow rates of the extra-oral suction devices #1 and #2 are unknown. Preliminary simulations were performed to identify the flow rate threshold above which these devices effectively scrubbed all particles emanating from the patient's mouth. 
The final simulations were performed with flow rates of 0, 50,150,200,250,400,600, and 800 L/min for the two extra-oral suction devices. The computational mesh included all three devices, but each device was studied separately while keeping the flow rates of the other two devices equal to zero (Table 1). Computational fluid dynamics-airflow simulations The CFD simulations were performed in ANSYS Fluent 2020 R2. The geometry of the dental clinic was created in ANSYS ICEM-CFD. To transfer the geometry from ICEM-CFD to Fluent, a tetrahedral mesh was created, exported in ".msh" format, and imported into Fluent. A polyhedral mesh was created in Fluent Meshing with five prism layers. The mesh was graded near the patient's mouth, the inlet vent, and the extra-oral suction devices to accurately capture the airflow patterns (Fig. 2). The mesh size was selected via a mesh density study (see Section 3 Results). Steady-state airflow simulations were performed with the k-omega turbulence model using an air density of 1.2 kg/m 3 and air viscosity of 1.8×10 -5 kg/(m·s). A pressure-inlet boundary condition was used to impose atmospheric pressure at the inlet vent on the ceiling. A mass flow outlet boundary condition with a mass flow rate of 0.0708 kg/s (corresponding to the room ventilation of 6.3 ACH) was applied at the outlet vent on the ceiling. Mass flow inlet and mass flow outlet boundary conditions were applied at the inlet and outlet of the HEPA filtration unit, while a mass flow outlet boundary condition was applied at the outlet of the extra-oral suction devices in simulations representing these devices turned on. The coupled scheme was used for the pressure-velocity coupling and second-order discretization was used for all partial differential equations. Computational fluid dynamics-particle transport simulations Dental instrumentation generates a particle cloud with complex dynamics. Particles are released in varying directions as the dentist moves the instrumentation along the patient's teeth. Furthermore, continuous release of a large number of particles generates a particle cloud that transfers momentum to the surrounding air. Simulating all this complexity would require simulating different positions of the dental instrumentation, knowledge of the precise particle size distribution, and performing two-way particle transport simulations. In this work we adopt a simplified approach, which nonetheless allows us to investigate the efficacy of aerosol mitigation strategies. Particles were released with an initial velocity of 7 m/s perpendicular to a circular surface at the center of the patient's mouth. The velocity of 7 m/s is a median of the particle velocity range of 2-12 m/s reported for dental instrumentation (dental drilling, ultrasonic scaler, and 3-in-1 air water syringe) (Eames et al., 2021;Haffner et al., 2021;Li et al., 2021;Sergis et al., 2021;Ohya et al., 2022). An inlet-velocity boundary condition with air velocity of 7 m/s was imposed at this circular release surface to approximate the momentum transfer from particles to the surrounding air. The diameter of the circular release surface (6.8 mm) was selected so that the volume flow rate emanating from the patient's mouth was 15 L/min, which is similar to the exhalation rate of an adult at rest. Spherical particles of diameters from 0.3 to 10 μm were investigated using 16 log-spaced particle size bins (Table 2). 
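Two of the quantities quoted above follow from simple arithmetic and can be reproduced directly: the conversion between device flow rate and air changes per hour for the 33.9 m3 clinic, and the release-surface diameter that delivers 15 L/min at 7 m/s. The sketch below is only a consistency check of those stated numbers, not part of the CFD workflow.

import math

ROOM_VOLUME_M3 = 33.9   # clinic air volume given above

def ach_from_flow(flow_L_per_min):
    # air changes per hour = (L/min converted to m3/h) divided by the room volume
    return flow_L_per_min * 60.0 / 1000.0 / ROOM_VOLUME_M3

print(round(ach_from_flow(3540), 1))                 # room ventilation, ~6.3 ACH
for q in (4333, 6513, 8835, 11497):                  # HEPA unit speeds, L/min
    print(q, round(ach_from_flow(q), 1))             # ~7.7, 11.5, 15.6, 20.3 ACH

# release-surface diameter that gives 15 L/min at 7 m/s
Q = 15.0 / 1000.0 / 60.0          # volume flow, m3/s
v = 7.0                           # release velocity, m/s
d = 2.0 * math.sqrt((Q / v) / math.pi)
print(round(d * 1000.0, 2))       # ~6.74 mm, matching the 6.8 mm release surface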
This particle size range represents small droplets that can remain suspended in air for an extended period of time, as opposed to larger particles (diameter > 10 μm) that tend to deposit near the patient's mouth due to gravitational settling. This range of particle sizes with 16 size bins was selected because it is commonly used in instrumentation to monitor aerosol concentrations (Allison et al., 2022; Sergis et al., 2021; Vernon et al., 2021; Ye et al., 2021; Fennelly et al., 2022). The particles had a density (ρ) of 1000 kg/m³ so that the geometric diameter was equivalent to the aerodynamic diameter. The discrete phase model in ANSYS Fluent accounted for the acceleration of gravity and buoyancy effects. The particles were assumed to be inert (i.e., particle evaporation was not considered). Thus, our CFD simulations represent dental drilling, but do not represent procedures that generate water droplets where it is important to consider droplet evaporation (Komperda et al., 2021). A trap boundary condition was applied on all walls and an escape boundary condition was applied at all outlets, including the device outlets.

Dental instrumentation generates polydisperse aerosols. Here, we adopt the particle size distribution reported by Vernon et al. (2021), observed during drilling with an air turbine without the use of aerosol mitigation devices; the particle count decreased with particle diameter and was well described by a fitted curve (Pearson r = -0.991) (Fig. 3). This particle count distribution was used to obtain the frequency distribution f_d of the 16 particle sizes simulated (Table 2: particle frequency distribution used in this study to represent the aerosol generated by dental drilling, based on the experimental data in Fig. 3) with the definition

f_d = N_d / Σ_d' N_d',    (1)

where N_d is the particle count for diameter d and f_d is the fraction of the aerosol cloud that is composed of particles with diameter d.

Estimation of PM10
The aerosol concentration in the dental clinic was quantified as the particulate matter under 10 μm in aerodynamic diameter (PM10) in μg/m³, namely

PM10(t) = (1/V) Σ_d M_d(t),    (2)

where V = 33.9 m³ is the volume of air in the dental clinic and M_d(t) is the total mass of particles of diameter d that remain airborne at time t. The sum in Eq. (2) is performed over the 16 particle diameters simulated. The total mass of particles of diameter d that remains airborne at time t is the number of such particles, N_d(t), multiplied by the mass of one spherical particle of diameter d,

M_d(t) = N_d(t) · π ρ d³ / 6,    (3)

where ρ is the particle density. Rather than simulating the exact number of particles released during a dental procedure, we simulated a total of N_CFD particles. The particles were released from a grid of uniformly spaced release points inside the circular release surface. One particle of each diameter was released from each release point so that the total number of particles simulated was

N_CFD = 16 · N_rp,    (4)

where 16 is the number of particle diameters simulated and N_rp is the number of release points. The number of release points was selected by performing a parameter sensitivity analysis (see Section 3 Results). We simulated a scenario of 15 min of dental drilling followed by 30 min of a resting period. The steady-state CFD simulations quantified the trajectories of a single packet of N_CFD particles and provided the time t_i when each particle deposited on a surface or exited the fluid domain via the outlet vent, the HEPA filtration unit, or the extra-oral suction devices. A MATLAB code was developed to read the times t_i from text files (.dpm files) generated by ANSYS Fluent and compute the time evolution of PM10(t).
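To make the definitions in Eqs. (1)-(3) concrete, the short sketch below computes the single-particle mass, the size-frequency normalization, and the PM10 sum for an assumed set of airborne particle counts. It is only an illustration of the bookkeeping above, not the authors' post-processing code, and the particle counts passed in are placeholders.

```python
import numpy as np

RHO = 1000.0      # particle density, kg/m^3 (geometric diameter = aerodynamic diameter)
V_ROOM = 33.9     # volume of air in the clinic, m^3

def particle_mass_kg(d_um):
    """Mass of one spherical particle of diameter d (the factor in Eq. (3))."""
    d_m = np.asarray(d_um) * 1e-6
    return RHO * np.pi * d_m**3 / 6.0

def size_frequency(counts):
    """Eq. (1): fraction of the aerosol cloud in each of the 16 size bins."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def pm10_ug_per_m3(airborne_counts, d_um):
    """Eq. (2): PM10 = (1/V) * sum over bins of airborne mass, in ug/m^3."""
    total_mass_kg = np.sum(np.asarray(airborne_counts) * particle_mass_kg(d_um))
    return total_mass_kg / V_ROOM * 1e9    # kg/m^3 -> ug/m^3

# Placeholder example: 16 log-spaced bins, one million airborne particles per bin
d_bins = np.logspace(np.log10(0.3), np.log10(10.0), 16)
counts = np.full(16, 1.0e6)
print(f"PM10 = {pm10_ug_per_m3(counts, d_bins):.1f} ug/m^3")
```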
The MATLAB code assumed that a new packet of N_CFD particles was released from the patient's mouth every second during the dental drilling procedure. The number of particles of diameter d airborne at time t + Δt was computed from the bookkeeping relation

N_d(t + Δt) = N_d(t) + N_d,added − N_d,removed(t),    (5)

where Δt = 1 s is the time step, N_d,added is the number of particles of diameter d released every second during the 15 min of dental drilling, and N_d,removed(t) is the number of particles of diameter d removed during the time step, obtained from the duration of each particle trajectory.

Since we did not simulate the exact number of particles released during a dental procedure, it was necessary to convert the number of particles simulated to the number observed in the dental procedure via a scaling factor. First, we notice that the concentration of particles in air is proportional to the mass of particles released per second, so that the measured and simulated concentrations are related by a constant k_1 equal to the ratio of the total masses of particles released per second during a dental procedure and in the CFD simulation. Using the particle frequency distribution generated by dental drilling (Table 2), the total mass of particles released per second in a dental procedure is the number of particles released per second multiplied by a constant determined by the particle size distribution. Next, we notice that the number of particles that remain airborne at time t is proportional to the number of particles released, so that the ratio of the numbers of particles of diameter d that remain airborne at time t in the experiments and in the CFD simulations equals the ratio of the numbers released. Because one particle of each diameter is released from each release point, the number of particles of diameter d released per second in the CFD simulations is equal to the number of release points N_rp (Eq. (4)). Combining these relations with Eqs. (2) and (3), the particle concentration in air in the dental clinic is obtained by multiplying the concentration computed from the simulated particles by the rescaling factor k_1, which accounts for the fact that the CFD simulations were performed with fewer particles than the actual number of particles released during the dental drilling procedure. In this work, we assume that 15 min of a dental drilling procedure generates an air concentration of (PM10)_EXP = 30 μg/m³, based on the experimental measurements by Sotiriou et al. (2008) in the absence of aerosol mitigation devices. Readers should note that our CFD simulations assumed that the air entering the dental clinic through the inlet vent on the ceiling had zero particles, when in reality the air in ventilation systems always has a background particle concentration. Therefore, our CFD simulations represent the excess PM10 generated by dental drilling, i.e., the additional particle concentration above the background level. The efficiency of aerosol mitigation strategies was quantified by the scrubbing time, which was defined as the amount of time required to remove 95% of the aerosol released during the dental procedure.

Verification of the CFD model
Polyhedral meshes were created with maximum cell sizes of 30, 22, and 20 mm, which provided three mesh sizes, namely 3.3 million cells (coarse mesh), 6.2 million cells (medium mesh), and 8.7 million cells (fine mesh). The mesh resolutions of the circular release surface (1 mm), 3D reconstruction of the patient's head (10 mm), inlet vent (10 mm), extra-oral suction device #1 (4 mm), extra-oral suction device #2 (1 mm), and HEPA filtration unit (20 mm) were kept fixed to provide a higher mesh resolution at these locations.
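Returning to the post-processing just described (before the verification results that follow), the one-packet-per-second bookkeeping, the rescaling of the peak to the measured 30 μg/m³, and the 95% scrubbing time can be sketched in a few lines. This is a schematic Python stand-in for the MATLAB post-processor the authors describe; the residence times fed into it are invented placeholders rather than actual .dpm output.

```python
import numpy as np

DT = 1.0                     # s, interval between packet releases
DRILL_S = 15 * 60            # 15 min of dental drilling
TOTAL_S = 45 * 60            # drilling plus 30 min of resting
PM10_EXP_PEAK = 30.0         # ug/m^3, peak measured by Sotiriou et al. (2008)

def pm10_history(residence_s, mass_per_packet_kg, volume_m3=33.9):
    """PM10 versus time when one identical particle packet is released each second.

    residence_s        : residence time of every simulated particle (placeholders here;
                         in the study these times come from the .dpm trajectory files).
    mass_per_packet_kg : airborne mass represented by one freshly released packet.
    """
    t = np.arange(0.0, TOTAL_S + DT, DT)
    # fraction of a packet still airborne as a function of its age
    survival = np.array([np.mean(residence_s > age) for age in t])
    releases = (t < DRILL_S).astype(float)          # one packet per second while drilling
    airborne_packets = np.convolve(releases, survival)[: t.size]
    pm10 = airborne_packets * mass_per_packet_kg / volume_m3 * 1e9
    # rescale so the simulated peak matches the experimental peak (the factor k_1 above)
    return t, pm10 * (PM10_EXP_PEAK / pm10.max())

def scrubbing_time_min(t, pm10):
    """Minutes after drilling ends until PM10 falls below 5% of its peak value."""
    below = t[(t > DRILL_S) & (pm10 < 0.05 * pm10.max())]
    return (below[0] - DRILL_S) / 60.0 if below.size else float("nan")

# Exercise the code with made-up, exponentially distributed residence times.
rng = np.random.default_rng(0)
t, pm10 = pm10_history(rng.exponential(300.0, size=2000), mass_per_packet_kg=1e-9)
print(f"peak PM10 = {pm10.max():.1f} ug/m^3, scrubbing time ~ {scrubbing_time_min(t, pm10):.1f} min")
```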
Airflow simulations were performed in the baseline case (i.e., zero flow through the devices). The air velocity magnitude was investigated along two lines, namely a vertical line from the patient's mouth to the ceiling and a horizontal line crossing the patient's mouth laterally. Reasonable agreement was observed for the air velocity magnitude predicted by the three meshes (Fig. 4). A colormap of air velocity magnitude in the baseline case showed that air velocity was almost 0 m/s in most of the space in the dental office, except near the ceiling due to the inlet vent and near the patient's mouth due to the plume of air and particles emanating from the patient's mouth (Figs. 5(A) and 6(A)). Likewise, the plot of air velocity magnitude along the horizontal line showed that the air was quiescent in most of the dental clinic, except near the patient's mouth (Fig. 4(B)). This flow pattern was captured by all three meshes (Fig. 4). However, some differences in air velocity magnitude were observed among the meshes, especially along the vertical line (Fig. 4(A)). This suggested that the airflow field was not entirely mesh-independent. Discrete phase model (DPM) simulations were also performed to quantify the impact of mesh resolution on the predicted PM10. These simulations were performed in the case with extra-oral suction device #1 using four different flow rates (0, 250, 500, and 1000 L/min). Reasonable agreement was observed for the three mesh resolutions, with the peak PM10 observed at the end of the dental drilling procedure (time = 15 min) decreasing from the baseline of 30 to 0 μg/m³ when the flow rate of extra-oral suction device #1 exceeded 500 L/min in all three meshes (Fig. 7). However, some variability was observed between the coarse, medium, and fine meshes when the flow rate of extra-oral suction device #1 was 250 L/min (Fig. 7). Based on these results, we concluded that the fine mesh provided an airflow field and particle tracking results that were nearly, but not entirely, mesh-independent. The fine mesh was the highest resolution that could be created in our local workstation due to a memory limitation (32 GB of RAM memory). Therefore, all subsequent CFD simulations were performed in the fine mesh. A parameter sensitivity analysis was also performed to investigate how changes in the number of release points used in the DPM simulations affected the predicted PM10. Simulations were performed with 500, 1000, 2000, 4000, or 8000 release points in the baseline case. Changes in the number of release points had a negligible impact on the temporal evolution of PM10 (Fig. 8). Thus, a value of N_rp = 2000 release points was used in all subsequent simulations.

Fig. 7: Grid independence study showing that PM10 reduced to zero when the flow rate of extra-oral suction device #1 exceeded 500 L/min. This result was independent of the mesh density, but PM10 was sensitive to mesh density for a flow rate of 250 L/min.
Fig. 8: Parameter-sensitivity study showing that PM10 was nearly independent of the number of particle release points (N_rp).

Airflow pattern
It is important to understand the airflow pattern inside the dental clinic since this is a crucial factor determining the particle trajectories and evolution of PM10. Figures 5 and 6 compare the colormap of air velocity magnitude in the baseline case and in the cases with the extra-oral suction devices and HEPA filtration unit operating.
In all cases, airflow coming from the inlet vent on the ceiling flows along the ceiling until it reaches the walls, where low-velocity flow vortices are formed. The tendency of an airflow jet emerging from an orifice to flow along an adjacent surface is known as the "Coanda effect", and is explained by the ambient pressure pushing the lower pressure incoming jet against the ceiling. Consequently, air velocity in the center of the room has a low magnitude except for the air jet emanating from the patient's mouth (Figs. 5(A) and 6(A)). Simulations with the extra-oral suction devices #1 and #2 operating at a flow rate of 200 L/min show a significant reduction in the jet emanating from the patient's mouth because the air is sucked by the devices (Figs. 5(B) and 5(C)). In contrast, the simulation with the HEPA filtration unit operating at a flow rate of 4333 L/min shows an air jet emanating from the patient's mouth similar to the baseline condition, but there is an increase in the air velocity near the ceiling as the HEPA filtration unit blows filtered air from its top surface toward the ceiling (Fig. 5(D)). Overall, the elliptical extra-oral suction device #2 had the best performance (Figs. 5(C) and 6(C)) in terms of reducing the air jet emanating from the patient's mouth.

Aerosol concentration in the dental clinic
In the baseline case with a ventilation of 6.3 ACH, PM10 increased steadily and reached 30 μg/m³ after 15 min of dental drilling, and then declined steadily reaching 0.2 μg/m³ at the end of the 30 min resting period (Fig. 9(A)). Increasing the room ventilation from 6.3 to 18 ACH reduced the peak PM10 from 30 to 10.1 μg/m³ (Figs. 9(A) and 10(A)). This increase in room ventilation reduced the scrubbing time from 20.5 to 4.5 min (Fig. 11). In the case with the extra-oral suction device #1, increasing the device flow rate had almost no impact on PM10 for device flow rates below 200 L/min (Figs. 9(B) and 10(B)). When the device flow rate exceeded 200 L/min, a sharp reduction in aerosol concentration was observed with PM10 decreasing to 0 μg/m³ for flow rates above 400 L/min. A similar behavior was observed in the case with extra-oral suction device #2, except that the sudden reduction in PM10 was observed at a smaller device flow rate and PM10 decreased to 0 μg/m³ for device flow rates above 150 L/min (Figs. 9(C) and 10(C)). In the case with the HEPA filtration unit, operating the device at its lowest speed (4333 L/min) reduced the peak PM10 from the value of 30 μg/m³ observed in the baseline case to 18.1 μg/m³ (Figs. 9(D) and 10(D)). Increasing the flow rate of the HEPA filtration unit to its maximum speed (11,497 L/min) further reduced the peak PM10 to 6.0 μg/m³. The scrubbing time was predicted to decrease from 9.8 min at the lowest speed to 1.1 min at the highest speed of the HEPA filtration unit (Fig. 11). The highest speed of the HEPA filtration unit was equivalent to 20.3 ACH. When combined with the room ventilation of 6.3 ACH, the rate of air replacement of the dental clinic was 26.6 ACH. Consequently, the scrubbing time of 1.1 min predicted for the highest speed of the HEPA filtration unit was lower than the scrubbing time of 4.5 min predicted for the highest room ventilation of 18 ACH in the absence of aerosol mitigation devices (Fig. 11).

Fig. 11: Scrubbing time (i.e., amount of time required to remove 95% of the aerosol released during the dental procedure) as a function of the room ventilation and the flow rate of the HEPA filtration unit expressed in ACH.
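For context, these scrubbing times can be compared against the textbook well-mixed-room estimate, in which the excess concentration decays exponentially at the air-change rate and 95% removal takes ln(20)/ACH hours. The comparison below is our own back-of-the-envelope benchmark, not a result from the study; the CFD scrubbing times quoted above are shorter than the ideal well-mixed values, suggesting that much of the aerosol is removed before it mixes fully into the room.

```python
import math

def well_mixed_scrub_min(ach: float, removal_fraction: float = 0.95) -> float:
    """Time for an ideally mixed room to remove the given fraction of an aerosol."""
    return -math.log(1.0 - removal_fraction) / ach * 60.0   # hours -> minutes

# CFD scrubbing times quoted above, keyed by the effective air-change rate
cfd_scrub_min = {6.3: 20.5, 18.0: 4.5, 26.6: 1.1}
for ach_value, cfd_min in cfd_scrub_min.items():
    ideal = well_mixed_scrub_min(ach_value)
    print(f"{ach_value:5.1f} ACH: well-mixed estimate {ideal:5.1f} min, CFD {cfd_min:4.1f} min")
```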
To further interpret these results, we quantified the volume-averaged air velocity in a 25-mm-diameter spherical region in front of the patient's mouth. This volume-averaged air velocity, V, was approximately linearly related to the flow rate of the extra-oral suction devices (Figs. 12(B) and 12(C)). Increasing the flow rate of the extra-oral suction device #1 increased the volume-averaged air velocity from 0.33 to 1.7 m/s when the device flow rate increased from 0 to 800 L/min (Fig. 12(B)). Similarly, increasing the flow rate of the extra-oral suction device #2 increased the volume-averaged air velocity from 0.33 to 4.1 m/s when the device flow rate increased from 0 to 800 L/min (Fig. 12(C)). The greater air velocity generated by the extra-oral suction device #2 is explained by the smaller cross-sectional area of its suction cup. In both cases, PM10 reduced to 0 μg/m³ when the volume-averaged air velocity in front of the patient's mouth exceeded approximately V = 0.9 m/s (Figs. 13(B) and 13(C)). Smaller changes in air velocity were observed in front of the patient's mouth in simulations varying the room ventilation and the flow rate of the HEPA filtration unit. Specifically, increasing room ventilation from 6.3 to 18 ACH increased V only from 0.33 to 0.42 m/s (Fig. 12(A)), and increasing the flow rate of the HEPA filtration unit from 0 to 11,497 L/min increased V only from 0.33 to 0.36 m/s (Fig. 12(D)). This illustrates the different mechanisms of the aerosol mitigation strategies: the extra-oral suction devices scrub the aerosol plume at its source, while room ventilation replaces the aerosol-laden air in the dental clinic with clean air, and the HEPA filtration unit filters the aerosol-laden air and returns the filtered air to the room.

Discussion
To our knowledge, this is the first study to apply CFD to quantify the relative efficacy of extra-oral suction devices, HEPA filtration units, and room ventilation to reduce dental aerosols. A major innovation of this study is the development of a numerical method to compute PM10. Previous studies have applied CFD to investigate particle dispersion in dental clinics (Komperda et al., 2021) and how to optimize ventilation in hospital rooms (Méndez et al., 2008; Bhattacharyya et al., 2020), but these studies only reported the aerosol removal efficiency for specific particle sizes or the air replacement rate. PM10 is likely a better metric to assess the risk of disease transmission than a single particle size because PM10 is the total concentration of airborne particles with aerodynamic diameters below 10 μm. A recent review of the assumptions in numerical studies of airborne virus transmission stated that it is essential to incorporate the particle size distribution for realistic predictions of disease transmission risk (Pourfattah et al., 2021). Experimental studies have reported that extra-oral suction devices are effective strategies to reduce aerosol dispersion in dental clinics, with some studies reporting greater than 90% reduction in aerosol concentration (Allison et al., 2022; Fennelly et al., 2022) while other studies reported a more modest reduction of 38%-86% (Ou et al., 2021; Remington et al., 2022). These experimental studies cannot be directly compared to our CFD results because they did not report PM10, but rather used different metrics of aerosol concentration, such as particle counts.
Nevertheless, these experimental observations contrast with our prediction of 100% aerosol removal efficiency for device flow rates above 400 L/min (Fig. 10). Importantly, the distance from the suction cup to the mouth was 0-2 cm in our study, but ranged from 10 to 20 cm in these experimental studies. The crucial importance of the suction cup position was demonstrated by Ou et al. (2021), who reported that when the suction cup was moved 4 cm further away (from 14 to 18 cm from the mouth), its capture efficiency dropped from 74% to 38% at a device flow rate of 1670 L/min, and from 96% to 56% at a device flow rate of 3653 L/min. Additional studies are needed to characterize how the suction cup position affects the aerosol removal efficiency of extra-oral suction devices. Several limitations of this study must be acknowledged. First, our numerical methods did not account for thermal and humidity effects, such as evaporating droplets and changes in air density, that may affect the evolution of PM10. Second, our CFD simulations were not entirely mesh-independent. Nevertheless, our mesh density test suggests that the main conclusions of this study are valid. Third, this study did not investigate systematically how the shape of the suction cup of extra-oral suction devices affects the aerosol removal efficiency. The shapes of the extra-oral suction devices #1 and #2 were different (circular vs. elliptical), but their positions were also different (lateral vs. inferior to the mouth). The fact that the device flow rate at which PM10 decreased to 0 μg/m³ had a similar magnitude for the two extra-oral suction devices (Fig. 10) suggests that the device flow rate is the most important parameter determining the efficacy of extra-oral suction devices. This hypothesis is supported by a previous CFD study which found that the shape of the suction cup has a relatively small impact on the aerosol removal efficiency (Liu et al., 2022). Additional studies are needed to investigate the importance of the shape of the suction cup. Fourth, our study did not quantify spatial variations in PM10 that can lead to higher aerosol concentrations near the patient and dental healthcare workers. We estimated PM10 based on the duration of particle trajectories without knowledge of the actual path that each particle traveled. In other words, our computational method assumed an equal concentration throughout the room (i.e., a well-mixed gas), when in reality the aerosol concentration is expected to be higher near the patient. Holliday et al. (2021) performed a crown preparation on a mannequin with fluorescein dye introduced either into the mannequin's mouth or the irrigation system. Filter papers placed up to 6 m from the mannequin demonstrated contamination at large distances from the mannequin, but with higher contamination in the mannequin's vicinity. Grenier (1995) used agar plates to investigate bacterial contamination in a multi-chair clinic at a dental school. Bacterial contamination was detected 11 m away from where dental activity occurred, but at a lower level than in the area where patients were treated. While these studies confirm the expectation that aerosol concentration is higher near the patient, our review of the literature suggests that most experimental studies sampled the air at a single site and did not investigate spatial variations in aerosol concentration. Future studies should investigate the degree of spatial variation in PM10, which can influence the transmission risk of airborne pathogens.
Another limitation of this work is that only a single location of the HEPA filtration unit was investigated. A previous CFD study by Chen et al. (2010) demonstrated that the aerosol removal efficiency of an air cleaner was determined by the combination of its location and direction of airflow. In our study, the HEPA filtration unit was positioned at the corner of the room based on the dentists' judgment that this location was the least intrusive for clinical care. The HEPA filtration unit we investigated pulls air from its sides and returns the filtered air through its top. Additional studies are needed to investigate how the design and location of HEPA filtration units impact their aerosol removal efficiency. Finally, the efficacy of increasing room ventilation to reduce aerosols is dependent on the design of the dental clinic, including the positions of the dental chair, inlet vent, and outlet vent. Memarzadeh and Xu (2012) reported that the path from the contamination source to the outlet vent was more important than the ventilation rate in determining contaminant removal.

Conclusions
In summary, to our knowledge, this is the first CFD study to quantify the evolution of PM10 in a dental clinic after a dental procedure. We investigated the efficacy of three strategies to reduce aerosol dispersion in dental clinics, namely increasing room ventilation, using a portable HEPA filtration unit, or using extra-oral suction devices. In the baseline simulation, PM10 reached 30 μg/m³ after 15 min of dental drilling, and it took 20.5 min for PM10 to fall below 1.5 μg/m³ (i.e., to remove 95% of the released aerosols) in a dental clinic with a ventilation of 6.3 ACH. The scrubbing time was reduced to under 5 min when the air exchange rate exceeded 15 ACH, by either increasing the room ventilation or increasing the flow rate of the HEPA filtration unit. The CFD simulations also demonstrated that extra-oral suction devices can be used to remove 100% of the particles released by using a suction cup in close proximity to the patient's mouth. This 100% aerosol removal efficiency is achieved when the flow rate of the extra-oral suction device exceeds 400 L/min, which corresponds to an air velocity of about 1 m/s in front of the patient's mouth. Additional research is needed to quantify how aerosol dispersion in dental clinics is affected by factors not investigated in this study, such as the location of the HEPA filtration unit and the distance from the extra-oral suction device's suction cup to the patient's mouth.
Consideration of hazardous and especially hazardous hydrometeorological impacts in design of buildings and structures of nuclear power plants

External impacts of the hydrometeorological origin have a significant influence on the safety level of objects of use of atomic energy (OUAE), including nuclear power plants (NPP). Therefore, the existing NPP-related safety regulations require such impacts to be considered at all stages of the NPP life cycle. It is important to make decisions on considering or ignoring certain external impacts while designing NPP buildings and structures. The main criterion for such decisions is the probability of a non-project accident associated with the release of radionuclides into the environment when an extreme phenomenon occurs. The aim of this study is to develop a concept for refining the regulatory requirements that consider hydrometeorological factors in the organization of NPP engineering protection. Criteria for consideration of hazardous and especially hazardous hydrometeorological impacts in the design of NPP buildings and structures were analyzed, and recommendations for refinement of the regulatory requirements, considering hydrometeorological factors in the organization of NPP engineering protection, were developed.

Introduction
External impacts of the hydrometeorological origin have a significant influence on the safety level of nuclear facilities, including NPPs [1,2]. International and Russian OUAE safety codes and regulations state that such impacts shall be considered at all stages of their life cycle [3,4]. These codes and regulations generally require consideration of a wide range of hydrological and meteorological phenomena and processes. However, they do not provide clear criteria determining the need for consideration of the relevant hydrological and meteorological loads in calculations. It is therefore necessary to refine the regulatory requirements for consideration of hydrometeorological factors in the provision of NPP engineering protection. The abovementioned normative technical documents [3,4] require consideration of the external hydrometeorological factors listed in Table 1, which also indicates the general impacts of such factors on NPP buildings and structures. According to the table, the potential impacts of the listed factors are quite diverse. However, the mentioned extreme hydrometeorological phenomena and processes can cause a non-project NPP accident only on extremely rare occasions. Taking into account the importance of NPP radiation safety assurance, even such cases require detailed analysis of the possible consequences of a hypothetical accident.

Probabilistic safety criteria for NPP
The decision to consider or ignore certain hydrometeorological factors shall, according to [4], be based on the threshold probability P0 (essentially, a recurrence), which is 10⁻⁴/year per reactor. However, taking into account the high danger of tornados and their impacts on NPPs, their threshold probability is assumed equal to 10⁻⁷/year per reactor [5,6]. Lightning impacts on NPP buildings, structures and infrastructure also have specific features. In many instances, the parameters of their impacts considered in design shall be determined at the expert level. Collection and analysis of input data for statistical processing and determination of design hydrometeorological characteristics are carried out in the process of engineering hydrometeorological surveys [7]. Statistical data processing is performed using standard methods (e.g., refer to [8,9]).
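The design hydrometeorological characteristic obtained from such surveys is, in essence, the value of the variable whose annual exceedance probability equals the threshold probability (10⁻⁴/year for most factors). As a hedged illustration of what that statistical processing might look like, the sketch below fits a Gumbel extreme-value distribution to a synthetic series of annual maxima and extracts the 10⁻⁴/year design value; the choice of distribution and the synthetic data are our own assumptions, since the paper only refers to standard methods [8,9].

```python
import numpy as np
from scipy import stats

# Synthetic annual-maximum wind speeds (m/s) standing in for survey data.
annual_maxima = stats.gumbel_r.rvs(loc=25.0, scale=4.0, size=60, random_state=42)

# Fit a Gumbel distribution (a common model for annual extremes) and take the
# quantile whose annual exceedance probability equals the threshold P0 = 1e-4.
loc, scale = stats.gumbel_r.fit(annual_maxima)
p0 = 1e-4
design_value = stats.gumbel_r.ppf(1.0 - p0, loc=loc, scale=scale)
print(f"design wind speed with {p0:.0e}/year exceedance: {design_value:.1f} m/s")
```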
In Russia and other IAEA member countries, NPP safety requirements are based on non-exceedance of the non-project accident probability with the accidental release limit of PG = 10⁻⁷/year [4]. This value is the main criterion of NPP safety in relation to external impacts of different origin. The non-project accident probability caused by a specific hydrometeorological factor is

P = P0 · PA,    (1)

where PA is the dimensionless probability of an accident resulting in the maximum permissible release into the environment upon occurrence of the impact event for this factor. With PA = 1 for tornado impacts and PA > 10⁻³ for impacts of other hydrometeorological factors, the non-project accident probability will reach or exceed the main safety criterion of PG = 10⁻⁷/year. In the case of tornado impacts, a non-project accident with a release of radionuclides does not always occur. This is due to the fact that, according to the tornado zoning scheme of the ex-USSR territory, the estimated characteristics of the maximum probable tornado with the NPP impact probability of 10⁻⁷/year can be relatively weak [5,6,10]. As for impacts of other hydrometeorological factors, their consideration shall be formally based not on the occurrence of the threshold probability of P0 = 10⁻⁴/year, as provided for by the federal regulations [4], but on the following condition:

P0 · PA ≥ PG.    (2)

To ensure appropriate NPP protection against external impacts, including those of the hydrometeorological origin, engineering measures shall be implemented for NPP protection. If the threshold probability of P0 = 10⁻⁴/year is still considered, NPP engineering protection is required when the non-project accident probability resulting from the impact of a specific hydrometeorological factor, PA, exceeds 10⁻³. In its turn, the maximum estimated value of a relevant hydrological or meteorological factor, e.g., water level or wind speed, is the quantile of probability level 1 − 10⁻⁴ per year and is determined based on the statistical distribution of this value. In addition, this greatly varying quantile will by no means always result in a non-project accident with a probability above 10⁻³. Otherwise stated, the potential hazard from the impact of such a meteorological factor can be overestimated and cause excessive material costs for NPP engineering protection. To avoid this, it is necessary to proceed not from the assumption of the threshold level of P0 = 10⁻⁴/year, but from the following condition:

P0(xC) · PA(xC) = PG.    (3)

In this formula, xC is the estimated extreme value of the variable x (the quantile of probability level 1 − P0 × year). The dependence of the non-project accident probability PA(xC) on the impact of a given hydrological or meteorological factor x is derived from construction calculations. The cumulative distribution function 1 − P0(x) × year of the variable concerned is determined based on the results of statistical processing of observation data. Therefore, it is generally not difficult to determine the values of xC, P0 and PA. It may turn out that equation (3) has no solution, if the variation range of the hydrological or meteorological variable x does not result in a potential non-project accident with a probability above 10⁻³. This means that in some cases the danger of occurrence of a certain hydrological or meteorological event may be overestimated. The engineering protection of NPPs from hydrometeorological factors provides for significant expenditures for new construction of each object as well as for its expansion or reconstruction.
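Under the relations above, the decision logic reduces to checking whether the product P0(x)·PA(x) ever reaches PG and, if it does, reading off the design level xC beyond which the residual accident rate drops below the target. The sketch below implements that logic with invented placeholder curves for the exceedance probability and the fragility; only the 10⁻⁷/year criterion and the overall reasoning come from the text.

```python
import numpy as np

P_G = 1e-7   # /year, limit on the probability of a non-project accident

# Placeholder models for a generic hydrometeorological variable x (e.g., flood level, m):
def annual_exceedance(x):       # P0(x): annual probability that level x is exceeded
    return np.exp(-(x - 2.0) / 0.5)

def accident_given_impact(x):   # PA(x): accident probability given that level x occurs
    return 1.0 / (1.0 + np.exp(-(x - 6.0)))

x = np.linspace(2.0, 12.0, 5000)
risk = annual_exceedance(x) * accident_given_impact(x)    # P = P0(x) * PA(x), Eq. (1)

if risk.max() < P_G:
    print("No engineering protection required: P0*PA stays below PG for all x.")
else:
    x_c = x[risk >= P_G].max()    # design level from the condition of Eq. (3)
    print(f"Engineering protection should cover loads up to x_C ~ {x_c:.2f} m")
```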
In some cases it will not be necessary to design expensive protection against certain factors, and this can be done without reducing NPP safety. In this regard, it seems advisable to refine the regulatory requirements for consideration of hydrometeorological factors, as well as other factors of natural origin, in designing an NPP.

Conclusions
• Based on the applicable NPP safety codes and regulations, the criteria for consideration of hazardous and especially hazardous hydrometeorological impacts in NPP design were analyzed.
• Recommendations for refinement of the regulatory requirements were developed to consider hydrometeorological factors in the organization of NPP engineering protection against such factors.
• It is noted that there may be situations in which there is no need to design engineering protection against certain factors, without any reduction in NPP safety.
Effect of Maturity on Phenolics (Phenolic Acids and Flavonoids) Profile of Strawberry Cultivars and Mulberry Species from Pakistan

In this study, we investigated how the extent of ripeness affects the yield of extract, total phenolics, total flavonoids, individual flavonols and phenolic acids in strawberry and mulberry cultivars from Pakistan. In strawberry, the yield of extract (%), total phenolics (TPC) and total flavonoids (TFC) ranged from 8.5–53.3%, 491–1884 mg gallic acid equivalents (GAE)/100 g DW and 83–327 mg catechin equivalents (CE)/100 g DW, respectively. For the different species of mulberry, the yield of extract (%), total phenolics and total flavonoids ranged from 6.9–54.0%, 201–2287 mg GAE/100 g DW and 110–1021 mg CE/100 g DW, respectively, and varied significantly as fruit maturity progressed. The amounts of individual flavonols and phenolic acids in selected berry fruits were analyzed by RP-HPLC. Among the flavonols, the content of myricetin was found to be high in Morus alba (88 mg/100 g DW), the amount of quercetin was high in Morus laevigata (145 mg/100 g DW), while kaempferol was highest in the Korona strawberry (98 mg/100 g DW) at the fully-ripened stage. Of the six phenolic acids detected, p-hydroxybenzoic and p-coumaric acid were the major compounds in the strawberry. M. laevigata and M. nigra contained p-coumaric acid and vanillic acid while M. macroura and M. alba contained p-hydroxy-benzoic acid and chlorogenic acid as the major phenolic acids. Overall, a trend towards an increase in the percentage of extraction yield, TPC, TFC, flavonols and phenolic acids was observed as maturity progressed from un-ripened to fully-ripened stages.

Introduction
Soft fruits such as strawberries and mulberries are gaining greater recognition among other fruit crops due to their high economic and nutritional value. Recently, these fruits have gained much attention as an ingredient of functional foods because they are a potential source of valuable bioactives such as flavonoids, phenolic acids and free radical scavengers with potential health benefits [1,2]. The quality of soft fruits, in terms of taste, functional food value and consumer's acceptance, is primarily based on their biochemical composition [3,4]. Flavonoids are broadly dispersed in the plant kingdom, accounting for over half of the 8000 naturally occurring phenolic compounds [5]. Among the phytochemicals in fruit, phenolic acids and flavonols are regarded as major functional food components and are thought to contribute to the health effects of fruit-derived products due to the prevention of various diseases associated with oxidative stress, such as cancers, cardiovascular diseases and inflammation [6,7]. Phenolic acids constitute about one-third of the dietary phenols and are present in plants in free and bound forms [8]. Maturation of fruit or other plant tissues involves a series of complex reactions, which leads to changes in the phytochemistry of the plants. Two distinct phenomena of change in phenolic contents have been observed during maturation: a steady decrease [9,10] or a rise at the end of maturation [11-14]. The content of phenolics in fruit is affected by the degree of maturity at harvest, genetic differences (cultivar), pre-harvest environmental conditions, and post-harvest storage conditions and processing [15]; however, their concentration varies from plant to plant or even in different organs of the same plant at different ripening stages [16,17]. The commercial strawberry fruit (Fragaria × ananassa Duch.)
belongs to the Rosales order of the Rosaceae family [18]. It is one of the most widely consumed fruits worldwide, either as a fresh fruit, as processed products or even as dietary supplements. Worldwide, the production of strawberries has increased steadily during the last 40 years, with most (>95%) of it being located in the northern hemisphere. The USA is the leading producer, followed by Spain, Turkey and the Russian Federation. China is nowadays a direct competitor for most of the major strawberry producing regions, with an estimated production of 1.3 million metric tons (MMT) for the period of 2010 [19]. Strawberry growth and development are characterized by changes in color, size, sweetness, acidity, and aroma [20,21]. Four or five different maturity stages for strawberry fruit are described in the literature according to the development of the non-ovarian receptacle tissue [22,23]. Marked compositional variability in the content of phenolics in berries is affected not only by varietal or cultivar (genetic) differences, season, and climate, but also by the degree of maturity at harvest [24-26]. Mulberry belongs to the genus Morus of the family Moraceae. Morus has 24 species with one subspecies and comprises at least 100 known varieties. Black (M. nigra), red (M. rubra), and white mulberries (M. alba) are extensively grown in Pakistan, northern India, and Iran. These are known by the Persian-derived names toot (mulberry) or shahtoot (King's or "Superior" mulberry). Shahtoot (M. laevigata), particularly the white variety, is a popular hybrid species in Pakistan. Mulberries are grown at considerably high altitudes in the Himalaya-Hindu Kush region and are widely cultivated in northern regions of Pakistan [27,28]. In Pakistan, shahtoot is valued for its delicious fruit, which is eaten fresh as well as in dried forms, and consumed in marmalades, juices, and liquors, and used for natural dyes and in the cosmetic industry [29]. The deep colored mulberry fruits are rich in phenolic compounds, including flavonoids, carotenoids and anthocyanins [30-32]. Previous studies, mostly conducted on ripened strawberry fruit, reported significant amounts of phenolic acids and flavonols. The major phenolic acids in strawberry are neochlorogenic acid and p-coumaryl quinic acid [33] as hydroxycinnamic acid derivatives [34,35]. However, small amounts of chlorogenic acid [33] and ferulic acid [36] have also been reported. Hydroxybenzoic acids (p-hydroxybenzoic acid) were only found in small amounts in strawberry [36]. According to Franke et al. [37] and Olsson et al. [38], kaempferol was found to be the major flavonol, while Sultana and Anwar [39] reported myricetin to be the main flavonol compound in selected cultivars of strawberry. Studies of ripening are of special interest because they allow the identification of the optimum point of maturity for harvesting and enable delivery of fruit to consumers in its best condition in terms of nutritional and functional properties. Information regarding the changes in particular phenolic constituents during fruit maturation is limited. In this study, we looked at how the accumulation of phenolic acids and flavonols in the strawberry and mulberry fruits is affected by maturation. The results will be informative and novel with regard to the quantification of specific flavonols and fruit materials considering their native region and the effect of maturity.
This study will be valuable for researchers in providing baseline data for future detailed characterization of other bioactives in these fruits, and thus a step forward towards their potential commercialization for nutraceutical and anti-oxidant applications through value addition.

Effect of Maturity on the Yield of Extract (%), Total Phenolics and Total Flavonoid Content in Strawberry and Mulberry Fruits
The results showed that the yield of extract (%), total phenolics (TPC) and total flavonoid content (TFC) in the strawberry and mulberry fruit cultivars at different maturity stages varied considerably (Table 1). As the fruit maturity progressed, the yield of extract (%), TPC and TFC of strawberry fruit increased from 8.5 to 53.3%, 491 to 1884 mg gallic acid equivalents (GAE)/100 g DW and 83-327 mg CE/100 g DW, respectively. Similar to our present finding, an increasing trend in total phenolics (216-290 mg GAE/100 g FW) as fruit maturity progressed in two strawberry cultivars has been reported by Pineli et al. [13]. Bohm et al. [40] found TPC values between 1800 and 2200 mg GAE/100 g, while Piljac-Zegarac and Samec [41] reported values as high as 335 mg GAE/100 g FW in ripe strawberries. In another study conducted by Lin and Tang [32], the TFC (14.6 mg QE/100 g FW) of ripened strawberries was found to be in close agreement with that determined in fully ripened samples of the present work.

Notes to Table 1: Values (mean ± SD) are averages of three samples of each fruit, analyzed individually in triplicate (p < 0.05); different small letters in superscript represent significant differences between ripening stages; A, as gallic acid equivalents (mg GAE/100 g DW); B, as catechin equivalents (mg CE/100 g DW).

Significant variation was also observed in the yield of extract (%), TPC and TFC of mulberry fruit. The highest yield of extract (%) was obtained for M. laevigata (12-54%) while the lowest was found for M. nigra (11-28%). The concentration of total phenolics (TP) was highest in M. nigra (395-2287 mg GAE/100 g DW) while it was lowest in M. laevigata (201-1803 mg GAE/100 g DW). TPC (223-257 mg GAE/100 g DW) and TFC (0.06-6.54 mg CE/100 g DW) as studied by Bae and Suh [42] in five Korean mulberry cultivars (Pachungsipyung, Whazosipmunja, Suwonnosang, Jasan, and Mocksang) were somewhat lower than our present results. Lin and Tang [32] found that Morus alba had 1515 mg GAE/100 g DW of TP. Similarly, in another study by Ercisli and Orhan [28], the amount of TP in different species of mulberry fruit varied from 181 (M. alba) to 1422 (M. nigra) mg GAE/100 g FW and the total flavonoids (TF) from 29 (M. alba) to 276 (M. nigra) mg QE/100 g FW. As investigated previously by Imran et al. [43], the contents of TP in M. laevigata varied considerably (1100-1300 mg/100 g FW). TPC of the Morus alba fruit from Turkey ranged from 18.16 to 19.24 μg GAE/mg [44]. With few exceptions, these results are all within the range of our present data. The differences in phenolics (TPC and TFC) among different mulberry fruits might be linked to their varied genetic makeup as well as the extent of fruit maturity and the ecological conditions of the harvest [26]. It has previously been reported that plant genotype [45], cultivation site and extraction technique [46] affect the total phenolic contents in berry group fruits. Overall, a trend towards an increase was observed in the yield of extract (%), TPC and TFC as strawberry and mulberry fruits progressed from un-ripened to fully-ripened stages.
Likewise, Aminah and Anna [47] described the effect of different ripening stages on bitter gourd and observed an increase in TP as maturity progressed. In agreement with our findings, several authors reported an increase in the concentration of TP in different fruits such as Khirni [12], sweet cherry [11], Morinda citrifolia [14] and strawberry [13] as maturity progressed. However, an inverse trend for TPC was reported by some other authors in mushrooms [48] and strawberry fruits [9,10].

Effect of Maturity on Quantification of Flavonols and Phenolic Acids
The data for the quantitative analysis of individual flavonols and phenolic acids in strawberry cultivar fruits at different maturity stages are presented in Table 2. Kaempferol was the dominant flavonol in strawberry, followed by myricetin and quercetin (Figures 1 and 2). Kaempferol levels in the strawberry cultivar fruit during three maturity stages ranged from 19.9 to 98.1 mg/100 g DW. The kaempferol content of strawberry fruit in the present investigation was found to be higher than that previously reported, namely 0.6-1.3 mg/100 g [37] and 10.8-43.7 mg/100 g [38] in fully-ripened strawberry fruits. In the present analysis of strawberry, the amounts of myricetin and quercetin varied from 12.8-28.5 and 1.4-11.2 mg/100 g DW, respectively. In strawberry (Var. Korona and Tufts) the contents of flavonols (kaempferol, myricetin and quercetin) mainly increased as the fruit maturity progressed from un-ripened to fully-ripened stages. Lugasi and Hovari [49] found that quercetin was present in strawberry at 5.3 mg/100 g whereas myricetin was present at 99.4 mg/100 g, and kaempferol was not detected in strawberry samples. Cordenunsi et al. [3] reported the contents of quercetin and kaempferol in three strawberry cultivars to be in the range of 3.9-6.8 mg/100 g and 1.3-2.1 mg/100 g FW, respectively. Kevers et al. [50] reported that strawberry contained kaempferol, quercetin and myricetin at the levels of 99, 123 and 979 µg/100 g FW, respectively. Fruits are an important source of dietary polyphenols in human nutrition and contribute significantly to the daily intake of polyphenols (32% of the daily intake of flavonols in Finland) [51]. Studies revealed that the total polyphenol content of fruit (12-50 mg/g DW) is much higher than that of vegetables (0.4-6.6 mg/g DW) and cereals (0.2-1.3 mg/g DW) [52]. Previous studies reported methanol-soluble cinnamic acid and p-hydroxycinnamic acid in strawberries to be the major components, followed by caffeic acid and ferulic acid [2,51]. In another study, p-coumaric acid was found to be the predominant hydroxycinnamic acid, as sugar esters in strawberries and raspberries and in free form in cloudberries [53]. As reported above [54], p-hydroxybenzoic and p-hydroxycinnamic were the most abundant phenolic acids in strawberry fruit, and occurred in almost equal quantities (ranging from 64.9-110.5 mg/100 g and 64.2-110.4 mg/100 g, respectively), which is comparable with our present results. The amount of p-coumaric acid notably increased during maturation in strawberry cultivars [55]. The concentrations of chlorogenic and p-coumaric acids also increased during ripening of strawberry [38]. Ndri et al. [56] studied the phenolics in Ivorian Gnagnan (Solanum indicum L.) berries at different maturity stages and found that as the maturity progressed, the amount of phenolic acids increased. These trends are similar to those displayed in our present study.
Hybrid strawberry cultivated in Turkey [55] contained 4-58 mg/kg FW of p-coumaric acid, while Ecuador strawberry [57] contained 18 mg/kg FW. In another study of six Finnish strawberry types [46], the content of p-coumaric acid was 9-41 mg/kg FW, showing values comparable with our present study. Hernanz et al. [58] assessed statistically significant differences (p < 0.001) of phenolic acids among five strawberry cultivars grown in two different soilless systems. Ellagic and p-coumaric acids were the major phenolic acids found in the Finnish strawberry as reported by Hakkinen et al. [51]. Similar results were reported by Maatta-Riihinen et al. [53] and Cordenunsi et al. [3] in a commercial strawberry harvested from Brazil. In the case of p-coumaric acid, its level varied from 1.43 µg/g (cv. Diamante-CS) to 25.47 µg/g (cv. Ventana-CS) [58]. When comparing flavonoids, the cultivars analyzed in the present study were more promising in relation to beneficial effects on health, due to their higher content of flavonols. A wide variation of flavonoids in strawberry cultivars has been reported in the literature. These variations may be correlated with the varying genetic makeup of the varieties tested as well as with the post-harvest conditions involved. In another related study, the effect of storage conditions on the flavonoid content was investigated, and the amount of quercetin was found to be increased while kaempferol and myricetin were decreased during storage at −20 °C [59]. Variation in flavonol content in fruits is strongly influenced by extrinsic factors such as fruit type and growth, season, climate, degree of ripeness, food preparation, and processing [60-63]. The data in Table 3 depict the composition of flavonols and phenolic acids of mulberry fruits at different maturity stages. Morus laevigata had the highest amount of total flavonols (quercetin, myricetin, kaempferol) followed by Morus nigra, M. alba and M. macroura. Kaempferol and quercetin amounts were highest in M. laevigata while myricetin was predominant in M. alba. The concentration of kaempferol increased with ripening, ranging from 9.8 mg/100 g (un-ripened) to 56.1 mg/100 g (fully-ripened), and quercetin ranged from 7.0 mg/100 g (un-ripened) to 145.7 mg/100 g (fully-ripened) for M. laevigata. The myricetin content increased from un-ripened to semi-ripened stages (11.5-22.3 mg/100 g), and then slightly decreased at the fully-ripened stage (20.0 mg/100 g). In M. macroura and M. alba, the concentration of kaempferol decreased from the un-ripened to fully-ripened stage while the reverse trend was observed for myricetin. Meanwhile, M. alba showed a decreasing trend (1.3-0.7 mg/100 g) as the fruit progressed from un-ripened to fully-ripened stage. The level of flavonols (kaempferol, quercetin, myricetin) in M. nigra increased from un-ripened to semi-ripened stage (8.56-56.62, 8.10-43.46, 52.57-63.30 mg/100 g) and then decreased at fully-ripened stage (31.67, 11.75, 56.10 mg/100 g). Compositional changes of flavonols during ripening due to several biotic and abiotic factors significantly affected their accumulation in berries and grapes [64]. Consequently, the time when the fruit is picked has a strong impact on the flavonol content. Bilyk and Sapers [65] reported a positive correlation between flavonol contents and blackberry maturity from red to black (quercetin content 9.01-15.8 mg/100 g FW and kaempferol content 0.7-1.74 mg/100 g FW). In another study conducted by Vuorinen et al.
[63], the level of flavonol glycosides in black currants increased significantly during berry ripening. With increasing degree of ripening, the content of quercetin and kaempferol was found to be enhanced for both the investigated years in strawberry cultivar Honeoye, whereas a smaller difference was seen in the cultivar Senga Sengana [38]. The above reported studies by different authors support our present findings, which reveal that as maturity progresses the contents of flavonols also increase.

Notes to Table 3: Values (mean ± SD) are averages of three samples of each fruit, analyzed individually in triplicate (p < 0.05); ND = not detected; different letters in superscript represent significant differences in ripening stages; Σ HBA = sum of benzoic acid derivatives; Σ HCA = sum of cinnamic acid derivatives; Σ PHA = sum of phenolic acids.

The major phenolic acids found in mulberry species were p-coumaric acid, chlorogenic acid and p-hydroxybenzoic acid (Table 3). Morus laevigata and M. nigra contained higher amounts of p-coumaric acid and vanillic acid while M. macroura and M. alba showed p-hydroxy-benzoic acid and chlorogenic acid as the major phenolic acids. The overall trends of phenolic acids in mulberry species were similar to those recorded for strawberry (Table 2). The concentration (mg/100 g DW) of vanillic acid increased as maturity progressed from un-ripened to fully-ripened stages in the tested mulberry species: M. laevigata (8.5-21.1), M. macroura (3.2-16.1), M. alba (1.7-5.7) and M. nigra (6.1-18.3), respectively. Among different mulberry species, M. laevigata was found to be higher in p-coumaric, ferulic, p-hydroxy-benzoic, chlorogenic and gallic acids with a contribution of 15.9-27.3, 12.4-17.2, 1.1-7.3, 3.4-12.9 and 5.2-14.2 mg/100 g DW, respectively, at un-ripened to fully-ripened stages. Memon et al. [66] described the composition of phenolic acids in mulberry (Morus laevigata W., M. nigra L., M. alba L.) fruits grown in Pakistan: chlorogenic (20.5 mg/100 g) and p-hydroxybenzoic acids (15.3 mg/100 g) were the predominant compounds in M. alba, whereas p-coumaric acid (8.7 mg/100 g) was found to be higher in M. nigra. However, different phenolic acids were evenly distributed in M. laevigata. These data on Morus species are in agreement with those we determined in the present analysis. Phenolic compounds are important bioactives and their content in fruits represents an important fruit quality parameter [67]. Some earlier studies [33,51,55] showed that consumption of the strawberry and mulberry fruits may have a positive impact on human health, which might be linked to the amounts of polyphenolics in these fruits. The increasing importance of functional ingredients in food pushes plant sciences to increase health-promoting phytochemicals in fruit crops. Higher intakes of flavonoids and other antioxidant compounds from food are associated with a reduced risk of cancer, heart disease, and stroke. Some experimental studies indicate that several plant flavonols, such as quercetin, myricetin, and rutin, are more powerful antioxidants than traditional vitamins and have antitumor properties. The challenge is how to increase the levels of these beneficial phytochemicals in different foods for optimal nutrition.
Currently, the use of modern biotechnological techniques, such as genetic engineering, to produce transgenic plants with enhanced amounts of valuable bioactives [68], as well as the exogenous application of organic osmolytes, such as glycine betaine and proline, to increase the levels of antioxidants and phenolics in different food crops [69,70], are fascinating approaches.

Collection of Samples
In this study, fruit samples of strawberry (Fragaria × ananassa Duch) cultivars (Korona and Tufts) and mulberry (M. alba, M. nigra, M. macroura, M. laevigata) at un-ripened, semi-ripened and fully-ripened stages were collected from the Lahore and Faisalabad region during April-July, 2009. The selection of the fruits at different maturity stages was based upon their color and texture (Table 4). The fruits of strawberry and mulberry were hot-air dried to constant mass. The dried samples were ground (80 mesh size) and then preserved in polyethylene bags. Three different samples of each fruit cultivar at each maturity stage were assayed.

Reagents
In the research work, p-coumaric, vanillic, chlorogenic, p-hydroxybenzoic, ferulic and gallic acids (phenolic acid standards), kaempferol, quercetin, and myricetin (flavonol standards), and tert-butylhydroquinone (TBHQ) were acquired from Sigma-Aldrich (St Louis, MO, USA). HPLC grade methanol, acetonitrile and all other chemicals used in this study were purchased from Merck (Darmstadt, Germany). Stock solutions of flavonols and phenolic acids were prepared in methanol at concentrations of 200 mg/L. Working solutions were diluted with the corresponding mobile phase to 10 mg/L. Samples were passed through a 0.45 µm nylon filter membrane (MSI) before injection. Both stock and working solutions were stored in a refrigerator at 4 °C in darkness. The calibration curves were constructed using peak area vs. concentration.

Dry Matter Determination
In view of varying degrees of fruit moisture among the species analyzed, all calculations were made on a dry matter basis. Dry matter determination was made according to the AOAC procedure (method 925.10). Briefly, 5 g of the sample was dried in an electric oven at 105 °C until a constant weight was recorded.

Sample Extraction for Antioxidant Assay
The ground material (10 g) of strawberry and mulberry fruit at each maturity stage was extracted separately with 100 mL of 80% aqueous methanol (80:20) for 6 h at room temperature in an orbital shaker (Gallenkamp, UK). The extracts were separated from the residues by filtering through Whatman No. 1 filter paper. The residues were re-extracted twice with the same fresh solvent. The recovered extracts were combined and freed of solvent under reduced pressure at 45 °C using a rotary evaporator (EYELA, SB-651, Rikakikai Company Limited, Tokyo, Japan). The crude extracts were quantitatively transferred into a sample vial and stored in a refrigerator until used for further experiments.

Determination of Total Phenolics Content (TPC)
The amount of total phenolics was determined by using the previously reported method of Chaovanalikit and Wrolstad [71], with slight changes. Briefly, the crude extract (1 mg) was mixed with tenfold diluted 2 N Folin-Ciocalteu reagent (1.0 mL) and 0.5 mL de-ionized water. The mixture was kept at room temperature for 10 min, and then 0.8 mL of Na2CO3 (7.5% w/v) was added. The mixture was heated in a water bath at 40 °C for 20 min and then cooled in an ice bath; absorbance was measured at 760 nm using a spectrophotometer.
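As a worked illustration of the arithmetic behind this assay, the sketch below fits a gallic acid calibration line to absorbance readings and converts a sample absorbance into mg gallic acid equivalents per 100 g of assayed material. Every numerical value in it (standard concentrations, absorbances, sample mass per millilitre) is an invented placeholder rather than data from the study, which used the 10-100 ppm calibration range described in the next paragraph; the catechin calibration for total flavonoids follows the same pattern.

```python
import numpy as np

# Gallic acid calibration: absorbance at 760 nm versus standard concentration (ppm).
# Both arrays are invented placeholder readings for illustration only.
conc_ppm = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
absorbance = np.array([0.11, 0.26, 0.52, 0.76, 1.02])

slope, intercept = np.polyfit(conc_ppm, absorbance, 1)
r_squared = np.corrcoef(conc_ppm, absorbance)[0, 1] ** 2
print(f"calibration: A = {slope:.4f}*C + {intercept:.4f}, R^2 = {r_squared:.4f}")

def gae_mg_per_100g(sample_abs: float, sample_mg_per_ml: float) -> float:
    """mg gallic acid equivalents per 100 g of the assayed material."""
    gae_ug_per_ml = (sample_abs - intercept) / slope   # ppm = ug gallic acid per mL
    return gae_ug_per_ml / sample_mg_per_ml * 100.0    # ug per mg -> mg per 100 g

# Placeholder sample: absorbance 0.45 with 2.0 mg of material per mL of assay volume
print(f"example sample: {gae_mg_per_100g(0.45, sample_mg_per_ml=2.0):.0f} mg GAE/100 g")
```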
Amounts of TP were calculated using a gallic acid calibration curve within the concentration range of 10-100 ppm (R² = 0.9986). The results were expressed as gallic acid equivalents (GAE mg/100 g DW). All samples were analyzed thrice and results averaged.

Determination of Total Flavonoids Content (TFC)
The total flavonoids were measured colorimetrically following a previously reported method [72]. In summary, the crude extract (5 mg) of each selected fruit was diluted with 5 mL distilled water in a 10 mL test tube. Initially, 0.3 mL of 5% NaNO2 was added to each test tube; after 5 min, 0.6 mL of 10% AlCl3 was added; after 5 min, 2 mL of 1.0 M NaOH was added. Absorbance of the reaction mixture was measured at 510 nm using a spectrophotometer. TFC were determined as catechin equivalents (CE mg/100 g DW). Three readings were taken for each sample and results averaged.

Extraction and Hydrolysis for Quantification
Extraction/hydrolysis of flavonols and phenolic acids was carried out using the method described by Sultana and Anwar [39]. In summary, 25 mL of acidified methanol containing 1% (v/v) HCl and 0.5 mg/mL BHT as an antioxidant were added to the ground fruit material (5 g) in a refluxing flask. Then 5 mL of HCl (1.2 M) was added and the mixture was stirred at 90 °C under reflux for 2 h to obtain aglycons of flavonol glycosides and to convert bound phenolic acids into free forms. The extract was cooled to room temperature and centrifuged at 1500 g for 10 min. The upper layers were taken and sonicated for 5 min to remove any air present in the extract. The final extracts were filtered through a 0.45-μm (Millipore) filter before they were analysed by HPLC. Phenolic acids and flavonols were separated and quantified following the HPLC conditions below.

Conditions Used for Phenolic Analysis
A Hibar® RP-C18 column (250 mm × 4.6 mm, 5 µm particle size) from Merck Company (Merck KGaA, 64271 Darmstadt, Germany), thermostated at 25 °C, was used for separation. The mobile phases, 50% trifluoroacetic acid (0.3%), 30% acetonitrile and 20% methanol for flavonols, and 40% trifluoroacetic acid (0.3%), 40% acetonitrile and 20% methanol for phenolic acids, were delivered at a flow rate of 1.0 mL/min. The mobile phase was filtered through a nylon membrane filter (47 mm, 0.45 mm) and was degassed by sonication before elution. Isocratic and gradient elution and detection at 360 and 280 nm were selected for separation and detection of flavonols and phenolic acids, respectively. Compound identification was carried out by comparison of retention times with those of authentic standards, and additional identification was carried out by spiking the extracts with phenolic standards.

Statistical Analysis
Each fruit (strawberry and mulberry) was analysed at each maturity stage and in triplicate. Data were reported as mean ± SD. Analysis of variance (ANOVA) was performed using Minitab 2000 Version 13.2 statistical software (Minitab Inc., State College, PA, USA). A probability value of p < 0.05 was considered as a statistically significant difference.

Conclusions
Quantitative and qualitative differences in TPC and TFC were observed in strawberry and mulberry fruits during ripening, and these depended on the fruit cultivar. Mostly, a trend toward increasing amounts of these constituents was recorded. Of the fruits analyzed in the present study, Morus laevigata and Korona strawberry exhibited commendably higher levels of flavonols and phenolic acids, which support their functional food use.
Thus, the results of the present study support the antioxidant and nutraceutical potential of these fruits indigenous to Pakistan. However, further investigations involving more detailed in vitro and in vivo studies are required to characterize the phenolic antioxidant system of these fruits more comprehensively and to develop their applications for specific food or nutraceutical purposes.
A new species of Paratropis Simon, 1889 (Araneae: Paratropididae) from Guyana

Taxonomy and Systematics. A new species of Paratropis is herein described and illustrated, namely Paratropis minusculus n. sp., based on males, females and immatures from Potaro-Siparuni, Guyana. The male and female of P. minusculus differ from those of all other species of the genus by having six eyes and by the domed apical segment of the spinnerets. In addition, we present the first record of a paratropidid species from Guyana, contributing to the knowledge of local biodiversity.

The genus Paratropis is composed of six species and can be diagnosed within the family Paratropididae by the soil encrusted on the body, the highly elevated eye tubercle, the male legs I lacking a tibial spur, the absence of claw tufts, and the presence of four spinnerets (Raven 1985, 1999; Perafán et al. 2019; Dupérré & Tapia 2020). The paratropidids are small, fossorial spiders whose biology and ecology remain little known. They can be found in rainforests, caves, near streams and rivers, and in montane forests, in microhabitats such as under fallen logs and under boulders on the ground (Raven 1999; Bertani 2013; Valdez-Mondragón et al. 2014; Dupérré 2015; Perafán et al. 2019; Dupérré & Tapia 2020). The present work aims to describe a new species of Paratropis, named here Paratropis minusculus n. sp., based on males, females and immatures from Potaro-Siparuni, Guyana. In addition, we present the first record of a paratropidid species from Guyana, contributing to the knowledge of the country's biodiversity.

MATERIAL AND METHODS
Specimens were examined in 70% ethanol using a Leica M80 stereomicroscope. All photographs and measurements were taken under a Leica M205A stereomicroscope with the Leica Application Suite V4.10. All measurements are in millimeters. The left male palp was illustrated in prolateral, ventral and retrolateral views. After dissection, female spermathecae were cleaned in pure clove oil for 30 minutes. The total length was taken with the spider in dorsal position and was measured from the edge of the clypeus to the posterior end of the abdomen; chelicerae and spinnerets were not included.

Description. Male Holotype (MCZ 47063): Total length: 3.3; carapace length 1.8, width 2.0; abdomen length 1.5, width 1.4. Coloration in alcohol: In general, the body coloration is brown dorsally and pale yellow ventrally; the whole body is encrusted with soil particles. Carapace: Caput slightly raised, encrusted with soil particles, and with small spines along the midline and on the lateral margins; eye tubercle elevated; fovea transverse. Eyes and eye tubercle: PME absent; tubercle 0.3 high, 0.31 long, 0.42 wide; clypeus 0.05 long. Sizes and interdistances: AME 0.14; ALE 0.10; PLE 0.10; AME-AME 0.08; AME-ALE 0.02; ALE-PLE 0.02. Chelicerae: Encrusted with sand and dirt dorsally, and with setae on the apical part. Cheliceral furrow with teeth on both margins in two juxtaposed rows, promargin and retromargin with 8 teeth. Endites: Longer than wide, length 0.72, width 0.32, with a conical projection anteriorly, without particles of soil, with 12 cuspules on the left endite and 15 on the right. Labium: Trapezoidal, not encrusted with particles of sand and dirt, with 11 cuspules. Sternum: Rounded, length 0.9, width 1.0, encrusted with sand and dirt, with long setae and three oval sigilla (Figure 2). Legs: Leg lengths in Table 1. Leg formula 4123; leg I without tibial spur; femur, patella, tibia, metatarsus and tarsus encrusted with sand and dirt. Claws: Tarsi with long STC, lacking teeth on tarsi I-III, with teeth on tarsus IV.
ITC absent from all legs. Trichobothria: palpal tibia, palpal tarsus, metatarsi and tarsi of legs with two, tibiae I, II, III with four, tibia IV with three trichobothria. All trichobothria on tarsus and metatarsus "protected" by a spine, and in tibia IV "protected" by a pair of spines. Palp: Bulb pyriform, embolus transparent, shorter than palp tibia and with an accentuated curved tip (Figures 3-4), palpal cymbium with numerous setae on apical part, palpal tibia with soil particles and with long setae ventrally, retrolaterally and prolaterally; patella with numerous curved setae dorsally. Abdomen: Brown, encrusted with sand and dirt (Figure 1). Spinnerets: PMS 0.16 long, 0.08 wide, 0.06 apart; PLS 0.12 basal, 0.06 middle, 0.09 distal; mid-widths (lateral) 0.15, 0.015, 0.010, respectively, apical segment domed (Figures 5-6).
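As a side note, the leg formula reported in the description above (4123) is conventionally just the legs ranked from longest to shortest by total length. The tiny sketch below, using hypothetical lengths in place of the values in Table 1, shows how such a formula is derived.

```python
# Hypothetical total leg lengths (mm); the real values are those of Table 1.
leg_lengths = {1: 4.1, 2: 3.6, 3: 3.4, 4: 4.8}

# Rank legs from longest to shortest and concatenate their numbers.
formula = "".join(str(leg) for leg, _ in
                  sorted(leg_lengths.items(), key=lambda kv: kv[1], reverse=True))
print("leg formula:", formula)  # -> 4123 for these hypothetical values
```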
The stray sheep of cyberspace a.k.a. the actors who claim they break the law for the greater good The development of cyberspace has brought about innumerable advantages for the mankind. However, it also came with several serious drawbacks; as cyberspace evolves, so does cybercrime. Since the birth of cyberspace, individuals, groups and whole nations have been engaging in computer-related offences of various significance and impact, trying to exploit systems’ vulnerabilities, disseminate malicious software and steal data or funds. The concept of a hacker has entered the collective consciousness and become an intrinsic element of popular culture. However, there are hackers, or rather, cyberspace actors, who challenge this common view. This paper presents three types of such people, namely hacktivists, members of cyber militias and Internet trolls. Although they all use the Internet to break the laws or rules, their internal motivations are not always utterly sinister; actually, some of them firmly believe that their actions are for the greater good. This paper is structured as follows: Firstly, the general profile of a hacker is presented. Then, the state of the art is outlined, concerning other papers dealing with the motivations behind cyber threat actors. Following that, the three aforementioned groups of cyberspace actors are contrasted with the profile of a ‘typical’ hacker. Then, the profiles of a typical representative for each of the group and their motivations are indicated, followed by the final conclusions. Introduction The development of cyberspace has brought about innumerable advantages for the mankind. However, it also came with several serious drawbacks; as cyberspace evolves, so does cybercrime. Since the birth of cyberspace, individuals, groups and whole nations have been engaging in computer-related offences of various significance and impact, trying to exploit systems' vulnerabilities, disseminate malicious software and steal data or funds. The concept of a hacker has entered the collective consciousness and become an intrinsic element of popular culture. However, there are hackers, or rather, cyberspace actors, who challenge this common view. This paper presents three types of such people, namely hacktivists, members of cyber militias and Internet trolls. Although they all use the Internet to break the laws or rules, their internal motivations are not always utterly sinister; actually, some of them firmly believe that their actions are for the greater good. This paper is structured as follows: Firstly, the general profile of a hacker is presented. Then, the state of the art is outlined, concerning other papers dealing with the motivations behind cyber threat actors. Following that, the three aforementioned groups of cyberspace actors are contrasted with the profile of a 'typical' hacker. Then, the profiles of a typical representative for each of the group and their motivations are indicated, followed by the final conclusions. The main contribution of this paper is concentrating on the hackers and actors who are 'the stray sheep' of cyberspace, i.e., the ones who are either misguided, ignorant or feel so strongly about something that they rationalise their actions, rather than 'typical' wrongdoers, who break the rules in full knowledge and with malicious intent. Hackers Based upon a review of the literature, there does not seem to be a universal scientific definition nor consensus on the term 'hacker' [1]. 
Today's mainstream usage of the word mainly refers to online criminals, understood as highly skilled individuals who are capable of subverting computer security to 'crack in'. Commonly, they are portrayed as bright, intelligent, curious loners and brilliant geeks who nevertheless lack social and communication skills. Although some of the stereotypes have been partially confirmed scientifically [2], constructing the profile of a 'typical' hacker is in fact not possible, due to the fact that they may come from every background, may be driven by several distinct causes, work in an organised group or be lone wolves [3]. The researchers who study the intentions behind illegal cyber activities also do not seem to have come to the same conclusions. The most general set of motivations was described by [4], who divided them into three groups: malicious intent or vandalism, greed and the quest for challenge [5,6]. Nevertheless, not all the cyberspace actors whose actions fall under the umbrella of hacking fit the definition or the profile. The most distinct types of actors are hacktivists, Internet trolls and the members of cyber militias. The overview of the main distinctive features of the three groups is presented in Fig. 1.

State of the art
The scientists who have been dealing with the subject of the motivations that cyberspace actors are driven by have applied various ways of dividing and organising them. Li [5] reviews the motivations of illegal cyber activities in general, listing almost thirty reasons to break the law in cyberspace. However, the article does not organise them by the type of actor. Ablon [9] discusses the motivations of cyber threat actors as well as some of the ways they monetise the stolen data. In that paper, four groups of cyber threat actors are discussed, namely: cyberterrorists (theoretical), hacktivists, state-sponsored actors and cybercriminals. Thus, there is no distinction between the actors who break the law with malicious intent and those who may believe their actions can be justified. Sigholm [10] presents the various motivations of a large number of non-state actors, excluding trolls. Ohlin et al. [11] discuss the motivations behind some of the actors; interestingly, hacktivists and 'patriotic hackers' are treated as two separate groups. This paper is partially based on a previous paper [8], which deals with raising awareness of the lesser-known cyberspace actors: nation states, hacktivists, cyberterrorists and trolls. Here, we discuss the hacktivist and troll parts, which have been significantly extended and supplemented with in-depth insights on the motivations of the actors. In addition to this, the motives behind cyber militia members have been discussed. To the authors' best knowledge, the actors who believe they hack for the 'greater good' have not been collectively discussed yet.

Hacktivists
The term 'hacktivism' was coined by combining the terms 'hacking' and 'activism' in the early days of the World Wide Web [12,13]. Back then, hackers mostly congregated on Usenet and message boards. As many of them were left-wing, anti-capitalist, anti-corporate idealists, their messing with people via the Internet soon switched to politically motivated hacks, inspired by diverse social and political issues [14]. This led to a few groups rising to stardom, gaining international recognition, and 'cyber-attacks are said to have entered a new phase' [15]. The methods they have employed are mostly basic and lack originality or sophistication [15].
Oftentimes, the tools and techniques involved are widely available and their deployment requires little to no technical skill [16]. The methods comprise:
- Denial-of-service attacks,
- Information theft, leaks and doxxing,
- Website defacements,
- Virtual sit-ins.
Additionally, hacktivists attempt other kinds of cyber sabotage, including Internet resource redirects and website parodies [11]. Research shows that hacktivists overwhelmingly favour attacking websites. This is because websites are often the most publicly facing aspect of a company or an organisation. Hacktivists are said to prefer DDoS (distributed denial of service) attacks, which lead to site crashes. Thus, hacktivists deny access to a particular service or website. In addition, this kind of attack may also mean huge hosting bills for the website owner [10,17]. When it comes to stealing data, hacktivists usually do it in order to expose sensitive files or records belonging to a target publicly over the Internet, a method typically deployed to expose targets with something to hide, such as large corporations or governments [14]. This kind of activity is also related to so-called doxxing. The word 'dox', meaning documents or 'docs', refers to the action of revealing private, often highly personal information on the Internet that allows someone to be identified. The data may comprise a person's real name, date of birth, phone numbers, addresses and so on. Hacktivists doxx their opponents, such as public figures, celebrities or individual members of targeted organisations, in order to intimidate them or bring embarrassment upon them. They are also known for website defacements, i.e., tampering with a site's appearance or data integrity. This can range from daubing 'political graffiti' across the Internet to corrupting systems, e.g., manipulating polls and skewing online votes [18]. Finally, hacktivists resort to virtual sit-ins, i.e., voicing their opinions by simultaneously accessing a website multiple times, creating disruption of the target website. This is geared toward slowing down, or even crashing, the target website, thus preventing access by its regular users [19]. Moving activism into the digital space has transformed it. As protesters became anonymous and their causes borderless, civil disobedience turned into disruption. The difference is that now just a few skilled individuals are enough (and may even enjoy more significant power) to cause more disruption with a single click than masses of people occupying the streets [20]. Some of the most well-known hacktivists and hacktivist groups include the following (Tables 1 and 2).

Profile
Constructing the profile of a typical hacktivist is not that simple. This mixed group of people may consist of individuals ranging from script kiddies to professional black hats, from bored teens to rogue non-state actors and from lone cyber-vigilantes to cyber-groups [10]. 'They may range from local units composed of no more than a dozen persons to large transnational organisms with several satellite sub-groups' [11]. What they do have in common is that they are almost always personally anonymous, yet they seek collectively distinguishable recognition [13]. Also, most of them connect through a variety of non-mainstream social networking services, such as forums and message boards like '4chan', wikis like 'Encyclopaedia Dramatica' or specific IRC channels [10].
As hacktivists often work alone, their attacks are extremely difficult to predict or even respond to quickly, and whether they are a network administrator, a mid-level IT person or even a college student, there is no way of knowing in advance who they are or when they will strike [17].

Motivation
Hacktivism merges traditional political activism with the Internet; disgruntled individuals or groups use hacking to bring about political or social change, and cyberspace becomes the medium that allows them to express their discontent [3,21]. One of the groups listed in Tables 1 and 2, for example, is a Muslim hacking collective that has been targeting Amaq, the main online outlet and 'official' news agency of the terror organisation ISIS/Daesh; they leaked the data of Amaq's newsletter subscribers and repeatedly took Amaq's website down [27]. In addition to this, hacktivism can indirectly be utilised to reach hidden, underlying goals of a political, military or commercial character; in some sense, hacktivists may be perceived as a cyberspace equivalent of the groups carrying out acts of civil disobedience, such as Greenpeace, and their actions typically have no lasting effect on their targets beyond reputation [11,16]. Hacktivists (unlike typical cybercriminals) are usually not motivated by financial profit [17]. Rather, one may say they are motivated by a cause, a 'burning rage inside them', whether they wish to embarrass celebrities, highlight human rights, wake up a corporation to its vulnerabilities or go after entities whose ideologies they do not find agreeable, or who they feel do not align with their political views or practices [10,28]. Hacktivists may also steal and disseminate sensitive, proprietary or, sometimes, classified data in the name of free speech [10]. It is worth noting that, unlike most other hackers, hacktivists do crave publicity; this is why they often enter public, popular social media platforms like Facebook, Twitter or YouTube. Hence, they are eager to, e.g., share the data they have stolen [15].

Internet trolls
Another kind of Internet actor is the so-called Internet troll. Although the idea of 'trolling' has been known for many years, there is a lack of academic consensus on the matter, owing to the fact that it is a complex phenomenon [29,30]. Generally speaking, the term 'trolling' has been used to describe all types of malicious or harassing activities on the Internet, both verbal and behavioural, with the latter mostly happening in the sphere of online gaming [30,31]. However, beyond this basic agreement, almost every researcher has coined their own definition of trolling. For instance, Bishop [32] defines trolling as the 'act of posting a message (…) that is obviously exaggerating something on a particular topic', 'for the entertainment of oneself, others or both' [33]. Herring et al. [34] indicate that it 'entails luring others into often pointless and time-consuming discussions'. Another definition points out that a troll 'posts a deliberately provocative message (…) with the intention of causing maximum disruption and argument' [35]. Cambria et al. [36] define trolling as 'emotional attacks on a person or a group through malicious and vulgar comments in order to provoke response'. Shachaf and Hara [37] call trolling 'repetitive, intentional and harmful actions that are undertaken in isolation and under hidden virtual identities (…) consisting of destructive participation in the community'.
Hardaker [38] says trolling is 'the deliberate use of impoliteness/aggression, deception and/or manipulation (…) to create a context conductive to triggering or antagonising conflict'. Finally, Golf-Papez and Veer [39] define it as 'deliberate, deceptive and mischievous attempts that are engineered to elicit a reaction from the target(s), are performed for the benefit of the troll(s) and their followers and may have negative consequences for people and firms involved'. As one may notice, although the older definitions of trolling concentrated on stirring up discussions mostly for fun and amusement, the newer ones point it out that trolling aims to do emotional harm. The most recent ones emphasise the disruptive and deceptive nature of the acts of trolling. It may be thus stated that Internet trolling has become much more serious, harmful and potentially dangerous than it initially used to be. In fact, its potential as a tool of spreading deceptive and made-up content has already been utilised by many individuals and organisations. In the recent years, a new sub-group of trolls have caught the media's attention: the political trolls. They are usually 'user accounts whose sole purpose is to sow conflict and deception', their intent being 'to harm the political process and create distrust in the political system' [40]. Political trolls may be further divided into three groups: political bots masquerading as real users (spreading spam and harmful links), organised trolls (including hate and persecution campaigns) and the ones who spread 'fake news' [41]. Recently, the media have revealed several notorious cases of state-sponsored political trolling. For instance, before the 2016 US presidential elections, thousands of troll accounts injected false tweets or fake news in support or against certain candidates, aiming at creating discord and hate [42,43]. The accounts were traced back to Russia and allegedly funded by the Russian government [40]. Russian trolls were also highly active in Australia, in the years 2015-2017. Their actions included, e.g., spreading tweets undermining support for Australian government in the light of its response to the downing of flight MH17 [44]. In August 2019, Polish Deputy Justice Minister Łukasz Piebiak resigned after it had been revealed he allegedly arranged and controlled a hate campaign and sought to discredit judges who were critical of the government's judicial reforms; it was done by planting media rumours about the judges' private lives. The incident sparked a massive outcry in the country [45,46]. Profile As with hacktivists, the profile of a 'typical' troll is difficult to establish; this is mainly due to the fact that the definitions of a troll vary to a great extent. Even so, there are some common traits to be found, which repeat across numerous study results and papers on the matter. For example, researchers consistently refer to the 'dark tetrad', i.e., specific psychological traits that many Internet trolls possess. The tetrad is constituted by narcissism, psychopathy, Machiavellianism and everyday sadism. Narcissism is the excessive sense of self-love and selfadmiration, psychopathy means the absence of empathy, Machiavellianism is used to describe a detached, calculating, manipulative attitude, and everyday sadism means that a person enjoys cruelty that is present in everyday culture; the cruelty may be part of violent films or video games, or refer to real-life events, like police brutality. 
Of the four traits, it is sadism that is believed to be the most closely associated with trolling behaviours on the Internet. Other studies suggest that cyber trolls are often characterised by low levels of self-esteem, conscientiousness and internal moral values. The results of a study aimed at revealing the demographics of trolls suggest that a typical troll is male and lacks affective empathy. In addition, the so-called online disinhibition effect may contribute to some people becoming trolls. The idea behind it is that some people, if they think they are anonymous, tend to dissociate from the harassment [47,48].

Motivation
The motives of Internet trolls also vary greatly and depend on the kind of troll. Papers suggest several possible motivations, such as everyday sadism, a need for attention, trying to boost one's low self-confidence, lack of empathy, a desire for amusement or simply the fact that trolls' victims differ from them. However, trolls in a way may also be fighting for their 'greater good', as one study suggested that trolls are motivated by the fact that their actions create a kind of online community. Engaging in hate speech and online harassment is a way of cementing or building one's status in the group and gaining a sense of belonging. An alternative view suggests that some trolls are not atypical or antisocial; in fact, they are regular Internet users who simply engage in copycat behaviour, that is, they mimic the trolling behaviours they are exposed to in social media. The motives of gaming trolls, besides the ones presented above, include responding to being trolled by other players, being bored or in a negative emotional state, and wishing to win no matter what. In other words, they are motivated by personal enjoyment, taking revenge on other trolls, or thrill-seeking [49,50]. Finally, it is also worth mentioning that studies have shown Internet trolls tend to rationalise their behaviour, i.e., downplay the consequences of their actions, and thus minimise their blame and hurt others without guilt, so, again, their motives may not be utterly mischievous, at least in their own eyes [47,48]. Just the opposite: even if they might be perfectly aware of the fact that what they do is wrong, they feel their actions are justified, sometimes in a very twisted way.

Cyber militia
Cyber militias are defined as 'a group of volunteers who are willing and able to use cyber-attacks in order to achieve a political goal' [51]. The definition also encompasses the ways in which the members of a militia make contact and gather: 'the members communicate primarily via Internet and, as a rule, hide their identity'. Anonymity is usually achieved by adopting hacker aliases. Cyber militias may be permanent or formed ad hoc. The definition emphasises the fact that the members are volunteers, as they participate in the cyber militia of their own free will. They are not contractually obliged to do so. Usually, they do not receive any money for their actions (there are exceptions to this; sometimes the leaders of a cyber militia are paid salaries [52]). In addition to this, a member of a cyber militia decides upon their own level of commitment. They may also leave whenever they wish. This is the main difference between the members of cyber militias and the people who join a government-run cyber-attack unit. Ottis [51] also indicates that the word 'political' in the aforementioned definition 'refers to all aims that transcend the personal interest of the volunteer.
This includes religious views, nationalistic views, opinions on world social order etc.' [51,53]. According to the researcher, most cyber militias meet the following criteria:
- The communication within a militia is centralised; communicating, planning and coordinating a cyber-attack campaign usually relies on online forums and instant messaging services.
- Usually, there is no direct state support or control of the militia. If there is direct state support, the unit should be considered an organic part of the state rather than a cyber militia.
- The members are loosely connected in real life; the leadership/core group may be personally acquainted, but the rest of the members usually do not know any other members, or know only a few of them.
Forum posts make it possible to identify the roles certain members play in the militia. They can be divided into two categories: 'officer' roles (leaders, trainers, suppliers, etc.) on the one hand, and 'soldiers' and 'camp followers' on the other. The leaders motivate others to act, coordinate actions and give the directions of attacks. The trainers give instructions of all kinds, including those concerning reconnaissance and attacks, as well as covering them up. The suppliers are responsible for providing scanners, malware, attack kits, etc. 'Soldiers' are the ones who take an active part in attacks. They usually remain quiet on the forum or are ordered to report the results of their actions. Lastly, the camp followers follow the forum threads out of curiosity but do not take part in any campaigns [51]. One of the most well-known cases of the employment of a cyber militia is Estonia, where volunteer hackers were recruited to respond to cyber-attacks. This civilian defence corps grew out of the aftermath of a 2007 attack, when banking, government, news and other websites were taken offline and the authorities put the blame on Russian operatives. According to experts, the attacks have been one of the worst cases of state-sponsored warfare to date. Although the Estonian cyber militia hackers are mostly civilians, they have been trained to handle this kind of assault on hospitals, banks and military bases, as well as on, e.g., voting systems. Their commander says that the threat is taken as a given. His militia consists of all kinds of white-hat types, including amateur IT workers, economists, lawyers and so on. Some of their actions include running drills with troops, doctors and air traffic controllers, and gauging officials' responses to realistic attacks, for example by sending out e-mails with sketchy links or dropping infected USB sticks. Allegedly, a CD labelled with a picture of a Russian porn star in a bathing suit proved to be very effective bait for military officials. As a result, at present, the country's military computers turn off after having detected an unknown disc or USB drive. Officially, the militia is part of Estonia's national guard. Estonia's cyber militia has inspired many security officials elsewhere, including in countries like France, Latvia and the USA [52]. China has also relied heavily on cyber militias. According to researchers, the collective membership of cyber militias in China has already amounted to over 10 million people. Most probably, the goal of the cyber units is to provide logistic support and rear area security for active-duty units, similarly to militias in general. Among the most well-known faces of the Chinese cyber militias are the infamous, popular, nationalism-driven 'patriotic hackers' [54].
In the United States of America, one of the cyber militia, Missouri National Guard Team, has recently launched a nonprofit organisation in order to share their network security monitoring system 'built by cyber warriors for cyber warriors' [55]. In Ohio, a bill has been introduced that is going to create a civilian cyber militia, the task of which would be to protect the state's critical government agencies and election systems. If the bill is passed, a new volunteer unit would be created under the authority of the Ohio adjutant general and operate at the same level as National Guard. The Ohio Cyber Reserve would recruit 'individuals who are interested in improving Ohio's cyber posture'. In India, in 2011, Information Technology Minister Kapil Sibal called for a community of ethical hackers to help defend Indian networks. Reportedly, India has been considering using patriotic hackers for offensive operations, too [56]. There is a lot of controversy surrounding cyber militia. Although experts enlist the possible positive outcomes of employing the 'members of the cyber militia, recruited among a pool of civilians with the requisite forensic and IT skills', it is feared that the members of militias may use their skills and knowledge against other states with no authorisation, or even turn them back on home networks. Militias may also ignore orders, especially during a crisis. As Segal sums it up, 'patriotic geeks might be the answer to a lot of policy challenges. But in terms of cybersecurity, it may be best to either bring them completely into the fold, or keep them at arm's length' [56]. Profile Ottis [7] has distinguished three models of cyber militia: the Forum, the Hierarchy and the Cell. The models refer to the operating principles of the militia, and the properties and relationships between the members of cyber militia. The Forum consists of people who do not know one another in real life but are interested in a particular subject, meet online in a web forum, IRC channel, social network and interact there. The place is easily accessible over Internet, easy to find and provides visibility to the agenda of the group; it may be also used to recruit new members. The Forum mobilises in response to an event that is important to its members; it is more ad-hoc than permanent. It forms quickly; its attacks are hard to analyse and counter. However, as it comprises mostly of people inexperienced in cyber-attacks, it is highly reliant on the instructions and tools provided by the more experienced members of the Forum. Moreover, due to the nature of the utilised communication channels, it is relatively easy to infiltrate and de-anonymise. Another model of a volunteer cyber force is a hacker cell. It encompasses several hackers who perform attacks on a regular basis; this may last over extended periods of time. The members are experienced in the use of cyber-attacks, some of them may be even involved in cybercrime. They are likely to know one another in real life; the cell is often built on mutual trust. The Cell, in response to a long-term problem, may exist for a long period of time but is able to mobilise very quickly and difficult to infiltrate. Finally, the Hierarchy is the most organised model, suitable, e.g., for state-sponsored groups. The model resembles other military units, with sub-units specialised in some specific task or roles (like reconnaissance, infiltration, breaching and training). 
As the actions of a state-sponsored militias are attributable to the state (by definition), it is crucial that the militia is able to be controlled. However, it must be noted that not every cohesive group that adopts a command structure is sponsored by the state. State-sponsored militias may also require identified membership. The militias of this type may exist for a long time, even if no conflict occurs, and engage in training and recruitment in the 'peacetime'; if they are sponsored by the state, apart from money, they may expect infrastructure, cooperation with law enforcement or intelligence community. Motivation In order to engage in cyber militias' activities, a person must be driven by patriotism, or at least concerned about the matters of a particular nation. They may also be motivated by political reasons, e.g., strongly oppose a foreign country's governmental policies or disagree with them. Ottis [7] also suggests that some hackers engaged in cyber militia are confident in their skills and proud of their achievements. Some of them, after performing an attack, may even leave their aliases or affiliations, in order to claim bragging rights and thus boost their ego [7]. Nation-state actors It must be noted that when speaking about the cyberspace actors who believe they do good, one could also mention the nation-state actors. This kind of actors has been presented in detail in [8,[57][58][59]. However, this paper does not categorise nation-states as the 'stray sheep', as they are aware of what they are doing and their actions are deliberate, calculated and planned. With the nation-state actors, the ethical issue is of a different nature. Although their nation may grant them the right by law to carry out their undertaking, what they do might be strictly illegal in other countries, especially the ones they act against. Discussion and conclusions The presented kinds of cyberspace actors do differ from the image of a hacker-cybercriminal that has become mainstream owing to news items and pop culture in general. All of the actors rationalise their actions in various ways and are able to pinpoint their motives and reasons behind them, despite the fact the cyber-attacks they carry out are objectively illegal. Their actions might spark even more controversy, as substantial part of their activities are generally of lawless nature. Thus, they divide the public opinion. The actors themselves often claim their actions are for the greater good, e.g., in order to encourage better security or a more responsible custodianship of personal data, but does the fact that they wish to raise awareness about a particular matter and bring about some kind of social or political change make them 'good'? Some people actually applaud vigilante hackers who take the law into their own hands; is it enough to explain their actions, though? [14,15]. Actually, how people categorise hacktivists, trolls and cyber militia members depends mostly on whether they sympathise with the same causes they do [20]. As Lohrman [60] describes this moral and ethical dilemma: 'There is an evolving definition of right and wrong regarding hacking. For example, I may think that Edward Snowden stealing NSA records was wrong. However, I may also agree that the information he disclosed was valuable to society to help protect online privacy. Although I do not believe that the ends justify the means, millions of Americans now believe that Snowden was a hero. Bottom line, they think his illegal actions were justified' [60]. 
However, what if the hackers' targets are not only faceless institutions, businesses and governments but also 'regular' individuals, and what was meant to be transparency turns into harassment [20]? With all three of the discussed groups, there is a tension caused by the fact that what seems right to do is not necessarily legal or ethical. Ironically, sometimes perfectly legal actions are not ethical, or just the opposite: people may tolerate the law being broken if they believe the cause is worth it. However, the three groups vary in the intensity of these tensions, the balance between ethics, morality and law, and the severity of the possible damage and harm their members are capable of doing.

Limitations and further work
This paper is based on a comprehensive review of recently published literature, including scientific articles, as well as various other sources. However, cyberspace is fickle and one might soon witness major shifts in the ecosystem, whether related to technical advancements or the global political situation. The matter requires constant monitoring, following trends and drawing conclusions anew, in order for the divisions, names and profiles drawn here to stay up to date and relevant. Thus, the research presented in this paper is planned to be continued, with changes in the ecosystem reported accordingly if they do happen.

The implications for cybersecurity
Today's cyberspace is a complex and complicated ecosystem. Lumping the various cyber threat actors together might prove to be as unwise as underestimating the threat in general. Being aware of the various motives of the actors, the forces that drive them, the ways they form groups, etc., is also significant from the cybersecurity point of view. Cybersecurity and cybercrime are forever and inherently connected; one cannot exist without the other [61]. For cybersecurity experts, being one step ahead of wrongdoers requires knowing as much as possible about them; for instance, being aware of the ways in which political trolls or hacktivists congregate could be enough to infiltrate them and obtain information on the actions they are planning, while being profoundly knowledgeable about the attack vectors they apply helps in adopting the speediest or most comprehensive solutions. Lastly, not all hackers are equally dangerous; some of them are rather more of a nuisance than a real threat. Encouraging increased awareness of the matter amongst cybersecurity experts may in turn help them produce the most appropriate response and eventually contribute to creating a safer, more friendly cyberspace.

Final conclusions
This paper has attempted to construct the profiles of 'typical' hackers who are not driven by malicious intent and to present the motivations behind the cyber-attacks they perform. However, it is not possible to paint a simple, black-and-white image of the matter due to its remarkably diverse and complicated nature. Cyberspace is constantly developing. As a consequence, the future will surely bring even more difficult situations and ethical dilemmas related to hackers. This seems unavoidable, even if they break into systems with the best intentions. Apart from the dilemmas of a purely ethical nature, there is also the question of penalising certain behaviours in cyberspace. Even if people agree with the hackers' ideology or sentiment, and rationalise their actions, the law is the law and crime is to be punished.
This issue not only poses a considerable challenge for policymakers and law enforcement but also creates another ethical dilemma to which there might not be a simple answer. As it turns out, some hackers might be inflicting considerable harm on others, and yet they will be warmly applauded for it. In cyberspace, the 'greater good' is sometimes a surprisingly relative concept.

Funding
Open Access funding enabled and organized by Projekt DEAL. This work is funded under the SIMARGL project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 833042. This article is distributed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Neotypification of Fusarium chlamydosporum - a reappraisal of a clinically important species complex

Fusarium chlamydosporum represents a well-defined morpho-species of both phytopathological and clinical importance. Presently, five phylo-species lacking Latin binomials have been resolved in the F. chlamydosporum species complex (FCSC). Naming these phylo-species is complicated due to the lack of type material for F. chlamydosporum. Over the years a number of F. chlamydosporum isolates (which were formerly identified based on morphology only) have been accessioned in the culture collection of the Westerdijk Fungal Biodiversity Institute. The present study was undertaken to correctly identify these 'F. chlamydosporum' isolates based on multilocus phylogenetic inference supported by morphological characteristics. Closer scrutiny of the metadata associated with one of these isolates allowed us to propose a neotype for F. chlamydosporum. Phylogenetic inference revealed the presence of nine phylo-species within the FCSC in this study. Of these, eight could be provided with names supported by subtle morphological characters. In addition, a new species, F. nodosum, is introduced in the F. sambucinum species complex, and F. chlamydosporum var. fuscum is raised to species level, as F. coffeatum, in the F. incarnatum-equiseti species complex (FIESC).

Fusarium chlamydosporum is commonly isolated from soils and grains in arid and semi-arid regions (Burgess & Summerell 1992, Kanaan & Bahkali 1993, Sangalang et al. 1995), and from plant material displaying disease symptoms that include crown rot (Du et al. 2017), blight (Satou et al. 2001), damping-off (Engelbrecht et al. 1983, Lazreg et al. 2013) and stem canker (Fugro 1999). This species has also been implicated in human and animal fusarioses (Kiehn et al. 1985, Martino et al. 1994, Segal et al. 1998, Kluger et al. 2004, Azor et al. 2009) and, together with members of the FIESC, accounts for approximately 15 % of fusarioses in the USA. As with most Fusarium spp. associated with human fusarioses (Al-Hatmi et al. 2016), treatment of F. chlamydosporum infection is complicated by multidrug resistance, but amphotericin B and posaconazole have been shown to be effective (Pujol et al. 1997, Azor et al. 2009). In addition, several strains of F. chlamydosporum are known to produce the mycotoxins beauvericin, butanolide, moniliformin and trichothecenes (Rabie et al. 1978, O'Donnell et al. 2018), other secondary metabolites such as chlamydosporol (Savard et al. 1990), chitinase (Mathivanan et al. 1998), cellulase (Qin et al. 2010), and other unnamed compounds (Soumya et al. 2018, Wang et al. 2018). Recently, Soumya et al. (2018) isolated and characterised the red pigment produced by F. chlamydosporum in culture, and found that this long-chain hydrocarbon with unsaturated groups possesses cytotoxicity towards human breast adenocarcinoma (MCF-7) cells, and could be exploited in cancer therapeutics as well as in the cosmetic industry. The first critical multilocus phylogenetic study to include a large number of F. chlamydosporum isolates revealed four phylo-species (FCSC 1-4) within a group of clinical and environmental isolates initially identified as F. chlamydosporum, one of which included the ex-type of F. nelsonii (as FCSC 4). Following this study, O'Donnell et al. (2018) identified a fifth phylo-species that was able to produce the mycotoxins beauvericin, butanolide and moniliformin.
However, both studies refrained from providing names for the four unnamed phylo-species as no type material was available for F. chlamydosporum s. str. to serve as a reference point. Over the years, a number of F. chlamydosporum isolates (which were formerly identified based on morphology only) have been accessioned in the culture collection (CBS) of the Westerdijk Fungal Biodiversity Institute (WI), Utrecht, The Netherlands. However, given the paucity of key informative morphological features of especially Fusarium spp. (Nirenberg 1990, Lombard et al. 2019), the present study was undertaken to correctly identify these 'F. chlamydosporum' isolates based on multilocus phylogenetic inference supported by morphological characteristics.

Isolates
Fusarium isolates (Table 1), initially identified and treated as F. chlamydosporum, were obtained from the culture collection (CBS) of the WI in Utrecht, The Netherlands.

DNA isolation, PCR and sequencing
Total genomic DNA was extracted from 7-d-old isolates grown at 24 °C on potato dextrose agar (PDA; recipe in Crous et al. 2019) using the Wizard® Genomic DNA Purification Kit (Promega Corporation, Madison, WI, USA), according to the manufacturer's instructions. Partial gene sequences were determined for calmodulin (cmdA), the RNA polymerase largest (rpb1) and second-largest subunit (rpb2), and translation elongation factor 1-alpha (tef1), using PCR protocols and primer pairs described elsewhere (O'Donnell et al. 1998, Lombard et al. 2019). The integrity of the sequences was ensured by sequencing the amplicons in both directions using the same primer pairs as were used for amplification. Consensus sequences for each locus were assembled in Geneious R11 (Kearse et al. 2012). All sequences generated in this study were deposited in GenBank (Table 1).

Phylogenetic analyses
Initial analyses based on pairwise alignments and BLASTN searches against the Fusarium-MLST (www.wi.knaw.nl/fusarium/), Fusarium-ID (http://isolate.fusariumdb.org/guide.php; Geiser et al. 2004) and NCBI GenBank (https://blast.ncbi.nlm.nih.gov/Blast.cgi) databases were done using rpb2 and tef1 partial sequences. Based on these comparisons, sequences of relevant Fusarium species/strains were retrieved (Table 1) and alignments of the individual loci were determined using MAFFT v. 7.110 (Katoh et al. 2017) and manually corrected where necessary. Three independent phylogenetic algorithms, Maximum Parsimony (MP), Maximum Likelihood (ML) and Bayesian inference (BI), were employed for the phylogenetic analyses. Phylogenetic analyses were conducted for the individual loci and then for a multilocus sequence dataset that included partial sequences of the four genes determined here. For BI and ML, the best evolutionary models for each locus were determined using MrModeltest v. 2 (Nylander 2004) and incorporated into the analyses. MrBayes v. 3.2.1 (Ronquist & Huelsenbeck 2003) was used for BI to generate phylogenetic trees under optimal criteria for each locus. A Markov Chain Monte Carlo (MCMC) algorithm of four chains was initiated in parallel from a random tree topology with the heating parameter set at 0.3. The MCMC analysis lasted until the average standard deviation of split frequencies was below 0.01, with trees saved every 1 000 generations. The first 25 % of saved trees were discarded as the 'burn-in' phase and posterior probabilities (PP) were determined from the remaining trees. The ML analyses were performed using RAxML-NG v. 0.6.0 (Kozlov et al. 2018) to obtain another measure of branch support.
The robustness of the analysis was evaluated by bootstrap support (BS) with the number of bootstrap replicates automatically determined by the software. For MP, analyses were done using PAUP (Phylogenetic Analysis Using Parsimony, v. 4.0b10;Swofford 2003) with phylogenetic relationships estimated by heuristic searches with 1 000 random addition sequences. Tree-bisection-reconnection was used, with branch swapping option set on 'best trees' only. All characters were weighted equally and alignment gaps treated as fifth state. Measures calculated for parsimony included tree length (TL), consistency index (CI), retention index (RI) and rescaled consistence index (RC). Bootstrap (BS) analyses (Hillis & Bull 1993) were based on 1 000 replications. Alignments and phylogenetic trees derived from this study were uploaded to TreeBASE (S24459; www.treebase.org). Morphological characterisation All isolates were characterised following the protocols described by Leslie & Summerell (2006) and Lombard et al. (2019) using PDA, oatmeal agar (OA, recipe in Crous et al. 2019), synthetic nutrient-poor agar (SNA; Nirenberg 1976) and carnation leaf agar (CLA; Fisher et al. 1982). Colony morphology, pigmentation, odour and growth rates were evaluated on PDA after 7 d at 24 °C using a 12/12 h light/dark cycle with near UV and white fluorescent light. Colour notations were done using the colour charts of Rayner (1970). Micromorphological characters were examined using water as mounting medium on a Zeiss Axioskop 2 plus with Differential Interference Contrast (DIC) optics and a Nikon AZ100 dissecting microscope both fitted with Nikon DS-Ri2 high definition colour digital cameras to photo-document fungal structures. Measurements were taken using the Nikon software NIS-elements D v. 4.50 and the 95 % confidence levels were determined for the conidial measurements with extremes given in parentheses. For all other fungal structures examined, only the extremes are presented. To facilitate the comparison of relevant micro-and macroconidial features, composite photo plates were assembled from separate photographs using PhotoShop CSS. Phylogenetic analyses Approximately 500-650 bases were determined for cmdA and tef1, 1 845 bases for rpb1 and 1 800 bases for rpb2. Sequence comparisons of the rpb2 and tef1 gene regions generated in this study against those in the Fusarium-MLST, Fusarium-ID and GenBank databases revealed that only 14 isolates belonged to the FCSC. Of the remaining 9 isolates, three were identified as members of the F. incarnatum-equiseti species complex (FIESC) and six belonged in the F. sambucinum species complex (FSAMSC). For the BI and ML analyses, a K80 model for cmdA, a GTR+I+G model for rbp1, an HKY+G+I model for rpb2 and an HKY+G for tef1 were selected and incorporated into the analyses. The ML tree topology confirmed the tree topologies obtained from the BI and MP analyses, and therefore, only the ML tree is presented. Culture characteristics: Colonies on PDA reaching 90 mm at 24 °C after 7 d. Colony surface rose to rosy vinaceous to sulphur yellow, with abundant aerial mycelium, dense, woolly to cottony. Odour absent. Reverse livid red to rose. On SNA, colonies membranous to woolly, white to pale rosy buff, with abundant sporulation on the surface giving a powdery appearance; reverse pale rosy buff. On CLA, aerial mycelium sparse with abundant pale luteous to pale orange sporodochia forming on the carnation leaves. 
On OA, colonies membranous to cottony, white to rosy buff, with abundant sporulation on substrate giving a powdery appearance. Notes: Fusarium nodosum is closely related to F. armeniacum, F. langsethiae, F. sibiricum and F. sporotrichioides in the FSAMSC. Fusarium armeniacum characteristically does not produce polyphialidic conidiogenous cells (Burgess et al. 1993), distinguishing this species from F. nodosum. The remaining three species readily produce abundant globose aerial conidia (i.e. microconidia), which were rarely seen for F. nodosum. Etymology: Named after Peru, from where this fungus was collected. Culture characteristics: Colonies on PDA reaching 90 mm at 24 °C after 7 d. Colony surface fulvous to ochreous in the centre becoming coral to vinaceous towards the margin, with abundant aerial mycelium, dense, woolly to cottony, sometimes granular due to abundant sporulation on medium surface. Odour absent. DISCUSSION A key component of modern taxonomic studies of the genus Fusarium is multilocus phylogenetic inference due to the numerous cryptic species now known to be present in the various species complexes. Therefore, the availability of type material plays a vital role in providing stability to a dynamic taxonomic system as is seen in Fusarium literature today. The FCSC is no exception as at least four unnamed phylo-species have been identified in the past (O'Donnell et al. , 2018, which were initially identified as F. chlamydosporum. Phylogenetic inference in this study resolved four additional phylo-species to the five already resolved by O'Donnell et al. ( , 2018, of which three could be provided with names (F. humicola, F. microconidium and F. peruvianum) here, and one single lineage (NRRL 13338) initially treated as F. nelsonii ), remaining to be named. Neotypification of F. chlamydosporum in this study has allowed us to provide names for the remaining unnamed phylo-species: FCSC 1 = F. chlamydosporum; FCSC 2 = F. atrovinosum; FCSC 3 = F. spinosum; FCSC 5 = F. sporodochiale. The ex-neotype strain (CBS 145.25) of F. chlamydosporum was found in this study to have deteriorated since 1925, and produced only a few aerial conidia (i.e. microconidia) on CLA, and none on PDA, SNA or OA. The same was observed for strains CBS 615.87 and CBS 677.77, indicating that strains of this species could deteriorate quickly during long-term storage. Booth (1971) also studied the (now) ex-neotype of F. chlamydosporum and concluded that this species is a nomen confusum as he was unable to distinguish it from F. camptoceras at that time. Gerlach & Nirenberg (1982) accepted F. chlamydosporum as a distinct species and rejected Booth's (1971) argument. However, Marasas et al. (1998) provided an emended description for F. camptoceras, clearly distinguishing it from F. chlamydosporum. The F. chlamydosporum clade (FCSC 1) included for the most part clinical isolates, but also isolates obtained from plants (banana and taro), thrips and soil (Table 1), indicating that this species has a broad ecological range. The remaining clinical isolates clustered in the F. atrovinosum (eight isolates) and F. spinosum (one isolate) clades. Both these latter species also included isolates obtained from plants and soil, reflective of a possible broader ecological range. The number of clinical isolates in each of these three species may not be a true reflection of their ecology, as this only represents the sample of sequence data available in public databases such as GenBank, FUSARIUM-ID and Fusarium MLST. 
Isolates CBS 511.75, CBS 119843 and NRRL 13338 were resolved as single lineages in this study. All three these single lineages were also resolved in the individual analyses of the four loci used in this study (results not shown). Therefore, we introduced the names F. microconidium (CBS 119843) and F. peruvianum (CBS 511.75) for two of these single lineages, with a name pending for NRRL 13338 following morphological analysis. Pairwise sequence comparisons of the tef1 and rpb2 sequences of MRC 35 (MH582448 & MH582208, respectively) and MRC 117 (MH582447 & MH 582074, respectively), identified by O'Donnell et al. (2018) as FCSC 5, with those of the ex-type of F. sporodochiale (CBS 220.61) showed 99 % sequence similarity for both loci compared to the 96 % similarity found with the neo/ex-type isolates of F. atrovinosum (CBS 445.67), F. chlamydosporum (CBS 145.25) and F. spinosum (CBS 122438), which were the closest phylogenetic neighbours. Therefore, we are able to link both CBS 220.61 and CBS 199.63 to FCSC 5 in this study. The tef1 and rpb2 sequences for both MRC 35 and MRC 117 were not available at the time, and could therefore not be included in this study. To our knowledge, the ex-type strain of F. chlamydosporum var. fuscum (CBS 635.76;Gerlach 1977) has not yet been included in any phylogenetic study until now. However, it was surprising to observe its placement in the FIESC, clustering with CBS 430.81, an isolate known to represent the phylo-species FIESC 28 ). As no Latin name has yet been assigned to FIESC 28, we decided to raise this variety to species level with a new name, F. coffeatum. Two additional isolates preserved as F. chlamydosporum in the CBS culture collection also clustered within the FIESC. Isolate CBS 127131 proved to belong in the F. lacertarum clade, whereas CBS 101138 clustered within the FIESC 24 clade ). Both these isolates failed to produce sporodochia on CLA under UV-illumination, but produced abundant aerial conidia (i.e. microconidia), chlamydospores and a dark red pigmentation on the various media used here, similar to those associated with F. chlamydosporum. These characteristics probably resulted in the erroneous identification of these isolates. Several isolates also clustered within the FSAMSC, with CBS 462.94 falling within the F. sporotrichioides clade. This isolate also failed to produce sporodochia on CLA but produced abundant aerial conidia (i.e. microconidia) and the characteristic red pigment in culture. However, no chlamydospores were observed. Either this isolate has been misidentified or became contaminated with F. sporotrichioides over time. The remaining four "F. chlamydosporum" isolates (CBS 200.63, CBS 201.63, CBS 698.74 & CBS 119844) formed a highly supported clade, distinct from the F. armeniacum, F. langsethiae, F. sibiricum and F. sporotrichioides clades, and were named as F. nodosum. The F. nodosum clade also included an isolate (CBS 131779) previously identified as F. sporotrichioides (Davari et al. 2013). It is not clear why these isolates were initially preserved in the CBS culture collection under the name F. chlamydosporum. The most noticeable overlapping character observed for these isolates with F. chlamydosporum, was the production of dark red pigments on PDA. These isolates all readily produced abundant sporodochia on CLA and no chlamydospores were found. The FCSC now includes nine phylo-species, for which eight were provided with Latin binomials in this study. 
Although subtle morphological differences could be found among these eight newly named taxa, phylogenetic inference using the recommended Fusarium identification gene regions rpb1, rpb2 and tef1 should be used for accurate identification (O'Donnell et al. 2015).
2019-07-26T08:08:01.752Z
2019-07-04T00:00:00.000
{ "year": 2019, "sha1": "137195749c3b011e9a662e9bdf02789c8d4c6218", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc7241675?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "35c57de3a28503c074ee730eed545d9e644de13d", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
267042628
pes2o/s2orc
v3-fos-license
A contribution to the French validation of the clinical anxiety scale amongst health care workers in Switzerland Background Anxiety disorders are frequent but often remain underdiagnosed and undertreated. Hence, valid screening instruments are needed to enhance the diagnostic process. The Clinical Anxiety Scale (CAS) is a 25-item anxiety screening tool derived from the Hamilton Anxiety Scale (HAM-A). However, this scale is not available in French. The General Anxiety Disorder-7 (GAD-7) scale, which has been validated in French, is a 7-item instrument with good psychometric properties. This study contributes to the validation of an adapted French version of the CAS, using the GAD-7 as the reference. Methods A forward-backward English-French-English translation of the CAS was performed according to standard practice. The French versions of the CAS and GAD-7 were completed by 127 French-speaking healthcare professionals. CAS internal consistency was assessed using Cronbach's alpha, and test-retest reliability was tested after 15 days in a subsample of 30 subjects. Convergent validity with the GAD-7 was assessed using Pearson's correlation coefficient. Test-retest reliability was explored using a one-way random effects model to calculate the intra-class correlation coefficient (ICC). Results The French CAS showed excellent internal consistency (Cronbach's alpha 0.97), high convergent validity with the GAD-7 (Pearson's R 0.81, p < 0.001), and very good test-retest reliability (ICC = 0.97, 95% CI 0.93–0.98). Conclusion The proposed French version of the CAS showed high reliability and validity that need to be further investigated in different populations.

Introduction

Anxiety disorders, i.e. experiencing symptoms of excessive fear and worry that result in behavioural disturbances, are leading mental health problems [1]. According to a recent epidemiological survey by Yang et al. [2], the number of persons newly diagnosed with anxiety disorders has increased over the last 30 years. Moreover, the burden of anxiety and major depressive disorders rose further during the COVID-19 pandemic [3].

Anxiety disorders are associated with adverse health outcomes, and contribute to poor quality of life and increased mortality [4]. According to the Global Burden of Diseases 2019 Study, anxiety disorders are amongst the leading causes of disability, responsible for about 28.7 million disability-adjusted life years [5]. In Switzerland we are witnessing a rise in the incidence of psychological distress, and in 2022, 11.9% of women and 7.5% of men suffered from an anxiety disorder [38].

Early diagnosis and intervention may reduce the disease burden and improve the quality of life of patients affected by anxiety [6]. Yet, despite the high prevalence and substantial disability associated with these disorders, they often remain underdiagnosed and undertreated [7][8][9]. Hence, there is a need for valid and accurate screening instruments to enhance the diagnostic process.

Several screening instruments have been developed to effectively identify patients with anxiety disorders; however, only few are adapted for local languages and culture. The broadly used scales for anxiety (such as the HAD or GAD-7) have been validated in the French population [39][40], but none in French-speaking Switzerland.
One of the most widely used scales is the General Anxiety Disorder-7 (GAD-7), a seven-item instrument with good psychometric properties [42] (Cronbach's alpha 0.92; test-retest reliability, intra-class correlation, 0.83). The convergent validity of the GAD-7 was demonstrated by its correlations with two anxiety scales: the Beck Anxiety Inventory (r = 0.72) and the anxiety subscale of the Symptom Checklist-90 (r = 0.74) [13].

Since its development by Spitzer et al., the GAD-7 has been validated in different populations such as psychiatric patients, patients from primary care clinics, patients affected by specific diseases or health problems such as epilepsy, heart failure, or traumatic brain injury, as well as in the general population [14][15][16][17][18][19][20]. The GAD-7 has been translated into more than 50 languages. The scale is short and easy to complete. The GAD-7 targets mainly generalized anxiety disorder and does not include specific features of other types of anxiety disorders such as panic, phobias, and post-traumatic stress disorders. Here, we selected the GAD-7 as our standard reference due to its high specificity and sensitivity for detecting anxiety [22]. This choice reduces the risk of potential bias stemming from the inclusion of psychotic symptoms or depression, as demonstrated in previous research with other scales such as the SCL-90R [36]. Therefore, the GAD-7 is well suited for identifying anxiety within a general population and aligns with the objectives of our study.

The Clinical Anxiety Scale (CAS) is a 25-item tool derived from the Hamilton Anxiety Scale (HAM-A). The HAM-A is still used in clinical practice and in research and comprises items covering a wide range of anxiety, including multiple somatic symptoms as well as some depressive symptoms. The HAM-A contains 14 items, each scored on a scale of 0 (not present) to 4 (severe). This scoring system results in a total score range of 0 to 56, which reflects the varying intensity of anxiety symptoms. However, some concerns have been raised about its inaccuracy in discriminating somatic anxiety from antidepressant side effects [10], and about its time-consuming and potentially unreliable administration by physicians [11,27].

The CAS is simpler in comparison to the HAM-A. The CAS comprises predefined questions without subscales, making it straightforward for self-administration, unlike the HAM-A, which needs to be administered by a health care professional. It essentially targets anxiety symptoms, thus excluding the potential bias of questions related to depression. The CAS assesses the level of anxiety arising from identified situations or events. Compared to the GAD-7, the CAS is longer and combines a wider range of questions concerning panic and phobia, and a few somatic symptoms of anxiety [12]. Thus, the CAS could be especially useful to detect specific types of anxiety rather than generalized anxiety disorder.

The psychometric properties of the CAS established in the original validation article [12] are very good. The CAS achieved a Cronbach's alpha coefficient of 0.94. Its discriminant validity of 0.77 was better than that of other scales (Index of Family Relations, General Contentment Scale, Psycho-Social Screening Package, Mobility Inventory Agoraphobia, and Michigan Alcoholism Screening Test). It is well correlated and has good concurrent validity with the anxiety subscale (HAD-A) of the HAD (Hospital Anxiety Depression scale) (correlation coefficients 0.69–0.75) [31][32][33], and good temporal stability [34].
Despite these positive characteristics, the CAS is only available in English. Furthermore, studies examining the factorial structure of this scale are lacking.

The aim of this study was to develop and validate a French version of the CAS: to examine its internal consistency and its factorial structure with principal component analysis, to assess its construct validity using the GAD-7 as a reference, and to evaluate its test-retest reliability.

Translation of the scale

Two bilingual experts performed a forward-backward English-French-English translation of the CAS. The two translated forms displayed very good similarity. The final version was reviewed by a bilingual psychologist and subsequently used in the study.

Procedure and participants

The CAS includes 25 items, with answers ranging from 1 (rarely) to 5 (very often). After reverse scoring of the positively formulated items, the final scores range from 0 to 100, with higher scores indicating higher anxiety. The final score was calculated according to the formula provided in the original CAS validation paper [21]. A cut-off of 30 or more defines clinically significant anxiety [12,21].

The GAD-7 comprises 7 items and the global score ranges from 0 to 21, with higher scores indicating higher anxiety. A score of 8 or more is usually proposed to define clinically significant anxiety [22].

Although 142 subjects were eligible for the study, only 127 subjects were recruited, as 15 refused to participate. All participants were health care professionals working in the Lausanne University Hospital (CHUV) in different divisions: geriatrics, internal medicine, and psychiatry. The inclusion criteria were being aged 18 years or older, being a native French speaker or fluent in French, and agreeing to participate. The participants were recruited between 21 September 2021 and 2 February 2022.

Data on participants' age, gender, and professional role were collected. All the participants completed the two self-administered scales (CAS and GAD-7), using individual paper questionnaires. The two scales, completed in random order [35], had an identical response rate.

CAS test-retest reliability was examined in 30 participants (24%), who were asked to complete both questionnaires again 15 days [43] after their initial assessment; the sample size needed was calculated according to Walter et al. [36]. The time needed for the self-administration of the two questionnaires was measured subsequently in six subjects.

Statistical analysis

The sample size was estimated according to Tabachnick et al. [23]: five subjects were needed to validate each item of the analysed scale, resulting in a sample size of 125 participants.
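For illustration, the scoring conventions just described can be sketched in a few lines of Python. Note that the item numbers treated as reverse-scored below are placeholders, and the 0-100 rescaling shown is a generic linear transformation; the study itself used the exact formula from the original CAS validation paper [21], which may differ in detail.

```python
# Illustrative item numbers only; the real CAS keys its reverse-scored
# (positively worded) items as specified in the original validation paper [21].
REVERSE_SCORED = {1, 4, 9, 13, 17, 21, 25}   # assumption: seven positive items

def cas_score(responses: list[int]) -> float:
    """Map 25 responses (1 = rarely ... 5 = very often) onto a 0-100 scale."""
    assert len(responses) == 25 and all(1 <= r <= 5 for r in responses)
    adjusted = [6 - r if i + 1 in REVERSE_SCORED else r
                for i, r in enumerate(responses)]
    total = sum(adjusted)                      # ranges from 25 to 125
    return (total - 25) * 100.0 / (25 * 4)     # rescaled to 0-100

def gad7_score(responses: list[int]) -> int:
    """Sum of 7 items scored 0-3, giving a total of 0-21."""
    assert len(responses) == 7 and all(0 <= r <= 3 for r in responses)
    return sum(responses)

cas = cas_score([2] * 25)
gad = gad7_score([1] * 7)
print(f"CAS = {cas:.1f} (clinically significant if >= 30)")
print(f"GAD-7 = {gad} (clinically significant if >= 8)")
```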
To evaluate the adequacy of the data for factor analysis, the Kaiser-Meyer-Olkin (KMO) test and the Bartlett sphericity test were carried out. Subsequently, an exploratory principal component analysis with Varimax rotation and Kaiser normalization was performed on the responses to the 25 items of the CAS, to identify its factorial structure. Principal component analysis, rather than factor analysis, was chosen because of the nature of the data obtained: the data did not exhibit clear underlying factors, and our goal was to capture as much variance as possible with a smaller number of variables. The Varimax rotation was chosen to avoid cross-loadings on more than one dimension, thus simplifying the factor structure and making each factor more interpretable in isolation. All CAS items were allowed to load freely during the exploratory analysis to identify all factors present. Next, each factor loading was compared to determine the magnitude of difference. Differences in magnitude greater than 0.03 were set as the threshold for a stable factor structure [45].

The correlation between the CAS and the GAD-7 was evaluated by Pearson's correlation coefficient.

Test-retest reliability was assessed using a one-way random effects model to calculate the intraclass correlation coefficient (ICC). The ICC is defined as the ratio of between-subject variability to the total variability, including subject variability and error variability; as the error term decreases, the ICC moves from 0 towards 1, indicating perfect reliability [24].

All the analyses were performed using SPSS 27.0 for Windows.

Results

Overall, 127 of the 142 eligible health care professionals completed both questionnaires (response rate 89.4% for both instruments). Participants' mean age was 35 ± 11 years, 61% were women, 45.6% were nurses, 22.0% physicians, and 32.2% from other health professions (physical and occupational therapists, medical secretaries, and medical and nursing students).

There was no significant difference in the CAS and GAD-7 scores between men and women, or between the different professional categories (data not shown).

The mean times to complete the CAS and the GAD-7 were 120 s and 45 s, respectively.

The KMO (0.851) and the Bartlett sphericity tests (p < 0.001) indicated that the sample size was adequate and suitable for factor analysis.

The principal component analysis initially revealed seven principal components; however, as only one item loaded on the seventh factor ("I am free from senseless or unpleasant thoughts"), the analysis was repeated forcing the items onto six factors, following Costello et al. [37]. This 6-factor structure of the CAS explained 66.83% of its total variance.

The first factor, which explained 27.3% of the variance, encompassed the seven positively formulated questions related to "not worrying". Factor 2, which explained 17.12% of the variance, had significant loadings on nine items related to anxiety. Factor 3, which explained 8.31% of the variance, loaded significantly on eight items associated with panic and phobia. Factor 4 included five items associated with panic-related symptoms and explained 5.44% of the variance. Five items associated with physical symptoms loaded significantly on Factor 5, which explained 4.5% of the variance, and four items associated with antidepressant and tranquilizer use loaded on Factor 6, which explained 4.2% of the total variance.
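The reliability and validity statistics reported in this section can be reproduced from a respondents-by-items matrix with standard formulas. The sketch below gives a minimal NumPy implementation of Cronbach's alpha, the one-way random-effects ICC and Pearson's correlation; the input arrays are random placeholders rather than the study data, and SPSS may apply slightly different conventions (e.g. for missing data or confidence-interval estimation).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def icc_oneway(scores: np.ndarray) -> float:
    """One-way random-effects ICC(1); scores: (n_subjects, n_occasions)."""
    n, k = scores.shape
    subject_means = scores.mean(axis=1)
    grand_mean = scores.mean()
    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((scores - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)                        # placeholder data only
cas_items = rng.integers(1, 6, size=(127, 25)).astype(float)
gad7_totals = rng.integers(0, 22, size=127).astype(float)
retest = rng.normal(40, 10, size=(30, 2))             # test and retest totals

print("alpha   =", round(cronbach_alpha(cas_items), 3))
print("ICC(1)  =", round(icc_oneway(retest), 3))
print("Pearson =", round(np.corrcoef(cas_items.sum(axis=1), gad7_totals)[0, 1], 3))
```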
The results of our study support a 6-factor structure of the CAS, each factor being associated with different components of anxiety. These factors can provide valuable insights into the multidimensional nature of anxiety, as follows:

Factor 1: General Anxiety. "I feel calm"; "I feel confident about the future"; "I feel relaxed and in control of myself"; "I feel generally anxious". Factor 1 seems to be associated with general or non-specific feelings of anxiety. These items reflect a sense of overall anxiety or a lack of calmness and confidence about the future. This factor may capture a more generalized state of anxiety.

Factor 2: Tension and Nervousness. "I feel tense"; "I feel nervous"; "I feel nervousness or shakiness inside". Factor 2 appears to be associated with feelings of tension and nervousness. These items reflect the psychological and physiological manifestations of anxiety.

Factor 3: Fear and Avoidance. "I feel suddenly scared for no reason"; "I feel afraid to go out of my house alone"; "I feel afraid without good reason"; "Due to my fears, I unreasonably avoid certain animals, objects, or situations". Factor 3 is associated with fear and avoidance behaviours. These items reflect unfounded fears and avoidance of various situations and objects, suggesting a specific type of anxiety related to phobias and avoidance behaviour.

Factor 4: Panic-Related Symptoms and Agoraphobia. "I have spells of terror or panic"; "I feel afraid in open spaces or in the streets"; "I feel afraid I will faint in public"; "I experience sudden attacks of panic which catch me by surprise". Factor 4 appears to be related to panic-related symptoms and agoraphobia-like anxiety. These items represent experiences of sudden panic, fear in open spaces, and concerns about fainting in public.

Factor 5: Physical Symptoms and Avoidance. "My hands, arms, or legs shake or tremble"; "I get upset easily or feel panicky unexpectedly"; "Due to my fears, I avoid being alone, whenever possible". Factor 5 is associated with physical symptoms of anxiety, including trembling limbs and avoiding being alone due to fear. This factor may be related to social anxiety or specific phobias with physical symptoms.

Factor 6: Medication Use. "I use tranquilizers or antidepressants to cope with my anxiety". Factor 6 is primarily associated with the use of medication (tranquilizers or antidepressants) as a coping strategy for anxiety. This factor reflects a different aspect of managing anxiety.

Discussion

The aim of our study was to develop and validate a French version of the CAS, to make it available for the detection of anxiety in the French-speaking population [25]. The results showed that this translated CAS version had high reliability and validity (reliability coefficient value > 0.9, validity coefficient > 0.4).

An original contribution of the present study is to provide new insight into the factor structure of the CAS. Indeed, information on principal component analysis (PCA) is not available for the original English scale [21]. Contemporary studies using the CAS are sparse and focused on its correlation with other anxiety scales.

The CAS, with its 6-factor structure, provides a comprehensive assessment of various components of anxiety, ranging from general anxiety and tension to specific fears, panic-related symptoms, and coping mechanisms. Understanding these factors can help clinicians and researchers better target and address the diverse aspects of anxiety in individuals.
Further analysis in different populations is needed to confirm this proposed structure of the CAS and its French version.

The French CAS showed excellent reliability. Internal consistency was high, indicating that it is highly homogeneous, as reported for the English version [21]. Similarly, test-retest reliability was also very high (r > 0.9) [41] in the present study, emphasizing the stability of the CAS over time in the absence of new events. Further studies are welcome to investigate the sensitivity to change of the French CAS amongst subjects developing new symptoms of anxiety.

Our results show an excellent convergent validity with the GAD-7 (r = 0.81), thus confirming the link between the core symptoms of generalized anxiety disorder evaluated by the GAD-7 and the psychological and somatic symptoms of anxiety explored with the French CAS.

The initial validation of the scale was performed in a mixed population of subjects with and without a clinical diagnosis of anxiety or related disorders [21]. However, the present study differs in that participants were health care workers, and the specific characteristics of this cohort may limit the generalizability of our results; in particular, the high homogeneity of the participant sample could potentially favor high correlation coefficients, thereby creating a limitation in the interpretation of the study results. Nevertheless, our results are consistent with other studies performed in the general population with different scales (HAD and STAI) [28][29][30]. Overall, 11.8% of the participants reached the CAS cut-off for anxiety, which is not dissimilar to studies of the general population of the same age. Further research is needed to investigate the diagnostic validity of this instrument in subjects affected by anxiety disorders.

The overall response rate for the questionnaire was high; however, the use of the scale in a general population not affected by anxiety may also explain the high response rate.

A positive characteristic of the CAS is its self-administration and speed of completion, which allow it to be used on a large scale in clinics. The present study is thus reassuring about the feasibility of using this French version of the CAS. In particular, the time needed to fill in the questionnaire was less than 3 min, confirming that this scale could likely be used in routine clinical practice by general practitioners as well as in hospital settings, similarly to the English version. This is especially interesting when compared to other, more time-consuming instruments such as the HAM-A [26].

Conclusion

The validation of the French version of the CAS allows clinicians to assess anxiety disorders in a quick and efficient manner. The instrument is well accepted and could be included in routine clinical practice. Further studies are needed to clinically validate this scale in different populations, including in older patients.

Table 1 Rotated Varimax factor structure of the French version of the CAS
2024-01-20T14:07:14.609Z
2024-01-19T00:00:00.000
{ "year": 2024, "sha1": "411c3a4d26499db7082081102e98795ef14d6375", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "932417128ed80de379adcfaf49e5c5716b9f6cf6", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
253679130
pes2o/s2orc
v3-fos-license
Design of optimum grillages using layout optimization Grillages are often used to form bridge decks and other constructions. However, following a period of intensive research activity in the 1970s, comparatively little attention has been paid to optimizing the layout of grillages in recent years. In this contribution a new numerical procedure is proposed which takes advantage of the adaptive solution scheme previously developed for truss layout optimization problems, enabling very large scale problems to be solved. A key benefit of the proposed numerical procedure is that it is completely general, and can therefore be applied to problems with arbitrary loading and boundary conditions. Also, unlike some previously proposed procedures, the sizes of individual beams can readily be discerned. To demonstrate its efficacy the numerical procedure is applied to a range of grillage layout design problems, including load dependent problems which could not be solved using traditional methods. It is shown that important phenomena such as "beam-weaves" can be faithfully captured and new high-precision numerical benchmark solutions are provided.

Early work in the 1970s (notably Rozvany 1972a; Lowe and Melchers 1972) viewed the plastic design of grillages in a continuous setting, considering a notional slab comprising an infinite number of fibre-like beams. An optimum fibrous slab can be considered analogous to an in-plane Michell structure, which is well known in the structural optimization research community; further development of the theory of Michell structures (Michell 1904) has been described by workers such as Chan (1967), Hemp (1973) and Lewinski et al. (1994a, b). Also, a numerical means of identifying Michell-type structures using the "ground structure" approach was proposed by Dorn et al. (1964) and further developed by workers such as Gilbert and Tyas (2003), Sokol (2014), and Zegard and Paulino (2014).

Any structural design optimization problem can be posed in either equilibrium (primal) or kinematic (dual) form, where for a grillage the problem variables are usually moments and rotations in the equilibrium and kinematic forms respectively. However, in the paper by Rozvany (1972a) neither the equilibrium nor the kinematic problem formulation is solved directly; instead a displacement-based, fully analytical method of finding the solution of the kinematic problem is proposed, which essentially stems from the stress-strain optimality relation linking the solutions of the equilibrium and kinematic forms. The associated optimization problem is also confined to being applied to fully clamped slabs subject to an arbitrary, though always downward, loading. Significantly, for this class of problem the optimum layout is load-independent. This remarkable feature, combined with Rozvany's kinematic method, provided a means of obtaining universal exact optimum grillage layouts for problems involving downward loads for both single and multiple load cases. It should however be noted that this does not furnish the optimal distribution of beam widths. For this one obviously needs to know the magnitudes of the particular loads involved, and to use the governing equilibrium equation to determine the corresponding optimal bending moment field and thus the beam widths. However, the papers by Rozvany (1972a) and by Lowe and Melchers (1972) do not describe systematic means of recovering the beam width distribution.
In subsequent decades Rozvany's analytical kinematic method was applied to slabs with a range of other boundary conditions, including simply supported edges and combinations of free and simply supported edges, and also simply supported and clamped edges (Rozvany et al. 1973;Rozvany and Hill 1976;Prager and Rozvany 1977;Rozvany and Liebermann 1994). For each of the aforementioned cases the method proved capable only of solving problems involving exclusively downward loading, as it explicitly relies on the fact that the optimum layout is load-independent. The method was then implemented by Hill and Rozvany (1985) in a computer program which allowed automatic generation of analytical optimum layouts for arbitrary polygonal slabs with partially clamped and simply supported boundaries. The authors presented exact optimum layouts for an impressive range of complex polygonal domain shapes. It should be noted that although Rozvany's method is capable of treating interior clamped supports, it cannot account for interior simple supports. This is because uplift may occur if such supports are present, which in turn means that there is no longer a universal kinematic solution, common for all types of downward load. This is also the case if mixed downward / upward loadings are present, or if point moment loadings are present. Less trivially, this also applies to slabs with partially clamped and free edges. The load-sensitivity of many real-world problems encouraged researchers to seek general numerical methods. In the paper by Sigmund et al. (1993) the ground structure method was, apparently for the first time, used to obtain solutions to the grillage compliance minimization problem. (A "ground structure" comprises a network of structural members interconnecting nodes laid out on a grid from which the subset of members defining the optimum structure is sought, after Dorn et al. 1964.) By using the DCOC method in combination with linearly tapering beam finite elements the authors found a number of new grillage layouts, showing solutions for problems involving clamped and free edges; later the method was applied to problems involving mixed downward and upward loadings (Rozvany 1997). Low resolution ground structures were used, in part because of the available computing capabilities of the time. However because the method does not take advantage of modern adaptive solution schemes (e.g. Gilbert and Tyas 2003), the scale of problems that can be tackled even now appears to be limited. More recently Zhou (2009) proposed a method which involved recovering principal moment trajectories, but this is likely to be rather cumbersome in practice. In summary, although the analytical approach initiated by Rozvany and co-workers had reasonably broad applicability, and allowed new insights to be drawn, it left a wide range of grillage optimization problems unsolvable, due to their inherent load sensitivity. Moreover, even for grillage problems which could be solved, the optimum beam width distribution was not identified. In the present paper the authors propose that a ground structure approach is adopted, and that techniques now well established in the field of truss layout optimization (e.g. see Gilbert and Tyas 2003;Sokol 2014) or limit analysis via discontinuity layout optimization (see Smith and Gilbert 2007;Gilbert et al. 2014) are applied. Here a plastic design formulation is used by posing two mutually dual linear programming forms: equilibrium and kinematic. 
The goal is to minimize the volume of material for specified applied loading. This leads to a simple linear formulation which can be used in conjunction with an adaptive solution scheme to solve problems involving ground structures consisting of many million beams, thus generating optimum layouts closely approximating the analytical solutions found by Rozvany et al., though with a far greater range of applicability.

Equilibrium formulation

Consider a ground structure (Dorn et al. 1964) consisting of a design domain discretized using n nodes and b beams, as shown in Fig. 1. For beam i, assume its cross sectional area varies linearly from a_i1 to a_i2. The total volume V of the structure can then be written as

V = l^T a, (1)

where l = ½[l_1, l_1, l_2, l_2, ..., l_b, l_b]^T and a = [a_11, a_12, a_21, a_22, ..., a_b1, a_b2]^T are, respectively, vectors of beam lengths and areas (each beam contributing its mean cross-sectional area multiplied by its length).

Fig. 1 Ground structure for a design domain, in this case a simple square domain discretized using n = 9 nodes and b = 36 interconnecting beams (including overlaps)

For each node, moment equilibrium needs to be enforced in the x and y directions and force equilibrium in the z direction. Denoting m_i1 and m_i2 as the moments at the two ends of beam i, the local equilibrium matrix (2) can be expressed in terms of θ_i, the angle of beam i to the positive x axis, and l_i, its length. Also, assuming that the beam cross-sections are of uniform depth, let m_p^+ and m_p^- denote the limiting moments per unit area. The yield condition of beam i can thus be written as

-m_p^- a_ij ≤ m_ij ≤ m_p^+ a_ij, for j = 1, 2. (3)

The grillage layout optimization problem can therefore be written as

min V = l^T a, subject to nodal equilibrium B m = f, the yield conditions (3) for every beam, and a ≥ 0, (4)

where B is the global equilibrium matrix assembled from the beam contributions, m collects the beam end moments and f the applied nodal moments and forces.

Adaptive solution scheme

When a fully-connected ground structure is used the number of beams b grows rapidly with the number of nodes n, limiting the size of problem that can be solved (since in this case b = n(n − 1)/2). This issue was addressed for truss layout optimization problems by Gilbert and Tyas (2003), who proposed an adaptive solution scheme, later further developed by Sokol (2014). This scheme employs an initial sparsely connected ground structure and uses constraints from the dual problem to check whether the solution could potentially be improved by adding additional members, as part of an iterative process. As problem formulation (4) is very similar to the truss formulation used by e.g. Gilbert and Tyas (2003), the same basic technique can be applied in this case, where the dual formulation of (4) involves maximizing the virtual work

W = f^T u, (5)

where u collects the virtual rotations in the x and y directions and the out-of-plane displacements in the z direction. Also a further constraint (6) must be satisfied, which imposes limits on the maximum and minimum virtual rotation that can occur in each beam. Note that u is obtained automatically after solving (4), and (6) is only guaranteed to be satisfied in beams that are present in (4). This means that potential beams, not currently represented in the problem, may violate (6); in this case the beams most in violation should be added to problem (4) to prevent this violation in the next iteration. The process repeats until no violation is found in (6); for further details of the algorithm readers are referred to Gilbert and Tyas (2003) and Sokol (2014).
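For readers who wish to experiment with formulation (4), the sketch below shows how the LP can be assembled and passed to an off-the-shelf solver once a ground structure has been defined. It assumes that the global equilibrium matrix B, the (half) length vector l and the load vector f have already been constructed for the problem at hand, since these are geometry-specific; it illustrates only the structure of the single load case problem, not the authors' adaptive Mathematica/Matlab implementation.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import eye, hstack, vstack, csr_matrix

def solve_grillage_lp(B, l, f, mp_plus=1.0, mp_minus=1.0):
    """Plastic grillage layout LP: min l.a  s.t.  B m = f,
    -mp_minus*a <= m <= mp_plus*a,  a >= 0.

    B : (n_eq, 2b) equilibrium matrix acting on the end moments m
    l : (2b,) vector of half beam lengths (each l_i/2 repeated twice)
    f : (n_eq,) applied nodal moments (x, y) and forces (z)
    """
    two_b = B.shape[1]
    I = eye(two_b, format="csr")
    Z = csr_matrix((B.shape[0], two_b))

    c = np.concatenate([l, np.zeros(two_b)])          # variables x = [a; m]
    A_eq = hstack([Z, B], format="csr")               # nodal equilibrium
    A_ub = vstack([hstack([-mp_plus * I,  I]),        #  m - mp_plus*a  <= 0
                   hstack([-mp_minus * I, -I])],      # -m - mp_minus*a <= 0
                  format="csr")
    b_ub = np.zeros(2 * two_b)
    bounds = [(0, None)] * two_b + [(None, None)] * two_b

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=f,
                  bounds=bounds, method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    a, m = res.x[:two_b], res.x[two_b:]
    return res.fun, a, m    # optimum volume, end areas, end moments
```

An adaptive ("member adding") outer loop would wrap this call, adding to B only those candidate beams whose dual (rotation) constraints are violated at the current solution.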
Commentary

The grillages considered herein are assumed to be rigid-jointed, but with torsional resistance neglected. This assumption, also made by Sigmund et al. (1993), is justified by the fact that for most cross-sections used in practice the torsional resistance is low compared with the bending resistance. This is particularly true for open cross-sections, such as I-beams. In the latter case, by varying only the flange width the linearity of the formulation is preserved.

Now consider beam i such that m_i1 · m_i2 < 0, i.e. the bending moment changes sign across the length of the beam. Notice that restricting the cross sectional area function to vary linearly from a_i1 = |m_i1|/m_p to a_i2 = |m_i2|/m_p will not lead to an optimal beam being generated in this case, since each intermediate point along the length of the beam is overdesigned (e.g. consider the intermediate point where the bending moment vanishes). This can potentially be addressed in two ways: (i) via use of a nonlinear relation for the grillage volume (1) (see Bolbotowski 2018); (ii) ensuring that a large number of nodes and interconnecting beams are employed in the problem, such that any inaccuracy is small. A drawback with (i) is that it requires the use of computationally expensive non-linear optimizers and thus (ii) is adopted here. However, note that the single load case plastic design problem considered here is only equivalent to the corresponding compliance minimization problem when (i) is adopted; the same holds for the grillage-like continuum addressed by Rozvany et al.

The above argument suggests that, when m_p^+ = m_p^- = m_p, the volume V in the objective of (4) approximates to the scaled (by m_p) integral of the absolute value of the bending moment diagram taken over the grillage. Thus when a high resolution ground structure is adopted the equilibrium form (4) can be viewed as a discrete version of the continuous problem addressed in Rozvany (1972a), Save and Prager (1985) and others.

In the field of truss optimization it is well established that when a single load case is involved there must always exist a statically determinate optimum truss solution; see for example Achtziger (1997). The simplest proof of this statement revolves around the existence of a basic solution to the underlying LP problem. As this also applies to the grillage optimization problem (4), it follows that there must always exist a statically determinate optimum grillage.

Finally, although thus far attention has focussed on single load case problems, it is well known that plastic truss layout optimization can be extended to treat multiple load case problems (e.g. see Hemp 1973), and, though beyond the scope of the present contribution, it is worth pointing out that the grillage layout optimization formulation described herein can be similarly extended.

Numerical examples

The proposed numerical method was programmed independently in both Mathematica 11.1.0.0 and Matlab 2015a, respectively using the default Mathematica solver and Mosek 7 to obtain solutions to the LP problems involved. In all cases tried the results obtained from the two programs were identical for all quoted significant figures, though for the larger problems the Matlab / Mosek combination was favoured due to lower associated run times. All quoted CPU times are single core values obtained using a workstation equipped with Intel Xeon E5-2680v2 processors running 64-bit CENTOS Linux.

The efficacy of the method is demonstrated through application to a range of numerical example problems. For the sake of simplicity, beam moment capacities were in all cases taken to be equal for sagging and hogging, i.e. m_p^+ = m_p^- = m_p, and nodes were evenly distributed over each problem domain, with pressure loads approximated using point loads applied at these nodes. In this case the magnitude of each point load was calculated by taking into account the area of the surrounding domain, taking the load applied at an intermediate node along an edge to be half that applied at an interior node, and the load applied at a corner node as one quarter. For example, considering the domain shown in Fig. 1, and assuming a uniform pressure load of total magnitude pL^2 is applied, the loads applied to nodes A, B and I would be pL^2/16, pL^2/8 and pL^2/4 respectively. Note that because of the presence of supports the grillage designed would in this case only need to carry a load of pL^2/4, leading to an underestimate in the volume of material required.
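The tributary-area lumping rule just described is straightforward to implement for a uniform pressure on a square domain with a regular grid of nodes; a minimal illustrative version is sketched below (the function name and interface are arbitrary, not part of the authors' code).

```python
import numpy as np

def nodal_loads(p: float, L: float, d: int) -> np.ndarray:
    """Lump a uniform pressure p over an L x L domain onto a (d+1) x (d+1)
    grid of equally spaced nodes: interior nodes receive a full cell's worth
    of load, edge nodes half and corner nodes a quarter."""
    cell = (L / d) ** 2
    w = np.ones((d + 1, d + 1))
    w[0, :] = w[-1, :] = w[:, 0] = w[:, -1] = 0.5
    w[0, 0] = w[0, -1] = w[-1, 0] = w[-1, -1] = 0.25
    return p * cell * w

F = nodal_loads(p=1.0, L=1.0, d=2)       # the n = 9 node grid of Fig. 1
print(F)                                  # corners 1/16, edges 1/8, centre 1/4
assert np.isclose(F.sum(), 1.0)           # lumped loads recover the total p*L^2
```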
To address this load discretization error, and also the nodal discretization error that limits the range of layouts that can be identified, and hence tends to overestimate the required volume of material, most problems described were solved using a sequence of increasing nodal divisions, enabling approximations of the exact values to be obtained via extrapolation (see Appendix A for details); these latter approximations are quoted in the main text, whilst tabulated results are presented in Tables 1-3 of Appendix A. However, in the interests of visual clarity the graphical results presented correspond to problems with a moderate number of nodal divisions. Beams are drawn in blue and red to indicate sagging and hogging respectively in the graphical solutions, with line widths proportional to beam cross-sectional areas. In the interests of visual clarity, beams with very small cross-sectional areas have been filtered out. Symbols used in the paper are illustrated in Fig. 2.

The symbols used by e.g. Rozvany (1972a) for region type are used herein to describe the analytical optimum layouts; see for example Fig. 3a. Specifically, a design domain can be divided into regions, where each region is labelled to indicate the optimum directions of beams of possibly non-zero cross section, and whether sagging ("+" symbol) or hogging ("−" symbol) moments are involved. The circle symbols denote so-called "indeterminate regions", where the optimum beam direction is arbitrary. Due to the presence of indeterminate regions it was found that the numerical layouts obtained often became complex in form. This is because the interior point method used to solve the underlying LP problem (4) will normally identify a solution that combines all possible designs, as in Fig. 4b. To address this, the length vector l can be modified by adding a constant joint cost / length (j = ±10^-6 unit length), i.e. each beam length l_i is replaced by l_i + j. Joint costs were first introduced by Parkes (1978) as a simple means of rationalizing optimum trusses. Here a very small joint cost is used to ensure the numerical layout is pushed towards a basic LP solution, to increase visual clarity. Furthermore, numerical tests showed that the clearest visual results could be obtained when j is taken as a small positive value for hogging beams and a small negative value for sagging beams. Notwithstanding this, all optimum volumes presented herein were computed without employing a joint cost.

Benchmark examples

A range of example problems are presented, starting with problems for which closed-form analytical solutions are available. It should however be noted that although countless analytical optimum layouts were presented in the papers by Rozvany et al., optimum volume values were rarely quoted.
This is due to the fact that loads were generally not specified, since the layouts derived had universal applicability for arbitrary (downward) load. However, since the optimum displacement field can be recovered from an analytical layout, the optimum volume V can be computed from the virtual work W done by a given load, although the process can be laborious and hence analytical volumes will only be provided for selected example problems.

Square domain with simple supports

The first example considered herein involves a square design domain with simple supports, as shown in Fig. 3a. This problem is one of the oldest and simplest to derive analytically, e.g. see Morley (1966). The solution is optimum for arbitrary downward load; one square R++ region is present along with four triangular R+− regions. For comparison, an optimum layout for a uniform pressure load was generated via the new layout optimization method, see Fig. 3b. Since the R++ region is indeterminate the numerical solution presented is in fact one of an infinite number of possibilities, where here the pressure load is transferred to two beams of significantly larger cross section. The four triangular regions appear to be of R+ type rather than R+− as proposed analytically, since only sagging beams are present, with the orthogonal hogging beams vanishing. This apparent discrepancy, along with other subtle issues associated with numerical layout optimization of grillages, will be considered in the next section.

Fig. 3 Square domain with simple supports: a optimum layout derived analytically by e.g. Morley (1966)

Square domain with clamped supports at corners

The second example involves a square design domain with clamped supports at the corners, as shown in Fig. 4. This serves to illustrate the effect of the joint cost used to rationalize the solution. The analytical layout shown in Fig. 4a is proposed based on the approach described by Rozvany (1972b) for arbitrary downward load. Aside from four R+− regions, the optimum grillage comprises five indeterminate regions: four R−− regions and a single R++ region.

Fig. 4 Square domain with clamped supports at corners: a analytical optimum layout according to Rozvany (1972b); new result obtained by numerical layout optimization for uniform pressure load: b without joint cost

For comparison, an optimum layout for a uniform pressure load was generated by layout optimization, initially without using a joint cost, as shown in Fig. 4b. The numerical representation of the R+− regions coincides perfectly with the analytical design, whereas the indeterminate regions involve numerous overlapping beams in different orientations, thus rendering the numerical solution of little practical value. However, by re-running the problem with joint costs the solution is greatly simplified, as shown in Fig. 4c. The R++ region is now transformed to a regular grid and the R−− regions to cantilever fans radiating out from each of the four point supports; similar fans will be observed in the vicinities of clamped point supports (or concave corners of supported boundaries) in subsequent examples. However, it is evident that the introduction of a joint cost has appeared to transform the R+− regions into R+ regions, as occurred in the previous example. This is because the optimum beam width distribution is not necessarily unique for a given applied load. Thus in the example shown in Fig. 4
the particular representation of the indeterminate square R++ region influences whether or not hogging beams are present in the adjoining R+− region. However, the lack of a unique beam width distribution can also be demonstrated in simpler problems, without R++ or R−− regions. For example, consider the case of two opposing cantilever beams of equal length subjected to a shared load at their tips; also Fig. 12 serves as a further example. Although a statically determinate optimum grillage is guaranteed to exist, the structure shown in Fig. 4b is clearly not statically determinate, due to the non-uniqueness of the solution. The use of a joint cost does not necessarily fully remedy this, e.g. see Fig. 4c.

Square domain with external clamped and interior point supports

The next example involves a square design domain with clamped external supports and four interior clamped point supports. In the paper by Rozvany et al. (1973) the problem was deemed load-independent and consequently an analytical layout was given for all downward loads; see Fig. 5a, where sagging or hogging indeterminate regions are depicted in solid ink. Figure 5b shows the new numerical solution obtained, assuming a uniform pressure load is applied; numerical results are given in Table 1 of Appendix A, with the number of adaptive member adding iterations required to obtain a solution for a given nodal discretization shown together with associated CPU times.

Square domain with four column supports

The next example involves a square design domain with free external boundaries and four supporting columns in the interior, represented by square clamped supports. Assuming arbitrary downward load, Fig. 6a shows the optimum layout derived analytically by Rozvany (1972a) for this particular problem. An analytical solution u of the kinematic form can be uniquely derived based solely on the layout from Fig. 6a, which is independent of the load, provided this is always downwards. For example, if a uniform pressure load of magnitude p is applied then, based on duality arguments, the exact volume V_exact of the optimum grillage can be computed from the virtual work done by the load as follows:

V_exact = ∫_Ω p u dΩ, (7)

where Ω denotes the design domain. Both the function u and the integral are computed in Appendix B. The numerical solution for this case is shown in Fig. 6b; the optimum volume V_num = 13.33pb^4/m_p is derived from the values tabulated in Table 2 of Appendix A. The close correlation between the numerical and analytical solutions is clearly evident, both in terms of computed volume and grillage layout. Note that for this example only two adaptive member adding iterations are required to obtain a solution (see Table 2 in Appendix A) because the initial ground structure already contains most critical members; this also leads to relatively low associated CPU times.

Domains with free and simply supported edges

Problems involving domains with both free and simply supported edges were investigated by Rozvany and Liebermann (1994). The associated optimization problems were challenging mathematically, with the formulas describing the optimum directions of the beams given in implicit integral form, which had to be solved numerically. Here a right-angled isosceles triangle domain with a simply supported base edge and a simple point support in the right-angle corner is considered, as shown in Fig. 7. The optimum grillage layout for this problem found by Rozvany and Liebermann (1994) for arbitrary downward loading is given in Fig. 7a.
Note that the layout is not trivial, as the beams do not radiate from the simple point support. A numerical solution was obtained for the uniform downward pressure load case and is presented in Fig. 7b. The close resemblance between the analytical and numerical solutions is clear.

Fig. 7 Triangular domain with free and simply supported edges: a optimum layout derived analytically by Rozvany and Liebermann (1994); b new result obtained by numerical layout optimization for a uniform pressure load, V_num = 0.09505ph^4/m_p

Square domain problem

The next example involves a square domain comprising two clamped and two free edges, as shown in Fig. 8a. This problem was previously considered by Rozvany (1972b), though an exact analytical solution was not derived (even then the class of optimum grillage problems for clamped / free boundary conditions was recognized as being difficult). The same problem was revisited by Sigmund et al. (1993), who presented numerical solutions obtained using a ground structure-based approach combined with FEM. Despite the insights these generated, a general analytical method for grillages with clamped and free edges has still not been found, highlighting a clear gap in the grillage optimization theory developed by Rozvany et al. In Fig. 8 a range of problems are solved, for cases involving point and pressure loads.

Domain with hole problem

The next example involves a domain with a hole and free and clamped edges, as shown in Fig. 9a. Solutions were obtained for three different loading scenarios, involving either point or pressure loads. The optimum layouts for the problems involving point loads, presented in Fig. 9b,c, are perhaps of particular interest since they clearly indicate how the load finds its way through an optimum grillage around the hole back to the supports.

Uplift effect

One of the limitations of the computer software tool produced by Hill and Rozvany (1985) was that it could not model internal simple supports because of the potential for uplift, rendering the optimum grillage layout dependent on the position of the load(s) involved. An example is shown in Fig. 10. In this case the internal support divides the design domain into two parts: one subjected to pressure load p1, and the other to p2. When p1 and p2 are both applied, the optimum grillage layout is shown in Fig. 10b. If only one of them is applied, different results are obtained, as shown in Fig. 10c and d, indicating the load-dependent nature of the problem. Uplift effects are present in the problems shown in Fig. 10b and d, where in (b) the load p1 effectively cancels out some of the bending effects caused by p2, leading to a lower volume than in (d).

Partially downward and partially upward load

Applying mixed downward and upward loads yields yet another class of problem for which load-independent optimal layouts cannot be found. Analytical results for a modest range of such problems were published in a short paper by Rozvany (1997); however, a general method of treating such problems analytically has not yet been developed. The example presented in Fig. 11 gives a good insight into the nature of such problems; here two point loads are to be transferred to four simple point supports. Figure 11a shows the solution for all-downward load; in this case two separate simply supported beams of total volume V_dd = 0.25PL^2/m_p prove to be optimal. However, the optimum layout shown in Fig. 11b for the case when one of the loads is upward clearly involves interaction between the two forces, thus considerably reducing the optimum volume, to V_du = 0.1875PL^2/m_p.
Fig. 8 Square domain with two free and two clamped edges: a problem definition (domain has dimensions L×L, with point loads applied on the diagonal); b P1 = P, P2 = 0, V1 = 0.1408PL^2/m_p (which can be shown to coincide with the exact solution); c P1 = 0, P2 = P, V2 = 0.4093PL^2/m_p; d P1 = P, P2 = P, V1+2 = 0.5315PL^2/m_p; e uniform pressure load p = P/L^2, V = 0.07067PL^2/m_p; f "beam-weave" phenomenon

Point moment load

Formulation (4) permits point moments to be applied directly, thus yielding another class of load-dependent problem. An illustrative example involving a rectangular domain with a point moment load remote from a support is shown in Fig. 12; domains of constant width and varying height are considered. The key observation is that, despite filling the entire height of the domain with an optimum layout, the optimum volume remains constant, at V = ML/m_p. This indicates the indeterminacy of the layout in each case (since e.g. design (c) is also a viable solution to problems (a) and (b)).

Non-optimal design of beams with end moments of different signs

As mentioned in section 2.3, the solutions obtained via the proposed numerical method will overestimate the true solution in cases where the bending moment function changes in sign along the length of one or more beams. However, none of the optimum layouts presented in section 3 contained any beams where this was the case. Numerical experiments involving other problems showed that when such beams were present, use of a higher nodal refinement remedied this. This is to be expected, since two shorter beams can always be chosen to meet at the point of contraflexure in a long beam, at least approximately.

Load dependent layouts in grillage optimization

The class of grillage optimization problems solved fully and analytically by Rozvany et al. share an essential property: independence of the layout from the load, providing the latter is always applied in a downward direction. This means that there is a displacement vector u that solves dual problem (5) for every downward load, or alternatively, that there exists a displacement vector u that maximizes the out-of-plane displacement of every point (node) simultaneously. In contrast, the optimal layouts for the problems considered in Sections 3.2 to 3.5 are load dependent, and there are currently no analytical methods that can be applied to such problems.

In this context it is worth revisiting the triangular domain problem initially investigated in Section 3.1.5. According to Rozvany and Liebermann (1994) the analytical layout shown in Fig. 7a should be universal for all downward loads. However, this can be checked by using the numerical method developed herein to explore a range of different loading scenarios. Thus consider for example the case of Fig. 13a. Comparing Fig. 13a with Fig. 7a it is evident that the beam directions differ, suggesting that this problem is not load independent after all. To verify this finding the theory of grillage-like slabs can be invoked; see e.g. Rozvany (1972a). With the given point load P, duality theorems can be used to show that the analytically derived layout of Fig. 7a yields a lower bound volume V ≈ 0.18Ph^2/m_p. Conversely, the numerical solution of Fig. 13a
is associated with a one-line bending moment field M that furnishes an upper bound volume V = 0.25Ph^2/m_p. In order to prove that the exact volume V_exact = V, and the associated exact moment field M_exact = M, it is sufficient to guess a displacement function u such that the curvature constraints are met and the optimality relation between M and u holds, i.e.:

- the principal curvatures κ_I, κ_II produced by u satisfy the point-wise inequalities −1/m_p ≤ κ_I, κ_II ≤ +1/m_p; and
- the left free edge is one of the principal trajectories of the curvature field κ, and the principal curvature κ_I is equal to +1/m_p along this edge.

Naturally the function u must also satisfy the support conditions. It can easily be verified that the function in question can be given by an extremely simple closed-form expression (8) in terms of the Cartesian coordinates x, y indicated in Fig. 13a, which implies M_exact = M, u_exact = u and V_exact = V = 0.25Ph^2/m_p, and further that the numerical solution given in Fig. 13a is in fact the exact solution for the grillage-like slab problem with a single point load. The u field resembles a slab being twisted around the y axis, as shown in Fig. 13b. In fact, the same field u is also found when a point moment is applied at the point support; Fig. 13c shows the corresponding layout. (It now becomes clear that a function of the form of (8) also furnishes a solution to the problem described in Section 3.5.)

It is also of interest to now consider the case of two point loads applied symmetrically midway along each of the free edges; this yields a volume V = 0.354Ph^2/m_p and the layout shown in Fig. 13d. Here the optimum layout appears to be inscribed within the analytical layout proposed by Rozvany and Liebermann (1994); see Fig. 7a. Note that the optimum volume is considerably smaller than double the volume of the one-beam solution. However, if the magnitudes of the applied loads are changed, a non-symmetrical numerical layout is obtained; see Fig. 13e. Here the orientation of the sagging beams forming the fans noticeably diverges from the analytical layout shown in Fig. 7a.

These numerical experiments, together with the analytically proposed function u, ultimately show the load-dependence of the triangular domain problem initially investigated in Section 3.1.5. The load-dependence appears to be due to the same uplift effect that occurs in the problems considered in Section 3.3, where in that case the axis of uplift was an internal line of simple support. In the triangular domain problem every straight line passing through the point support and the interior of the domain is a potential uplift axis. From this argument it can be concluded that the presence of a simple point support, either placed on the boundary or in the interior of the design domain, is likely to lead to load-dependence in the grillage optimization problem. Note that the triangular domain problem was the only example given in Rozvany and Liebermann (1994) that considered a simple point support; the authors' focus was originally a class of problems with free and simply supported edges only, so all other optimum layouts derived therein can be assumed to be truly load-independent and hence correct.

Finally, suppose that the triangular domain of this problem is transformed into a trapezium, to allow the point support to be replaced with a very short simply supported edge. The solution when a single point load is applied midway along the free edge is shown in Fig. 13f.
It is evident that the new optimum layout now appears to be in agreement with the analytical layout derived by Rozvany and Liebermann (1994). This suggests that the anomaly in this case stemmed from an assumption that an infinitely short line of simple support could be taken to be equivalent to a point support. However, the former prevents rotation about the y axis, and allows a reaction moment about the same axis to be generated. This appears to be crucial in order for the optimum grillage to comprise beams which coincide with the analytical solution shown in Fig. 7a. Beam-weave phenomenon and including torsion In the solutions shown in e.g. Fig. 8e,f a thin region of orthogonally intersecting sagging and hogging beams occurs along each free edge. This phenomenon has previously been identified in the optimum grillage layouts found analytically for problems involving mixed free and simply supported edges (Rozvany and Liebermann 1994); the result from Fig. 7 with R +− -type free edges serves as an example. This "beam-weave", as it was called therein, is particularly difficult to approximate using the ground structure approach since, theoretically, it is supposed to be infinitely thin. Similarly, a beam-weave turns out to be an optimum means of transferring load along free edges; e.g., see Fig. 9. Here the role of the beam-weave becomes more apparent; essentially it attempts to mimic a single beam capable of transferring torsion. Consequently, one can observe that limiting the height of the domain in the problem shown in Fig. 12 would provide an infinitely thin, beamweave-like design which is essentially equivalent to a single member in pure torsion. The above suggests that it may be worthwhile to include torsion in the problem formulation after all, since the beamweave regions degrade the quality of the numerical layouts. However, the underlying problem formulation then becomes nonlinear, and means of obtaining a suitable linearized approximation of the problem will be the subject of future research. Conclusions A new numerical layout optimization method capable of identifying the minimum volume and associated optimal layout of a grillage has been proposed. Beam members which are tapered along their lengths between nodes have been employed to maintain the linear character of the problem. This means that highly efficient linear programming algorithms can be used to obtain solutions, with the adaptive "member adding" technique previously applied to truss layout problems enabling solution of large-scale problems, containing large numbers of nodes and interconnecting members. A key feature of the new method is its generality; it can be applied to problems involving arbitrary domain geometries and loading and support configurations, and can faithfully capture important phenomena such as "beam-weaves", which provide resistance to torsion when individual beams have negligible torsional resistance. When applied to problems for which exact analytical solutions exist it has been found that close approximations of these solutions can be found. However, analytical methods developed to date by workers such as Rozvany et al. can only be applied to problems for which the optimal layout is independent of loading. Thus the proposed method has also been applied to a range of load dependent problems, for which analytical solutions are currently not available. 
Interestingly, the new method revealed that one problem in the literature which had been thought to be load-independent (providing the load was always applied in a downward direction) is in fact load-dependent, rendering the proposed analytical solution less generally applicable than previously thought. Acknowledgements The first author would like to thank the National Science Centre (Poland) for financing the Research Grant no. 2015/19/N/ST8/00474 entitled: Topology optimization of thin elastic shells - a method synthesizing shape and free material design. The second and third authors acknowledge the funding provided by the UK Engineering and Physical Sciences Research Council, under grant no. EP/N023471/1. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Appendix A: Computing extrapolated volumes As described in Darwich et al. (2010), numerical solutions obtained from numerical layout optimization runs appear to follow a relation of the form (9), where V_n is the numerically computed volume for n equally spaced nodal divisions, V_∞ is the volume when n → ∞, and k and α are constants. Using (9), a weighted least-squares approach can be used to find the best-fit values for V_∞, k and α, with the weighting coefficient taken as n. Numerical solutions are given in Tables 1-3. By duality arguments one can compute the volume of the optimum grillage (grillage-like slab) from an equality involving u, an out-of-plane displacement function that solves the dual displacement form, and p, a load function. As implied in Rozvany (1972a), the four-column slab problem considered in Section 3.1.4 enjoys a solution u that is independent of the load p provided the latter is always downwards. The analytical layout given in this paper and presented in Fig. 6a furnishes the principal trajectories of the curvature field κ associated with u, with principal curvatures equal to ±1/m_p. To facilitate comparison of the volume V_exact with numerical results, the load p is assumed to be uniformly distributed. Now, by making use of information on the curvature function, the solution u can be recovered region by region; in addition the function u must be continuous and continuously differentiable at each point of the domain. As the layout is bisymmetrical, 1/4 of the domain is considered; for the region partition and coordinate systems see Fig. 14. For the sake of simplicity m_p is taken as unity.
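The two display equations referred to in this appendix (the extrapolation relation (9) and the duality equality) did not survive text extraction. As a hedged reconstruction, consistent with the surrounding definitions but to be checked against the published paper, they presumably take the forms

    V_n = V_∞ + k n^(−α)        (presumed form of the extrapolation relation (9))
    V = ∫_Ω p u dA              (presumed form of the duality equality over the design domain Ω)

Under this reading, the weighted least-squares fit described above estimates V_∞, k and α from the (n, V_n) pairs reported in Tables 1-3, and the region-by-region recovery of u described in the appendix is what allows the exact volume to be evaluated from the second relation for a uniformly distributed load p, with m_p entering through the curvature bounds ±1/m_p imposed on u.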
Experiences and Attitudes of People with HIV/AIDS: A Systematic Review of Qualitative Studies The aim of this article was to explore the experiences and attitudes of people with HIV/AIDS. A systematic review of qualitative studies was carried out. Twenty-seven articles were included, with sample sizes ranging from 3 to 78. Articles from North America, South America, Central America, Europe, and Africa were included. Five topics emerged from the synthesis: feelings about the diagnosis of HIV/AIDS; stigma and HIV/AIDS; changes in sexual behavior after becoming infected; living with the virus; and pregnancy and motherhood in seropositive women. The moment of diagnosis is of vital importance for these people due to feelings such as disappointment, sadness, fear, despair, lack of awareness, and pain. Social support is highly valued among these people and is linked to an improvement in these peoples’ quality of life. Different kinds of stigma accompany people with HIV/AIDS throughout their life, like social stigma, self-stigma, and health professionals’ stigma. Seropositive women who decide to become mothers can feel frustration because they cannot breastfeed. Spirituality helps some people to deal with the fact of being a virus or disease carrier. Introduction HIV is one of the main problems with regard to public health, with greater representation in developing countries [1]. The most affected region is Africa, where almost two thirds of new HIV infections can be found [2]. Worldwide, amongst the population with HIV, 54% of adults and 43% of children are currently being treated with antiretroviral therapy, with the global coverage of these medications for pregnant women or for women who are breastfeeding being approximately 75%. Being seropositive or having the disease tends to occur in several stages, among which are: the stage of diagnosis, where the person is normally in shock; and the stage of acceptance (positive adaptation) or denial (negative adaptation) [3]. From the beginning of the disease, when it was labelled the so-called "gay-syndrome", it was characterized by a huge burden of discrimination [4]. This stigma is a negative element that limits the individual's adaptation to the disease/seropositivity, as well as complicating the management and treatment of the disease; it also creates difficulties in the relationship with the population in general, and with health care professionals [5]. Currently, there are global proposals, such as the well-known 90-90-90, the Joint United Nations Program on HIV/AIDS [6], whose aims include raising awareness of the HIV/AIDS epidemic. This proposal, through an ambitious project, suggests that, by 2020, 90% of seropositive people must be diagnosed in the world, 90% of them must be treated, and 90% of them must be free of viral load. It also suggests that different governments and policies join forces to control the disease. In this sense, this research aims to explore the disease from the individual's own perspective and, therefore, to contribute to making the daily life of people who suffer from it visible. The purpose of this review was to explore the experiences and attitudes of people with HIV/AIDS. We consider that knowledge-based outcomes from this study can help improve decision-making on health strategies to cope with HIV, and also guide future research on the topic. Materials and Methods A systematic review of qualitative studies was developed. Our research included studies published in Spanish, Portuguese and English. 
We selected original articles oriented toward qualitative methodologies, whose interest of study was to explore the perspective of people with HIV/AIDS. Studies regarding a pediatric population or focused on adolescence were excluded. We intended to show a broad view regarding the phenomenon under study, which incorporated works from a wide geographical context. That is why different search sources and databases were used, such as CINAHL, PubMed, Lilacs, Cuiden and Google Scholar. In the same way, descriptors from MESH, CINAHL/MeSH, Subject Headings and DeCS (for the Spanish Language) were employed, in addition to non-standardized terms. The English terms that we used were: Human Immunodeficiency Virus, HIV, AIDS, qualitative research or studies; whereas the Spanish terms were: VIH, SIDA, cualitativo. The search was conducted from January to March 2019, including publications until 2018. The oldest article included in the review was published in 2004. Search. Different search strings were designed with thematic, main, and free descriptors for the different databases, including a dedicated search string for the PubMed database. Initially, duplicate studies were excluded and after that, a screening process took place based on: (1) title; (2) abstract; and (3) the full text. The discrepancies regarding article selection were solved by consensus. Finally, the articles' methodological quality was evaluated. Hence, 27 articles remained in this review. Figure 1 shows the flowchart. Critical Evaluation. The articles which we considered to be relevant after reading the full text were evaluated via peer review. An evaluation of these articles' methodological quality was conducted through the CASPe program for qualitative research [7]. The items included in this guide are "present", "doubtful" and "not on record". Therefore, some eligibility criteria were proposed; first, that none of the elimination items has "not on record" (1,2,3), and also that the rest of the items do not have four or more "doubtful" or "not on record" (4,5,6,7,8,9). Item number 10 was not evaluated because it does not focus on the applicability of this research in concrete situations, which exceeds the discoveries' methodological evaluation and evaluation of relevance. The results of this phase are shown in Supplementary File 2. Despite the fact that 26 articles passed the quality evaluation according to the proposed criteria, we decided to include an article that did not pass it because of the relevance of its results in relation to the aim of the research. On this matter, the recommendations regarding the synthesis procedures of qualitative studies were followed, which suggest giving priority to the discoveries' quality in the article selection [8]. Hence, the final number of articles included in the systematic review was 27. Data Extraction. Relevant data (author/s; participants' country, number and other features: man or woman (mothers or pregnant); type of research; qualitative approach chosen in the research and field where it was developed, understanding "Hospital" as any hospital field and "Community" as any association, advice or monitoring clinic or health center) were extracted by the main author of this review and verified by the rest of the authors. Data Analysis. After repeated reading of the articles, we carried out the narrative synthesis.
It consisted of joining the information by means of common topics, creating in this way different categories and subcategories when necessary. The results are presented here, taking the identified topics as the core idea, describing the main discoveries of the different studies in an integrated manner and incorporating, for some discoveries, direct quotes from the studies' informant participants to show evidence of their narratives. This research is consistent with the guide "Enhancing Transparency in Reporting the Synthesis of Qualitative Research" [9] with the purpose of giving uniformity to the publications of qualitative synthesis studies. Results Of the 27 studies that met the inclusion criteria, sample sizes ranged from 3 to 78, with participants coming from Canada, Ireland, Spain, Kenya, Malawi, Ghana, Ethiopia, South Africa, Brazil, Chile, Peru and Mexico. Moreover, 63% of the participants came from the community field, whereas 27% belonged to the hospital one. All of the articles used the interview in different forms, and, furthermore, only two of them used focus groups among their methods of data collection (Table 1). Five topics emerged after the narrative synthesis: feelings about the diagnosis of HIV/AIDS; stigma and HIV/AIDS; changes in sexual behavior after becoming infected; living with the virus; pregnancy and motherhood in seropositive women (Table 2). Note: We opted to unify the references by mentioning the first and the second author, if there were two authors. From three authors or more, the first is mentioned and we add "et al." for the rest. Note 1: "" = Article included in the category. Source: own elaboration. Feelings about the Diagnosis of HIV/AIDS This category was identified in 33.33% of the articles, which includes the feelings that the study participants experienced after the diagnosis of HIV/AIDS, as well as the different attitudes that they took to face the situation. The feelings we can highlight are disappointment, sadness, fear, despair, lack of awareness, and pain. In some cases, these emotions lead to depression, or they might intensify it. Hence, feelings of frustration might appear as well due to not achieving the targets that the subjects have set in their life [16]. After the diagnosis of HIV/AIDS, it was highlighted that people were afraid of being alone, because the lack of awareness about it causes social exclusion toward the people infected. The acceptance of the diagnosis is difficult; however, it depends on the cultural and social traits of the person.
In some cases, people opt for submission to the diagnosis and its consequences or for conformism. In some cases, they accept that the risky practices they have engaged in during their life have resulted in the fact that they are carriers of the virus and they accept their mistake. One participant of the research by Carrasco et al. [15] expresses it in this way: "...when I was informed that I suffered from HIV, I felt an extremely huge sorrow, an extremely huge helplessness, but, at the same time, I was very calm because I admitted and accepted the mistake that I had made when I did not take care of myself...". However, it should be noted that there are many ways in which a person could become infected (transmitted from an HIV+ mother during birth, blood transfusion), so it does not follow that each person infected did something wrong or made a mistake. Furthermore, even if people felt that they had made a mistake, it seems unlikely that every one of them would reach a stage where they all "accept their mistake". Many women discover their HIV status (seropositive) in prenatal care or when the children they have had get sick. This leads to an intensification of all of the feelings previously described and, above them, the fear of transmission to their children, in cases where they are pregnant. After the diagnosis, we can notice in the articles' results that the advice on and treatment of HIV/AIDS helped participants to accept their situation, avoiding in this way feelings of hopelessness or exclusion. Moreover, they helped to increase their responsibility in regard to self-care, which guarantees longevity and the fact of trying to lead a normal life. Stigma and HIV/AIDS This category was identified in 44.44% of the articles, which was, in turn, divided into three subcategories: social stigma, self-stigma and health professionals' stigma. Social Stigma Social stigma is linked to family stigma; that is, the feeling of prejudice against people with HIV/AIDS which, in many cases, results in the social exclusion of the people who suffer from it. Social stigma is a common issue that people infected with the virus suffer, although it is more highlighted in developing countries. For instance, in these countries, women do not undergo a diagnostic test for fear of being judged by people. Therefore, they have to go to other villages to undergo the test because they are afraid of being isolated from the community. Another characteristic of these countries is that women are the ones who undergo these tests, so that men can blame them (even if they are the main carriers) when they are HIV-positive. Seropositive people are still labelled and judged every day, being treated as promiscuous, homosexual or less honorable, which is linked in many cases to the virus being transmitted through sexual contact. HIV-positive people or people with AIDS make new circles with infected people because they feel free of any judgement. This results in the fact that these people close old social or family circles and they do not divulge the diagnosis. Stigma causes a serologic silence as a means of protection, as well as to avoid discrimination and prejudice. When the serological status is revealed to close people or to relatives, people infected with HIV feel liberated and, generally, accepted. Nevertheless, there are some families that prefer this news to be kept in the privacy of their home to avoid being judged by close people, such as their neighbors.
This family acceptance has an influence on the increase in the quality of life, acceptance of the virus/disease and better adherence to antiretroviral therapy in HIV-positive people. In the case of homosexual people with the virus, they frequently suffer the so-called "double stigma": one because of their sexual orientation, and another because of being seropositive. Self-Stigma Most HIV-positive people or people with AIDS, besides suffering discrimination and prejudice by others, also suffer these feelings toward themselves. The fear of transmitting the disease to their relatives or to people in their environment is a common feeling in these people. They even go so far as to take exaggerated hygiene measures or to use different pieces of cutlery to the rest of the family. They also experience feelings of guilt and embarrassment, as well as the belief that the disease is a divine punishment because of their risky behavior some time ago. In the research by Peñarrieta de Córdova et al. [31], one participant stated that: "...every bad act leads to a bad consequence... It is the price that I am paying because of everything that I have done...". The feeling of being useless, not respectable to society, and undesirable to other people causes social isolation and the retirement of social circles. Health Professionals' Stigma Some seropositive people take the views of the health professionals who treat them as a reference point because of all the knowledge which they have concerning health. That is why some of these professionals' practices or attitudes can make people with the virus internalize the discriminatory behaviors that some professionals carry out. In some cases, the fact that health professionals take additional measures as extra-safety precautions during procedures, when they provide clinical care and treatment, is mentioned. This is obvious from the clinical safety point of view; however, participants might misunderstand it in terms of stigma. Although the patients reveal that they felt more singled out and judged by these professionals in the past, they still sometimes perceive it in their clinical care. Other results make reference to the opposite. Health professionals support and reinforce people with HIV/AIDS, which leads to a better adherence to the therapy and to the fact that these professionals become people with whom they can relieve their feelings. This support is more emphasized and necessary when people know they are carriers, because of the psychological impact that this entails. Changes in Sexual Behavior after Becoming Infected This issue is addressed in 37% of the articles included in this review, so that the main changes, measures and attitudes regarding the sexual behaviors of people with HIV/AIDS are presented. Feelings of anxiety when talking about this matter, insecurity, fears caused by the possible refusal of the others when becoming intimate, decrease in desire and sexual appetite, and apathy and lack of interest are common among seropositive people regarding their sexual lives. It was highlighted that sexual pleasure and intimacy became affected after diagnosis of HIV/AIDS due to fear of transmitting the virus, guiltiness, and lack of freedom. In the majority of them, a change of behavior after the diagnosis prevails concerning the use of a condom. The goal of its use is to prevent the transmission to their sexual partners or to avoid repeated exposure to the virus. 
The use of a condom is a limitation for many seropositive people, maybe because of the loss of feeling or freedom of choice as they are "forced" to use them (as a preventive measure). This fact makes the adaptation of the individual to live with HIV difficult. In some cases, practices like sexual abstinence for fear of infection are reported. Other people deny accepting their seropositivity and they prefer to give up on sex, which even leads to the person's isolation on several occasions. In the research by Freitas et al. [20], one of the participants expressed: "I cannot be cured, so I stopped going out, I stopped dating, I isolated myself". Among these individuals, there is an inability to look for sexual partners with whom they can enjoy life. This is due to the fear of rejection after revealing their serological status, which causes anxiety and constant concern on this matter. On the other hand, the ideal of romantic love and confidence that exists among steady partners (those who are serodiscordant, that is, one of the individuals is carrier of the virus and the other is not) makes them feel less vulnerable to infection themselves, and they forget about the prevention measures. In developing countries, as can be seen in the study carried out by Sikweyiya et al. [32], men feel a loss of masculinity when they find out about the diagnosis, because they have to use a condom (which is one of the reasons why it is hardly used in these countries) and because they have to reduce the number of their sexual partners (polygamy). Another feeling expressed by men is sadness, which is linked to the impossibility to perpetuate their family name and, therefore, this results in a sense of castration. Another relevant issue consists of who is in charge of taking care of the prevention means or of accepting unsafe sexual behaviors. As the reviewed studies present, this responsibility can be understood in three different ways. On the one hand, the responsibility lies with the seropositive person, who has the "duty" to protect the others and to take care of themselves. This is the ethical and correct option. On the other hand, the responsibility is shared, that is, both people must decide whether to take precautions or not to avoid risks. Finally, many people defend the idea that the responsibility of looking after and protecting oneself is individual, as is indicated in one of the participants' statement that appears in the research by Fernández-Dávila et al. [17]: "The boy took it off from me (the condom). I didn't say anything. Because this depends on him. I do not think it was necessary that he said any word to me...". Finally, in spite of understanding concepts like safe sex and preventive measures, condoms are still not used as they should to avoid new infections. As Juárez and Pozo [25] specified in their research, people who are in antiretroviral therapy, despite the fact that the possibilities of infecting the rest have only diminished, feel invulnerable. This makes them relax and employ risky behaviors in their sexual practices. Living with the Virus This category was identified in 44.44% of the articles, where the confrontation strategies that people with HIV/AIDS apply in their lives are principally addressed. According to many of the participants of the studies, being seropositive, or a carrier of the disease, means that they increase their self-care, fight for their lives and love other people more in order to receive the necessary support. 
To overcome the diagnosis with the desire to continue living requires that people with HIV/AIDS make changes in their lifestyles voluntarily and with the full conviction that they are necessary actions to lead a "normal" life. Understanding how the disease functions and what it involves is fundamental for the participants of the different studies reviewed. What helps to put bad practices aside is to focus on healthy habits such as maintaining a positive attitude, moderate physical exercise, a healthy diet and trying to have an active social life. This helps to avoid depression, loneliness, isolation and hopelessness. Among the responsibilities that being seropositive entails, we can include taking medication (antiretrovirals), which helps these people to retain their wellbeing as they age. As Juárez and Pozo [25] mentioned in their research, participants who took medication noticed some improvement in their quality of life. That is why adherence to the therapy and good monitoring are important for them. In Oliveira's article [30], one of the participants states: "It is a responsibility, because you must take that medication, you must have medical monitoring and you must be careful because you can develop some other diseases". In developing countries, men who are infected by the virus believe that the search for social support to cope with HIV disease or with being HIV-seropositive is a sign of weakness. Nevertheless, some others express that, after being diagnosed, they had to change their life a lot and to adapt themselves to this new situation. Having to take medication made them feel like prisoners and it made the acceptance and adaptation to life with HIV/AIDS difficult. Social support is valued highly among these people. The desire to have more social relationships in their lives is expressed, because this helps them to overcome the negative situation linked with the virus. Their close family and friends are an essential source of support, helping them to make their everyday life more bearable and to make their adaptation positive. Most seropositive people learn to give more value to life, family and friends, as they already know that they are fundamental pillars of support, just like one participant states in the research by Braga et al. [13]: "In this case, it happens that you give more value to life". Some others, however, avoid speaking about the disease/seropositivity and the feelings that it entails with people close to them, omitting in this way the problem and showing some maladjustment. Finally, another way in which participants attempt to confront the diagnosis and to live with the virus is to get close to religion, which emerges as an emotional support. Faith in some superior being fills these people with motivation, relief, self-improvement and strength. They ask for courage through prayer so as not to fall into depression. In the research by Neves and Gir [29], one participant declares: "I devoted myself to God's hands, he is going to give me the answer". However, other people in the study by De la Cruz et al. [16] described that God had punished them for something wrong that they had done in spite of being faithful believers, which creates some uncertainty in them.
Several issues, such as the causes of becoming pregnant, breastfeeding, and the feelings they experience regarding pregnancy with their condition of seropositivity, are addressed. Among the different reasons that women have for continuing with their pregnancy, as most of them are not planned, we found a need to satisfy their spouses or count on their support. On occasion, family is another kind of support that helps them to continue. Nevertheless, some other times they advise them not to continue with the pregnancy to focus on their own health, because of their condition. Some ecclesiastic communities support these women and encourage them during the motherhood process. Lastly, the most important cause in this category is their own feelings and the availability of antiretrovirals. Some women who are diagnosed before pregnancy tend to be more negative about pregnancy due to the concern about vertical transmission. Many seropositive women should be conscious of the right to motherhood, because they have the same rights as any other women. Many participants of the studies included in the review are aware of this. However, some others, in spite of knowing this, prefer to refrain from motherhood for fear of transmitting the virus, even if it is desired. Regarding breastfeeding, we found distinguishable comments among women who are treated with antiretrovirals and who decide to breastfeed, either by choice or because of the social pressure, and women who, with regret, avoid breastfeeding for the baby's benefit. With respect to the first group, in many countries, especially in developing ones, it should be noted that breastfeeding is a cultural norm which continues through generations. Women who are virus carriers live with the difficulty of motherhood in these communities, as Acheampong et al. [10] described, and they normally suffer and feel pressured when they breastfeed the baby. Among the feelings that they experience, we can highlight the fear and dread of transmitting the virus to their children through their milk, anxiety because of the uncertainty of knowing if their children are contaminated or not by drinking their milk, and the feeling of guilt when they contract HIV because the responsibility is theirs alone. Hope for the use of antiretrovirals and the effect that these have when they reduce the burden to almost imperceptible levels have also been shown. In the second group, we find mothers who present feelings of failure, sorrow, helplessness or suffering. One participant of the study by Sousa and Gimeniz [33] points out: "It was my dream to have a child and to see him nurse... When he was crying, I could breastfeed him and see how he stopped crying". For the majority, breastfeeding was a symbol of motherhood. Because of the fact that these women are recommended not to breastfeed, to reduce the virus transmission to their children, the people around them have many prejudices when they see them feeding their children with infant formula. This means that these women do not reveal their diagnosis for fear of rejection, either of themselves or of their children in the future, having to lie on several occasions, as one participant explains in the article by Linder et al. [26]: "When people ask me if I don't breastfeed, I say that I have an inverted nipple and, although it is true, it is an excuse as well...". Many of them stated that the information they received from health care providers about why they must not breastfeed was very superficial. 
Finally, motherhood is perceived as a support in their lives, giving seropositive women a reason to continue living eagerly. They have hope of seeing their children grow up and this is a positive factor in the face of the disease/seropositivity. These mothers usually overprotect their children in order to avoid suffering and rejection from people, as was described by Spindola et al. [34] in their research. Motherhood is a positive factor with respect to the adherence to antiretroviral therapy as well, because the possibility of their children growing up healthily encourages mothers to follow health recommendations. Discussion Systematic reviews of qualitative studies provide a broad view of the experiences of people facing health problems. This research focused on analyzing the experiences and attitudes of people who live with HIV/AIDS, based on a wide review that includes works from several countries, with representation in North America, South America, Central America, Europe and Africa. The analysis allowed us to identify the common elements regarding the feelings when facing the seropositivity/disease diagnosis, the stigma, sexual behaviors, and motherhood, which consolidates the work's international relevance. This review updates the discoveries which were already generated in previous works of similar characteristics, although they were published more than 10 years ago [37,38]. On the other hand, the publication of systematic reviews in this field has progressed in recent years [39][40][41][42][43]; however, the ones that include qualitative research as a source of results or that focus on very specific aspects are scarce. This strengthens the review that is presented in this work, as it is based on qualitative studies, helping topics which were not treated before to emerge. Based on our results, we can highlight the stigma that people who are carriers of HIV suffer. Three types of stigma (social stigma, self-stigma and health professionals' stigma) were relevant in the results. In accordance with the research by Sandelowski et al. [38], in their metasynthesis, although they focus on the female population, one of the problems of revealing the diagnosis to other people is the prejudices that exist toward seropositive people. On the one hand, when they reveal it, they feel relief and their relationships now provide authenticity. On the other hand, it can also be a reason for social isolation because of the non-acceptance of the others. Barroso et al. [37], in their metasynthesis, supported the results obtained in the research about social relationships, as these are a foothold to better adapt to the virus or to the disease. On the other hand, Villa et al. [40] confirmed in their literature review that the psychological aspects of seropositive people are reinforced by the social support they receive, which helps their adherence to the therapy and improves their quality of life. This last idea was also shown by Tavera [41] in her systematic review. Regarding adherence to the therapy, in one review, Puigventós et al. [39] stated that, in general, people with HIV/AIDS adapted well to the therapy. In the cases in which they do not adhere, the reasons might be that they are social outcasts (stigma), that they are minors, or the lack of motivation, among others. In the results obtained, we observed that, for instance, motherhood is a positive factor (apart from social support) in better adherence to the therapy, reducing in this way the possibility of vertical transmission to the baby. 
Concerning this last matter, guides for HIV/AIDS management have been published [42,44] which are particularly interesting regarding the approach to motherhood, including recommendations which result in a reduction of vertical transmission. Among the ways to adapt to the disease/seropositivity, we found in the results of this research that religion or the practice of healthy habits favored its normalization in their lives. This is in line with the proposals suggested by other studies [37,41], which claim that spirituality helps some people to confront HIV/AIDS. In addition, it is stated that understanding the disease makes the adaptation favorable as well, as it helps them to carry out positive strategies, such as physical exercise, changes in their diet or safe sexual behaviors. Benito [43], in his systematic review, explained that physical exercise in people with HIV/AIDS makes them gain weight and is favorable to their psychological wellbeing. Based on the results obtained, we suggest the further exploration of themes like the experiences of pregnant women or of mothers and their sexual behaviors in future research, as the fact that some HIV carriers adapt to their situation and some others deny it, despite knowing the risks, can be underlined. Likewise, new research on the moment of diagnosis would be enriching, as it is a crucial moment because of the great psychological impact that it entails. This review is not exempt from limitations. First, even though the sources used for the studies' search are pertinent, they might not give an account of all of the relevant studies for the objective of this research. To compensate for this limitation, it should be pointed out that the databases and resources used are specific to the Health Sciences area (the area in which this research is circumscribed), so that they are widely known and relevant sources in this area. Although no exclusion criteria were applied on a geographical basis, there are regions such as Asia that are not represented in this review, which may be of interest in future research. In this regard, it would also be interesting in future research to locate studies published in languages other than those included in this review. On the other hand, an excessive number of duplicate documents was avoided, which could have happened with the use of other databases that might have a high degree of overlap with the ones used in this review. Another limitation of this review involves the synthesis procedure. We opted for a classic procedure of narrative synthesis, which limits the descriptive and explanatory ability regarding the studied phenomenon. It would be relevant to progress toward metasynthesis procedures in future research and, in this regard, the discoveries of this review might be the basis on which this future research can be oriented. Finally, it would be interesting to specifically include in future research an analysis that could differentiate the findings of the studies analyzed based on various factors. For example, how people feel about being diagnosed is likely to be qualitatively different when there are few options compared to when people can live long lives with effective treatment. It is important to note how the country of origin is related to such feelings, as those in under-developed countries may have less access to treatment, which could impact their feelings about the diagnosis.
Conclusions Most of the people who are carriers of the virus have common feelings when they are informed about their seropositivity. Among them, we found disappointment, sadness, fear, despair, lack of awareness and pain. Sometimes, the diagnosis might lead to depression and social isolation. Social culture and environment are determining factors regarding the acceptance of the diagnosis. Intimacy and sexual pleasure are affected after the disclosure of the diagnosis; some seropositive people feel a decrease of their sexual appetite and, in general, there are changes in their sexual behaviors (e.g. use of condoms). In other cases, they opt for abstinence or, on the contrary, for risky practices, despite knowing their consequences. In the case of pregnant women, many of them find out about the diagnosis when they get pregnant. Some others decide to have children in spite of being carriers and the causes that drive them to do this might be the satisfaction of being mothers or the need to satisfy their partners. Being mothers is a positive factor to fight HIV/AIDS, because it gives them the strength to continue and see how their children grow up. Furthermore, breastfeeding generates distinguishable comments between the ones who desire it and find themselves forced to breastfeed because of the social pressure, for whom the use of antiretrovirals relieves their fears in the face of transmission danger, and the ones who opt for not breastfeeding, showing an evident helplessness. The fact of being seropositive implies a high degree of stigma, as the prejudices against people with HIV/AIDS are evident. Some people lean on their intimate social circle to confront the disease or seropositivity. Religious practices are a positive factor in which they take refuge as well. Practical Implications In the moment of diagnosis, the approach of healthcare providers is of vital importance, because of the impact that the news about being carriers causes on people. Thus, enough support and advice must be offered to these people to avoid future isolation and for appropriate therapeutic adherence. In the same way, correct health education is key to avoiding risk behaviors. It is relevant, from the social environment, to ensure the inclusion of these people in society, avoiding social exclusion. Regarding pregnant women, an early diagnosis of HIV status makes it possible to adopt measures that drastically reduce the risk of mother-to-child transmission. The information provided about the risk of breastfeeding babies must be complete. The fact that these mothers fully assimilate the information before they make a decision must be checked, avoiding uncertainty, sadness or feelings of helplessness. Finally, the COCHRANE collaboration recognizes that "evidence from qualitative studies that explore the experience of those involved in providing and receiving interventions, and studies evaluating factors that shape the implementation of interventions, have an important role in ensuring that systematic reviews are of maximum value to policy, practice and consumer decision-making" [45]. Therefore, this review offers an understanding of the perceptions and feelings of people with HIV/AIDS, and thus it can help to improve the implementation of interventions focused on people and guide public health policies or the development of protocols and clinical practice guidelines, which are in tune with the UNAIDS proposal worldwide [6].
Modelling and Verifying an Object-Oriented Concurrency Model in GROOVE SCOOP is a programming model and language that allows concurrent programming at a high level of abstraction. Several approaches to verifying SCOOP programs have been proposed in the past, but none of them operate directly on the source code without modifications or annotations. We propose a fully automatic approach to verifying (a subset of) SCOOP programs by translation to graph-based models. First, we present a graph transformation based semantics for SCOOP. We present an implementation of the model in the state-of-the-art model checker GROOVE, which can be used to simulate programs and verify concurrency and consistency properties, such as the impossibility of deadlocks occurring or the absence of postcondition violations. Second, we present a translation tool that operates on SCOOP program code and generates input for the model. We evaluate our approach by inspecting a number of programs in the form of case studies. Introduction In this chapter, we start with the motivation of this thesis and describe our contributions, before we present the research hypothesis and list the goals that we want to meet in order to consider this thesis a success. An overview of the thesis structure follows, before we close this chapter with a description of previously published work present in this thesis. Motivation With the shift to multiprocessor and multicore systems, concurrent and parallel programming become an important part of object-oriented software development. While object-oriented models and languages allow programming at a high level of abstraction when writing sequential programs, they often rely on low-level constructs for concurrency, such as locks, semaphores, and threads. These constructs are very error-prone and difficult to use correctly. Simple Concurrent Object-Oriented Programming (scoop) is a concurrency model and language that extends Eiffel with concurrency mechanisms. The model hides low-level constructs in its implementation, and instead provides the user with simple-to-use constructs that allow concurrency to be expressed at a high level of abstraction. In particular, lock management and thread creation are no longer expressed explicitly. It is still possible to introduce concurrency bugs with the high-level constructs from scoop, most prominently deadlock. Naturally, these bugs are difficult to detect, as they may not occur in every program execution. With program verification approaches, it is possible to prove the correctness of implementations and make sure that no concurrency-related bugs exist for the modelled semantics and input program. Currently, there exist several formalisations of scoop [24,14,1,16]. They do not focus on verification but rather on resolving language ambiguities. They cannot be used for model checking, due to the state-space explosion problem inherent to concurrency models. In addition, existing approaches for the verification of scoop programs [3,23] focus on deadlock prevention and work either on annotated source code or with manually translated model input. In this thesis, we propose an alternative approach to verify scoop programs. First, we present a graph-based model that focuses on the concurrency mechanisms in scoop, leaving out advanced object-oriented features. This leads to a compact model with a strong formal foundation.
Second, we add object-oriented features from scoop to the model, obtaining an expressive formalisation that allows representing scoop programs more directly as model input. The models are implemented using the state-of-the-art model checker groove. We then present a translation tool that works directly on scoop source code. With this tool, we are able to translate a subset of scoop programs and generate input for the model checker. By putting these parts together, we provide a fully automatic tool that allows verification by model checking. We focus on verification of properties like deadlock or pre- and postcondition violations. By focusing on the core of scoop and abstracting away from internals of the formalisations, we are able to reduce the state-space sizes. We discuss why our abstractions and optimisations do not change the expressiveness of the modelled scoop subset. Research Hypothesis and Contributions The research hypothesis is as follows. A subset of valid scoop programs can be modelled using a graph transformation system. These programs can, without modification of the source code, be automatically translated to input graphs for the transformation system. Using verification by model checking, it is possible to verify a number of properties such as absence of deadlock or absence of precondition violations for a given input program. To satisfy the hypothesis, we specify the following goals for this project:
• Provide a formalisation of a subset of scoop as a graph-based model using the groove toolkit.
• Create a translation tool that operates on scoop source code and generates input graphs for the model.
• Make informal soundness arguments for the translation and model.
• Provide a simple tool that allows verification of certain properties like deadlock freedom or absence of precondition failures with a single step by specifying scoop source code and model parameters.
• Evaluate the created translation tool and graph model by inspecting a number of scoop programs that use its concurrency features, as well as a thorough discussion of the characteristics and performance of the toolchain.
Thesis Overview Chapter 2 gives an overview of the scoop model and its primary implementation. Chapter 3 briefly describes the theoretical background of this thesis and gives a detailed description of groove, the main tool used to implement the graph models. The SCOOP Model The goal of scoop is to enable concurrent programming at a high level of abstraction, without relying on low-level constructs like locks, semaphores, and threads. To achieve this, scoop adds a new keyword separate to the Eiffel language, which allows expressing concurrency relations between objects directly in type declarations. scoop introduces the notion of a processor, which is an abstract thread of execution that is able to execute instructions sequentially. A processor is the handler of a number of objects, and object references can point to objects that are handled by the same processor (non-separate references) or objects that are potentially handled by different processors (separate references). The set of objects handled by a given processor is called a region. In the source code, one can annotate types (in particular in feature declarations and formal arguments) as separate, expressing that the reference points to an object potentially handled by a different processor.
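To make this concrete, a minimal illustrative declaration is sketched below; the class and feature names (PRODUCER, BUFFER, LOG) are invented for this example and do not come from the thesis.

    class PRODUCER
    feature
        buffer: separate BUFFER
                -- May reference an object handled by a different
                -- processor, i.e. an object in another region.

        log: LOG
                -- Non-separate: guaranteed to be handled by the same
                -- processor as the current PRODUCER object.
    end

A separate type, as described above, only states that the referenced object may be handled by a different processor; a non-separate reference always stays within the current region.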
With the concept of processors, the semantics of feature calls are different. If a client executes the call a.f (b1, b2, ...), with target a and arguments b1, b2, ..., then the following cases can be distinguished:
• If the target a is handled by the current processor, then the call is applied immediately.
• If the target a is handled by a different processor, then the client logs the call with the supplier. The call is then enqueued in the request queue of the handler of the supplier and processed at some point in the future.
In the second case, depending on whether the call is a command (a call that does not return a value) or a query, the client may or may not have to wait for the supplier to execute the request. In the first case, the client can continue execution without waiting. In the second case, the client needs the result (e.g. as a value in an assignment, or as a value for parameter passing), and therefore waits until the supplier returns the value, making the call sequential. In order to avoid data races, scoop only allows calls on separate targets which are formal arguments of the enclosing routine. When executing a routine, the scoop runtime waits for exclusive access to the request queues of the handlers of the separate arguments. Once the request queues are locked, the routine starts executing and, since no other processor has access to the locked request queues, the requests logged by the routine are guaranteed to be executed in order and without interleaving requests from other processors. Shared memory, another source of data races, does not exist in scoop, since object data can only be modified using procedures and not directly accessed from outside (e.g. a statement like foo.id := 0 is forbidden if id is an attribute). Contracts, i.e. class invariants and routine pre- and postconditions, are an integral part of Eiffel. Preconditions are Boolean assertions that must hold before the body is executed. If a precondition does not hold, a runtime error occurs. In scoop, the semantics of preconditions change. While statements involving non-separate objects behave as before, expressions involving separate objects can become wait conditions. If a wait condition does not hold, the processor simply waits until it holds instead of generating a runtime error. For example, a consumer might have a precondition in its consume routine that states that the inventory must have an item ready, as seen in Listing 2.1. Since the inventory is separate and its state can be modified through requests from other processors, the consumer simply waits until the inventory is not empty anymore. Locks are acquired before the preconditions and wait conditions are evaluated, but released if a wait condition does not hold yet, giving other processors the possibility to enqueue requests on the handlers of the targets. Wait conditions are a powerful and expressive synchronization mechanism. The lack of explicit locking makes this kind of synchronization particularly easy to use.
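As an illustration of such a wait condition, the sketch below shows roughly what the consume routine described above could look like; it is not the thesis's Listing 2.1, and the feature names (INVENTORY, is_empty, remove_item) are assumptions made for this example.

    consume (an_inventory: separate INVENTORY)
            -- Consume one item from `an_inventory'.
        require
            item_available: not an_inventory.is_empty
                -- `an_inventory' is separate, so this clause is a wait
                -- condition: the call waits until it holds rather than
                -- failing with a runtime error.
        do
            an_inventory.remove_item
        end

Because an_inventory is a formal argument of the routine, the body is only executed once exclusive access to its handler has been obtained, as described above.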
A Running Example Throughout this report, we will use a running example to demonstrate the contributions of this project, in particular translation to and verification with our formal models in groove. The Dining Philosophers Problem is a well-known problem that involves several entities interacting and is well suited for demonstrating concurrency models. In this problem, a number of philosophers sit at a round table, with a fork in between each pair of adjacent philosophers. The philosophers each perform two activities in a loop: thinking and eating. In order to eat, a philosopher needs to pick up both the left and the right fork before eating, and put them down afterwards. The goal is to devise an algorithm that abides by these rules and does not get stuck in a deadlock. Listing 2.2 shows the PHILOSOPHER class of a scoop implementation (which we adapted from an implementation in the EVE [22] source code repository) of the problem. During his time at the table (feature live), a philosopher eats times_to_eat times. Notice how there is no code handling picking up and putting down the forks. Instead, this is done implicitly: the eat routine takes two objects of type separate FORK as arguments. Once a philosopher is inside the (empty) eat body, he has exclusive access to the processors handling the left and right forks, simulating picking up both forks and thus not allowing other philosophers to get access to the forks. Since both forks are arguments in the same routine, their respective processors are locked atomically, which guarantees that no deadlock can occur. While scoop allows concurrent programming at a high level of abstraction, it can still be difficult to spot problems related to concurrency. For example, an inexperienced scoop programmer may have implemented the eat method as shown in Listing 2.3. In this implementation, a philosopher first picks up the left fork, and then the right one. An execution may take place where each philosopher picks up its left fork and waits for the right one to become available, which never happens; the program is stuck and a deadlock has occurred.

Excerpt from Listing 2.2:

    id: INTEGER
            -- Philosopher's id.

    times_to_eat: INTEGER
            -- How many times does it remain for the philosopher to eat?

    eat (left, right: separate FORK)
            -- Eat, having acquired `left' and `right' forks.
        do
            -- Eating takes place.

Excerpt from Listing 2.3:

            pickup_right (right_fork)
        end

    pickup_right (right: separate FORK)
            -- Both forks have been acquired at this point.

Related Work A first description of scoop appeared in 1993 [11] and an updated description was published in 1997 [10]. A prototype of scoop has been implemented between 2005 and 2008 at ETH Zürich, and an implementation maintained by Eiffel Software is currently distributed with the EiffelStudio IDE. Since its introduction, several formalisations of scoop have been proposed [2,14,16,24,15]. We consider the work done by Morandi et al. [14,13] in this thesis. Graph Transformation Systems & GROOVE In the course of this project, we have been working with the GRaphs for Object-Oriented VErification (groove) [19] toolkit, which is a set of tools based on a strong formal foundation in Graph Transformation Systems (gts) that can be used for modelling, simulation, and verification. In this section, we give a short informal introduction to the gts theory groove is based on and then discuss the groove toolkit in detail. We showcase its features by providing a gts for our running example, the dining philosophers problem. A graph transformation is, informally speaking, the process of altering an input graph to get an output graph by using rules that describe the manipulation. There are a number of different approaches to graph transformation, which provide a wide range of semantics of rule applications. From an operational standpoint, the approaches differ in how rules are defined and in the situations in which they can be matched and applied.
One such approach is the algebraic approach, discussed in [5], which is used by the groove toolkit. The Algebraic Approach In the algebraic approach to graph transformation systems, pushout constructions (from category theory) are at the core and are used to allow gluing graphs together. The two main approaches, Double-Pushout (dpo) and Single-Pushout (spo), allow for a compact and abstract representation of graph transformations. What follows is an informal overview of these approaches. The dpo Approach In the dpo approach, a graph transformation is described as a rule consisting of three graphs, L, K, and R. The graph K describes the interface of the rule, i.e. the parts of the graph to be matched and preserved. The left-hand side L\K of the rule describes the part that is to be deleted, and the right-hand side R\K the part that is to be created. An application of the rule described by L, K, and R on the graph G, shown by example in Figure 3.1, is performed by applying the following steps.
1. Find a morphism from nodes and edges in L to nodes and edges in G. In Figure 3.1, this mapping is expressed by node identifiers, where nodes in L are mapped to nodes in G with the same identifier.
2. Construct a graph D from G by removing the matched edges and nodes in L\K from G. The combination of L and D at the interface nodes and edges from K (in our example nodes 1 and 2) is called glueing and results in G.
3. With a similar combination of D and R using the interface K, the output graph H is obtained.
The spo Approach In the spo approach, specifying K is omitted. Instead, a rule consists only of the left-hand side L and the right-hand side R. An example application in the spo approach is shown in Figure 3.2. The following steps are necessary to apply a rule in the spo approach.
1. Obtain the common interface K = L ∩ R.
2. Find a morphism from L to G, as in the dpo approach.
3. Delete L\K from G, and join R\K, using the common interface as glueing nodes and edges. If there are dangling edges (i.e. edges that have a source or a target, but not both) remaining, delete them as well.
The key difference between spo and dpo is that in the spo approach, dangling edges are allowed in the final graph, which is not possible in the dpo approach. Figure 3.2 shows a situation where a dangling edge (the edge between nodes 3 and 4 in G) remains after deleting L\K, which is then deleted as well. groove allows configuring whether applications which delete dangling edges are allowed. If so, dangling edges simply get deleted after the application to make sure the resulting construct is a valid graph. Otherwise, when only cases without dangling edges are allowed, spo and dpo are equivalent from an operational point of view. In our models, we never allow applications with dangling edges, which requires us to specify all edges incident to a deleted node on the left-hand side of a rule, ensuring that no edges get deleted "by accident". (Caption of Figure 3.2: Orange nodes denote the common interface K, blue edges and nodes the parts of L that are to be deleted, and green edges and nodes of R the ones that are to be added. Note that the edge between nodes 3 and 5 in G is a dangling edge after the deletion of L\K in G, and is therefore deleted as well.) GROOVE The groove toolkit is written in Java and consists of a number of components. The Simulator is a GUI tool that provides features to create and edit Graph Production Systems (gps). It is particularly useful for designing a system as it provides immediate feedback on how the system behaves.
One can apply rules to start graphs and explore a Labelled Transition System (lts) either by manually choosing rules to apply one after another, or by automatically exploring the state-space for a certain number of rule applications. With a finished gps, using the Simulator to model check various start graphs can become cumbersome and automating the task becomes difficult. For this scenario, the Generator was created, which is a command-line tool that explores the state-space of a given gps. Like the Simulator, the Generator can use different strategies for exploration, such as breadth-first search or depth-first search. The Generator also allows specifying Linear Temporal Logic (ltl) and Computational Tree Logic (ctl) formulae (a thorough discussion of ltl and ctl can be found in [9]) and searching for counterexamples. The Generator provides various metrics such as the size of the lts, feedback about ltl and ctl formulae, and profiling information.

Other components that can be used as standalone applications are included in the above two tools. The Model Checker can be used to verify ltl and ctl properties for labelled transition systems created by the Generator, but is included in the Generator as well. The Viewer is a simple GUI tool that can render graphs from a gps and is used as part of the Simulator.

Graph Production Systems

groove stores its Graph Production System in .gps folders. Such a folder consists of the following components (stored as individual files).

• Production rules are stored as .gpr files and encode graph transformations in the sense of the spo approach. They are rendered as a single graph using colour codes to distinguish left-hand side, interface, and right-hand side, as well as other properties of the rule.
• Type graphs are stored as .gpy files. If active, groove only allows using rules and start graphs that conform to them. Multiple type graphs can be active at a time.
• Start graphs are stored as .gst files and represent starting points for the exploration.
• The system.properties file contains a number of configuration entries, most notably whether dangling edges should be allowed, the name of the active start graph, the active type graphs, and the exploration strategy to be used.

An important system property is whether rules are matched injectively or not. If so, distinct nodes in a rule must be matched to distinct nodes in the target graph. Otherwise, multiple nodes in the rule can be mapped to the same node in the target graph. The configuration of this property can be overridden for individual rules. The individual files conform to the Graph eXchange Language (gxl) file format, which is an xml format that specifies graph information. It is used in groove to store individual graphs and associated properties in the files mentioned above. Using an xml representation of gpss makes pre- and postprocessing of groove input and output very accessible and easy to handle.

To illustrate how the various components of a gps work together, we model the dining philosophers problem as a simple gps in groove. Note that the representation in this section is unrelated to scoop or the formal models we introduce in Chapters 4 and 5, and instead is a standalone model of the problem.

Type Graphs

Type graphs determine the form of other graphs in the system, in particular the form of rules and start graphs.
While the feature is optional, it is rather useful when working on a system, as graphs that do not conform to the specified type graph are highlighted in the Simulator, which helps to detect typos and other errors. Figure 3.3 shows the type graph for a groove solution to the dining philosophers problem. It specifies that a philosopher can be hungry (using an optional node flag) and has a hunger integer value attached. The only edges in this system are edges from philosophers to forks. A philosopher not only has edges to its left and right forks, but can also have a lock on them, expressed by the lock edge, indicating that a philosopher has picked up the forks.

Graph Representation

The groove Simulator augments graph representations from a simple directed graph with edge labels to a more compact, readable format. As mentioned earlier, rules are represented as one single graph containing different kinds of nodes and edges (in particular, readers, erasers, creators, embargoes, and conditional creators). Figure 3.4 shows the start graph for a configuration of the dining philosophers problem with four philosophers. On the left-hand side, the start graph is shown as a directed graph with labelled edges. In the middle, the condensed form that groove uses is shown, where self-edges are collapsed into the nodes, and on the right-hand side the graph is rendered in groove with internal node identifiers (note that they are not a part of the model). We use the groove representations of graphs throughout this report, as the resulting graphs are intuitive and contain the same information as before. We occasionally enable node identifiers in order to be able to refer to individual nodes easily.

In this model of the dining philosophers problem, each node that has a self-edge labelled type:Philosopher (we say "a node of type Philosopher" in this case) has two outgoing edges to nodes of type Fork, one of them labelled left and the other one labelled right. In addition, philosophers contain an integer value denoting the number of times a philosopher wants to eat, encoded by the self-edges labelled let:hunger=2. The goal is to find rules that model the behaviour of the philosophers, namely grabbing the forks, eating, and putting them back down.

Figure 3.4: Comparison of a start graph as a directed graph with edge labels (left) and as rendered in groove (middle and right). In groove, self-edges are collapsed and displayed inside the node. In addition, certain values are rendered differently, e.g. type: prefixes are omitted but the following value is printed in bold.

Reader: Edges and nodes that are displayed in black are present in both sides of the rule. These are matched and preserved when applying the rule.

Creator: Creator edges and nodes are only present in the right-hand side of the rule, which means that they are not required for the rule to match but will be created upon application.

Embargo: These edges and nodes express a negative application condition. The rule only matches if there is no match for these edges and nodes in the source graph. For example, in the pick_up rule (Figure 3.5), we express with embargo nodes and edges that a philosopher should only lock both forks if they are not already locked.

Eraser: Finally, eraser edges and nodes (dashed blue) are elements that are only present on the left-hand side of the rule, which means that they are required for matching but will be deleted when the rule is applied.
Operations

Arithmetic operations can be performed using product nodes (rhombus-shaped nodes), which take a number of arguments (via π edges) and point to a result node (via an operation edge, such as gt for greater than).

Figure 3.6: A philosopher eats if he is hungry. In the process, the hungry flag (self-edge with label flag:hungry) is removed, and a product node is used to decrease the hunger value by one.

For the rule modelling the philosophers collectively leaving the table, we first match all Philosopher nodes that have a left and a right fork attached with the ∀>0 quantifier (which denotes that, in order for the rule to apply, the subgraph "at" this quantifier (attached via @ edges) has to match at least once, as opposed to the ∀ quantifier, where the rule can match zero occurrences of the subgraph). Then we require that there must exist a Philosopher node (dashed blue, attached to the ∃ node via an @ edge which in turn is nested inside the ∀>0 quantifier), which is in fact the same as the reader Philosopher node (expressed with the = edge) and whose hunger value is equal to zero. Since one of the Philosopher nodes is an eraser node, the matching philosophers get deleted upon rule application, modelling the collective leaving of the table.

Rule Priorities

In a graph state where more than one rule is applicable, it may be desirable to force a certain order when exploring the state-space. groove allows controlling the order of rule applications. Using simple rule priorities (integer values associated with a rule), a system applies rules with higher priorities before rules with lower priorities. For example, if our philosophers do not like to eat alone, we could assign the rule pick_up a higher priority than the rule eat, which means that as long as there are philosophers which are able to pick up their forks, they do so, and only once no additional philosopher can pick up its forks are the ones currently holding forks allowed to eat. groove also provides more advanced mechanisms for controlling rule applications. In particular, control programs allow specifying complex expressions with conditionals, loops, choices, and other control flow mechanisms. Since we do not use control programs in this work, we do not discuss them here. Instead, we refer the interested reader to [20].

Verification by Model Checking

Now that we have modelled the dining philosophers problem, we can generate the state-space and inspect it. groove has many options for state-space exploration; for example, it can do breadth-first search, depth-first search, random linear exploration, and other exploration types. A state is called final if no modifying rule (a rule with erasers or creators) is applicable anymore. If we want to show that the dining philosophers example always results in the philosophers leaving the table (i.e. applying the leave rule before being in a final state), we could do this by generating the full state-space and inspecting paths and rule applications in the lts, which is a graph where nodes represent states and edge labels denote which rule application leads to the next state. An excerpt of an lts generated with our example can be seen in Figure 3.9. Obviously, inspecting an lts by hand is not feasible for larger state-spaces. Fortunately, we can specify ltl and ctl formulae in groove. In our example, we could verify that all executions end up with the leave rule being evaluated with the ltl formula F leave, expressing that, starting from the initial configuration, eventually the rule leave is applied.
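Written as temporal formulae over rule names, the property just mentioned and a typical absence-of-deadlock property could look as follows. The name deadlock below is a placeholder for a rule that matches exactly the stuck configurations; such a rule is not part of the model shown in this section.

\[
\mathbf{F}\ \mathit{leave} \qquad\qquad \mathbf{G}\ \neg\,\mathit{deadlock}
\]

The first formula states that on every execution the rule leave is eventually applied; the second states that no reachable state matches the (hypothetical) deadlock rule.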
Since one can use arbitrary rules in ltl and ctl formulae, we can create rules capturing certain properties of a system state, such as a generic deadlock, and include them in the formulae.

Related Work

Our focus on gts is limited to the theory relevant to groove. An introduction to the algebraic approach, in particular to the dpo approach, can be found in [17], and a thorough discussion of the algebraic approach in [5]. While we focus on the groove features relevant to this thesis here, the groove User Manual [20] provides a more detailed description of groove features. A set of best practices when working with groove is presented in [25]. Several papers [19,18] discussing groove have been published, and groove is compared to other simulation and model checking tools in [6].

Towards a Concurrency Model for SCOOP

As we have seen in Chapter 2, scoop is a rich programming model that provides a framework for concurrent programming and is equipped with advanced object-oriented features. While this is great from a user perspective, it also makes modelling of the complete language a difficult task. To tackle this difficulty, we first isolate concurrency-related features from scoop to obtain a subset of the model called corescoop. This subset of scoop is formalised by the Concurrent Processor Model (cpm) [8], a gts-based formal model. Thanks to the modular and extensible nature of the model, more features from scoop are added and eventually become cpm+oo as presented in Chapter 5. In this chapter, an overview of corescoop and a detailed description of cpm and its primary implementation in groove are given. In the next chapter, we then present cpm+oo, which adds object-oriented features from scoop to cpm.

CoreSCOOP

We define a small subset of scoop called corescoop. In this subset, only basic object-oriented features exist. There are only three kinds of data: integers, Booleans, and references to processors. A processor can execute a simple method with statements that modify local data (such as assigning a sum of two local integers to another local variable) as well as asynchronous commands and synchronous queries, where the target must be a different processor. A method can not call other methods on the same processor, as there are no local calls. Simple method calls can be simulated by inlining the called method, but this obviously does not work for recursive calls. The main part of corescoop, handling queries and commands, remains as in scoop. To enqueue a feature request in some processor's request queue, one has to first obtain a lock on the queue of the target processor. While scoop handles locking implicitly by requiring separate targets to be controlled, corescoop handles locking explicitly, and locking can occur at any place in a method. cpm is a formal model for corescoop and follows the specification in [13]. In the next section, we present this formalisation and its primary implementation as a gts in groove.

CPM

cpm is a gts modelling the behaviour of corescoop. It allows simulating configurations with a number of processors, each one performing computations on integer, Boolean, and reference values. We discuss the system, in particular the production rules involved, in detail in the following sections, separating concerns into the following groups: control flow, system state, and queries and operations. We then discuss how the rules are prioritized to achieve the desired behaviour.

Control Flow

In a cpm start graph, methods are stored as control flow subgraphs.
Figure 4.1 shows the relevant subset of the type graph of cpm. Methods start with an initial state (nodes of type State with the init flag), which is labelled with the method name. From state nodes, outgoing edges (labelled in) lead to action nodes, which in turn have an edge (labelled out) leading to state nodes. A final state node does not have an outgoing edge and denotes the end of the method. Action nodes contain information about the type of action (e.g. assignment, processor creation, locking) and additional data relevant to the action (e.g. a query target or command parameters). There are a number of relevant rules, namely the following.

action_...: The cpm gts contains a number of action nodes. These nodes represent atomic units of work such as assignments, locking, commands, and so on, and can be compared to statements in a scoop program (although there are explicit locking actions that do not have a counterpart in scoop, where locking is done implicitly). Actions have the lowest priority and are thus applied when other rules, in particular scheduling rules for queues, can not be applied anymore. The following actions exist in cpm:

• action_Assign_... group: These rules perform an assignment operation. The assignment operation has been split into a number of subrules in order to keep individual rules simple, as there are a number of different scenarios for assignments: assignments of references and primitive data, void assignments, assignments to fresh variables and used variables, and assignments to the special Result (return) value.

• action_Command: This action performs a command, and is shown in Figure 4.2. Since commands are always asynchronous (and therefore executed on a different processor), a Queue_Item is created and put on the processor that handles the target node. This enables queue management rules to be applied, which then eventually result in the target processor executing this particular request. Since the action_Command rule advances the in_method edge, the calling processor can continue execution (once action rules are enabled again).

• action_Lock_1 and action_Lock_2: These actions acquire locks for one or two processors respectively, cf. Figure 4.3 for an illustration of the latter. Embargo nodes prevent the rules from being applied if a processor is already locked by another one.

• action_Unlock: The counterpart to the lock actions consists of a single rule, as unlocking multiple locks does not have to happen atomically.

• action_Unlock_Creator: When creating new processors, the created processor is locked by the creating processor. By convention, the next action of the creator is a lock action of the created processor. As a result, the creating processor has to wait until the creation procedure, which contains an Unlock_Creator action at the end, removes this lock. This mechanism simulates the behaviour of scoop, where creation procedures, even for separate objects, are executed sequentially.

• action_New_Attached and action_New_Void: These rules create a new processor and point the designated reference variable to the newly created processor. Again, this task has been split into two separate rules for readability reasons and to avoid excessive usage of quantifier nodes.

• action_Query: As opposed to the command action, the query rule only binds the result of the executed query to the target (by assigning the result to the Data_Var matching the store_to edge, see Figure 4.4). The Queue_Items are instead created by other rules (e.g.
bexp_Query (Figure 4.6) for Boolean queries, which are discussed in the system state section).

• action_Test: This rule performs a Boolean test by advancing only if the evaluated expression is true. The preceding state node has, in certain situations, two in edges, each pointing to a test action node, where one action node points to a Boolean expression and the other one to its negation, as illustrated in Figure 4.5. This implements an if-else branching mechanism and guarantees that the processor can make progress.

• action_TestPostcondition: In case there is a configuration node that denotes that we want to check postconditions, this rule is applied when a processor is in a state preceding an action node with the test_postcondition flag. The rule matches if the test result evaluates to true, and puts the processor in a final state.

config_CheckPostcondition: In case there is a Configuration node with the check_postconditions flag, postconditions will be checked. To do so, this rule follows a final state along the check_postcondition edge to another state node by redirecting the in_method edge of the relevant processor. Postconditions are the only situation where a state is followed directly by another state. The rule does not match if the configuration node is not present, providing an intuitive way to enable or disable postcondition checking.

System State

The system state is concerned with processors, queue management, and handling of data. The relevant type graph is shown in Figure 4.7. Processors are at the core of cpm states. During their lifetime, they are either handling requests or they are idle. In the first case, they are executing a method at a certain position, denoted by the in_method edge. When requests are made by other processors, a Queue_Item is created which has an insert_into edge to the target processor, as can be seen in the rules action_Command (Figure 4.2) and bexp_Query (Figure 4.6). Once a queue item is created, a number of rules come into action that are responsible for queue management, namely the following:

queue_Insert_EmptyBusy and queue_Insert_NotEmpty: These rules can be applied when a queue request has been made (with an insert_into edge), and their effect is to simply put the item at the end of the queue.

Figure 4.7: Type graph of the system state. Note that self-edges are rendered by an arrow that leads from a label to a node (an example is the instance edge of the Param_Ref node). We render self-edges in this manner throughout this thesis.

queue_Remove_ParamRef and queue_Remove_ParamData: Nodes representing parameters are attached to the queue item upon creating it. These two rules prepare the call by removing the connection between queue item and parameter node, and attaching the parameter to the processor's data node. The next rules will handle the remaining part of the queue item.

queue_Remove_...: The remaining four rules in the queue_Remove group remove a query or a command request from the top of the queue and activate the processor to start execution at the given method. There are two rules for the case with one item on the queue and two rules for the case with more items on the queue.

Queries and Other Operations

While there exists a query flag for action nodes, the rule that advances over an action node only does so after the query has been evaluated and is essentially an assignment operation where the right-hand side happens to be a query.
Similarly, other assignment operations also contain right-hand sides that need to be evaluated before the assignment can be performed. For example, in the assignment r_1 := r_2, the reference on the right-hand side must first be fetched. The group of rules in this subsection handles this, and these rules have higher priorities than the action rules in order to make sure that whenever an action requires arguments, they are fetched first. The type graph for operations, shown in Figure 4.9, contains the operation types. Since the operation types are encoded using flags, it would be possible to have, for example, an Op node with both the constant and the add flag, as it is not possible to force having exactly one flag. By convention, we do not support multiple flags for such a node. We set up the type graph to reflect how the various node types are intended to be used. Queries do not appear as action nodes themselves. Instead, they are attached to an assignment action. Similarly, integer and Boolean operations are not targets either, as they appear either in complex expressions or on the right-hand side of an assignment. The relevant rules are the following:

aexp_...: Arithmetic expression rules evaluate integer expressions by creating Result nodes and attaching them to Op nodes. The following rules exist for arithmetic expressions:

• aexp_constant: Creates a result with the value specified by the operation itself.
• aexp_RetrieveParam: Fetches a parameter from the data handled by the current processor.
• aexp_RetrieveData: Retrieves an integer data value from the current processor.

bexp_...: Analogously to the arithmetic expression rules, Boolean expression rules evaluate Boolean expressions. The following rules exist in cpm:

• bexp_constant, bexp_RetrieveData: Analogous to the arithmetic expression variants.
• bexp_GreaterThan, bexp_LessThan, bexp_IsEqual, bexp_not: Evaluate the respective operation and create a result node with a Boolean result.

bexp_Query: The rule for Boolean queries creates a Queue_Item which will be inserted into the queue of the target processor, as illustrated in Figure 4.6. It is similar to the action_Command rule. The target processor will execute the request and, once the result is available, it can be matched by the action_Query rule.

getparam_Ref_...: This group consists of rules for fetching method parameters for command actions. They perform the step of looking up the value of a reference or data variable and create a Param_Data instance, as illustrated in Figure 4.10 for the integer case.

Queries and operations often require intermediate nodes. For example, we attach a Result node to an Op after evaluating it. Once the processor has used this result and moved past the state where it was required, we can safely remove it in order to keep the graph clutter-free. The following rules clean the graph in various situations.

cleanup_DiscardParamData: Once the processor is in a final state of a method, parameters are not required any more and are removed by these rules. In fact, we must remove them, as otherwise the system may misbehave in subsequent method calls (e.g. if the next call has the same parameter names, two nodes of the same parameter exist and rules that match it have two possible applications).

cleanup_exp_DiscardResults_Op and cleanup_exp_DiscardResults_BoolOp: Once a processor moves past an action that has an Op or a BoolOp node attached, the corresponding result nodes are not required any more and are removed by these rules.
cleanup_FinalState_BoolQuery and cleanup_FinalState: These rules are applied when a processor reaches the final state of a query or command respectively. The in_method edge is deleted, as well as associated edges and nodes related to the result value.

Rule Priorities

As mentioned earlier, the rules in cpm are not all applied with the same priority. There are various reasons for this. First of all, it enables control over which rules are applied in which situations. For example, cleanup rules have priorities such that they are performed before new actions are performed, ensuring that no "leftover" nodes stay in the graph (e.g. parameter instances from earlier commands and queries). If the cleanup rules did not have higher priorities, the graph might end up in a state where an action rule has multiple matches, in particular matches with old and invalid instance nodes. Another advantage of rule priorities is that they can be used to attack the state-space explosion problem. By assigning fine-grained priorities to the rules that do not influence the modelled behaviour, we reduce the possible interleavings in an execution. For example, it does not matter in which order cleanup rules are applied, as once all matching cleanup rules are applied, the system always returns to the same state. By assigning each cleanup rule a unique priority value and thus forcing a fixed order, these local interleaving scenarios are eliminated. Of course, one has to pay attention to which rules can have different priorities and which ones need to have the same priorities. Action rules generally should have the same priorities, since these nodes can create queue items and we are interested in the interleavings with unique queue item sequences. Our approach is in line with Zambon and Rensink's paper [25] on best practices in groove, where they suggest using some form of rule scheduling whenever possible. While they mention that "the use of control programs is usually preferred over priorities", we think that priorities are sufficient and easier to maintain in our case.

Table 4.1: Rule priorities. Note that an empty priority means that the rule has the same priority as the one above it, e.g. all error rules have priority 100.

Start Graph

This section revisits our running example of the dining philosophers by showing cpm in action. A dining philosophers start configuration for cpm is shown in Figures 4.11 on page 36 and 4.12 on page 37 (the graph has been split up for readability, but both figures make up a single graph and are represented in groove as such). Most of the nodes in Figure 4.11 belong to the control flow graph of APPLICATION.make, which is the root procedure in this example. It roughly translates to the code in Listing 4.1, with a simple loop that instantiates forks and philosophers and connects them accordingly. In addition to the method graph, there is also a Processor node. It is the handler of its data, which consists of a number of reference and integer variables. Note that the variables have generic names (such as v_1 for Boolean and integer data and r_1 for references), and that there is a mapping from cpm variable names to the variable names of the code in the listing. The fork processors are created, the unlock_creator action is performed, and afterwards they just exist in the system but do not execute more requests (as no other processor ever performs a command or query on a fork). In the make method of the philosopher, the object is initialized by assigning the parameters to the processor's reference variables and data variables.
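Listing 4.1 itself is not reproduced here. As a rough orientation, the root procedure could look like the following sketch, unrolled for two philosophers instead of using a loop; the attribute names and the signature of the philosopher's make are assumptions made for this illustration and may differ from the actual listing.

    fork_1, fork_2: separate FORK
    phil_1, phil_2: separate PHILOSOPHER

    make
            -- Root procedure: create forks and philosophers and start them (sketch only).
        do
            create fork_1
            create fork_2
            -- Assumed creation signature: make (id, left, right, times_to_eat).
            create phil_1.make (1, fork_1, fork_2, 5)
            create phil_2.make (2, fork_2, fork_1, 5)
            launch (phil_1)
            launch (phil_2)
        end

    launch (p: separate PHILOSOPHER)
            -- Start `p' asynchronously; `p' is controlled here, so the separate call is valid.
        do
            p.live
        end

Roughly speaking, each creation instruction corresponds to an action_New_... node (followed by the lock and unlock_creator convention described for the cpm action rules), and each asynchronous call to live corresponds to an action_Command node.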
Finally, the subgraph representing the live method is traversed to perform the main loop of the philosophers. Most notably, this subgraph contains actions to lock (representing atomically acquiring the forks and eating) and unlock the fork processors.

Rule Applications

With the start graph presented in the previous section, we can now inspect the behaviour of cpm. With the groove Simulator, it is possible to follow the state-space exploration visually. Applicable rules are pointed out to the user and the part of the graph that matches is highlighted. Figure 4.13 on page 38 shows the program right before the first creation procedure (a command) is performed. At this point, the rule action_Command (see Figure 4.2 on page 23) is the only rule that has a match and is highlighted in green. After applying the rule, the graph looks as depicted in Figure 4.14 on page 39. Of course, going through rules using the Simulator is rather tedious; it is a useful tool for developing, testing, and debugging such systems, but it is not suitable for verification purposes. Fortunately, as we have seen in Chapter 3, groove provides utilities to verify ltl and ctl formulae. This is where the error rules come into play. To verify whether the program deadlocks, one can simply try to find a counterexample for the formula which states that, starting from the start graph state, there is no future state where either one of the mentioned rules matches.

CPM+OO: An Extension for Objects

cpm with Object Orientation (cpm+oo) builds on top of cpm and aims to bring back object-oriented concepts that have been intentionally left out from corescoop and cpm. Because cpm focuses on the concurrency aspects of scoop, it can be difficult to map real-world scoop programs to it: such programs may involve processors handling multiple objects, non-separate calls within routine bodies, and other scoop features that are not directly modelled in cpm. The enhancements in cpm+oo allow a more direct mapping of scoop programs to the graph model and clear the path for an automatic translation tool from scoop programs to cpm+oo (cf. Chapter 6). In this chapter, we discuss the various extensions made to the cpm model. We explain how the changes affect the behaviour of the system and make informal arguments for the preservation of soundness and completeness.

Type Graph Overview

We start by giving an overview of the changes to various parts of the type graph, which may seem overwhelming at first, as the type graph has changed significantly between cpm and cpm+oo. The goal of this section is not to provide a complete description, but instead to relate the updated type graph to the various added concepts, which will then be explained in detail in subsequent sections.

Processors, Frames, and Objects

To model local calls and non-separate objects, we introduce the notion of stack frames and object instances to the model. A subgraph of the cpm+oo type graph with processor and object related nodes can be seen in Figure 5.1. At the core of the type graph is the Processor. Tied to it is the Queue_Item with its parameters and the name of the method that is intended to be executed. The basic Data node has been replaced with the Object node by simple renaming (note that the type graph simply puts syntactic restrictions on the graphs; the different semantics introduced in cpm+oo come only with the rule changes).
A processor can now be the handler of multiple objects (whereas cpm restricted a processor to be the handler of exactly one data node). Introducing nested routine calls brings the requirement for some call stack representation. This is achieved by introducing the Frame node type and its attached nodes. A processor that is executing a program always works with a current stack frame, which holds local variables, passed parameters, a reference to the Current object, and the state it needs to return to in case the current call is a nested call. When creating requests, a queue item has a stack frame attached, which contains the passed parameters.

Variables, Parameters, and Results

The next group of related types is concerned with the representation and handling of variable declarations and bindings of values to parameters and query results. Figure 5.2 shows the relevant type graph. Variable declarations are, as in cpm, divided into three different subtypes: reference, integer, and Boolean variables. While integer and Boolean data variables did not change, reference variables now point to objects instead of processors. In addition, reference variables now have a flag denoting whether they are declared as separate or not. Parameter nodes represent values passed to commands and queries. Parameters can be local variables (corresponding to variables in the local block) or attributes of the current object, as well as arbitrary expressions (Param_Expr with an expr edge to an operation node). Adding arbitrary expressions (as opposed to local variables and attributes only) as parameters allows representing complex expressions with a single action node in cpm+oo. This is not possible in cpm, where helper variables are required to simulate complex expressions.

Actions

The type graph for actions, shown in Figure 5.3, looks similar to the one in cpm. One important difference is that we use subtypes for the different kinds of actions as opposed to flags. This has the advantage that we can not have a single action node with multiple types, a property which can not be enforced in groove using flags. To support arbitrary expressions (e.g. sum := a1.count + a2.count) as parameters and operands, we replace a number of string attributes with edges to nodes of type Super_Op (and its subtypes). For example, the assignment action node Action_Assign_Ref points to a RefOp node, which means that we can use arbitrary expressions on the right-hand side of the assignment. The type Action_Token is a supertype of actions that are local to a processor, or, in the case of queries, potentially local to a processor. This is later used in a mechanism to mitigate the state-space explosion problem and is discussed in detail in Section 5.3.

Operations

The group of operation types (cf. Figure 5.6 on page 47) has undergone a number of changes during the development of the cpm+oo model. Integer, Boolean, and reference operations (Op, BoolOp, and RefOp types) now inherit from the same Super_Op supertype, which can be matched in certain rules to cover all kinds of operations. The different operations (e.g. "greater than" and "equals" operations) are represented as unique types, which is more restrictive than representing the operations with flags for one single node type. Other additions include types for the handling of local declarations as well as types for integer and reference type queries.

Errors

As in cpm, the ERROR type (Figure 5.4) is used for recording information about detected issues with the program.
This includes both undesirable properties in the behaviour of the program (e.g. a deadlock situation) and invalid configurations (e.g. multiple handlers for a single object). The latter kind of error is used to aid the development and evolution of the model and is designed to catch bugs in the model itself, not errors in the behaviour of the modelled runtime. Recording information in error nodes is useful for postprocessing. Since all rules have an ERROR embargo node, rules can only be applied as long as there is no error node. Once an error node is created, the system is in a final state. Our postprocessing tools can then simply go through all the final states and check whether there is an error node, and if so, generate output based on the context of the error.

• Reset_Token and Action_Executed_Indicator: These two types are used for an optimisation technique that forces processors that are performing non-separate actions to advance as far as possible before yielding control to the next processor. We describe the state-space optimisation involving these types in detail in Section 5.3.

Others

• Configuration: A node of this type can be included in a start graph to enable special behaviour of the model. In particular, one can add the check_postconditions flag to enable postcondition checking.
• Init: The initialization type allows specifying the root class and procedure.

Local and non-separate Calls

In cpm, calls are always performed by adding a new queue item to the request queue of a remote processor. Local calls are not supported; instead, one has to perform inlining of the method bodies where local calls would occur. cpm+oo instead provides mechanisms for local routine calls (i.e. where the target is the current object) and non-separate calls (where the target may be a different object, but is handled by the current processor). To achieve this, we use the call stack representation introduced in the type graph and introduce rules that handle non-separate calls. The corresponding rules for separate calls are derived from the cpm rules that create feature requests and are enhanced with the notion of a call stack and adapted to the object representation. In the following, we first discuss separate calls. We show how we adapt the cpm rules to work with objects, frames, and other changes in the cpm+oo type graph. We then present features that are missing in cpm, such as non-separate calls.

Separate Calls

The rules handling the creation of feature requests, i.e. separate feature calls, are similar to what we have seen in cpm (rules action_Command and bexp_Query, see Figures 4.2 and 4.6). Figure 5.11 shows the rule action_Command_separate from cpm+oo. While the rule is much larger than its cpm counterpart, the semantic behaviour of the two does not differ much. There are several reasons why the cpm+oo rule requires more nodes. First, cpm+oo supports additional parameter nodes for a command (in particular, expressions, local references, and local data), resulting in more pairs of parameters and instances, but they follow the same structure as data and reference parameters in cpm. Second, to the lower left, there is a construct nesting an ∃ quantifier inside a ∀ quantifier. Since the parameter node is at the ∀ quantifier and its instance at the ∃ quantifier, this expresses that the rule matches if, for all Param_Expr nodes of the command, there exists an instance of type Param. This construct is used to enforce that the rule is only applied once all parameter expressions are evaluated (i.e. once instance nodes have been created).
Once the requirements are met, a queue item is created, similar to what the cpm rule does. But instead of attaching parameters directly to the queue item, we create a Frame node, representing the prepared stack frame which can be put on top of the frame stack once the request is handled. Note that the action does not specify a reference denoting the call target as a simple string any more; instead, it points to a parameter expression via the target edge, which allows using complex expressions as targets (e.g. foo.get_counter().count). Once a processor has queue items attached, the scheduling rules for queues are applied. These work analogously to the cpm counterparts. For example, Figure 5.7 shows the rule queue_Remove_SingleQueued in cpm, while Figure 5.8 shows the corresponding rule from cpm+oo. In cpm+oo, we not only point the processor to the routine that should be executed, but also set the active frame edge to the frame attached to the request, thus providing information about the routine arguments and the target object (this is needed since the processor may be handling multiple objects and needs to know which one Current refers to). Note that, since we attach certain information related to the request type to the frame instead of directly attaching it to the request, cpm+oo requires only two general queue_Remove_... rules (as opposed to different rules for queries and commands in cpm). After the execution of a request, several rules take care of cleaning up the frame and handling possible return values. For example, the rule cleanup_FinalState_Commands_Empty_Call_Stack, shown in Figure 5.9, shows how the stack frame is deleted after a command when the processor becomes idle.

Non-separate Calls

Thanks to stack frames, we can now also simulate non-separate calls and local calls (calls with target Current). These commands and queries, as per the formal semantics, do not create a feature request that is enqueued in the processor's request queue. Instead, they are directly (and sequentially) executed by the calling processor. We stay with the command example, but the case for queries is again similar. The rule action_Command_non-separate, shown in Figure 5.12, handles such command calls. Again, a stack frame is created. But instead of creating a new request, the processor updates its current frame pointer to the newly created frame and starts executing the desired method body. The created frame points to the current frame via a next edge. In addition, the created frame also includes an edge to the state after the command node, labelled return_state. This allows returning to the correct position once the command finishes and the stack frame is removed. Once a procedure has been processed and the processor points to a final state node, several high-priority scheduling rules may be applied, depending on the state of the call stack and the type of feature (command or query). These rules (which in fact are the same ones that handle the analogous case for separate features) start with the prefix cleanup_FinalState_... and pop the current frame from the frame stack. Figure 5.10 shows the cleanup_FinalState_Command rule, which deletes a frame, instructs the processor to continue at the position after the call in the calling procedure, and resets the active frame edge to point to the original stack frame.
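To relate the two kinds of command rules back to source code, the following sketch (with invented class and feature names) shows which calls are handled by action_Command_non-separate and which become queue items via action_Command_separate.

    class ACCOUNT_HOLDER
    feature
        wallet: WALLET
                -- Handled by the same processor as the holder.

        pay_rent (a_bank: separate BANK)
                -- Sketch: one non-separate and one separate command.
            do
                wallet.remove (10)
                    -- Non-separate call: a new frame on the same processor.
                a_bank.deposit (10)
                    -- Separate call: a feature request for the bank's handler.
            end
    end

Calls with target Current are treated in the same way as the call on wallet: a frame is pushed and the body is executed sequentially on the calling processor.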
Dynamic Object Creation and Variable Names

When creating processors and associated data nodes in cpm, the rules action_New_Void and action_New_Attached are applied to create a fixed number of reference and data variable nodes. To map a certain program to cpm, one has to first determine how many variables are required to represent the program and then adjust these rules to make sure that enough variables are available for the mapping. If a program involves several classes, cpm does not distinguish between them when creating data and handlers. Instead, all processors get the maximum number of data and reference variables. To make direct translations of scoop programs easier and interpreting generated start graphs more intuitive, we introduce variable names in cpm+oo that can be directly mapped to the names in the source code. To achieve this, we encode the reference, Boolean, and integer attributes of classes in the start graph itself. An example of this can be seen in Figure 5.13, where we include all variables relevant to the PHILOSOPHER class. We call these constructs object templates. To make use of object templates, we modify the semantics for object creation slightly. The rules action_New_From_Template and action_New_Local_From_Template handle object creation for attributes and local variables respectively. We attach the class name to Action_New nodes, denoting the type of object that we want to create. The rule then matches the template with the corresponding name, copies the template variables, and attaches them to the newly created object. The created object now contains attributes corresponding to its declared type. We can use arbitrary variable names as opposed to being restricted to generic ones as in cpm. In addition, objects now only have variables relevant to their types attached.

Generic Operators

In cpm, actions that require arguments, such as assignments or commands, are limited in that the arguments of these actions, including targets for queries and commands, can only be local data or reference variables, or simple operations like addition or negation. As a consequence, expressions that do not conform to this restriction have to be split up. For example, to represent the statement account.withdraw (account.balance), we require two actions, the first one being a query for account.balance that is assigned to a (temporary) variable, and the second one being the command with this variable as an argument. To enable more generic expressions, we change the representation of parameter nodes (see Figures 4.7 on page 26 and 5.2 on page 43 for the relevant type graphs of cpm and cpm+oo respectively). In cpm+oo, we can use the supertype Param instead of using specific types. We still have the Param_Ref and Param_Data types, as well as the analogous Param_Local_Ref and Param_Local_Data types, which can be used to represent and fetch attributes and local reference and data values. In addition, and this is where the added flexibility comes from, we also provide a Param_Expr type, which has an expr edge to the Super_Op type. With this addition, we can now use arbitrary operations to fetch parameter data as opposed to only local and attribute values. Consequently, command actions and query operations do not specify the target via a string any more. Instead, as seen in the type graphs in Figures 5.1 on page 41 and 5.6 on page 47, these nodes now have a target edge that points to a Param_Expr node which specifies the target.
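In source terms, the restriction in cpm amounts to hoisting the nested query into a temporary variable by hand, whereas cpm+oo can represent the original statement with a single command action; balance_amount is a made-up helper name used only for this sketch.

    -- cpm: the nested query must be split into two actions.
    balance_amount := account.balance
    account.withdraw (balance_amount)

    -- cpm+oo: a single command action whose target and argument are arbitrary expressions.
    account.withdraw (account.balance)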
Once targets and parameters are evaluated, the behaviour is the same as if one had used the Param_Ref and Param_Data types from cpm. Figure 5.14 shows how a RefOp node can be used to specify the target, as opposed to a simple string with an attribute variable name (which is how targets are handled in cpm).

Lock Passing

Lock passing is necessary to avoid deadlock in certain situations. For example, when an object a holds the locks of the handlers of b and c, and creates a query request on b that in turn requires locking of c, then b can not proceed until a releases the lock on c, which in turn can not happen until b completes the request. No processor can make progress, and a deadlock has occurred. To solve this problem, Morandi [13] defines a lock passing mechanism for feature calls. In his thesis, he defines feature calls as follows [13, p. 19] (edited to reflect only relevant parts and for formatting; q denotes the handler of the call target):

A client p performs the following steps to call a feature f with target expression e_0 and argument expressions e_1, ..., e_n.
• If the feature call is non-separate, i.e. p = q, then ask q to process the feature request immediately using its call stack and wait for termination.
• Otherwise, add the request to the end of q's request queue.
• Wait by necessity: if f is a query, then wait for the result.
• Lock revocation: if lock passing happened, then wait for the locks to come back.

To achieve this behaviour, we introduce a number of rules that interact with each other.

pass_locks: This rule, shown in Figure 5.17, matches if at least one of the attached arguments is controlled, i.e., if the current processor (node n0) has a lock on some processor (node n1) that handles an object that is passed to the command. The target processor (node n2) receives all locks (lock edges) from the client processor. In addition, several edges are created for bookkeeping purposes. Edges labelled passed_lock point to processors that the client held a lock on and are required to know which locks to restore. The receiving processor has an edge restore_locks that points to the client, which allows giving back the locks to the correct processor later. Finally, an edge wait_for_restored_locks is added in order to make sure that the client does not continue execution before getting the locks back (which is ensured by a wait_for_restored_locks embargo edge in each action rule).

restore_locks_...: These two rules handle restoring locks for commands when lock passing has occurred. Figure 5.15 shows one of the rules, where the client (node n8) gets back all the locks that have been passed earlier. To do this, restore_locks edges (which are created when creating the queue item for commands and queries that require lock passing) from the frame of the original request need to be present, which ensures that the locks get restored at the right point in the call stack even if the request triggers further feature calls.

cleanup_Restore_Locks_Query: Similarly, this rule (Figure 5.16) handles restoring locks of query requests. Note that we do not support separate callbacks as described in the original semantics. The above rules all have higher priority than action and query rules, which ensures that whenever the system is in a state where lock passing or revocation is required, it will apply the corresponding rule. It is therefore impossible for the system to miss passing the locks or to continue without restoring the locks first.
The latter is additionally enforced by adding embargo edges (wait_for_restored_locks in action and query nodes), which catch the case where locks are not properly restored because no restore lock rule is applicable (this situation would indicate a problem with the cpm+oo model, not with the semantics or the inspected program).

Figure 5.17: Rule pass_locks, which matches and is applied when at least one of the parameters is controlled.

Distinguishing Preconditions and Wait Conditions

In scoop, a statement in a require block involving separate arguments can either be a precondition or a wait condition, depending on the context from which it is called. If the processor executing the statement already held request queue locks on all involved processors of separate arguments before the call, then no other processor can enqueue requests to the queues of those handlers. As a result, the outcome of the statement can not change over time, and therefore it is a precondition. But in the case where the calling processor does not hold all locks, the result may change over time, and therefore the statement is a wait condition. In cpm+oo, we distinguish between preconditions and wait conditions. The Action_Test nodes that denote the path that is to be followed if a precondition or wait condition evaluates to False are always marked with a precondition_fail flag. This does not necessarily mean, as described above, that the statement is in fact a precondition. Instead, the rule action_Test determines, based on the participating parameters, whether the statement is a precondition or a wait condition. In the first case, the rule creates an ERROR node, stating that a precondition has failed, and as a result, the system is in a final state. But if the statement turns out to be a wait condition, the rule handles it as such by following the action to the next state. Eventually, the processor returns the locks (which gives other processors the possibility to modify the state of the involved objects) before acquiring them again and evaluating the wait conditions again.

State-Space Optimisations

The state-space explosion problem is omnipresent in concurrent systems and therefore it is also present in cpm and cpm+oo. Obviously, this is an issue that one can not get rid of completely. Still, it can be of practical value to mitigate the problem as much as possible. We implement a number of optimisations that aim to reduce the state-space problem by avoiding unnecessary interleavings. Similar optimisations are already present in cpm, such as fine-grained rule priorities for certain rules. Of course, we have to pay attention to the interleavings that are left out with these optimisations. In the following, we present additional measures taken to decrease the size of the state-space and argue why we are confident that they are not problematic with regards to the properties that can be verified using the cpm+oo model.

Quantifier Usage

In groove, quantifiers can be used to create flexible rules, such as ones that match a type of node several times, or ones that express a logical "or", matching one part of the rule or another. While there are several situations where quantifiers are used in cpm, we extended this usage further in cpm+oo to help reduce the size of the state-space.
The rule IntOp_RetrieveData (known as aexp_RetrieveData in cpm) is an example, shown in Figure 5.18, that uses a simple ∀>0 quantifier with several nodes attached, which means that the rule now creates Result nodes for each Op_Retrieve_Data attached to the matched action. In a situation with multiple Op_Retrieve_Data nodes, cpm+oo creates all result nodes in one single rule application, whereas in cpm, the state-space diverges at that point, exploring the different orders of creating these nodes, and converges once all result nodes have been created (since the resulting state is always the same, regardless of the order).

Scheduler Optimisation

Suppose we have a dining philosophers instance with only two philosophers (and thus with two forks). We are interested in the different interleavings that can occur with regards to locking, or more generally, we are interested in the possible interleavings where philosophers and forks and their processors interact with each other by locking request queues. When simulating the instance with cpm or any version of cpm+oo, all the relevant interleavings are captured. Unfortunately, cpm and cpm+oo without scheduler optimisations also explore a large number of interleavings that we are not interested in, as they cover the same behaviour with respect to the outcome of the program. For example, one execution could execute the complete code of the first philosopher before the second one starts. In the next interleaving, the second philosopher might execute a (local) identifier assignment, then the first one executes everything, before the second one executes the remaining part. With respect to the interaction of processors and objects, these two interleavings are equal. More generally, we can split actions and queries into two groups: non-separate and separate. In the first case, everything is handled on the current processor; no other processors are involved. In the latter case, different processors may be involved, and therefore we want to explore all possible executions (since the outcome of executing scoop programs is determined by the order in which requests are enqueued). The idea behind this optimisation is that as long as a processor is executing locally (e.g. a philosopher initializing itself by performing integer assignments for the identifier and number of times to eat attributes) without having an impact on any processor's request queue, we can advance it as far as possible. Once a processor is at a point where no further non-separate action or query can be executed, it waits until all other processors have reached a similar position (or have finished executing and are idle). All processors that are not idle are potentially about to interact with other processors. At this point, it is important to explore all interleavings, as different orders in locking and enqueuing requests may result in different situations (e.g. in the dining philosophers example, when the first philosopher has a fork and is about to pick up the second, but the second philosopher is about to pick up the same fork, we want to explore both situations where one or the other philosopher "wins").

Implementation

We implement this idea by introducing an execution token and organising the processors in a linked list. In a cpm+oo state, at most one processor has a token flag, which denotes that it is allowed to perform non-separate steps. To achieve this, we give the non-separate rules a higher priority and add the token as a requirement to match. These rules (e.g.
the rule action_Command_non-separate) can only be executed by the processor that has the token. Once a processor can not perform any more non-separate actions, it passes the token to the next processor. This is repeated until no processor can make non-separate progress any more. Once this is the case, the separate rules (such as action_Command_separate) can be applied. These do not require the token, which means multiple rules may be applicable in a given state, and since these rules all have the same priority, all interleavings are explored by the system. In the following, note that non-separate rules have priority 4 and separate rules have priority 1. The rules pass_token, pass_token_first, and reset_token are relevant for this mechanism. These rules, with priorities 3, 2, and 0 respectively, handle the movement of the token flag along the processors and are shown in Figure 5.19. The token is cycled until there is one full cycle in which no processor has made progress. To achieve this, we use a node of type Action_Executed_Indicator. Such a node is created whenever a non-separate action is performed, e.g. by the rule action_AssignRef. When the token is on the last processor, it gets moved to the first one (thus restarting the cycle) only if there is an indicator node. When the token has run through the list without an action having been performed, the rule is not applicable anymore, which enables rules with lower priority, in particular the separate rules. At this point, all different interleavings between separate rules are explored as intended. Similar to the Action_Executed_Indicator, separate actions create a Reset_Token node that indicates that a separate step has been performed, which means that some processor may possibly continue with non-separate steps. If such a node exists, the reset_token rule can be applied, which removes the node and puts the token back on the first processor, restarting the cycle to perform non-separate steps. In case there is no such Reset_Token node, the rule cleanup_token is applied, which removes the token from the processor holding it, ensuring that there are not several final configurations whose only difference is that the token is on a different processor. As we will see later in Chapter 7, this optimisation results in a huge improvement in the size of the state-space and allows us to verify programs whose state-spaces have been too big before. We argue that this mechanism does not leave out interleavings of interest, as all the possible sequences in which requests get added to queues are preserved. By forcing a certain order for local computations, where the order of execution has no influence on the outcome, we can avoid exploring a large number of states.

Rules

Here, we provide a complete overview of all rules and their priorities in the cpm+oo system. Together with the type graph we have presented, this makes up the complete gts of cpm+oo. While we discuss certain rules in detail, we do not show graphs for every rule, but instead refer the interested reader to the supplementary material repository [21]. In addition, we also present the priorities of the rules as we have done for cpm. We divide the rules in cpm+oo into the following categories and discuss them in the subsequent sections.

Control Flow

Control flow rules handle movement along feature graphs, in particular moving from a state via an action node to the next state. Table 5.1 summarizes these rules and their priorities.
Most rules have direct counterparts in cpm, although new variants have been introduced to handle additional features like non-separate calls. What follows is a short discussion of the rules. action_Assign_Data This rule handles an integer or Boolean assignment operation where the target is an attribute of the current object. action_Assign_Local_Data Similarly, this rule handles integer and Boolean assignments to variables declared in the local block. The rule accesses local data stored on the frame instead of data stored on the object, since local declarations are only valid within the context of the current call and therefore of the current call stack frame. action_AssignRef Analogously to the previous two, this rule handles reference assignments. While cpm uses a number of rules for reference assignments (the action_AssignRef_Ref_... rules), the cpm+oo rule is not injective, and we make additional use of quantifiers to express alternatives, which makes it possible to cover all cases with a single rule. action_Assign_Local_Ref Analogously to action_Assign_Local_Data, this rule handles the case of reference assignments to local variables. action_AssignResult_... For each data type, cpm+oo has a rule for assignment to the special Result value, representing return values in queries. Figure 5.20 shows the rule for reference values. action_Command_non-separate action_Command_separate action_Command_separate_restore_locks The non-separate command rule creates a new stack frame, sets it up with parameters, and puts it on top of the frame stack of the current processor. In addition, the rule points the current processor to the designated feature. This represents a local call that is executed immediately. The separate cases, on the other hand, create a feature request and attach the created frame to it. The request is attached to the target processor and will then get processed by the queue management rules. The calling processor can proceed, since the separate rules only match if the target processor differs from the calling processor, which means that the command is asynchronous. action_CreateRoot Since cpm+oo allows specifying the root class and procedure, we need a rule that creates the initial object, which is what action_CreateRoot does. The configuration node with the root class name and procedure name is deleted in this rule, ensuring that only one root object is created. The object is created according to the class template, similar to what the rule action_New_From_Template does. action_New_From_Template action_New_Local_From_Template To instantiate attributes and local variables, these rules match Action_New nodes and their context. The created object is then attached to the specified variable, where the first rule handles attributes and the second one local variables. action_Lock The lock action rule takes, as opposed to the lock rules in cpm, a variable number of references. The rule can only be applied if all handlers of the specified objects are not locked. Applying this rule implies that all locks are obtained atomically. Figure 5.21 shows the rule graph. action_Test A form of branching is provided with the test action, which works analogously to the cpm test action. action_Noop This rule simply skips an Action_Noop node and, as the name indicates, performs no real operation. action_TestPostcondition Like the cpm rule of the same name, this one advances a processor in a final state to the first state of the postcondition, if the graph is configured to check postconditions.
action_Unlock_Creator action_Unlock_Creator_non-separate Since created objects are locked by their creators, they need to have an unlock action as their last action in creation procedures, which removes the lock, allowing the creator to continue execution (since the creator, by convention, immediately has to follow with a pair of lock and unlock actions for the same object that was just created). These rules handle the separate and non-separate case. action_Unlock_Expr If a feature obtains locks at the start, there are corresponding unlock actions that release them at the end of the feature. This rule handles these unlock actions. Not only does it release held locks, but in case the lock is not held (which can happen if several passed separate arguments have the same handler) the action becomes an empty operation. Aside from these rules, the rules involved in lock passing belong to this group. They are not repeated here; instead we refer to Section 5.2.4, where they are described in detail. System State The rules in the system state group, listed in Table 5.2, are concerned with queue management and graph maintenance. The former includes rules that insert queue items into the request queue and remove them when processing an item. The latter deal with various graph states with leftover nodes, e.g. when a processor has reached a final state and results need to be discarded. cleanup_exp_DiscardResults_BoolOp cleanup_exp_DiscardResults_Op cleanup_exp_DiscardResults_RefOp cleanup_exp_DiscardResults_RefOp_Void After an operation (in the form of a Super_Op node) has been evaluated, it has a Result node attached, which is then used by the action that processes it. Once a processor moves past the action node, these rules are applied to remove the Result nodes, since they are not used anymore. cleanup_remove_Void In certain situations, it can happen that Void nodes are left without being connected to any part of the graph. These are removed by this rule in order to avoid creating several states in the lts that only differ in the number of unconnected Void nodes. cleanup_Frame_Remove_controls Frames can have controls edges to processors which have been controlled prior to the call. This is required to determine whether a statement in the require block is a pre- or wait condition. cleanup_FinalState_... A number of cleanup rules are applied once a processor reaches the final state of the procedure it is executing. They perform a range of tasks, including removing the current frame or setting the result value such that the calling processor has access to it. The following rules exist in this set.
• cleanup_FinalState
• cleanup_FinalState_RefQuery_with_next_frame
queue_Insert_EmptyBusy queue_Insert_NotEmpty When a client creates a request queue item, it does not actually insert the item directly into the request queue. Instead, rules like action_Command_separate simply let the Queue_Item point to the target processor via an insert_into edge. The actual insertion into the queue, depending on whether it is currently empty or not, is performed with the queue_Insert_EmptyBusy and queue_Insert_NotEmpty rules respectively. queue_Remove_SingleQueued queue_Remove_MultipleQueued Once the request queue has items, these rules are used to remove a queue item from the request queue and instruct the processor to start execution at the designated procedure. The first one handles the case where exactly one item is on the queue, the second one handles cases with more than one item on the queue.
prepare_lock_wait Before a lock action is performed, a wait edge is inserted from the processor executing the action to the processors it intends to lock. These edges are in particular useful for detecting deadlock with the rule error_deadlock. These edges are deleted once the target processors are locked. remove_wait_and_lock When a processor is in a state before a lock action, it first creates a wait edge that points to the processor it intends to lock. The rule action_Lock can only be applied if for all target processors, either the lock is already held, or a wait edge exists. In the former case, the graph is not modified any further. This rule handles the situation where a processor has both a wait and a lock edge, in which case the wait edge simply gets deleted. Queries and Other Operations As in cpm, we group rules related to queries and operations on integers, Booleans, and references together. Table 5.3 lists all rules in this group. A description of rules and rule families follows. BoolOp_Query_. . . The Boolean query rules handle separate and non-separate queries, where a new frame is created. In the separate case, a request queue item is created and attached to the target processor, whereas in the non-separate case, the frame is put on the current processor's frame stack and the processor is instructed to start executing the query. BoolOp_RetrieveData Similar to other RetrieveData rules in both cpm and cpm+oo, this rule fetches attributes of the current object of Boolean types. BoolOp_. . . Other Boolean operations include constants, conjunction, disjunction, equality, and others. These rules, with their arguments evaluated, perform the corresponding operation and attach the result to the matched BoolOp node. IntOp_. . . Similar to the rules handling Boolean operations, these rules handle various integer operations, such as simple addition. RefOp_. . . Analogously, a number of rules handle fetching references. This includes getting attributes or local references, but also creating query requests. getlocal_. . . The getlocal_Data and getlocal_Ref rules prepare instances for local variables, which are attached to the created frame later when a command or query is called. getparam_. . . Once the values of Param nodes have been evaluated, these rules are applied to create instances in the form of Param_Data and Param_Ref nodes which are then passed to the called query or command. Optimisations Optimisation rules are those involved in handling the execution token discussed earlier, and are listed along with their priorities in Table 5.4. A thorough discussion of the involved types and rules is given in Section 5.3. Errors In this group of rules, we collect error conditions. This includes properties such as presence of a deadlock or a void call, which we are interested in when verifying programs. In addition, we also have rules that aid us during development and serve as "sanity checks". For example, the rule debug_multiple_handlers matches, if an object has more than one handler. Since this situation is not possible according to the scoop specification, a match of this rule means that there is an error in our model. Matching such "bad states" was used extensively during development to catch bugs, but the corresponding rules have been removed from the final gts. The priorities of the current error rules are listed in Table 5.5 on page 73, a short description of them follows. 
error_deadlock A large part of the motivation behind this work is detecting, amongst other properties, deadlocks in scoop programs. This rule, shown in Figure 5.22, detects deadlocks by matching, if a processor n1 has a lock on some processor n4, but is also waiting on a processor n2, which in turn is locked by some other processor (not shown, but expressed using the regular expression edge -lock.wait)+) which again is waiting on n4. An example configuration where this rule matches is shown in Section 7.2.1. error_deadlock_2 and error_deadlock_3 Deadlock situations can not only occur in the above case where no processor is able to acquire locks and make progress. For example, when two processors execute the same feature which contains a wait condition that requires the other processor to finish this particular feature, then both processors can lock the request queue of the other one. They then both wait for the other one to handle the query request generated in the wait condition and therefore none of them makes progress. The rules error_deadlock_2 and error_deadlock_3 handle such situations for two or more processors respectively. Rule error_PostconditionFail If a postcondition is evaluated to False, this rule puts the processor in a special state of type State_Postcondition_Fail and creates an ERROR node with attached information about where the postcondition has failed. error_Command_Void_Target error_Query_Void_Target If a target of a command or query has been evaluated to a void reference, then the call is invalid, which is detected and reported with these two rules. Configuration Currently, there is only one rule in this category, the rule config_CheckPostcondition, which is applied if one specifies that postcondition should be checked. It has a high priority and advances a processor from a final state to the start of the postcondition, ensuring that no cleanup rules are applied before the postconditions have been checked. The processor will evaluate the postconditions and if everything evaluates to true, end up in another final state where the normal cleanup rules can be applied. If no such configuration node exists, this rule can not be applied and the cleanup rules take place, ignoring possible postcondition related parts of the graph. Testing The cpm+oo model has been developed in an iterative fashion by adding features described in this chapter to the cpm model one by one. Changing the gts is error-prone. It is all too easy to alter the behaviour such that it does not reflect the intended one any more by adding rules that contain bugs, changing priorities that result in certain rules being applied in a state where we do not want the rule to be applicable, or altering the type graph and rendering existing rules useless. To ensure that the model stays true to the intended behaviour, we use a number of start graphs representing test programs and specify the expected output. For example, along with evolving the model, we also evolve the examples of the dining philosophers with both the correct and the deadlock implementation. Our testing utilities then explore the state-spaces of these examples and match it against the expected behaviour which checks properties like state-space size, the number of final configurations, and whether ERROR nodes are present in final configurations. Once we finalised the type graph for the current cpm+oo model, we used this testing approach in combination with our translation tool, described in Chapter 6. 
This allows us to write scoop programs and specify the expected output of our state-space exploration tool. The testing utility then first translates the source code to a cpm+oo start graph, and then explores the state-space and checks whether the actual output matches the expected output. Future Work With cpm+oo at its current state, we are able to simulate a number of scoop features directly in the model, as opposed to simulating them using more basic cpm constructs. We added rules and types to cpm that make the model more expressive and allow start graphs that closely resemble the corresponding scoop source code. To support more scoop features, one possible way is to extend the cpm+oo model to directly support those features. This has the advantage that programs that make use of those features can be represented directly in a compact and readable fashion. Another approach is simulating these features using existing cpm+oo functionality. In our automatic translation tool, it would require additional work to express features not directly supported by cpm+oo, resembling the work of traditional compilers. Translation With cpm+oo, we introduced object-oriented features of scoop to the cpm model. Thanks to that effort, more scoop programs can now be represented and simulated using the model. Since both cpm and the extensions that we introduce in cpm+oo are closely modelled after scoop, mapping source code to start graphs becomes a less tedious task. In this chapter, we discuss the automatic translation tool that translates a subset of scoop to cpm+oo start graphs. Overview Translating a scoop program to cpm+oo consists of a number of steps, as depicted in Figure 6.1 where the tool progresses from top to bottom. In the first step, scoop source files are parsed and syntax trees are generated. Using these syntax trees, an internal representation of the program is created in two steps: First the syntax trees are walked to gather typing information of features and variables. Then, in a second pass through the syntax tree, we use typing information to create a structure that closely relates to the cpm+oo type graph. In the final two steps, the intermediate representation is transformed to a simple graph representation, which also contains layout information. Finally, this graph can be traversed and rendered as an xml file that can be used in the cpm+oo transformation system. The tool is implemented in Java and uses a number of libraries, namely the following. JUnit The JUnit framework is used to automatically test various aspects of the implementation. Apache Commons The commons libraries offer a wide variety of reusable software components. This project makes use of the mathematics features, in particular for case studies and evaluation purposes. groove Not only does groove provide graphical and command-line interfaces, but it can also be used as a library in custom software. We use the library to perform exploration and verification from within our toolchain. This enables us to create more specific output tailored to cpm+oo as opposed to the generic gts output provided by the command-line interface of groove. What follows are more detailed technical descriptions of the individual steps. Translating Programs With the help of antlr, the first step of parsing consists of writing a grammar in the antlr grammar format. We did not write this from scratch; instead we adapted the grammar found in EVE [22] for this usage. During this process, we modified the grammar to conform to the antlr file format. 
Given this grammar, antlr is able to generate a lexer and a parser which can be used in our tool. It is important to note that at this stage, we consider all scoop programs. This means that we are able to parse programs with more advanced features, such as inheritance and generics. The decision whether a program is translatable or whether it contains unsupported features is performed when inspecting the syntax tree (steps 2a and 2b in Figure 6.1). To perform step 2a, the tool uses a class that implements the scoop syntax tree visitor. It keeps track of the class currently inspected and stores the following type information for each class: • Declared routines and their parameter types and return types (if any). • Declared attributes and their types. • A list of creation procedures. It is necessary to record this typing information in advance, as we need to know the types of symbols in the next step. In a single-pass approach, it is possible to encounter symbols from classes that have not yet been analysed, therefore making it impossible to know whether it is an integer, Boolean, or reference symbol. After having gathered the types of declarations, we pass through the parse tree once again. This time, we create a number of CPMOGraph objects, one for each parsed class. The CPMOGraph class and its subclasses are closely related to the cpm+oo type graph. In fact, most types in cpm+oo have a direct representation as a subclass of CPMOGraph. For example, we use a class BoolConstant that inherits from BoolOp, which in turn inherits from the class Op. Similarly, in cpm+oo the BoolOp_Constant type is a subtype of BoolOp, which in turn is a subtype of Super_Op. This direct correlation is useful in a variety of ways. In particular, translating a CPMOGraph to a cpm+oo start graph is straightforward, as we can simply go through the structure and create a cpm+oo graph node (in a generic graph representation) for each encountered CPMOGraph object. This also means that the "compiler effort", i.e. relating source code statements (in the form of parts of the syntax tree) to cpm+oo nodes is concentrated in one step, namely the visitor class that implements step 2b. This second visitor also handles situations of input programs that use features currently not supported. If our tool encounters such a feature in step 2b, it either ignores it (in cases where the feature does not influence the program execution, e.g. a note block at the top of a class) or aborts the translation and prints the part of the source code that resulted in the tool to fail. This way, we have a single point in the tool where the decision is made whether a program is supported, and only one point at which translation can fail due to the nature of the input program. Supported SCOOP Features For correct translation and simulation in cpm+oo, we require a complete scoop program to be passed as a set of input files. In particular, all referenced classes must be part of the input. A number of other restrictions on the input programs apply for the tools to function correctly. In this section, we give an overview of the supported features of scoop and discuss the parts that are missing. The translation tool focuses on the basic features of scoop. The goal is not to support the complete scoop language, but enough of the language to allow writing expressive programs in an object-oriented manner. The following features are currently supported. Classes and Objects cpm+oo supports objects and classes natively. 
The translation creates object templates for each input class, consisting of the class name and the names and types of its attributes. Feature declarations Both routines (with the do keyword) and attributes are translated. While attributes are part of the class template, we also create a getter routine (consisting of simply assigning the attribute to Result) for each attribute. This makes it possible to create Queue_Items that call the getter functions instead of accessing the data from other processors directly (which would cause cpm+oo to misbehave, as the requests would not be served in FIFO order anymore). Routine declarations In routines, we support common constructs such as formal arguments, preconditions, wait conditions, and postconditions. Local declarations Local variables of reference, integer, and Boolean type are supported. Integer and Boolean types are special native types in cpm+oo and behave like expanded types. Generic support for expanded types is currently not available. Instructions A number of instructions are supported, namely:
• Creation calls with the create keyword and an explicit creation procedure.
• Local calls (with target Current).
• Integer and Boolean literals.
Expressions We support arbitrarily complex expressions, which are translated to a single Op node in the output graph. As a consequence of leaving out a number of features such as agents, expressions related to those features are not supported (e.g. agents inside expressions). Both cpm+oo and the translation tool currently lack support for a number of features. Most prominently, we do not support inheritance. With this, a number of related scoop features are not supported either, for example partial classes, the redefinition of features, arrays, generics, agents, and others. In addition, we currently leave out a number of other language features, such as class invariants, old values in postconditions, and others. In Section 6.6, we give an overview of the most important features currently missing and present possible implementation strategies as future work. Output Once the intermediate representation of the input files is generated (in the form of CPMOGraph objects), the remaining task is outputting the representation as a gxl file. To achieve this, we use the visitor pattern once again: the interface CPMOGraphVisitor allows implementing classes to pass through the cpm+oo structure. This is used to create a simple graph representation using the output.graph.Graph class and its subclasses. These classes implement a straightforward graph representation with nodes and directed edges. In addition, nodes can also store position values. This allows us to create start graphs that are "human readable" when rendered in groove. In particular, we organise routine subgraphs by aligning states and actions from left to right, while attaching additional nodes, like parameters and target operations, above them. Using two separate steps to output a CPMOGraph structure to xml may seem unnecessary, as we could just as well have generated the gxl file directly. But using a separate simple graph representation has the advantage that we can separate the tasks of creating graphs with layout information and rendering them in some format (in our case gxl). This leaves more flexibility when extending the program: for example, when we want to render the graph in another output format, we can traverse a simple graph structure with edges and positioned nodes.
When extending the cpm+oo model, we simply have to adjust the part that generates a graph from CPMOGraph objects, but do not have to adjust anything related to the gxl output. This leaves us with a well structured design that cleanly separates concerns and can easily be extended at various stages. Testing As briefly mentioned in Section 5.5, we test our translation tool in conjunction with the model by providing scoop programs, translating them, and exploring their state-spaces. The output is matched against the expected output using the JUnit framework. With this approach, the start graph is implicitly tested against the type graph presented in Section 5.1. By assuming that the model behaves correctly at this point, we can test the translation tool by simulating the generated start graph and analysing the output. In case the output does not match, we most likely have an error in the translation tool. We do realise that this is hardly "unit testing" in the traditional sense, instead we test the toolchain as a whole. While this may be suboptimal in general, we are confident that it is sufficient for the size of this project and due to the fact that this is a prototype implementation. In addition, we develop only a single part of the toolchain at a time, i.e. we either change the translation tool or the cpm+oo model, which allows us to check the influence of the changes on the final output. To make sure that we catch the expected behaviours when translating and modelling, we use a wide range of test input programs and specify the expected behaviour. This includes small programs that focus on certain features, e.g. ones that use a wide range of available query types, as well as larger example programs that resemble real programs, such as the ones used in the case studies in Chapter 7. Future Work In Section 5.6, we briefly discussed features missing from the cpm+oo model, and in Section 6.3 we named some scoop features that are not handled in the translation tool. In this section, we propose ideas to how certain features could be implemented in the future. Since this not necessarily only affects the translation to the cpm+oo model, but may require changes in the model itself, we discuss possible changes to the cpm+oo model as well. In general, supporting additional features can be tackled by either extending the compiler to translate to the current cpm+oo model, which means that the feature is simulated using more primitive cpm+oo constructs, or by extending cpm+oo itself by adding direct support of these features. The advantage of the latter is that program representations become easier to read and understand, and a more direct translation can be made from source code to start graph. While this is a desirable outcome, it also requires careful reasoning about the model changes, something one can avoid if only the compiler is extended. Inheritance The most important feature towards supporting more complex scoop programs is inheritance. The main difficulty in supporting inheritance is the complexity and feature-richness of the semantics related to inheritance. scoop offers a wide range of mechanisms, such as multiple inheritance, redefining, undefining, and renaming of features, partial classes, and others. As a result, adding these features to either the translation tool or cpm+oo requires careful analysis of the underlying semantics. 
Identifying and isolating individual parts of the inheritance mechanisms and modelling them one by one (where possible) seems to be the right approach to tackle this task, as it allows us to be confident in the resulting model. To implement simple inheritance (i.e. using the inherit keyword), one strategy would be to "unfold" the inheritance structure during translation. This means that for a class FOO that inherits feature baz from class BAR, we simply create the feature baz for both classes (in fact, it would suffice to have a feature with two init state nodes, one for FOO.baz and one for BAR.baz). Whether this is a feasible approach remains to be evaluated. Extending cpm+oo to handle simple inheritance is another possibility. Implementing the semantics directly would require representing the inheritance structure in the start graph. When performing queries and commands, rules would then need to first determine the dynamic type of the target object and, based on this, traverse the inheritance structure and select the correct feature to be applied. Expanded Types In cpm+oo, we only support integer and Boolean expanded types. A more general approach would distinguish between expanded types and normal (reference) types. Supporting expanded types is an important step towards full support of scoop, but it will require considerable effort and requires extending the cpm+oo model, as expanded types are treated differently from normal types in the scoop semantics [13], and adding them to cpm+oo has implications on existing parts of the model. Miscellaneous A number of other features are currently not supported by our toolchain. This includes more exotic features of scoop like non-object calls and assigner calls, but also basic features like character and floating point number literals or class invariants. Adding these features to cpm+oo currently has lower priority than inheritance and expanded types, but it will be considered once the above is implemented properly. In the previous two chapters, we discussed the main contribution of this work, which allows us to automatically map a subset of scoop to cpm+oo start graphs and to verify state-space properties using groove. One part of the motivation behind this work is to provide a "one-click" solution that verifies certain properties (e.g. deciding whether a deadlock can occur) for a given input program written in that scoop subset. In this chapter, we inspect various scoop programs as case studies, show how they are translated to a cpm+oo graph using our toolchain, and show the properties we can verify. We provide metrics for the programs to show how the model behaves in various situations and present insights gained from the verification results. We compare our toolchain to cpm and discuss the obtained results, before we close this chapter with an outlook on future work. We use the following abbreviations to denote program configurations in this chapter. DP(n, m, {eat, bad_eat}) Dining philosophers with n philosophers and m rounds, as presented throughout this thesis. The last parameter indicates which implementation is used, where eat denotes the correct implementation and bad_eat the implementation that can result in deadlock. This program is presented as a case study in Section 7.2.1. DS(n, m, o, {bad, good}) Dining savages with pot size n, m savages, and o hunger per savage. The final parameter indicates which implementation is used, where bad is the one that can result in savages being stuck, and good the one that always terminates.
This program is presented as a case study in Section 7.2.2. CS(n) Cigarette smokers problem with n rounds. In this problem, a number of cigarette smokers require different ingredients to build cigarettes, which are provided by a dealer. This program is discussed as a case study in Section 7.2.3. SEPC(n) Single-element producer/consumer with n rounds. In this program, a producer and a consumer are created. The producer creates n items that are consumed by the consumer. The producer has a buffer of size 1, which means that produce and consume calls have to alternate. Counter(n, m) Counter with n counters and m counts per counter. This is a simple program that spawns a number of counters (n), which simply perform the task of incrementing an integer from 0 to m. While it does not require any synchronisation, it is a small and easy to understand example that showcases scoop features. While the cigarette smokers program is our own implementation, the others are taken from the EVE source code repository [22] and adapted to match the input specification of our toolchain. The above programs make up the main part of the benchmark programs that we used during the development of cpm+oo and the translation tool. In addition, we have a number of smaller programs that focus on certain aspects of the model and translation (e.g. one program provides a wide range of statements involving queries). Setup The values presented in this chapter have been, if not otherwise stated, obtained using the latest revision of the tools, as described in Chapters 4, 5 and 6. We investigate how the system behaves with different setups (e.g. disabling the token passing mechanism for state-space reduction) and use the following two main configurations. Default Here, all optimisations are turned on, and all rules (in particular error rules) are enabled. Pre- and postconditions are checked as well. No token optimisation In this configuration, we disable the token optimisation, giving all actions and query operations the same priority. Error rules and pre- and postcondition checking are still enabled. Values presented in this chapter represent the median of five runs (where applicable), and are obtained from a workstation with an Intel Core i7-4810MQ CPU and 16 GB main memory. Runtimes and memory usage are obtained using Java library classes (for cpm+oo measurements) and GNU time 1.7 (for cpm measurements in Table 7.6 only). Case Studies In this section, we take a closer look at three programs. We start by revisiting the dining philosophers problem one last time and show how the implementation behaves using our toolchain. We present the implementation and (parts of) the generated start graph, before discussing evaluation results. In addition, we compare cpm+oo to cpm+oo without token optimisations, and we show how a deadlock can be detected in the bad implementation. In the second case study, we present the dining savages problem and again inspect two implementations, where the "bad" implementation does not behave as expected. We point out how our toolchain is not able to detect certain undesired behaviours. Finally, we present the cigarette smokers problem, where we show our implementation and discuss results obtained using both full state-space exploration and ltl formula checking. Dining Philosophers In this section, we conclude our running example by presenting an implementation in scoop that is in the subset of programs supported by our translation tool.
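Since the corresponding listings are not reproduced in this section, the following sketch outlines the behaviour discussed in the remainder of this case study. It is our own reconstruction from the surrounding text: the attribute and feature names are taken from the discussion where available (times_to_eat, eat, bad_eat, pickup_right) and invented otherwise (FORK, pickup_left), so details may well differ from the actual listings.

    class PHILOSOPHER
    feature
        id, times_to_eat: INTEGER
        left_fork, right_fork: separate FORK

        live
            do
                from until times_to_eat < 1 loop
                    eat (left_fork, right_fork)   -- replaced by bad_eat in the deadlock-prone version
                    times_to_eat := times_to_eat - 1
                end
            end

        eat (l, r: separate FORK)
                -- Correct variant: both forks are passed as separate arguments,
                -- so their handlers are locked together (atomically) on entry.
            do
                -- use both forks while holding both locks
            end

        bad_eat
                -- Deadlock-prone variant: the forks are locked one at a time
                -- through nested calls, so two philosophers can each hold one
                -- fork while waiting for the other.
            do
                pickup_left (left_fork)
            end

        pickup_left (l: separate FORK)
            do
                pickup_right (right_fork)   -- the left fork stays controlled during this call
            end

        pickup_right (r: separate FORK)
            do
                -- use both forks
            end
    end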
We discuss parts of the start graph and take a closer look at how the program is simulated. Finally, we show how we can detect problems with the implementation; in particular, we show by example how one can detect a deadlock situation. In the root procedure (APPLICATION.make), we create three philosophers and forks between each pair of adjacent philosophers. The philosophers get initialized with an identifier, the two separate forks they need to pick up, and a round count value, indicating how often they need to eat before terminating. Note that philosophers are started using the call launch_philosopher (a_philosopher). This is required since a_philosopher is of separate type and must be controlled. Passing it as an argument makes it controlled in the called feature, where we are allowed to make the call philosopher.live. The code of the philosopher's make procedure also uses pre- and postconditions, which we can inspect later through model checking by our verification tools. Note that these preconditions are not wait conditions, as no involved variables are of separate type. Start Graph We do not include the complete generated start graph here, as with 287 nodes and 789 edges it is too large to print. Instead, we refer to [21], where one can find the cpm+oo gts and the start graph dining_philosophers_3_philosophers_1_round_eat, which represents this instance. We focus on highlighting several interesting parts of the graph and its behaviour under cpm+oo here. Figure 7.1 shows the live procedure of the philosopher (nodes have been rearranged manually for improved readability, but note that the translation tool already performs basic positioning of the graph nodes). The feature starts at node n5. The first statement in the feature (until times_to_eat < 1) is represented as a pair of Boolean operations with nodes n9 and n11 and corresponding Action_Test nodes that implement the branching. Following the path to n19 means that times_to_eat < 1 holds, and consequently we end up in node n2, the final state. Otherwise, the other branch is followed, where the loop body is implemented with state nodes n14, n18, and n15. Inside the loop, two actions represent the statements eat (left_fork, right_fork) and times_to_eat := times_to_eat - 1. State n15 leads to two test actions evaluating the until part. The creation procedure in the philosopher class, shown in Figure 7.2, initializes a number of attributes. In addition, it contains pre- and postconditions which validate the passed arguments and ensure that the attributes have been successfully set. In the graph, the procedure starts at node n51 with an init state. First, the handlers of the separate arguments are locked with an Action_Lock node. The following test actions represent preconditions. Note that nodes n43 and n6 have a flag precondition_fail. This denotes that these actions represent the path that is taken when a pre- or wait condition fails. This does not necessarily mean that the tested statement is a precondition. Depending on the objects included in the test and whether their handlers are controlled or not, the tested statements are either preconditions or wait conditions. This can only be detected at runtime, which is handled by the rule action_Test.
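As an aside, the distinction can be illustrated with a small, hypothetical require block (the names BUFFER, consume_item, is_empty, and rounds are ours and do not come from the case study). Whether the second clause acts as a precondition or as a wait condition depends on whether the caller already controls the handler of a_buffer, which is only known at runtime:

    consume_item (a_buffer: separate BUFFER)
        require
            positive_rounds: rounds > 0         -- involves only non-separate data: always a precondition
            not_empty: not a_buffer.is_empty    -- a precondition if a_buffer's handler is already controlled
                                                -- by the caller, a wait condition otherwise
        do
            -- ...
        end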
When the rule is applied with an action with the precondition_fail flag and it turns out that it is in fact a precondition, then the rule creates an ERROR node, which has the effect that the system is immediately in a final state, which can be analysed by the postprocessing tools. Once a processor is in state n56, the method body gets executed. When the processor reaches node n50, there are two possibilities: either postcondition checking is disabled, in which case the processor returns to the calling procedure or becomes idle, or postcondition checking is enabled, in which case the edge to n20 is followed. Results With the start graph discussed earlier, we can now verify different properties of the dining philosophers instance. We are not only interested in whether a deadlock can occur, but we also want to make sure that we never have a call where the target is a void reference, and that postconditions never fail. We start with the default configuration with optimisations enabled. Table 7.1 shows results with varying numbers of philosophers and rounds (number of times each philosopher eats) for both implementations (eat and bad_eat). To obtain these results, we used breadth-first search and explored the full state-space. Example output from our command-line tool for an instance with the bad_eat implementation looks as follows:
The simulation generated an error node with label: "Deadlock detected".
Our tool explores the complete state-space and inspects final graphs. If there are nodes of type ERROR, the associated information is fetched and reported. While this kind of output is rather rudimentary, one can use groove to save the offending traces and inspect the program execution to find the root cause of the error. In cases where no error node is present, our tool first checks whether there are in_method edges in final states, which means that there are processors still executing code while no rule can be applied any more. This indicates that the program is stuck. This situation should not arise, as it means that the program is stuck without a corresponding error rule, which can be either a bug in our implementation, or an error situation we did not define and capture with a rule yet. Inspecting the numbers in Table 7.1 reveals that we are able to verify deadlock freedom and the absence of pre- and postcondition failures (although only a limited number of such statements are in the source code) for the correct implementation with up to seven philosophers, which requires less than 150,000 states and transitions. The runtimes are reasonable, with most instances being evaluated in less than a minute. The numbers for the bad_eat implementation are substantially larger. Due to the fact that locking of the forks is not atomic anymore, more interleavings are possible. Even with the smallest instance, this results in an increase of roughly 40% in the number of states and transitions. With larger instances, the effect is even bigger, with the one with seven philosophers having a state-space of almost 3,000,000 states. The runtime of roughly 85 minutes is substantially longer than the runtime of the corresponding correct implementation. If one is only interested in detecting whether a deadlock can occur or not, an alternative approach is to use ltl formula exploration instead of full state-space exploration. In this approach, one can instruct groove to try and find a counterexample to an ltl formula. To detect a deadlock, we can use the formula !F error_deadlock.
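As a brief side remark of ours (not part of the case study), this formula is a safety property, since

    !F error_deadlock  is equivalent to  G !error_deadlock,

i.e. it holds exactly if the error_deadlock rule never becomes applicable along any execution path. A counterexample is therefore any path on which the rule eventually matches, which is what groove searches for.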
Table 7.2 shows the corresponding results. As we can see, the numbers of explored states and transitions for the correct implementation do not differ from Table 7.1, as, in order to prove that no counterexample exists, one has to explore the full state-space. For the bad implementation on the other hand, the number of explored states and transitions are substantially smaller. While this may seem like an improvement at first, taking a look at the runtimes reveals that checking the formula comes at a cost. While the smaller instances of the correct implementation take roughly the same time in both cases, using ltl exploration takes longer with the larger instances. In the bad implementation, finding a counterexample is faster for small instances, but with larger ones, it takes longer to check the formula, even though fewer states are explored. In the case with 6 philosophers, finding a counterexample requires (on average) only 441,416 states compared to the 662,009 states of the full statespace. Nevertheless, finding the counterexample takes more than twice the time as compared to exploring the full state-space. Disabling Optimisations The above numbers are quite promising, as we not only can verify a minimal dining philosophers program, but also instances with a larger number of involved processors and rounds. Reducing the runtimes has helped us immensely during development, as we can test changes made to the system almost instantly, where we previously had to wait several minutes for a result. The story is quite different if we look at earlier revisions of cpm+oo. The feature that has the biggest impact is the optimisation using the execution token which marks the processor that is allowed to execute sequential actions, and where the system only processes separate actions once no processor can make sequential progress any more. If we disable this functionality, we obtain the numbers presented in Table 7.3. Verifying the instance with three philosophers and a single round already results in more than 250,000 states and 300,000 transitions, a huge difference compared to the numbers with the optimisation turned on. The difference can be explained with the fact that without the token mechanism, large chunks of the program get simulated over and over again in different interleavings without affecting the outcome. For example, consider the situation where a processor is in state n5 of Figure 7.1 and has already evaluated the arguments to the test action nodes. Without the token optimisation, at this point all other processors could simulate their states until they are finished. Another execution plan would first advance our processor to node n14 (assuming n9 has been evaluated to true), before simulating the remaining processors all over again. There is no value in considering both variants, as the outcome of the test action, which is a purely local step, does not depend on any outside properties of the system (in particular, it does not depend on the states of other processors). With the token mechanism, we force the system to take one particular path in these situations and only allow branching at points where processors can potentially interact with each other and where different outcomes can originate. Detecting Deadlocks The "eat" implementation behaves as expected. The simulation is unable to find any issues with it, in particular, we are not able to find a situation where the program deadlocks. 
In this section though, we inspect the alternative implementation of the philosopher's behaviour (i.e. the bad_eat method in Listing 7.2). Since we want to find out whether a deadlock can occur or not, we can do so by using the ltl formula !F error_deadlock, which states that, starting from the start graph, there is no future state where the rule error_deadlock matches. groove explores the state-space and reports whether a counterexample exists for the formula. If so, we have a situation where a deadlock occurs, and we can inspect the trace that leads from the start graph to that particular state. When using the bad_eat implementation, we can in fact find counterexamples to the formula. Figure 7.3 shows an excerpt of such a state with the involved processors, locks, and states. Both processors are in the pickup_right command and hold their left fork (which is the other one's right fork). Each processor holds the lock on the processor the other one is waiting for. As a result, no processor can make progress, and we have a deadlock situation, which is detected by the error_deadlock rule. Dining Savages Our second case study is the dining savages problem. The premise is that there are a number of savages that share a single pot that contains their food. A cook can fill the pot, and each savage can get servings from it. Since the pot is rather small, the number of servings is limited and only one savage can get a serving at a time. If a savage is trying to get a serving when the pot is empty, he notifies the cook to fill it up again, waits until the cook does its job, and then gets his serving. Source Code Our implementation of the program consists of four classes. Apart from the APPLICATION class that initializes and starts the system, there are classes for representing the cook, a savage, and the pot. In our implementation, we have three configuration variables, namely the pot size (the number of servings the pot can hold), the number of savages, and the hunger of a savage (which is the number of servings a savage is going to take before terminating). The program first creates all objects and then launches the savages. Listing 7.4 shows relevant code of the savage class. During the lifetime of a savage, it executes the live feature, which is a simple loop that executes step a number of times. In a step, a savage calls fill_pot, which notifies the cook to fill the pot, if necessary. The program continues, since fill_pot can return even if the pot is empty, as the command that can get called in the routine body is asynchronous. Afterwards, get_serving_from_pot gets called. Since we cannot be sure whether the cook has already filled the pot (in case it was empty or has become empty in the meantime), we use the wait condition not my_pot.is_empty. This ensures that the savage gets the serving from a non-empty pot. Once the wait condition is satisfied, the savage has exclusive access to the pot, which means that it is impossible for the pot to become empty before the savage can call my_pot.get_meal. The final command in a step of the savage is eating, which simply decreases the hunger value to avoid having an infinite loop in the live procedure. While the original implementation defines step without an argument, we pass the pot to this feature in our adapted implementation. We do this to avoid processors being stuck in a wait condition that never gets fulfilled.
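A rough sketch of the adapted savage behaviour follows. It is our own reconstruction from the description above; the class name POT, the feature name eat, and the exact loop condition are inferred, and the full code in Listing 7.4 may differ in detail.

    live
        do
            from until hunger < 1 loop
                step (my_pot)
            end
        end

    step (a_pot: separate POT)
            -- Passing the pot keeps its handler locked for the whole step,
            -- so requests from other savages cannot be interleaved.
        do
            fill_pot (a_pot)                 -- ask the cook to refill, if necessary (asynchronous command)
            get_serving_from_pot (a_pot)
            eat                              -- decreases the hunger value
        end

    get_serving_from_pot (a_pot: separate POT)
        require
            not_empty: not a_pot.is_empty    -- a precondition here, because the pot's handler is already
                                             -- locked when this feature is called; in the variant without
                                             -- the pot argument to step it acts as a wait condition
        do
            a_pot.get_meal
        end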
Consider the instance where the pot can hold one serving and two savages want to eat only once. Without passing the pot to the step feature call, the following sequence can occur.
• Savage 1 calls fill_pot, sees that the pot is full and therefore does not ask the cook to fill it.
• Savage 2 calls fill_pot, sees the same, and does not ask the cook either.
• Savage 1 calls get_serving_from_pot, passes the wait condition, and returns. The pot is now empty. Savage 1 has finished its loop and does not execute anything any more.
• Savage 2 calls get_serving_from_pot, but is stuck in the wait condition, as the pot is now empty. Since Savage 1 has finished, the pot will never get filled again and Savage 2 is stuck forever.
By passing the pot as an argument to the step routine, we ensure that for one savage in a single step, all operations involving the pot are executed without interleaving requests from another savage. This makes the above interleaving impossible, and as a result, savages cannot get stuck any more. Note that the condition in get_serving_from_pot is now a precondition, as the request queue of the handler of the pot is already locked when get_serving_from_pot gets called. We call this the "good" implementation, but also take a look at the implementation where we do not pass the pot to step, which we call "bad". Parts of the source code of the cook are shown in Listing 7.5. In the require block of the cook feature, we use another wait condition to make sure that the pot is in fact empty when the feature body gets executed. Table 7.4 lists results obtained with cpm+oo for a number of instances of the dining savages program. The instances range from two to four savages and two to six total calls to the get_serving_from_pot routine. As in the dining philosophers case, the number of involved processors has the largest impact. Even if we lower the number of times a savage eats in the last instance (with 4 savages), the state-space is by far the biggest in both implementations. This is no surprise, as with more processors, the number of synchronisation points (i.e. situations during the execution where multiple non-separate actions or queries can be performed) increases, and individual synchronisation points may include more processors, resulting in more branching in the lts. Results Comparing both implementations paints a similar picture to what we have seen in the dining philosophers example. In the "bad" implementation, we perform less restrictive locking, thus allowing more possible interleavings. While the impact on the smaller instances is negligible, it becomes obvious with the larger ones. In the instance with 4 savages, the "bad" implementation takes about three times longer than the "good" implementation. Note that we do full state-space exploration here, and our tool does not report an issue with either implementation, i.e. no ERROR nodes get generated. While savages can get stuck in wait conditions in the "bad" implementation, these situations are not deadlock situations. A processor may never proceed past the wait condition, but it can make progress in the sense that the wait condition is checked over and over again (as requests are generated for and executed by the handler of the pot). The target processor is idle and can execute the requests from the stuck processor.
In the lts, this results in a local cycle of states, where there is no path that "breaks out" of this cycle. Currently, we are unable to detect such situations, and whether it is possible to detect them using ltl or ctl formulae remains to be investigated in future work. Cigarette Smokers Problem In our final case study, we implement and evaluate the cigarette smokers problem. In this problem, there are three cigarette smokers wanting to build cigarettes and smoke them. Their problem is that each one has only one of the required three ingredients, namely tobacco, matches, or papers. Thankfully, a dealer is available that has an infinite amount of each ingredient. The dealer randomly makes two of them available at a time, allowing the smoker with the third ingredient to retrieve them and then build and smoke a cigarette. The original premise, which we borrow from [4], states that both the dealer's supply as well as the smokers' desire to smoke are infinite. We change this in order to get a program that terminates. In particular, we now require that all smokers only retrieve ingredients and smoke n times. In addition, the dealer puts out each distinct pair of ingredients exactly n times. As a result, nobody is stuck waiting: once the dealer has put out every pair n times, he can go home, and all smokers are satisfied, as they were able to build and smoke a cigarette n times. Source Code Our implementation of the problem consists of three main classes, DEALER, CLIENT, and INGREDIENT_PAIR. The dealer is a simple class resembling a semaphore and is used to make sure that no two pairs are available at the same time (which would imply that all three ingredients are available, which is not allowed in the problem statement). Listing 7.6 shows the full source code of the DEALER class. Instead of using a class to represent individual ingredients, we use one to represent pairs of ingredients. Since we want a limited amount of each pair, we can use this representation to force each pair to be put out a fixed number of times. Ingredient pairs (Listing 7.7) are created with a separate dealer, and the put_out feature is called after creation. A pair then, if the dealer is not busy, puts itself out, ready to be consumed by a client (Listing 7.8). Once a pair is consumed via the consume feature, it either terminates or tries to put itself out again. A client gets initialized with a pair of ingredients and simply calls consume n times, where each call is blocked until its ingredient pair is actually out. With pairs of ingredients as separate objects, we introduce the randomness specified in the problem statement. The dealer serves as a semaphore that can be occupied by one pair at a time, ensuring that no two pairs are out at the same time. The source code is, thanks to expressive wait conditions, rather simple and clear. Results The generated start graph of the cigarette smokers program consists of more than 400 nodes and 1100 edges. We are confident that our implementation works correctly, and therefore it is no surprise that no ERROR states are generated when verifying instances of this program. Table 7.5 shows results for various instances of the program, where we varied the number of times each smoker and ingredient pair execute their corresponding loops. In the top half of the table, we explore the full state-space and report on any ERROR nodes generated in final states.
While the state-space grows quickly, we nevertheless are able to verify, in a reasonable amount of time, that instances with up to 5 rounds do not exhibit any of the bad properties we are looking for. In the lower half, we used the ltl formula !F error_deadlock to search for counterexamples to the deadlock error rule. Since we did not find such an error in the full state-space exploration above, it is no surprise that the exploration does not find such a counterexample when using ltl property checking either. We can observe once again that formula checking comes at a cost: all instances take more time in the lower half of the table compared to their counterparts in the upper half. While we can argue that in these cases we gain more information and are faster when exploring the full state-space, it is important to note that ltl checking has value as well. In particular, when looking at instances where we are no longer able to explore the full state-space, i.e. when having to rely on bounded model checking, looking for counterexamples to properties is the only way to gain any valuable information. Comparison with CPM In this section, we take a look at how the various variants of our models perform. In cpm+oo, we introduced a number of abstractions and expect considerable overhead in the form of a larger state-space, as compared to cpm. To reduce the state-space size in cpm+oo, we used several optimisations; in particular, we used more quantifiers in certain rules and we introduced the token execution optimisation. Since there is currently no translation tool that can generate start graphs for cpm, we have to create start graphs by hand. This makes the translation of real-world examples tedious and error-prone. As a result, there are only a limited number of start graphs available for use with cpm. Most notably, we have start graphs for the dining philosophers problem with both implementations, as well as a start graph for the single-element producer/consumer. It is important to note that the start graphs for cpm and cpm+oo for the same program differ substantially, making the comparison more difficult. The graphs for cpm are simpler in most regards: there are no local calls, no evaluation of call targets (instead the target is directly specified by the name of an attribute), and other abstractions introduced in cpm+oo are missing as well. Nevertheless, it is interesting to compare the results of those two models. The additional abstractions in cpm+oo have made action applications less direct, which resulted in additional rules and interleavings, but using optimisations has helped reduce the state-space again. Table 7.6 shows results obtained for the dining philosophers and the single-element producer/consumer examples with three models, namely cpm, cpm+oo, and cpm+oo without token optimisation. In the case of cpm, we use start graphs translated and adapted by hand. In the other cases, we use our scoop reference implementations and generate start graphs with our translation tool. Comparing the numbers of cpm to the ones of cpm+oo without optimisations shows that the size of the state-space of the latter exceeds the size of the state-space generated with cpm. The DP(3, 2, bad_eat) instance takes around 27 minutes to verify using cpm+oo without the token optimisation, whereas the same instance verified with cpm takes less than 4 minutes. Compared to cpm+oo without optimisations, cpm performs better across all instances.
We explain this with the above arguments, namely that cpm+oo increases complexity and adds more abstractions that require additional computations. In addition, the start graphs of the cpm instances have been generated by hand and are optimised for these examples. Fortunately, the token optimisation has a huge impact on the performance of cpm+oo. When enabling the optimisations, we can verify each instance in under 30 seconds. Not only does this outperform cpm+oo without optimisations by a huge margin, but it is also considerably faster and generates smaller state-spaces than cpm with optimised start graphs. We realise that comparing the models using start graphs that differ as much as they do is not optimal. Still, in our opinion this comparison gives a good impression of the effect of optimisations and shows that, although cpm+oo is inherently more complex than cpm, we manage not only to preserve the size of generated state-spaces, but thanks to optimisations we are even able to produce smaller state-spaces focusing on the synchronization points of the programs. Scalability and Future Work So far, we have only considered input programs that resulted in state-spaces that can be fully explored with our toolchain. In our development environment, we have generated ltss with up to 4,000,000 states, at which point we ran out of memory. The number of states that can be explored depends on a number of variables though. For example, the chosen exploration strategy can be a factor, as well as the model and start graph. With larger programs where full state-space exploration is no longer feasible, one can consider doing bounded verification, where one explores only parts of the state-space. It is important to note that with bounded verification, it is only possible to search for instances of errors, but there is no guarantee that an error can be found, and the absence of errors cannot be proven. groove offers a range of different exploration strategies. So far, we have used breadth-first search and ltl exploration. For larger state-spaces, this may not be the optimal choice. For example, when searching the state-space with breadth-first search, one can only reach a particular depth. This may be undesirable, e.g. when the synchronization points that may result in a deadlock occur only later in a program. By searching for counterexamples with depth-first search instead, one may be able to actually reach such synchronization points. Of course, with this approach, not all branches are explored and one might well miss the ones that result in a deadlock. groove also offers other exploration strategies, such as random linear exploration, where exactly one path is followed per state, or conditional exploration with restrictions on the number of edges and nodes in a state. Custom exploration strategies tailored to cpm and cpm+oo should also be considered. A thorough investigation regarding bounded exploration and using various exploration strategies to gain confidence in the obtained results is out of the scope of this thesis and remains to be done in future work. Conclusion In this chapter, we conclude the thesis. We start with a review of the research hypothesis and summarise our efforts and contributions, before we close the thesis with some final words on future work. Contributions In Section 1.2, we stated the following research hypothesis. A subset of valid scoop programs can be modelled using a graph transformation system.
These programs can, without modification of the source code, be automatically translated to input graphs for the transformation system. Using verification by model checking, it is possible to verify a number of properties such as absence of deadlock or absence of precondition violations for a given input program. In our opinion, this thesis satisfies the hypothesis with the following contributions. In Chapters 4 and 5, we described formal models, implemented in groove, that can be used to simulate a subset of scoop programs. In the former, we discussed cpm, which focuses on the concurrency features of scoop. In the latter, we extend cpm by adding object-oriented features from scoop to obtain the cpm+oo model. By careful (informal) reasoning in individual steps, we were able to preserve confidence in the correctness and completeness of the model. In Chapter 6, we presented a simple compiler that takes a subset of valid scoop programs as input and generates input graphs for our formal model. While we are not able to support the complete scoop language, we were able to translate a number of real-world concurrent example programs, as later discussed in Chapter 7. In addition, by embedding the groove binaries, we also created a simple command-line interface that can be used to verify scoop programs matching the input specification with one single command. This supports the research hypothesis, as the tool works on unmodified scoop code. We evaluated our approach in Chapter 7 with several case studies. We investigated a number of implementations of problems suited for demonstrating concurrent programming, such as the well-known dining philosophers problem, and we have shown how our translation tool and the models behave. We discussed various aspects of the current version of the cpm+oo model, but also presented the effects of state-space optimisations and a comparison to cpm. We have seen that our toolchain can verify properties like absence of deadlock or absence of precondition violations for the inspected programs, which supports the research hypothesis. Future Work With our tools and model, we are able to translate a number of scoop programs and can verify certain properties, such as the absence of deadlock scenarios. While our input programs already use a number of object-oriented features of scoop, we are a long way from supporting the complete language. In particular, both the translation tool and the model lack support for inheritance. Our focus in the future is to extend the model and compiler to support a larger subset of scoop programs. We provide a simple tool that works on scoop source code and prints out verification results with a single command. The tool outputs simple messages that state which errors could be found. Ideally, the tool should be integrated with the EVE [22] integrated development environment, which combines a number of other verification and analysis approaches. Ultimately, the goal should be to provide a GUI interface that is intuitive and easy to use. The output should be more verbose, extracting more information about the situations that occur (e.g. stating which features processors are executing when a deadlock is detected). In this thesis, we have been using sample input programs that generate small state-spaces that can be fully explored within minutes or hours. A thorough investigation and evaluation of our work with respect to larger programs and bounded verification remains to be done. 
While this thesis focuses on the scoop model, it may be possible to adapt our approach to other concurrency languages and models, such as Grand Central Dispatch (gcd) [7]. An evaluation remains as future work.
Measurement of collective excitations in VO$_2$ by resonant inelastic X-ray scattering Vanadium dioxide is of broad interest as a spin-1/2 electron system that realizes a metal-insulator transition near room temperature, due to a combination of strongly correlated and itinerant electron physics. Here, resonant inelastic X-ray scattering is used to measure the excitation spectrum of charge, spin, and lattice degrees of freedom at the vanadium L-edge under different polarization and temperature conditions. These spectra reveal the evolution of energetics across the metal-insulator transition, including the low temperature appearance of a strong candidate for the singlet-triplet excitation of a vanadium dimer. Vanadium dioxide is a spin-1/2 electron system that undergoes a metal-insulator transition near room temperature [1], and has been the subject of strong interest in both basic and applied research. When cooling through the transition, vanadium atoms pair into strongly hybridized dimers as the crystal structure changes from rutile (R phase) to monoclinic (M1 phase) [2,3]. The mechanism driving this transition incorporates Peierls splitting of the bonding and antibonding states of the dimer basis [4,5] and represents a fascinating crossover from itinerant to localized behavior in an electron system that is intrinsically poised at the threshold of becoming a Mott insulator [6][7][8][9][10][11]. A key challenge to establishing a comprehensive understanding of VO 2 based systems is that, though the gapping of symmetric and antisymmetric states within vanadium dimers is of central importance in motivating the metal to insulator transition, excitations across these gaps have not yet been experimentally resolved. Here, resonant inelastic X-ray scattering (RIXS) at the vanadium L-edge is used to measure the evolution of vanadium site energetics across the transition. Close comparison with a first principles based multiplet cluster model is used to identify symmetries within the RIXS spectrum, and reveals a strong candidate for a symmetric-to-antisymmetric excitation that breaks the singlet bond of a low temperature vanadium dimer. The orbital character of VO 2 electronic states was first explored by Goodenough [12], and is outlined in Fig. 1(a). The octahedral-like crystal field splits vanadium 3d orbitals into π * (t 2g ) and σ * (e g ) manifolds. Electrons in the ground state largely occupy a π * orbital termed 'd ', with lobes that point between neighboring vanadium atoms along the chain axis (c R -axis). When cooling into the low temperature M1 phase (Fig.
1 Elements of strong correlation have been well established in the electronic structure of VO 2 [9,11,[14][15][16], and singlet bonding between dimerized vanadium sites with well defined 3d 1 occupation is thought to explain the nonmagnetic nature of the insulating M1 phase.However, the excitations that best represent energetic changes upon entering the M1 phase, including bonding-antibonding (d b to d a ) excitations and the singlet-triplet excitation of vanadium dimers, fall within an antisymmetric sector that is not strongly accessed by optical (Q ∼ 0) spectroscopies.Non-optical experimental studies of the electronic density of states have largely made use of single-particle spectroscopies in which an electron is added to or removed from a vanadium site, which does not give information about coherent electronic transitions such as the singlet-triplet mode.In the present study, soft X-rays provide sufficient momentum transfer along the dimer axis for direct symmetricto-antisymmetric excitations to appear in RIXS spectra (see Fig. 3 RIXS measurements were performed at the ADRESS beamline of the Swiss Light Source at the Paul Scherrer Institute [17,18], with combined energy resolution better than ∆E=90 meV at the RIXS incident energy of hν=515.6 eV.The linear polarization of the incident photons could be set either perpendicular (σ-polarization) or parallel (π-polarization) to the scattering plane.This experimental configuration allows one to selectively probe excitations with polarization perpendicular (E ⊥ c R ) or near-parallel (E c R ) to the rutile c R -axis.Measurements were carried out at temperatures of (insulating) 260 and (metallic) 320K, with base pressures better than 5×10 −11 torr.The incident photons were maintained at a grazing angle of 15 • and RIXS was measured at an acute outgoing angle of 65 • with respect to the [001] sample surface (included angle is 50 • ).Beam damage was minimized by adopting a new beam spot for each measurement.The high quality single crystalline VO 2 film of 10nm thickness was grown on a TiO 2 (001) substrate by pulsed laser deposition, following the procedures described in Ref. [19].Under these conditions, the metalinsulator transition occurs sharply at T M I =295K, and the c R /a R lattice constant ratio is 0.617 [16,19]. The RIXS and XAS spectral functions are simulated for the experimental scattering geometry by the standard atomic multiplet method, augmented to incorporate two equivalently treated vanadium atoms with interatomic hopping via the d orbital (see details in online Supplemental Material [20]).First principles calculations estimate the intra-dimer d hopping parameter to be more than an order of magnitude larger than inter-dimer d hopping [9], making this a good approximated basis for the analysis of low energy excitations in the M1 insulating state. The vanadium L-edge polarization-dependent XAS spectra of VO 2 in the M1 phase are shown in Fig. 
2(a, top), and are well studied in previous research [21,22].These high-resolution measurements were carried out using the total electron yield method at the elliptically polarized undulator beamline 4.0.2 of the Advanced Light Source, in the Vector Magnet endstation [23], and employ the same experimental geometry as the RIXS measurements.Separate spectral features associated with the L 3 and L 2 core hole symmetries are observed at ∼517 eV and ∼524 eV respectively, each having weak leading edge features followed by a strong high energy peak.Intensity near the L 3 maximum is greatest with polarization parallel to the axis of vanadium dimerization (c R -axis), while intensity at L 2 is enhanced when polarization is normal to the dimer axis.The atomic multiplet simulation in Fig. 2(a, bottom) reproduces these characteristics, with a ∼0.5 eV discrepancy in some feature energies within the E c R channel. Low energy excitations from 0-4 eV are measured by RIXS in Fig. 2(b).Broad features centered at roughly ∼0.6 eV and ∼2.3 eV are consistent with the energy gaps expected for excitation of a d electron into the octahedral π * (t 2g ) and σ * (e g ) symmetry state manifolds, and have been labeled accordingly.Spectral intensity at low temperature is greatly suppressed within the ∆=0.6 eV insulating gap of VO 2 [13,24], and is partly filled-in when the sample is heated into the metallic phase.Remarkably, the π * (t 2g ) excitation feature has significant polarization dependence, and has an intensity maximum inside the 0.6 eV insulating gap when measured at low temperature with polarization parallel to the c R -axis.It is also noteworthy that upon cooling into the insulating phase, no feature appears at ∼1.4 eV, the expected energy of a d s to d * excitation in the itinerant limit (1.4 eV is roughly twice the inter-dimer d hopping parameter [9]). To understand these spectra, low energy excited states of the dimerized V 2 atomic multiplet model are used to calculate the RIXS spectral function in Fig. 3, via the Kramers-Heisenberg equation: Here, the spectral intensity is dependent on both the excitation energy (E) and the incident energy (hν), which determine the degree of resonance for scattering paths from the ground state 'g' to intermediate core hole states 'm' and the final excited states 'f', which are broadened by inverse lifetime terms (Γ).All final excited states fall within symmetric (Q = 0) or antisymmetric (Q = π) sectors, and appear with different matrix elements for polarization polarization parallel and perpendicular to the dimer axis (see Fig. 3(a-b,d-e)).In the experimental geometry chosen for this study, the momentum (Q) transferred from the scattering event has a component of Q = 0.26π along the dimer axis, in units of the inverse distance between nearest neighbor vanadium atoms.This resulting RIXS spectrum is derived 84% (cos(0.26π/2) 2 = 0.84) from the Q = 0 final state sector and 16% (sin(0.26π/2) 2 = 0.16) from the Q = π sector. All of the symmetry sectors show qualitatively similar RIXS spectra, with prominent peaks at E∼0. 
(t 2g ) and σ * (e g ) manifolds, respectively. The Q = π excitation sector differs from the optically accessible Q = 0 spectrum in that a singlet-triplet excitation of the dimer is found at 0.42 eV, 26% reduced from the singlet-triplet gap (d s to d t transition) expected from perturbation theory in the strongly correlated limit (4t²/U = 4×(−0.8 eV)²/4.5 eV = 0.57 eV). Charge transfer excitations between the vanadium atoms are not seen, as they occur at a higher energy scale than the plotted range. Calculated spectra for the experimental momentum value are shown in Fig. 3(c,f), and show the three features outlined in the "Singlet" scenario of Fig. 1(b). Close comparison between experiment and theory is complicated by the fact that most experimental features appear at energies larger than the band gap, and may be significantly broadened due to rapid decay of multiplet states into delocalized band excitations [25]. Plotting the experimental results over a larger energy range in Fig. 4(b-c) reveals that higher energy line features are qualitatively broader, and sharp line shapes comparable with experimental resolution are only found within the insulating gap. Taking this trend into account, the experimental data under each polarization condition are well fitted by four Lorentzians representing the three low energy features found in the simulation, as well as one high energy excitation at 7.2 eV, which can be principally attributed to metal-ligand charge transfer. Feature energies below E<4 eV are lower than nearby peak energies seen in the imaginary part of the dielectric constant by optical spectroscopies [26][27][28][29][30]. This is in part due to differing excitation symmetries [31], and can also be attributed to the spatially localized nature of the RIXS scattering process, which couples to coherent atomic-exciton-like final states. The direct RIXS excitations of strongly correlated systems with a single electron degree of freedom per atom (e.g. cuprates, vanadates) correspond closely with the difference between orbital site energies set by the crystal field [32], whereas optical excitations represent transitions between band continua. A particularly dramatic slope is seen from 0.2-0.5 eV under E c R polarization (Fig. 4(b)) and means that, given the above constraints, a good fit for that polarization condition must include a larger component of the sharp in-gap singlet-triplet mode. Attribution of this mode to the singlet-triplet dimer-breaking excitation is further supported by the fact that this sharp feature is no longer evident upon heating into the non-dimerized metallic phase (Fig. 1(b)), and that no analogous feature is seen in optical (Q ∼ 0) measurements on analogous thin film samples [26]. With this feature attribution and better separated singlet-triplet and non-bonding modes, we anticipate that it would be possible to identify a more detailed line shape dressed by the interplay of coherent phonon states and the inter-vanadium hopping parameter as atomic positions relax following the breaking of the singlet bond. To assess the likely accuracy of this fit, a summary of the calculated intensity of scattering in the singlet-triplet excitation (E∼0.45 eV) versus non-bonding π * (t 2g ) symmetries (E∼0.8 eV) near the experimental RIXS incident energy of hν=515.6 eV is shown in Fig. 4(a).
Within a ±0.25 eV neighborhood surrounding the RIXS energy, the singlet-triplet feature accounts for 19.3% of the intensity in a combined t 2g feature under E c R polarization and 4.5% of the intensity with E ⊥ c R . These numbers show a good qualitative correspondence with values of 23% and 8% respectively from the fit. The model appears to underestimate the overall intensity of such a combined feature in the E ⊥ c R channel; however, the calculated non-bonding resonance peaks near the RIXS energy in both polarization channels have a close ±0.2 eV correspondence with features seen by XAS (Fig. 1(a, top)). In summary, RIXS has been used to measure the energies of single particle transitions between the significant orbital manifolds of VO 2 . By comparison with a first principles-derived numerical model, we find a strong correspondence between this spectrum and the electronic states expected in a strongly correlated picture for low temperature vanadium dimers. Scattering matrix elements are found to enable the first experimental measurement of the singlet-triplet excitation that breaks the singlet spin bond of a vanadium dimer. Polarization and temperature dependence in the experimental spectra are used to identify a strong candidate for this feature at E=0.46 eV. These results provide a window into the gap structure and high energy landscape underlying the metal-insulator transition of VO 2 , and more generally demonstrate the power of the RIXS technique as an incisive probe of correlated energetics in transition metal compounds.
Figure caption fragments (Figs. 1-3):
FIG. 1 (fragment): (b) Dimerization of vanadium atoms along the c R -axis splits the d -derived states into symmetric and antisymmetric manifolds. The d -derived states are split by the Peierls transition into bonding and antibonding states (d b and d a ), which are energetically modified and manifest additional DOS features (e.g. d b * ) due to local entanglement and correlations [9, 10, 13]. (...) making it possible to measure antisymmetric-sector excitations directly as outlined in Fig. 1(b).
FIG. 2: XAS and RIXS across the metal-insulator transition: (a) (top) Experimental and (bottom) theoretical XAS spectra of M1 phase VO2 are shown for two incident photon polarization conditions. (b) RIXS spectra in the insulating and metallic states at temperatures T=260K and 320K, respectively. The two E ⊥ cR curves are downward offset by 100 counts.
FIG. 3: Simulated RIXS spectra of a vanadium dimer: The RIXS scattering profile is simulated for E cR polarized incident photons in the high symmetry (a) Q=0 and (b) Q=π sectors, and (c) for a superposition given by the experimental value of Q=0.26π. Panels (d-f) show corresponding spectra obtained with E ⊥ cR polarization. Features are plotted with an artificially narrow Γ f =0.1 eV width for visual clarity.
Assessing the weather conditions for urban cyclists by spatially dense measurements with an agent‐based approach Convincing commuters to use a bike is a timely contribution to reach sustainability goals. However, more than other modes of transportation, cycling is heavily influenced by the current meteorological conditions. In this study, we assess the weather conditions experienced on individual cycling routes through an urban environment and how weather observations and forecasts may give guidance to a better cycling experience. We introduce an agent‐based model that simulates cycling trips in Hamburg, Germany, and a three‐category traffic light scheme for precipitation, wind and temperature comfort. We use these tools to evaluate the cycling weather based on the commonly used single‐station measurement approach versus spatially dense observations from an urban station network and radar measurements. Analysis of long‐term data from a single station shows that most frequently discomfort is caused by temperature with a probability of 33%. Wind and precipitation discomfort occur only for about 5% of the rides. While temperature conditions can be well assessed by a single station, only one‐third of critical precipitation events and less than 10% of critical wind events are captured. With perfect knowledge, temporal flexibility in start time of less than ±30 min reduces the risk of getting wet by 50%. For precipitation, nowcasting is able to predict 30% of the critical events correctly, which is significantly better than model forecasts. Operational ensemble forecast provides satisfactory guidance concerning temperature; however, the limited predictability of precipitation and wind renders these forecasts only useful for riders with a high risk‐awareness and small sensitivity to false alarms. | INTRODUCTION New transportation concepts have emerged over the past years with the goal to reduce greenhouse gas emissions and to ensure more effective transportation within cities. Cycling routes are essential to these new concepts.More than other modes of transportation, cycling is heavily influenced by the current meteorological conditions (e.g., Brandenburg et al., 2007;Miranda-Moreno & Nosal, 2011;Tin Tin et al., 2012).Thomas et al. (2012) found that a combination of weather factors (temperature, duration of sunshine, precipitation and wind speed) could explain 80% of the observed variability in cycling volume in two cities in the Netherlands.Weather conditions can be quite variable even in the same city for a given time and may change within minutes, for example, during convective precipitation events.Therefore, better information about current and future weather events from meteorological measurements and forecasts can improve the cycling experience.In this study, we assess the weather conditions experienced on individual cycling routes through the urban area of Hamburg and how weather observations and forecasts may give guidance to a better cycling experience. Previous studies often combined bicycle count data with measurements from weather stations to analyse the impact of weather conditions on cycling behaviour.Tin Tin et al. 
(2012) calculated linear regressions between bicycle counts in Auckland, New Zealand, and hourly measurements from a nearby weather station.They found that a 10% increase in cycling volume was related to either an increase in temperature of 4.4 K or in sunshine duration by 25 min or to a decrease in precipitation by about 1 mm or in wind gusts by 7.1 m/s.A similar result with a temperature increase of 4.0 K causing a 10% increase in cycling volume was found by Miranda-Moreno and Nosal (2011) for Montreal, Canada.Some studies also used survey data to assess the weather impact on transport choices (e.g., Helbich et al., 2014;Liu et al., 2014;Meng et al., 2016).Based on travel diaries and daily aggregated weather measurements for Rotterdam, the Netherlands, Böcker and Thorsson (2014) found negative relationships between bicycle use and both precipitation sum and wind speed.Temperature had a bell-shaped effect on cycling with an optimum on days with maximum air temperatures around 24 C.In a questionnaire for bicycle commuters in Melbourne, Australia, about 85% of people responded that rain had an impact on their decision to ride, while the impact was smaller for wind with 57% and temperature with only 34% (Nankervis, 1999).These studies identify precipitation, temperature and wind as the most relevant cycling weather parameters (see also Bean et al., 2021;Kruijf et al., 2021).Different studies suggested that precipitation also effects cycling choices with a time lag of a few hours (Miranda-Moreno & Nosal, 2011;Nosal & Miranda-Moreno, 2014;Zhao et al., 2018) and that commuters are more inclined to delay their trips to avoid bad weather compared to recreational cyclists (Morton, 2020). A few studies also elaborated the impact of weather forecasts on cycling behaviour.In a survey conducted by Meng et al. (2016) in Singapore, more than 66% of people reported that they would change cycling plans if the weather forecast predicted rain.Kraemer et al. (2015) found that forecasted rain decreased the number of bicycle commuters in Washington, D.C., the United States, by 40%, while actual observed rain caused a decrease of only 28%.Wessel (2020) used bicycle counting stations from 37 cities in Germany together with television weather forecasts.They found that both actual and forecasted weather significantly influenced bicycle usage. 
Most previous studies on the influence of weather on cycling used measurements from a single meteorological station or were even based on daily means of the considered meteorological variables.However, the weather experienced locally by individual cyclists may differ substantially due to both small-scale variability within the area of a city and rapid temporal evolution.This discrepancy between local perception and single point observation is an inherent reason for cyclists to complain regularly about weather observations and forecasts, even if they are scientifically without error.This study explores the potential of novel small-scale resolving observing systems to overcome this misconception by adequately capturing the local perception of cyclists.In particular, we analyse the benefits of a dense weather station network with kilometre-scale, along with precipitation radar data with a 100-m resolution-both providing minute-scale temporal resolution-for the Hamburg region in northern Germany.The results give guidance for the design of future urban observation systems that will be able to inform citizens about the individual perceived weather conditions.We also analyse how well weather forecasts are able to represent this variability and thus can serve to inform and predict cycling weather.More precisely, based on modern observations resolving urban weather variability in space and time, we aim to answer the following questions: (1) How often does a virtual typical commuter in an urban environment experience bad weather while cycling based on highresolution meteorological observations?(2) Can smart cyclists, being informed by nowcasting and day-ahead forecast, effectively reduce the number of bad weather rides? For this purpose, we introduce a three-category (green, yellow, red) traffic light scheme based on critical values for precipitation, wind speed and air temperature, which is then used as an indicator of cycling comfort.The data from weather stations, precipitation radars and the COSMO-D2 model ensemble used in this study are described in Section 2. In Section 3, we derive the thresholds used for the traffic light scheme and introduce an agent-based model to simulate cycling trips of commuters in Hamburg.We assess the weather conditions for cyclists in Hamburg based on both a single station with long-term data (Section 4.1) and spatially dense data from precipitation radars and a network of more than 100 stations, which measured weather conditions in summer 2020 (Section 4.2).In Section 5, we analyse the potential of forecasts for improving the cycling experience.First, we evaluate how a flexible starting time of a cycling trip can reduce the chances of experiencing uncomfortable precipitation conditions (Section 5.1).As a second step, we analyse how well simple forecasts, such as persistence and nowcasting, and more complex numerical weather forecasts are able to capture the spatial and temporal variability of cycling comfort.Finally, we quantify the value of probabilistic ensemble forecasts for cyclists with different risk perceptions (Section 5.2). 
| Atmospheric measurements In this study, we make use of numerous and various meteorological stations and instruments operated by our institute (Figure 1), ranging from long-term single-point measurements ('Wettermast' and 'HUSCO') to a campaign setup during July and August 2020 of a spatially dense network of additional weather stations ('APOLLO' and 'WXT').Precipitation is continuously monitored by a high-resolution X-band precipitation radar since 2021.In addition, we use the data of one C-band precipitation radar of the German weather service (DWD).years of data from 2007 to 2020 as a reference period.The temporal resolution of these data is 1 min. Special processing was applied to the precipitation data.Due to the low resolution of the precipitation amount measured by the rain gauge (0.1 mm), for most weak precipitation events the time series alternates between values of 0.0 and 0.1 mm and, thus, is not suitable to represent a continuous precipitation event as desired in our agent-based model (see Section 3.2).For this purpose, we use additional data of a more sensitive infrared rain detector and distribute the precipitation amount values measured by the rain gauge to all minutes with a positive rain detection flag lying beforehand.This rain detector is in operation since mid-2006, which restricts our data set to the year 2007 and after.Additionally, Wettermast data are used to derive the temperature thresholds in Section 3.1.3.For this purpose, we use measurements of downward shortwave and longwave radiation, surface temperature, as well as air temperature and water vapour pressure at 2 m height. | HUSCO stations The Hamburg Urban Soil Climate Observatory (HUSCO) consists of 10 weather stations with focus on urban soil measurements across Hamburg.A typical HUSCO station is equipped with soil sensors and standard meteorology sensors providing data as 1-min averages.Some of these stations are arranged in pairs with a short distance to investigate the difference between urban and rural environments.Temperature sensors are mounted at 2 m height and wind sensors at 2.3 to 3.0 m height.Further details are described in Wiesner (2013).Only nine of the HUSCO stations provided wind measurements during the case study period in July and August 2020. | APOLLO and WXT weather stations Between June and August 2020, a dense network of 103 custom-built weather stations covered the greater area of Hamburg within the framework of the FESST@HH field experiment.The network featured 82 low-cost APOLLO (Autonomous cold POoL LOgger) stations that sampled air temperature and pressure at a resolution of 1 s and was primarily designed to capture rapid perturbation associated with frontal passages of convective cold pools.Furthermore, 21 WXT weather stations based on commercial sensors (Vaisala WXT 536) recorded air temperature, pressure, relative humidity, wind speed, wind direction and precipitation at a resolution of 10 s.All variables are sampled at a height of about 3 m above ground.A detailed description of the measurement stations can be found in Kirsch et al. (2022).Here, we use air temperatures and wind from this data set (Kirsch et al., 2021) to get insights into the smallscale variability of meteorological parameters within the city. 
| Radar The precipitation variability experienced by bicycle commuters may be miscaptured by a few local rain gauge measurements.Therefore, we use spacious precipitation measurements estimated at different spatio-temporal scales by an X-band and a C-band weather radar.An operational, single-polarised X-band weather radar provides measurements within a 20-km scan radius around its location on the rooftop of the Meteorological Institute of Universität Hamburg in Hamburg's city centre (Figure 1).This local area weather radar operates at one elevation angle (3.5 ) with a high temporal (30 s), range (60 m) and sampling (1 ) resolution (Lengfeld et al., 2014), refining observations of the German nationwide C-band weather radars of the DWD.The DWD's C-band radar provides measurements within a 150 km radius and is located near Boostedt about 50 km north of Hamburg, thus scans the whole city area.We use the precipitation scan at the elevation angle 0.8 with a 5 min temporal, 250 m range and 1 sampling resolution. The X-band precipitation rates (Burgemeister et al., 2022) are estimated from radar reflectivity values adjusted for several sources of radar-based errors, for example, the radar calibration, alignment, attenuation, noise and non-meteorological echoes (Burgemeister et al., 2023).Radar reflectivities of both radars were corrected for attenuation using the method of Jacobi and Heistermann (2016) implemented by Heistermann et al. (2013).The precipitation rates were retrieved from attenuation-corrected reflectivities using a power-law relationship between these two quantities ('Z-R relationship'), using the standard Marshall-Palmer relationship (Marshall et al., 1955). | Data availability during case study period in July and August 2020 The availability of observation data typically depends on various external factors specific to the measurement instrument and variable.In case of the dense network of APOLLO and WXT stations (Section 2.1.3),the recording of measurement data was determined by the continuous power supply of batteries that needed to be changed on a regular basis.Due to limited human resources, this led to some shutdowns of single stations after a certain period of time.The station network hence becomes partly coarser over the course of the period.For the example of temperature, Figure 2 depicts this temporal dependency on availability of measurement data.While HUSCO and Wettermast achieved entire data recording during the period, the WXT network was running during 95% of the time.For the APOLLOs, the decline in measurement data recording was strongest in the first 2 weeks.However, the data variability ranges between 87% and 95% and the availability is always higher than 80%, meaning that more than 60 APOLLO stations were running continuously.One WXT station was partly inactive causing a decrease for all three measurement quantities. Throughout most of the analysis period, the radar systems were running smoothly.However, for days with significantly lower availability radar data are neglected, as indicated in grey.Analyses using precipitation data are limited to days where both X-band and C-band radars provide complete data. 
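For the radar processing described above, the cited Marshall-Palmer relationship links the radar reflectivity factor Z to the rain rate R through a power law of the form Z = a R^b. The following is only an illustrative sketch (not the study's processing code), and the coefficients a = 200 and b = 1.6 are the standard Marshall-Palmer values; whether exactly these numbers were used here is an assumption.

```python
# Sketch of a Z-R conversion as in Marshall et al. (1955).
# a = 200 and b = 1.6 are the standard Marshall-Palmer coefficients (assumed here).

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b."""
    z = 10.0 ** (dbz / 10.0)        # reflectivity factor in mm^6 m^-3
    return (z / a) ** (1.0 / b)

print(round(rain_rate_from_dbz(30.0), 2))   # ~2.73 mm/h for 30 dBZ
```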
| COSMO-D2 ensemble model data The simulations of the operational Consortium for Smallscale Modelling (COSMO)-D2 model are performed every 3 h by the DWD, and its domain covers the whole area of Germany and parts of the neighbouring countries.The storm resolving simulations have a horizontal resolution of 2.2 km with hourly output values and a forecast horizon of 27 h per simulation.In this study, the weather forecast for the cycling decision on the following day is based on the 12-UTC run.Additionally, this study uses the 20 ensemble members of the COSMO-D2 Ensemble Prediction System (EPS) to analyse the added value of a probabilistic forecast for cyclists.The different initial and boundary conditions of the ICON EUrope (ICOsahedral Non hydrostatic) ensemble are used together with various model physics and soil moisture variations to set up the 20 different COSMO-D2 EPS members.Further information about the model, the ensemble and its physics can be found in Baldauf et al. (2016) and Baldauf et al. (2011). | METHODS TO ASSESS URBAN CYCLING WEATHER To evaluate the influence of meteorological conditions on the cycling experience, we use a three-category traffic light scheme for precipitation, wind and temperature.It identifies conditions, which are either nice/suitable (green), uncomfortable/barely suitable (yellow) or very uncomfortable/not suitable, including dangerous (red) (Section 3.1).To track and assess weather conditions experienced by virtual bicycle commuters, we introduce an agent-based Monte Carlo simulation using one bicycle counting station and proxy data for start and end points of the cycling trips (Section 3.2).The methods used to compare different data sets and to statistically evaluate the performance of forecasts are presented in Section 3.3. | Classification of weather conditions Previous studies have considered precipitation, wind and temperature as cycling-relevant weather parameters (e.g., Brandenburg et al., 2007;Goldmann & Wessel, 2020).Other weather factors influencing the cycling experience are, for example, low visibility during fog, icy roads or lightning during thunderstorms.Here, we focus on precipitation, wind and temperature only, since these quantities are widely available from measurements and model forecasts.A traffic light scheme for cycling comfort based on these three parameters is introduced in this section.The corresponding thresholds for each parameter are listed in Table 1 and described in more detail in the following sections. | Precipitation thresholds In the case of precipitation, we assume that cyclists collect water on their skin and clothes while riding.The more water they have collected, the more inconvenient they feel.A measure for this collected water amount should be the sum of the precipitation amounts given in mm for each minute of the trip.With that, we assume that a short but strong shower has the same effect as long moderate precipitation. Starting from typical intensities of drizzle and rain that can be found in literature (Deutscher Wetterdienst, 2022b), and based on our own experiences on trips during extended rain events with known and constant intensity, we distinguish the three traffic lights empirically as follows: A precipitation sum below 0.1 mm means green (acc.to light drizzle for a typical trip duration), a moderate value from 0.1 to 0.5 mm means yellow (acc.to light rain for less than 10 min, e.g.), and more than 0.5 mm is red (Table 1). 
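The precipitation part of the traffic light scheme can be expressed compactly. The sketch below is illustrative only (the function name and interface are ours, not part of the study's toolchain); it uses the thresholds of 0.1 mm and 0.5 mm accumulated over the trip given above.

```python
# Minimal sketch of the precipitation traffic light: the trip is classified by the
# precipitation sum collected over all minutes of the ride (thresholds as in Table 1).

def precipitation_light(per_minute_precip_mm):
    """Classify a trip from its per-minute precipitation amounts (mm)."""
    trip_sum = sum(per_minute_precip_mm)   # water collected over the whole trip
    if trip_sum < 0.1:
        return "green"                     # at most light drizzle
    elif trip_sum <= 0.5:
        return "yellow"                    # e.g. light rain for part of the trip
    else:
        return "red"                       # more than 0.5 mm accumulated

# Example: 30-minute trip with 10 minutes of light rain at 0.03 mm/min
print(precipitation_light([0.0] * 20 + [0.03] * 10))   # -> "yellow"
```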
| Wind thresholds Common experience teaches us that wind has a significant impact on how comfortable a bike ride is.Two aspects must be taken into account: Moderate wind speeds have an influence on the power cyclists need to hold their own riding speed depending on the relative wind direction.Strong winds may be harmful either directly to the cyclist's balance or indirectly regarding severe damages in the environment like breaking branches of trees. The effect of strong winds is considered to be independent of wind and cycling directions.With respect to the DWD's warning categories (Deutscher Wetterdienst, 2022a), we choose 8 Beaufort (72 km/h) as a limit when cycling begins to be dangerous.This corresponds to warning level 2 (storm gusts).Since we only measure mean wind, we rescale this limit for wind gusts using a gust factor of 1.6.If the mean wind speed at 10 m height exceeds this condition (45 km/h), our traffic light turns red. For cases without strong winds, the relative wind direction has to be taken into account.Schlichting and Nobbe (1983) derived a relationship for the power needed for an average cyclist to overcome the wind and rolling drag dependent on wind speed and direction.We consider sectors spanning 45 and ranging from headwind to tailwind and define thresholds for each.The traffic light changes to yellow (red) when the power needed to overcome the wind drag is twice (three times) the usual amount required in a windless situation.These thresholds are only considered if they are below the strong wind limit. To calculate the specific limits, we assume a riding speed of 15 km/h.Since the formula of Schlichting and Nobbe (1983) considers the wind speed at the height of the cyclist, we first have to reduce our 10-m and 3-m wind speeds to 1 m height, approximating the centre of the body of a typical cyclist.Following the Klima-Michel model of Jendritzky (1990), this can be done by multiplying the 10-m wind by 0.67.To also convert measured 3-m winds to the height of the cyclists, we derive an additional scaling factor using Wettermast data.Based on wind speed measurements from 10 m and 3 m height, we obtain a factor of roughly 0.8 (not shown).The ratio of these two factors then yields a factor of 0.83 to convert 3-m winds to the height of the cyclist.Adjusting the strong-wind threshold to cyclist level then yields 30 km/h, accordingly. All thresholds at cyclist's height can be found in Table 1.Note that only the thresholds for headwind and cross headwind are below the strong-wind threshold.In addition, cross-winds can also destabilise a bicycle.Studies indicate that for cross-wind speeds larger than the cycling speed, the side forces acting on the bicycle become larger than the drag forces (Kraemer et al., 2021).However, determining a critical wind speed for destabilising conditions would require extensive studies on actual cyclists or using wind tunnels, and thus, we decided to neglect this specific impact. | Temperature thresholds Weather stations and numerical weather prediction models typically provide air temperature as a variable. 
However, what matters more for the cycling experience is actually not the temperature but rather the thermal comfort of the cyclist (see, e.g., Brandenburg et al., 2007; Böcker & Thorsson, 2014). Besides air temperature, thermal comfort also depends on humidity, wind speed and radiation, as well as the clothing and physical activity of the considered individual (Jendritzky, 1990). A measure for the distribution of thermal comfort within a large group of people is the 'predicted mean vote' (PMV), which was originally introduced by Fanger (1970) and ranges from −3 (very cold) to +3 (very warm). PMV can then be used to predict the percentage of people dissatisfied (PPD). Since the calculation of thermal comfort requires many variables that are not typically available from weather stations, we aim to find a relationship between PMV and air temperature, allowing us to derive thresholds for cycling comfort based on air temperature alone. We calculate PMV using the formulas in Jendritzky (1990) based on 14 years of Wettermast measurements (2007 to 2020). PMV is originally defined for pedestrians only and thus needs to be adjusted for the speed of the cyclist, which we assume to be 15 km/h in accordance with the agent-based cycling model in Section 3.2. The results for PMV and PPD as a function of 2-m temperature are shown in Figure 3a,b. To obtain thresholds for our cycling traffic lights, we look at cold and warm discomforts (based on the sign of PMV) separately. We define the yellow (red) thresholds for more than 10% (50%) of people in discomfort. We then select all points within a ±1% interval of the 10 and 50% PPD thresholds and plot the corresponding 2-m temperatures as histograms and cumulative frequency distributions (Figure 3c-e). By selecting a certain cumulative fraction (CF) threshold, we can then derive corresponding air temperature thresholds. An obvious choice would be the median and thus a CF threshold of 0.5. In Figure 4a, we present an example of classifications based on temperature thresholds derived using a CF value of 0.5 and compare them with classifications based on the PPD values via a contingency table. To assess the accuracy of our classifications, we calculate the total number of correct classifications by summing the values on the diagonal of the contingency table (highlighted in coloured boxes in Figure 4a). For a CF threshold of 0.5, this yields an accuracy of approximately 92.5%. We also tested whether an even better agreement could be obtained for other CF thresholds. Figure 4b illustrates that, for CF thresholds ranging from 0.45 to 0.75, the number of correct classifications always remains above 92%. The optimal agreement, reaching almost 93%, is achieved at a CF threshold of 0.65. Consequently, we have selected this CF threshold for calculating the temperature thresholds. The resulting four temperature thresholds can be found in Table 1.
We also tested whether the air temperature thresholds vary with season.However, we only found small variations over the course of the year, which can be neglected (not shown).In a second sensitivity study, we estimated the impact of the relative air speed (cycling speed combined with wind from different directions) on the thermal comfort.For a cycling speed of 15 km/h, a variation in the wind direction by 180 changes temperature thresholds by up to 0.2 K.However, the changes induced by varying the cycling speed by 5 km/h have a much larger impact and can change thresholds by more than 0.5 K.These results suggest that uncertainties caused by the choice of average cycling speed are much more important, and thus, the cycling direction is negligible for the derivation of temperature thresholds. Different studies also focused on the impact of shading by buildings and trees on the experienced temperature stress (Fischereit, 2021;Hoffmann et al., 2018).We tested the impact of shading by recalculating the temperature thresholds using the method above and assuming that the direct solar radiation equals zero.As a result, warm thresholds increase by up to 2 K for shaded conditions and almost no impact is found for cold thresholds.However, since the overall classification of a cycling trip is based on the most extreme classification during the whole trip (see Section 3.2), our results would not change if we considered shading conditions for each street canyon.It is highly unlikely that a cycling trip would be shaded for the whole trip duration, and thus, we can neglect the shading impact on temperature thresholds. We want to stress that the thresholds in Table 1 are only valid for a cyclist moving at 15 km/h in a region with similar overall climate as measured by the Wettermast, that is, the larger Hamburg region.For other local climates, new thresholds can be determined using our method described in this section. | Agent-based model for bicycle commuting The availability of bicycle counting stations or assessments of bicycle commuter flows in urban environments are limited.To, nonetheless, assess cycling weather that accounts for the individuality and diversity of cycling trips and thus the weather conditions experienced, we develop a novel agent-based Monte Carlo model that simulates cycling trips of commuters in Hamburg. Each evaluation day, 10,000 virtual cyclists are propagated in space within the urban area of Hamburg from given start to end points along coordinates of routes.The cycling routes are based on OpenStreetMap bike network data and created with the python package OSMnx (Boeing, 2017).The start and end points are randomly drawn from a discretised latitude-longitude grid at a 100-m scale based on trip production and attraction rates derived from proxy data. 
The trip production rate is proportional to the population in a district (Yang et al., 2014). We use as proxy data for the trip production the spatial distribution of the number of inhabitants collected by German authorities in 2011 (Statistikamt Nord, 2011), provided with a resolution of 100 m. More recent data with such a high spatial resolution are not available, only on district level from 2019 (Statistikamt Nord, 2020). By comparing the number of inhabitants per district from both data sets, we obtain a multiplicative correction factor, which is then used to update the population distribution. The lower limit of population is 500 inhabitants per grid point to avoid outliers. Remapped on the latitude-longitude grid, the ratio of inhabitants per grid point to the sum of all inhabitants of Hamburg results in the probability of a starting point. The trip attraction rate is correlated to the density of points of interest on a district level (Yang et al., 2014). We retrieve the trip attraction rates based on land for industrial or commercial purpose given in Hamburg's land-use plan (Freie und Hansestadt Hamburg, 2018b). We assign different weights for multiple land-use categories, like industrial and commercial (1.0), mixed (0.6), airport and harbour (0.4), or residential (0.1) areas, which are based on the expected number of workplaces per area. Discretised on the latitude-longitude grid, the ratio of weight per grid point to the sum of weights describes the probability of an end point. The bicycle commuter flows are modelled following a gravity approach (e.g., Balcan et al., 2009; Lenormand et al., 2016; Yang et al., 2014). The gravity model assumes that the probability of commuting w_ij between two locations, x_i and x_j, is proportional to the product of the trip production rate p_i and trip attraction rate a_j, and inversely proportional to the travel costs, w_ij ∝ p_i a_j f(d_ij), with their distance d_ij = ||x_i − x_j||_2 and f(d_ij) being a function describing the travel costs. The travel costs are modelled with an exponential distance decay function, f(d_ij) = exp(−d_ij/τ), with the length-scale τ.
With this gravity model, we create bicycle routes in two steps.First, we sample the end point of a route, x j,end .Second, we sample the start point, x i,start , using the probability π ij of the i-th grid point being the start point, with its trip production rate p i .To retrieve a realistic bicycle route length distribution, we assume the empirical length-scale of τ ¼ 2000 m and limit the travel distance to a minimum of 150 m; below this threshold, we assume walking distance.The mode of the resulting bicycle route length distribution is 3250 m, and the median is 5665 m (not shown); therefore, the distribution has a reasonable range as shown by surveys (e.g., de Haas & Hamersma, 2020;Nobis, 2019;Schantz, 2017;Schneider et al., 2022).Then, to each obtained route, a starting time is assigned, which is based on an average diurnal cycle derived from hourly bicycle counter data (Freie and Hansestadt Hamburg, 2018a) from 8 October 2014 to 22 November 2020.The counting station is located close to the city centre of Hamburg (Figure 5) and is highly frequented by commuters during weekdays and by recreational cyclists mostly during weekends and holidays.Daily cycles of bicycle counts for weekdays typically show two maxima in the morning and in the afternoon for utilitarian use and, for weekends, a single maximum in the afternoon for recreational use (not shown).Since the focus of this study is on bicycle commuters, we only consider bicycle counts from weekdays.A mixture of two Gaussian functions is fitted to the daily cycle of bicycle counts (Figure 6).We use data from the whole period since the seasonal changes of the overall shape of the fitted functions are small (not shown).The cycling routes are discretised on a 1-s temporal resolution with the assumption of a constant speed of 15 km h À1 . We evaluate the cyclists' weather conditions for 10,000 randomly drawn bicycle commuters for every day within the period from July to August 2020 (Figure 5).The weather data (Section 2) are spatiotemporally assigned to the route coordinates using nearest-neighbour interpolation.Finally, we apply the meteorological traffic lights and the corresponding thresholds introduced in Section 3.1.A route is classified as red if it contains at least one red segment, as yellow if it contains yellow but no red segments, and as green if every segment is classified as green. We introduce the agent-based model to include spatial variability of weather conditions in the statistical analysis of cycling comfort.It is a first approach constrained by realistic conditions, such as official bike routes and measured temporal cyclist patterns.To obtain an even more realistic distribution of cycling trips, future studies could make use of the constantly growing databases of cyclist traffic volumes based on Global Positioning System (GPS) positions from smartphones (e.g., Freie und Hansestadt Hamburg, 2022;Lißner et al., 2018).Our approach defines fixed routes and starting times and assesses the cycling comfort for each cycling trip.Future analyses could also enable decision-making of the cyclists by allowing them to change the route or the mode of transportation.Such an agent-based modelling framework has been introduced and tested in the Hamburg region by Yang et al. (2018), for example, who simulated the exposure of commuters to environmental stresses. 
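The two-step sampling of the gravity model lends itself to a compact implementation. The sketch below is illustrative only (array names and the NumPy-based interface are assumptions, not the study's code); it uses the length-scale τ = 2000 m and the 150 m walking-distance cut-off stated above.

```python
import numpy as np

# Illustrative sketch of the two-step gravity sampling: coords_m are grid-point
# coordinates in metres, p are trip production rates, a are trip attraction rates.

rng = np.random.default_rng()
TAU = 2000.0       # length-scale of the exponential distance decay (m)
MIN_DIST = 150.0   # below this distance we assume walking instead of cycling

def sample_trip(coords_m, p, a):
    """Return (start_index, end_index) of one simulated commuting trip."""
    # Step 1: sample the end point from the normalised trip attraction rates.
    end = rng.choice(len(a), p=a / a.sum())
    # Step 2: sample the start point with probability proportional to
    # p_i * exp(-d_ij / tau), excluding walking-distance grid points.
    d = np.linalg.norm(coords_m - coords_m[end], axis=1)
    weights = p * np.exp(-d / TAU)
    weights[d < MIN_DIST] = 0.0
    start = rng.choice(len(p), p=weights / weights.sum())
    return start, end
```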
| Skill scores To assess the potential of forecasts in reducing bad weather rides and to evaluate how differently the cycling weather is assessed by spatially dense measurements compared to a single station, we use the Critical Success Index (CSI, also known as Threat Score) and the Equitable Threat Score (ETS or Gilbert score). Both scores only take into account critical events, that is, trips with red or yellow weather conditions. Correct negatives (green weather conditions) are not considered. Speaking in terms of forecast potential, only times with observed or forecasted critical weather conditions are considered. The CSI then measures the fraction of correctly classified events (hits) out of all critical events (hits + misses + false alarms). The ETS additionally takes into account the climatological frequency of an event; that is, some hits can occur purely due to random chance (hits_random). It measures the fraction of correctly classified events adjusted for chance (hits − hits_random) out of all critical events (hits + misses + false alarms − hits_random). Both CSI and ETS range from 0 to 100%, where 100% represents a perfect forecast. | Frustration reduction The added value of the COSMO-D2 ensemble forecasts for cyclists is examined by calculating the frustration reduction potential, that is, the potential to neither experience bad weather nor miss a ride during good weather. The calculation is based on the relative economic value (Richardson, 2000). In principle, there are two groups of cyclists for which individual risk perception factors R_fact are defined. The first group is more sensitive to bad weather conditions (R_fact > 1), with risk perception factors of one to four; with a factor of four, for example, one ride in bad weather is rated like missing four rides in good weather. The second group has the opposite risk perception (R_fact < 1) and is more sensitive to missed rides in good weather. If there is a perfect forecast, there will be no frustration. In contrast, if there is no forecast at all, always accepting bad weather or missing rides, the frustration is determined by p_clim, the climatological probability of experiencing bad weather. Furthermore, the contingency table with hits, false alarms, missed events and correct negatives is calculated and normalised. Based on the contingency table, the experienced frustration is calculated. The potential to reduce the frustration for cyclists by using forecasts is then derived, which points out the forecast skill and is used for the analysis of the COSMO-D2 ensemble. | WEATHER CONDITIONS FOR CYCLISTS IN HAMBURG By applying our traffic light evaluation method (Section 3.1), we investigate how favourable the meteorological conditions for cyclists in Hamburg are. We first conduct a multi-year analysis using the single-station data of Wettermast Hamburg. Based on this, we further assess how representative single-point measurements are to describe prevailing urban meteorological conditions for cyclists and compare them to a spatially dense measurement coverage. For this, we make use of our agent-based model with cyclists on 10,000 routes (Section 3.2).
| WEATHER CONDITIONS FOR CYCLISTS IN HAMBURG

By applying our traffic light evaluation method (Section 3.1), we investigate how favourable the meteorological conditions for cyclists in Hamburg are. We first conduct a multi-year analysis using the single-station data of Wettermast Hamburg. Based on this, we further assess how representative single-point measurements are for describing prevailing urban meteorological conditions for cyclists and compare them to a spatially dense measurement coverage. For this, we make use of our agent-based model with cyclists on 10,000 routes (Section 3.2).

| Single-station-based assessment

First, the cycling weather conditions obtained from single-station measurement data are investigated in more detail by a multi-year analysis. For simplicity, we do not use the routes from the agent-based model for this first analysis but only take into account the bimodal distribution of the daily traffic volume (Figure 6). For precipitation, we assume a trip duration of 30 min; for wind, we use 10-min averages and wind coming from a random direction; and for temperature, we consider the 1-min values.

Following the method of Section 3.1, Figure 7 depicts our traffic light scheme applied to the 14-year data set from the Wettermast site. According to our considered thresholds (Section 3.1), cycling weather in Hamburg is, on average, quite favourable, as the traffic light shows the green signal for at least two-thirds of the trips for each meteorological variable. In particular, precipitation and wind lead to cycling discomfort for less than 5% of the trips. While precipitation events cause considerable discomfort in about 2% of the trips, wind has a minor impact with 0.2%. Remarkably, temperature is the most dominant meteorological factor for cyclists' discomfort, causing unfavourable cycling conditions for one-third of all trips.

To derive seasonal statistics of cycling weather conditions, Figure 8 shows 14-year monthly averages of cycling conditions based on the Wettermast single-station data set. Although the city of Hamburg is notorious for its rainy weather, our results suggest that dry cycling is actually possible for more than 90% of the time in every single month. Seasonal variability is small, but higher percentages of very uncomfortable cycling conditions in June/July and broader transition zones (yellow) in the winter seasons reflect the increase of convective showers in summer and frequent light precipitation in winter.

Strong-wind events preventing comfortable cycling remain rare throughout the year. Every month shows very uncomfortable cycling conditions for less than 1% of the trips. Compared to that, the yellow transition zone is relatively broad, characterising almost 5% of the trips over the entire year, with roughly 7.5% in the spring season (March, April, May) and less than 2.5% during mid- and late summer.

Besides being the factor that affects cyclists most strongly in the yearly average, temperature also shows the largest seasonal variability of thermal cycling conditions (Figure 8c). Severe cycling discomfort occurs most frequently in the summer season (June, July, August), with warm discomfort for more than 12% of the trips, while the transition seasons are most attractive for cycling in terms of heat. Remarkably, while up to 10% of the trips in January and December also contain strongly unfavourable (cold) cycling conditions, most of the trips in these months are dominated by slight cycling discomfort, described by our transition zone (yellow). Altogether, only 30% of the trips in January and February are certainly suitable for cycling. It is also notable that during April, it is possible to experience both cold and warm thermal discomfort.
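A minimal sketch of how such monthly statistics could be aggregated is given below. The DataFrame layout and column names are assumptions made for illustration; the weighting is meant to come from the fitted two-Gaussian diurnal cycle of Figure 6.

```python
import pandas as pd

def monthly_traffic_light_fractions(trips, weight_by_diurnal_cycle=True):
    """Monthly fractions (%) of green/yellow/red trips.

    trips: DataFrame with columns
      'start'  -- pandas Timestamp of the trip start,
      'label'  -- traffic-light class of the trip ('green'/'yellow'/'red'),
      'weight' -- relative cycling frequency of the starting hour, taken from
                  the fitted diurnal cycle (set to 1 for unweighted results).
    """
    w = trips["weight"] if weight_by_diurnal_cycle else pd.Series(1.0, index=trips.index)
    month = trips["start"].dt.month
    totals = w.groupby(month).sum()
    out = {}
    for label in ("green", "yellow", "red"):
        mask = trips["label"].eq(label)
        summed = w[mask].groupby(month[mask]).sum().reindex(totals.index, fill_value=0.0)
        out[label] = 100.0 * summed / totals
    return pd.DataFrame(out)
```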
As noted above, these results account for the average distribution of cycling trips during weekdays. We analysed the impact of this assumption by comparing the results to those without any weighting (black lines in Figure 8). The differences for precipitation and wind are generally small, while notable differences occur for temperature in the summer months. The afternoon peak of the bimodal distribution of the cycling frequency coincides with those hours during which daily maximum temperatures are typically observed in summer. Consequently, the weighted results have a larger fraction of warm discomfort.

Our results are in line with those of Goldmann and Wessel (2020) and particularly with Brandenburg et al. (2007), who analysed cycling weather in Vienna (Austria) using single-station data and similarly showed that cycling conditions are dominated by thermal comfort. Although most studies refer to single-station-based analyses for a specific urban area, their representativeness for decakilometre scales, as common for large cities, remains limited. Subsequent studies addressing the spatial variability therefore remain crucial.

FIGURE 7: Traffic light scheme of cycling-relevant weather parameters based on thresholds specified in Section 3. The coloured areas scale with the frequency of occurrence, as given in percentages in the upper left. Grey dashed contour lines indicate frequencies of 25%, 50% and 75%, respectively. Over the long-term period, precipitation values were not available for 0.7%, wind for 0.8% and temperature for 0.6%.

| Assessment with spatially dense measurements

We now investigate how well weather conditions for cyclists in Hamburg can actually be assessed by a single-point measurement station, as carried out in the previous section, compared to spatially dense measurements. To do so, we turn to a case study period spanning July to August 2020. During this period, the FESST@HH measurement campaign took place (Kirsch et al., 2021), which, together with existing stations, provided uniquely dense measurements of precipitation, wind and temperature over the area of Hamburg (see Section 2.1). We combine all available measurements into data sets with different spatial resolution, using the data set with the highest spatial resolution as our baseline data set. For each data set, we apply the agent-based model described in Section 3.2 and classify each trip using our traffic light thresholds. See Table 2 for an overview of the different data sets and the corresponding relative fractions of green, yellow and red cycling trips during the study period. Since we only consider the summer season, critical temperature conditions are only related to warm discomfort. In summer, we also mainly observe convective precipitation and fewer strong-wind conditions than in the other seasons (see Figure 8). As seasonal changes, however, are small, we expect our overall conclusions to be valid in other seasons as well.

On average, the differences between the data sets with coarser and denser resolutions are below 2%. However, for a total number of critical trips (red or yellow) in the order of 2 to 4% (wind) to 26% (temperature), such differences are not negligible. For the further analysis, we now compare the classification of each route for the different data sets and compute the CSI (see Section 3.3.1). We use the baseline data set as a reference to compare to coarser spatial measurements and ultimately the single-point measurements at Wettermast.
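For illustration, the per-trip comparison against the baseline can be reduced to a contingency table as sketched below, assuming per-trip labels aligned by trip index; the resulting counts feed directly into the csi_ets sketch given earlier.

```python
CRITICAL = {"red", "yellow"}  # or {"red"} to consider only red trips

def contingency_counts(baseline_labels, test_labels, critical=CRITICAL):
    """Count hits, misses, false alarms and correct negatives for per-trip
    traffic-light classifications (two pandas Series aligned by trip index)."""
    obs = baseline_labels.isin(critical)   # 'truth' from the baseline data set
    est = test_labels.isin(critical)       # classification from the coarser data set
    hits = int((obs & est).sum())
    misses = int((obs & ~est).sum())
    false_alarms = int((~obs & est).sum())
    correct_negatives = int((~obs & ~est).sum())
    return hits, misses, false_alarms, correct_negatives

# Example: CSI/ETS of the Wettermast classification against the radar baseline
# h, m, fa, cn = contingency_counts(radar_trips["label"], wettermast_trips["label"])
# csi, ets = csi_ets(h, m, fa, cn)
```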
Figure 9 shows that for precipitation, the single-point measurements at Wettermast are only moderately representative. About 35% of the critical precipitation conditions are classified correctly by the single-point measurements. The C-band radar classifies around 65% of the critical precipitation conditions correctly. Wind conditions are least well classified by single-point measurements: only 7.2% of the critical wind conditions are correctly classified by the Wettermast. In contrast, single-point measurements can well distinguish whether temperature conditions are suitable for cyclists. Nearly 90% of the red and 80% of the red or yellow events are classified correctly by the Wettermast. Spatial but still coarse measurements (WMH-HUSCO-WXT) provide little added value compared to the Wettermast. To exclude the effect of correct classifications by chance, we also calculated ETS values, which are up to 6% smaller than the CSI values but do not change the overall picture of the results.

FIGURE 9: Critical Success Index (CSI) between the data sets denoted at the x-axis and the corresponding baseline data sets, which are X-band radar for precipitation, WMH-HUSCO-WXT for wind and WMH-HUSCO-WXT-APOLLO for temperature. Red bars consider only trips classified as red, and striped bars consider both red and yellow trips. The black horizontal lines indicate the corresponding ETS values.

It is remarkable that for precipitation and, to a certain extent, also for wind, CSI values are much lower than would be expected from the results in Table 2. While the Wettermast captures the average number of critical precipitation trips even better than the C-band radar, the CSI value reveals that this good agreement is only valid on average and not locally for each trip. This means that a single station is able to reproduce the overall amount of precipitation and wind gusts in the considered region but does not get the right timing, which is what we would expect for highly variable conditions, for example, within rain showers.

TABLE 2: Fractions of green, yellow and red trips during the case study period in July and August 2020.

How representative a single-station measurement is most likely also depends on the distance of the cyclist to the measurement location. Accordingly, we calculate the mean distance of each trip to the Wettermast location and compute separate CSI values for distance bins of 2 km. The strongest change with distance is observed for precipitation. Figure 10 shows that the CSI drops from 40% at 5 km distance to 30% at about 20 km distance. A dependence on distance is not discernible for wind. For temperature, the CSI also decreases with increasing distance, but very slowly. At 20 km distance, critical temperature events can still be well classified by the Wettermast, with 75% for yellow and red and 85% for red events.

FIGURE 10: Critical Success Index between the baseline data sets (see Table 2) and the Wettermast data as a function of average distance of the trip to the Wettermast for precipitation (a), wind (b) and temperature (c). Empty symbols denote bins containing less than 5% of all trips.
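A minimal sketch of the distance-binned CSI evaluation is given below, reusing the contingency_counts helper from the earlier sketch; the 2-km bin width follows the text, while the DataFrame column names are illustrative assumptions.

```python
import numpy as np

def csi_by_distance(trips, bin_width_km=2.0, critical={"red", "yellow"}):
    """CSI (%) per distance bin, comparing station-based vs. baseline labels.

    trips: DataFrame with columns 'dist_km' (mean trip distance to the station),
    'label_baseline' and 'label_station' (per-trip traffic-light classes).
    """
    bins = np.arange(0.0, trips["dist_km"].max() + bin_width_km, bin_width_km)
    result = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = trips[(trips["dist_km"] >= lo) & (trips["dist_km"] < hi)]
        if len(sel) == 0:
            continue
        h, m, fa, cn = contingency_counts(sel["label_baseline"], sel["label_station"], critical)
        if h + m + fa > 0:
            result[(lo + hi) / 2] = 100.0 * h / (h + m + fa)  # CSI of this bin
    return result
```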
| FORECAST POTENTIAL

Weather forecasts of different types and complexity can help cyclists to improve their cycling experience by scheduling trips in comfortable weather conditions. The forecasting of weather has a long history, and various methods have been developed. The common cyclist is confronted with a large set of forecast types, which are used in the upcoming subsections to assess their added value for the planning of cycling trips.

| Perfect forecast

In the case of precipitation, decision-making based on even a perfect forecast will not prevent cyclists from getting wet, because staying at home is not an option for most commuters. However, many companies may provide flexible working hours, allowing employees to start their commute within a certain time slot as they wish. In this section, we analyse how such a flexible starting time can reduce the chance of getting wet. For this purpose, we consider both data from a single station (WMH), for multiple years from 2007 to 2020, and high-resolution X-band radar data for our study period in summer 2020.

For the single-station data, we assume that a cyclist has to do a 30-min trip around 9 a.m. but can freely choose the starting time within a window of ±Δt, for example, Δt = 5, 10, 15, … min. The wider the window, the more flexible the cyclist will be in avoiding precipitation, or at least in minimising the collected water amount. Figure 11 shows the effect of such optimisation for a fixed position at the Wettermast site. Without any flexibility, the cyclist has to start the trip exactly at 08:45 every day and will reach the destination at 09:15. In this case, the chance of getting wet (i.e., yellow or red traffic lights) is 6%. This value decreases with the increasing possibility of shifting the starting time back and forth. It is halved at slightly less than ±30 min, meaning a starting time between 08:15 and 09:15. When considering only red trips, a flexibility of less than ±15 min results in a halving of the original chance of getting wet. This 'half-value period' also holds when considering seasonal differences in precipitation types, especially in summer (typically showers) and winter (more stratiform rain events). Even though the chance of getting wet in winter is twice that in summer, it halves within 30 (15) min of flexibility for red and yellow (red only) trips, as it does in the annual view (not shown).

As an expansion of this single-point but long-term evaluation, we now consider the spatial data of the Hamburg X-band radar with realistic cycling trips through the city, but for 2 months in summer 2020 only. Generally, the results are very similar. For the red threshold, the half-value period is also slightly less than ±15 min (Figure 11). When considering both yellow and red trips, the chance of getting wet without any flexibility is almost 6%, as for the Wettermast, but decreases faster with increasing flexibility. This results in a half-value period of only about 15 min, which is smaller than for the Wettermast data and therefore better for the cyclist. This difference could be specific to this time period in summer, but it could also be related to the fact that for the Wettermast analysis we considered only trips in the morning, while the trips used for the X-band radar evaluation cover the whole diurnal cycle.
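The effect of a flexible starting time can be estimated with a brute-force scan over admissible start minutes, as sketched below under the assumption of a minute-resolution rain mask per day; variable names and the data layout are ours, not the paper's.

```python
def chance_of_getting_wet(rain_flag, nominal_start, trip_len=30,
                          flex_minutes=(0, 5, 10, 15, 30, 60)):
    """Fraction of days (%) on which every admissible 30-min window is wet.

    rain_flag: 2-D boolean array (days x minutes-of-day), True where the
    precipitation threshold for a yellow/red segment is exceeded.
    nominal_start: nominal starting minute of the day (e.g. 8*60 + 45 for 08:45).
    Assumes all candidate start minutes lie within the day.
    """
    results = {}
    for flex in flex_minutes:
        wet_days = 0
        for day in rain_flag:
            starts = range(nominal_start - flex, nominal_start + flex + 1)
            # The day counts as 'wet' only if no candidate window stays dry.
            dry_window_exists = any(not day[s:s + trip_len].any() for s in starts)
            wet_days += not dry_window_exists
        results[flex] = 100.0 * wet_days / len(rain_flag)
    return results

# Example: chance_of_getting_wet(rain_flag, nominal_start=8*60 + 45)
```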
In any case, precise knowledge of an upcoming precipitation event can therefore lead to substantially better conditions for cycling trips with typical durations and cyclists with a typical flexibility in time. This is not self-evident, since it depends on how long the single precipitation events are and on the periods between them. The precipitation interruptions must be long enough to enable a (nearly) dry trip, and the precipitation events must be short enough; otherwise, there will not be any precipitation interruptions in the considered time slot. In the spatial view, the size of precipitation cells and bands must also be taken into account. We analysed the Wettermast data and found that most of the time, the results for the city of Hamburg indicate a sufficient time slot between two precipitation events and small enough precipitation cells to benefit from a time flexibility (not shown), which may also be valid for a greater area in Central Europe.

| Realistic forecast

Now that we know the potential of a perfect forecast, let us examine the potential of realistic forecasts. Cyclists can make use of different forecasting methods to plan their upcoming cycling trips. The simplest forecast may be persistence, that is, expecting the same weather today as yesterday. Another way of forecasting is nowcasting, that is, looking out of the window and predicting the weather at the starting point for the entire cycling trip. Lastly, numerical weather prediction (NWP) provides forecasts at least twice a day. For Hamburg, we assume that cyclists will typically consider forecasts from 12 UTC to plan their cycling trips for the next day. NWP forecasts can come with varying spatial resolution. To mimic the latter, we investigate the forecasts of the COSMO-D2 model, once with the full spatial resolution and once as a single grid cell averaged over Hamburg. Compared with our baseline data sets based on measurements (Section 4.2), only about 20% of the critical precipitation conditions are classified correctly by NWP (Figure 12), while nowcasting exceeds this value by about 10%. In the case of wind, all forecasts are only a poor source of information about critical cycling conditions, with less than 10% of trips being classified correctly. Unlike for wind conditions, all forecasts show good skill in predicting critical temperature conditions. Around 70% of the critical temperature conditions are classified correctly by NWP and even about 80% by nowcasting.

A higher resolution of the COSMO model offers only a small, mostly marginal, added value for temperature, while for precipitation and wind, the spatially averaged forecast even outperforms the full-resolution forecast. As one would have expected, persistence performs best for temperature, for which more than 50% of the critical trips are classified correctly, but yields only poor results for fast-changing precipitation conditions.
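For completeness, a small sketch of how the different forecast types could be scored against the measurement-based baseline, reusing the helpers from the earlier sketches; the forecast column names are assumptions for illustration only.

```python
FORECASTS = ["persistence", "nowcast", "cosmo_d2_full", "cosmo_d2_mean"]

def forecast_skill(trips, critical={"red", "yellow"}):
    """CSI (%) of each forecast type against the measurement-based baseline.

    trips: DataFrame with per-trip traffic-light labels in columns
    'label_baseline' and 'label_<forecast>' for each entry in FORECASTS.
    """
    skill = {}
    for name in FORECASTS:
        h, m, fa, cn = contingency_counts(trips["label_baseline"],
                                          trips[f"label_{name}"], critical)
        skill[name] = csi_ets(h, m, fa, cn)[0]
    return skill
```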
So far, we have neglected that there are different types of cyclists. Some cyclists are more sensitive to experiencing bad weather, while others care more about missing good-weather rides, and thus they judge their risk differently. The potential of the COSMO-D2 ensemble predictions to reduce the frustration of both cyclist types, considering their individual risk perception (Goldmann & Wessel, 2020), is evaluated with the frustration reduction rate (Section 3.3.2). The added value for reducing the frustration is analysed for the main COSMO-D2 run and for the ensemble by increasing the number of required ensemble members that need to exceed the threshold to trigger a forecast of bad weather (Figure 13). To distinguish between good and bad weather conditions, yellow and red are considered one category for this analysis.

One of the most challenging parameters to forecast with numerical weather prediction is precipitation, because of the involved cloud microphysics and the small-scale structures of, for example, rain showers. Numerical weather forecasts give the highest added value for people who are sensitive to bad weather. A frustration reduction of almost 40% for the main run and of up to 68% for the ensemble can be reached (Figure 13a). Concerning wind forecasts, COSMO-D2 is most helpful for cyclists who try to avoid bad weather conditions more than missed rides, and almost no added value is seen for the others. The dissatisfaction caused by bad wind conditions for a risk perception of 1-to-30 is reduced by up to 65% using the ensemble, which is 30% more than for the main run (Figure 13b).

Regarding temperature, for people with an approximately balanced risk perception, that is, who do not want to miss a ride but also do not want to experience too hot temperatures, COSMO-D2 forecasts can help to reduce their frustration by up to 80% and even by up to 90% using the ensemble (Figure 13c). The more sensitive the cyclists are to either bad weather or missed rides, the less value the forecasts provide. In total, the COSMO-D2 numerical weather predictions provide a substantial benefit for the route planning of cyclists to avoid bad weather or missed rides, especially in the case of temperature. For wind and precipitation, these forecasts are useful only for people who weigh the chance of avoiding uncomfortable weather higher than the harm of missing rides.
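The frustration reduction is based on the relative economic value of Richardson (2000); a minimal sketch of that standard value score is given below. Mapping the risk perception factor R_fact to a cost/loss ratio alpha = 1/R_fact is an assumption of this sketch, not a definition taken from the paper.

```python
def relative_value(h, m, f, c, alpha):
    """Relative economic value (Richardson, 2000) from a normalised
    contingency table (h + m + f + c = 1) and a cost/loss ratio alpha.

    A value of 1 corresponds to a perfect forecast and 0 to no added value
    over climatology; here it is read as the frustration reduction potential."""
    p_clim = h + m                     # climatological frequency of bad weather
    e_forecast = (h + f) * alpha + m   # expense (frustration) when following the forecast
    e_clim = min(alpha, p_clim)        # best that can be done without any forecast
    e_perfect = p_clim * alpha         # expense with a perfect forecast
    return (e_clim - e_forecast) / (e_clim - e_perfect)

# Illustrative use, with alpha = 1/R_fact treated as an assumption:
# v = relative_value(h=0.05, m=0.02, f=0.04, c=0.89, alpha=1/4)
```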
| SUMMARY AND CONCLUSIONS

In this study, we assess the weather conditions experienced on individual cycling routes through the urban area of Hamburg as well as how weather observations and forecasts may give guidance to a better cycling experience. To answer these two key questions, we introduce a novel agent-based model analysing a huge ensemble of randomly drawn routes that simulate cycling trips of commuters in Hamburg and assess the meteorological comfort of each ride with a three-category traffic light scheme based on thresholds for precipitation, wind and temperature.

Long-term data of 14 years from a single station show that weather conditions for cycling in Hamburg are favourable on average two-thirds of the time and cause severe discomfort only up to 10% of the time. In line with previous studies (Brandenburg et al., 2007; Goldmann & Wessel, 2020), we find that temperature is the most dominant meteorological factor causing discomfort for cyclists. Temperatures lie outside favourable cycling conditions on average one-third of the year, exhibiting a strong seasonal cycle with most discomfort (up to almost 70%) caused in winter. Remarkably, precipitation and wind lead to cycling discomfort for only about 5% of the time. Contrary to Hamburg's reputation of being a rainy and windy city, dry and windless cycling is possible more than 90% of the time in all months. Rain events cause considerable discomfort in only 2.3% of the time, wind in only 0.2% of the time. Thus, most of the time, precipitation or wind conditions are not relevant for cyclists in climatic conditions similar to Hamburg.

Comparing the commonly used single-station approach with spatially dense observations from an urban station network and radar measurements, we find that temperature conditions can be well assessed by a single station, but only one-third of the critical precipitation events and 10% of the critical wind events are captured. Discomfort by precipitation and wind has a high spatial and temporal variability, which gives weather observations the potential to provide guidance, to nowcast and to reduce the risk of bad weather rides. We show that, with perfect knowledge and if flexible working times allow, the risk of getting wet can be halved by a flexible starting time of less than ±30 min and the risk of getting severely wet by only ±15 min.

Day-ahead forecasts by an operational ensemble forecast system provide almost perfect guidance in terms of temperature, but the limited predictability of precipitation and wind renders these forecasts useful only for riders with a high risk awareness, that is, who try hard to avoid bad weather and are not very prone to missing rides. For forecasting critical precipitation conditions, nowcasting offers the greatest potential: about 30% of critical precipitation is correctly predicted this way. Numerical weather prediction correctly predicts only about 20% of the critical precipitation conditions.

In conclusion, this study indicates that weather should not severely limit bike commuters and highlights the potential of guidance by weather observations and forecasts to reduce the risk of experiencing bad weather during a ride. The results also underline the benefits of spatially dense measurements and, consequently, future studies should be based on more than a single weather station. Especially when considering cycling conditions in different climate zones, it could also be relevant to include other weather parameters, such as snow or icy roads, thunderstorms or fog.

2.1.1 | Wettermast Hamburg (WMH)

The Wettermast Hamburg ('Hamburg Weather Mast') is a scientific measuring site operated by the Meteorological Institute of Universität Hamburg at the south-eastern outskirts of the city. It provides boundary layer tower measurements up to 280 m and additional ground-based instruments and standard meteorology. More information can be found in Brümmer et al.
(2012). In this study, the Wettermast site plays the role of a local weather station, typically provided by national weather services as part of a greater network. Measurements at this site started in 1995, and different sensors have been added in later years. For the general analysis of cycling weather in this study, we only use precipitation, wind (10 m above ground) and temperature (2 m) data. Since the infrared rain detector was installed in 2006, we use 14 complete years of data (2007-2020).

FIGURE 1: Overview of measurement locations in the greater area of Hamburg. More information about the station networks HUSCO, WXT and APOLLO is given in the text. The dashed circle indicates the range of the X-band precipitation radar. The C-band radar of the DWD is located 50 km north of the city centre and scans the whole area.

FIGURE 2: Relative temporal availability of device-specific station networks during the summer period of interest in 2020. Daily hundred percent availability is reached if all devices of one type are running continuously over the course of the day.

FIGURE 3: (a) Predicted mean vote (PMV) and (b) number of dissatisfied people (PPD) as a function of 2-m temperature at the Wettermast for the years 2007-2020. Red and yellow horizontal lines indicate the PPD thresholds, which are used to determine temperature thresholds. Histograms (c, d) and cumulative fractions (e) of the temperature values within the considered red and yellow PPD bins in panel (b). The horizontal dashed lines mark the optimised threshold for the cumulative fractions, and the red and yellow vertical lines the corresponding temperature thresholds for too cold and too warm conditions.

FIGURE 4: (a) Contingency table comparing temperature classifications based on PPD values and on the derived temperature thresholds using the median (0.5) as cumulative fraction threshold. The coloured boxes on the diagonal represent correct classifications. (b) Fraction of correct classifications for different cumulative fraction thresholds. The dashed line indicates the final choice of 0.65.

FIGURE 5: 62,000 randomly drawn bicycle routes of the 620,000 bicycle routes used during the case study period in July and August 2020. The darkness of the bicycle routes scales with the number of bicycle routes passing the location. The location of the bicycle counting station is indicated with a circle.

FIGURE 6: Normed median count of bicycles on weekdays for all seasons, including the fitted starting probability at local time. Based on hourly bicycle counter data at one highly frequented bike lane in the centre of Hamburg from 8 October 2014 to 22 November 2020.

FIGURE 8: Seasonal variability of cycling weather conditions for (a) precipitation, (b) wind and (c) temperature. Red conditions (very uncomfortable) are represented by the dashed area at the top, followed by yellow (uncomfortable) and green (nice). For temperature, yellow and red areas at the bottom of the figure correspond to cold discomfort and at the top to warm discomfort. Long-term monthly average values of frequencies of occurrence for weather conditions based on Wettermast data from 2007 to 2020. Filled contours show the values weighted for the cycling behaviour Gauss function (see Section 3.2). Black lines do not consider any daily weighting, showing the statistics for the entire data set. Based on a 30-min trip duration for precipitation, 10-min averaged values of wind speed with random directions, and 1-min values for temperature.
FIGURE 11: Fraction of trips classified as red and/or yellow according to the precipitation thresholds as a function of the flexibility of the starting time. Panel (a) is based on Wettermast data from 2007 to 2020, assuming reference trips of 30 min with a fixed starting time at 09:00 CET. Panel (b) uses X-band radar data and the routes from the agent-based model for the study period in July and August 2020. Vertical lines denote the half-value periods.

FIGURE 12: CSIs between the forecasts denoted at the x-axis and the corresponding baseline data sets, which are X-band radar for precipitation (a), WMH-HUSCO-WXT for wind (b) and WMH-HUSCO-WXT-APOLLO for temperature (c). Red bars consider only trips classified as red and striped bars consider both red and yellow trips. The black horizontal lines indicate the corresponding ETS values.

FIGURE 13: Frustration rate reduction possibilities by COSMO-D2 ensemble predictions. Main COSMO-D2 run (blue line), increasing number of ensemble members that warn (grey thin lines) and envelope of all members (black line).

Note: The asterisks mark the baseline data sets for each variable.
Spatial transcriptomics reveals the distinct organization of mouse prefrontal cortex and neuronal subtypes regulating chronic pain

The prefrontal cortex (PFC) is a complex brain region that regulates diverse functions ranging from cognition, emotion and executive action to even pain processing. To decode the cellular and circuit organization of such diverse functions, we employed spatially resolved single-cell transcriptome profiling of the adult mouse PFC. Results revealed that PFC has distinct cell-type composition and gene-expression patterns relative to neighboring cortical areas, with neuronal excitability-regulating genes differently expressed. These cellular and molecular features are further segregated within PFC subregions, alluding to the subregion-specificity of several PFC functions. PFC projects to major subcortical targets through combinations of neuronal subtypes, which emerge in a target-intrinsic fashion. Finally, based on these features, we identified distinct cell types and circuits in PFC underlying chronic pain, an escalating healthcare challenge with limited molecular understanding. Collectively, this comprehensive map will facilitate decoding of discrete molecular, cellular and circuit mechanisms underlying specific PFC functions in health and disease.

The prefrontal cortex (PFC) is a major region of the mammalian brain that has evolved to perform highly complex behavioral functions. It plays important roles in cognition, emotion, reward and executive function. The PFC engages in complex executive tasks that dynamically coordinate cognition, attention, learning, memory, judgment, etc. to direct the action of an organism 1,2. As such, dysfunctions of the PFC are associated with many cognitive and neuropsychiatric disorders 3,4.
In addition to regulating intellectual and emotional behaviors, PFC is even involved in modulating pain processing as well as the negative effect of pain 5,6 .Increasing evidence indicates that disruption of this regulation is associated with the development of chronic pain, a rapidly increasing healthcare challenge that affects about 20% of the US population with economic burdens exceeding that of diabetes or heart disease 7,8 .Chronic pain has been associated with PFC hypoactivity, and transcranial stimulation of the PFC can induce pain relief [9][10][11][12][13] .Although projections from PFC to brainstem have been historically described in descending inhibition of pain 5,14,15 , the underlying molecular mechanism is poorly understood.Besides, PFC interacts with many downstream targets including the amygdala, nucleus accumbens (NAc) and thalamus-the major components of the central pain matrix, critical for the sensory or affective symptoms of chronic pain 6,16 .As such, PFC has an important role in pain 'chronification' 5,15 . Thus, a central question is how does PFC organize and manage such diverse functions-from cognitive processes to autonomic pain modulation?To address this question, we and others have previously performed single-cell RNA-seq (scRNA-seq) to decode the cellular Article https://doi.org/10.1038/s41593-023-01455-9 Unsupervised clustering revealed the major cell types-excitatory neurons, inhibitory neurons and non-neuronal cells that include oligodendrocytes, oligodendrocyte precursors (OPC), microglia, endothelial cells, astrocytes and vascular leptomeningeal cells (VLMC) (Fig. 1a).Within the excitatory neurons, the major subgroups are clustered together, as described by the commonly used nomenclature 21 -the intratelencephalic (IT) populations of different layers, the extra-telencephalic (ET) neurons, the near projecting (NP) and the cortico-thalamic (CT) populations (Fig. 1a).Within the inhibitory neurons, populations from the medial ganglionic eminence (Pvalb and Sst) and the caudal ganglionic eminence (Vip, Sncg and Lamp5) clustered distinctively (Fig. 1a). The major cell types were further clustered into the following 52 hierarchically organized cell subtypes: 18 excitatory, 19 inhibitory and 15 non-neuron cell subtypes (Fig. 1b).Four subtypes were detected in L2/3 IT (L2/3 IT 1 to L2/3 IT 4), two subtypes in L4/5, three subtypes in L5 and two subtypes in L6.Additionally, the L5 ET split into two subtypes, the L6 CT into four subtypes and L5/6 NP formed a single cluster.Among the inhibitory neurons, the Pvalb and Sst each split into six subtypes, the Lamp5 into three subtypes and the Vip and Sncg into two subtypes each (Fig. 1b).Among the non-neurons, the endothelial cells formed five subtypes, Endo1-5, while astrocytes formed three subtypes, and oligodendrocytes and OPCs each formed two subtypes. Projecting these clusters in space (based on MERFISH coordinates) revealed the anatomical layout of the coronal section and precise localization of every single cell (Fig. 1c, inset-magnified view showing individual cells).Molecularly similar excitatory neurons localized together to form distinct layers, from which a laminar histology, characteristic of the cerebral cortex, emerged (L2/3 IT to L6 CT: outside inwards; Fig. 1c, left half).Within each layer, the subtypes are further organized in strata (for example, L2/3 IT 1 to L2/3 IT 4; Fig. 
1c, right halfdistribution of subtypes).Inhibitory neurons are broadly distributed and do not form specific layers, although some subtypes appear to be enriched within certain layers or subregions (Fig. 1c).Non-neuronal cells are also broadly distributed, except for enrichment of oligodendrocytes near the fiber tracts (for example, corpus callosum) and the VLMC in the outermost layer of the brain (Fig. 1c, yellow).Established markers like Otof, Cux2 and Fezf2, respectively, localized to L2, L2/3 and L5 on the MERFISH slice, consistent with the in situ hybridization (ISH) images from Allen Brain Institute, further validated our analysis (Extended Data Fig. 1d). Together, excitatory neurons comprise the largest population in PFC, followed by all non-neuronal cells combined and then the inhibitory neurons (Fig. 1d, left).Within excitatory neurons, the IT neurons are the largest subgroup, followed by the ET, NP and CT of deeper layers, respectively (Fig. 1d, middle).Within the inhibitory, Sst and Pvalb neurons are most abundant followed by the Lamp5, Scng and Vip (Fig. 1d, right). There was no skewing between samples and similar percentages of cell types and subtypes were detected in the three biological replicates (Extended Data Fig. 2a).To further evaluate our detection accuracy, we first performed an integrated analysis of the MERFISH data with scRNA-seq data of the PFC from the Allen Institute 21 .All the major subtypes showed a strong correlation between the two datasets (Fig. 1e).Similar integrated analysis comparing the MERFISH data with our own scRNA-seq of PFC 17 revealed strong correspondence even at the subtype levels (Extended Data Fig. 2b-f; Methods).In fact, MERFISH heterogeneity of the PFC 2,17 , which revealed a myriad of cell types.However, those studies lacked information about the spatial organization and interaction of the diverse cell types, which are major determinants of the functional diversity of the PFC.A relatively homogeneous histology, with a laminar organization, is the most striking feature of the mammalian cerebral cortex [18][19][20] .Yet, distinct regions of cortex perform highly specialized functions, including vision, locomotion and somatosensation.This regional specialization of functions, despite apparent homogeneity, is likely due to the distinct features at multiple levels, including molecular composition (transcriptome), circuit organization (connectome) and anatomical (spatial) organization of cell subtypes within each cortical area.Decoding such organizational logic is critical not only for mechanistic understanding of cortical function but also for developing drugs to selectively target neurological disorders of cortical origin, such that drugs directed to either cognitive (frontal cortex) or hearing (auditory cortex) defect do not disrupt visual or motor function. 
Approaching such questions has been historically limited by technological barriers, despite extensive scRNA-seq profiling across the brain, including cortex [21][22][23] .With recent advances in spatial transcriptomics techniques, such questions can now be addressed.Using multiplexed error-robust fluorescence in situ hybridization (MERFISH), an image-based method for spatially resolved single-cell transcriptomics 24 , here we decode the spatial organization of the PFC and its subregions.Our results demonstrate distinct cellular composition of the PFC relative to its adjoining cortical areas.PFC adopts distinct molecular features to suit its specific electrophysiological properties, different from its adjacent cortices.We map molecular identities (and layer localization) of PFC's projection neurons to major subcortical targets.Finally, based on projection, transcription and activity markers, we reveal the molecular identity of PFC neuronal clusters most substantially affected in chronic pain. Diversity and organization of cell types in mouse frontal cortex All experiments reported in this entire study were conducted in adult male mice.To understand the diversity of cell types and determine their spatial organization within the PFC, we performed MERFISH 24,25 , an imaging-based method for single-cell transcriptomics with error-robust barcoding read through iterative rounds of single-molecule FISH.MER-FISH detects the precise location of each RNA molecule to ultimately reveal the spatial organization of diverse cell types within anatomically defined tissue regions (Extended Data Fig. 1a) 25,26 .The MERFISH library comprised 416 genes including cell-type markers and functionally important genes like ion channels, neuropeptides, G-protein coupled receptors and a panel of neuronal activity-regulated genes (ARG; Supplementary Table 1; see details in Methods).We collected brain samples from three different adult male mice and prepared rostral to caudal coronal slices covering +2.5 to +1.3 from Bregma to broadly image the frontal cortex.The imaged RNA species were detected, decoded and assigned to individual cells using established analysis pipelines 25 (Extended Data Fig. 1a).Overall, we obtained 487,224 high-quality cells in the frontal cortical region from three independent biological replicates with high consistency (Extended Data Fig. 1b).Expression of individual genes showed good correlation with that of the bulk RNA-seq of the PFC (Extended Data Fig. 1c).could classify some of the scRNA-seq clusters at a finer resolution to reveal distinct subclusters (Extended Data Fig. 2e,f).This point is particularly true for the inhibitory neurons (for example, Inh 1, 2 and 7 of scRNA-seq), possibly due to their higher rate of detection in MERFISH (Extended Data Fig. 2g).Further analysis revealed markers within each major group that can distinguish subtypes from each other in both excitatory (Extended Data Fig. 3a) and inhibitory (Extended Data Fig. 3b) neurons.Because inhibitory neurons are sparse and hence poorly understood, we validated some of their subtype markers using single-molecule FISH (RNAScope).Subclusters of each major cell type showed distinct marker expression (Extended Data Fig. 4).For example, Crh and Pdyn, selectively labeled Sst 2 and Sst 5 subclusters (Extended Data Fig. 4a), Nos1 (Sst3 cluster marker) labeled only a subset of Sst neurons (Extended Data Fig. 
4b), Pthlh and Moxd1 (respective markers for Pvalb 5 and Pvalb 4) labeled respective subsets within Pvalb population (Extended Data Fig. 4c,d) and finally, Htr3a staining in Vip neurons revealed the classic Htr3a + and Htr3a − populations. To gain insights into the broader implications of our PFC profiling, we compared our PFC data with the following three recent MERFISH studies in mouse and human that covered parts of cortex: (1) mouse motor cortex (MOp) in ref. 27, (2) a snapshot area of mouse frontal brain (PFC/striatum/corpus callosum) in ref. 28 and (3) the superior and medial temporal gyri of human cortex in ref. 29.Cell composition appeared to be more closely related between the mouse studies than with that of human, with a higher proportion of non-neuronal cells in human (Extended Data Fig. 5a).Transcriptomic comparison also revealed strong correlation of subtypes with the mouse datasets, especially with MOp (Extended Data Fig. 5b).However, despite some conserved molecular signatures, human cortical neurons had a weaker correlation with the transcriptomic constitution of the mouse cells, particularly with MTG (Extended Data Fig. 5b).Thus, while the strong correlation across mouse datasets reaffirms our data quality and the power of MERFISH, a deeper profiling study is necessary to determine the precise correspondence of mouse and human cells, and how they diversified during evolution. Heterogeneous distribution of subtypes along A-P and D-V axes of mouse PFC To understand the spatial organization of the different neuron subtypes within the anatomically defined PFC region, we aligned our profiled serial sections with the Allen Mouse Brain Common Coordinate Framework (CCF) v3 (ref.30), a reference created for the mouse brain based on serial two-photon tomography images of the 1675 C57Bl6/J mice (Extended Data Fig. 6a), which outlines the PFC boundaries within each section. Mapping the MERFISH clusters onto the sequential anteriorposterior (A-P) sections revealed the order of cellular organization in 3D throughout the sections (Fig. 2a).Heterogeneous distribution of several neuron subtypes along the A-P and dorsal-ventral (D-V) axes was visually evident.Analysis along the A-P axis revealed that L2/3 IT and L4/5 IT neuron subtypes are enriched in the anterior-most part, where all types of L5 and L6 neurons are generally low (Fig. 2b).This density gradient follows a reverse order in the posterior direction where deep layer neurons like L5 ET 1 or L6 CT 1-3 are gradually enriched (Fig. 2b).Detailed mapping of various neuron subtypes on the serial brain sections clearly revealed the uneven distribution along the A-P axis (Fig. 2c and Extended Data Fig. 6b,c).In contrast, some subtypes such as L5/6 NP are modularly distributed and few others (for example, L5 IT 2 or L6 IT 1) are sparse, but uniform throughout the A-P axis (Fig. 2b). There is also strong distribution heterogeneity among the inhibitory neurons, but it follows a pattern of regional enrichment instead of gradual transitions along the A-P axis (Fig. 2b).For some subtypes, such as Lamp5 3, Pvalb 4 and Vip 2, the fluctuation in density along the A-P axis is very prominent (Fig. 2b).Neighborhoods with high density of distinct interneuron subtypes may indicate regulatory hotspots or focal points for specific subcortical projections circuits. Another readily recognizable feature from the coronal slices is the laminar organization of various excitatory neurons along the D-V axis, within each representative section (Fig. 
2a).Computation of physical depth inward from the cortical surface revealed that IT neurons located more superficially within each layer.The L2/3 IT (and L4/5 IT) subtypes are most superficial and closer to the surface of the brain (Fig. 2d).Similarly, in L5, most IT neurons (L5 IT 1, L5 IT 3) are superficial to the other populations of the layer (L5 ET 1, L5 ET 2) (Fig. 2d).Within layer 6, although L6 IT 1 is superficial, L6 IT 2 mingles with the deepest CT subtypes (Fig. 2d).Plotting each population individually onto a representative coronal section clearly resolved a highly specific spatial localization of each neuron subtype in layers inwards from the cortical surface (Fig. 2f and Extended Data Fig. 7a). The D-V organization of GABAergic interneurons was even more interesting.Although inhibitory neurons, unlike the excitatory, are not organized in layers, most subtypes appear to be enriched within specific excitatory layers or subregions (Fig. 2e).Broadly, the Lamp5 (Lamp5 1 to 3) and Vip (Vip 1 and 2) neurons along with Sncg 1 are more enriched in superficial layers.Lamp5 3, for example, is restricted only to the superficial layer (Fig. 2e,f).However, Sncg 2 is broadly distributed along the entire depth (Fig. 2e and Extended Data Fig. 7b).This appears to be different from the neighboring motor cortex, as per recent reports 27 , where all subtypes of Sncg neurons are present only in superficial layers.Additionally, the motor cortex also has some subtypes of Vip neurons in deeper layers, which was not detected in PFC.However, the most interesting observation is that specific molecular subtypes of Pvalb and Sst neurons are differentially enriched in various layers along the cortical depth (Fig. 2e).For example, while Pvalb 5 and Pvalb 2 have higher density toward the superficial layers, Pvalb 3 and Pvalb 6 are enriched in the very deep layers, and Pvalb 1 and Pvalb 4 are maximally enriched in the intermediate region (Fig. 2e,f and Extended Data Fig. 7b).Likewise, Sst 1 and Sst 5 are more superficially enriched, and the remaining are distributed in the intermediate to deep layers (Fig. 2e and Extended Data Fig. 7b). Most non-neuronal subtypes displayed a more broad and dispersed distribution (Extended Data Fig. 7c), with few exceptions.The VLMC, for example, line the outermost surface along the cortex.Oligo 1 and Oligo 2 are enriched near the regions of origin of the white matter tracts (Extended Data Fig. 7c).The Astro 2 had a significant presence in L1 and somewhat greater enrichment in the medial prefrontal region (Extended Data Fig. 7c). Distinct neuron subtypes are selectively enriched in mouse PFC PFC is very distinct in function and connectivity compared to the adjacent cortices.We asked whether this functional and connectivity distinction is associated with its specialized cell composition.To this end, we identified the adult mouse PFC boundary in each section by aligning with CCF v3 (Extended Data Fig. 6a).By projecting the cells identified from the alignment as 'in'-PFC onto the combined UMAP of the frontal cortex (Fig. 3a), we found that some subtypes of excitatory neurons are selectively biased 'in', and some others 'out' of the defined PFC region ('out' being mainly primary and secondary motor cortices; Fig. 3a), indicating different cellular composition in PFC and the adjacent areas.Calculation showed that L2/3 IT 1, L5 ET 1 and L5 IT 1 are about eightfolds enriched within the PFC, whereas L6 CT 2 and L6 CT 3 are more than twofolds (Fig. 
3b).In contrast, L2/3 IT 4, L4/5 IT 1 or L6 IT 1 are markedly depleted (fourfolds to eightfolds) in the PFC (Fig. 3b).When mapped onto the representative coronal section, the enriched, depleted and unbiased populations were clearly visible with respect to the boundaries of the PFC (Fig. 3c).Inhibitory neurons, although less abundant, exhibit clear subtype selectivity across all the major Article https://doi.org/10.1038/s41593-023-01455-9types in PFC (Fig. 3b).Switching of Pvalb subtypes (~2-fold enriched in Pvalb 3 and 4, and depleted of Pvalb 1, 2 and 6), depletion of Sncg 2 and enrichment of Sst 4 and Sst 6 are the most prominent features (Fig. 3b and Extended Data Fig. 7b).Notably, Lamp5 3, the most superficially located interneuron (L1) is the only enriched Lamp5 neuron in PFC (Fig. 3b).The relative proportions of specific IT, ET and CT subtypes are intimately tied to the projections of a cortical area (inside and outside the telencephalon).The selection of specific interneurons determines the precise excitatory-inhibitory balance in the input/output circuits of the projections.In combination, these circuit motifs likely serve as a blueprint for the specialized functions of a specific cortical area, and PFC is clearly organized into a highly selective assembly in this regard. The PFC has distinct functional subregions from its dorsal to ventral end, viz.dorsal anterior cingulate cortex (dACC or ACAd), prelimbic cortex (PL), infralimbic cortex (ILA), dorso-peduncular cortex (DPP; Fig. 3d) and also part of the medial orbitofrontal cortex (ORBm), present more anteriorly.We asked whether these subregions have distinct cellular composition.Indeed, clustering with the normalized cell proportions across all subregions revealed the most enriched excitatory neurons in each subregion (Fig. 3e, Extended Data Fig. 7d).For example, L5 ET 1 is enriched in PL and ILA (but depleted in ACAd), while L6 CT 2 is mainly in ILA and L5 IT 3 is mainly in ACAd (Fig. 3e,f, enriched cells in panel e are labeled by red fonts).In ORBm, L2/3 IT 1 is enriched, but L2/3 IT 4 of the same layer is strongly depleted (Fig. 3e).The differential neuron subtype distribution in the different PFC subregions can help explain PFC's subregion-specific functions (how some subregions regulate specific behaviors) and their implications in physiological and pathological complexity of neuropsychiatric processes. Distinct transcriptional signatures emerge in mouse PFC Functional differences across brain regions often underlie molecular adaptations 21 .The cortex is believed to be no exception.Thus, we asked whether the distinctive functions and cellular organization of the PFC are associated with specialized molecular features by comparing the transcriptome of PFC with that of the adjacent cortical regions.Indeed, a large number of genes interrogated in the MERFISH library are differentially expressed between the PFC and the neighboring cortices (Fig. 4a).Among the 416 genes analyzed, 54 were substantially enriched and 40 depleted in PFC (adjusted P < 0.01, expression fold change > 20%; Supplementary Table 2).Mapping expression of substantially enriched (Nnat) or depleted (Scn4b) genes onto the coronal section showed clear enrichment or depletion in the PFC region (Fig. 4b), which is consistent with the ISH data from the Allen Brain Institute (Fig. 
4c), validating our MERFISH results.Next, we asked whether specific types or categories of genes are selectively enriched or depleted in PFC.The DEGs had a strong representation of several ion channels and some key neurotransmitter receptors, which can impart very distinct electrical properties characteristic of the PFC, relative to adjoining cortices 31 .Several potassium channels are enriched or depleted (Supplementary Table 2).The voltage-gated potassium channels subtypes [31][32][33] , especially delayed rectifiers (Kcna2, Kcnb2, Kcnc2, Kcnc3, Kcnq3 and Kcnq5) and inward (Kcnh7) or outward (Kcnh5) rectifier are depleted (Fig. 4a and Supplementary Table 2).BK channel like Kcnmb4, reciprocally enriched modifiers/silencers like Kcng1 or Kcnf1 and posthyperpolarization regulator like Kcnn3 are enriched (Fig. 4a and Supplementary Table 2).Mutations of these genes are often implicated in psychiatric disorders (like Kcnn3 in schizophrenia and bipolar disorder 34 ).Apart from potassium, some prominent calcium channels (Cacna1e and Cacna1h; Extended Data Fig. 8a) and sodium (Scn3b) channels, which are implicated in autism and epilepsy [35][36][37][38] , are also enriched. To globally represent the remarkable transcriptional features of PFC neurons, we calculated the 'PFC signature', the average expression of the top ten enriched genes minus the top ten depleted genes.When values for this index were projected (as red color) onto cells in the original UMAP (Uniform Manifold Approximation and Projection, for dimension reduction), the PFC-enriched excitatory neurons clearly clustered and emerged (Fig. 4d).When the PFC signature was mapped onto a representative coronal section, it localized precisely within the anatomical limits of the PFC (Fig. 4e), indicating a distinct molecular composition of the PFC relative to the adjacent cortices. iSpatial: transcriptome-wide PFC-enriched genes and pathways in mouse To expand our spatial mapping of gene expression to the transcriptome scale (including genes beyond the MERFISH library), we integrated our prior PFC scRNA-seq data 17 and current MERFISH data to predict the expression pattern of all genes using iSpatial 45 , a bioinformatic tool that we developed.The analysis revealed 190 PFC-enriched and 182 PFC-depleted genes (Fig. 4f and Supplementary Table 2).Mapping enriched and depleted candidate genes predicted by iSpatial, Cdh13 and Abcd2, respectively, onto a coronal section revealed consistent localization with respect to the PFC boundaries (Extended Data Fig. 8b), which is similar to Allen Brain ISH results (Extended Data Fig. 8b). Gene ontology enrichment analysis of the 372 spatially DEGs revealed biological function categories highly enriched in transporters, channels and receptor activity, which are known to modulate membrane potential (Fig. 4g).Depletion of voltage-gated potassium channels or transmembrane potassium transporter concur with a poised state of activity that PFC neurons must maintain for working memory function, a feature not essential for adjacent motor or sensory cortices 33,46 .Greater enrichment of 'postsynaptic neurotransmitter activity' or 'glutamate receptor activity' (Fig. 4g) relative to adjacent cortices reaffirms that PFC retains substantial plasticity compared to these regions, even in adults.Curiously, some functions such as 'gated channel' or 'cation channel activity' show enrichment as well as depletion (Fig. 
4g).This indicates that PFC likely uses a different subset of receptors (class switching) for the same functions compared to adjacent cortices to adapt to its distinct electrophysiological needs. A signaling pathways enrichment analysis of these 372 genes revealed opioid signaling, endocannabinoid pathway and glutamate receptor signaling as the top three pathways (Extended Data Fig. 8c).While glutamate signaling is widespread in the cortex, opioid and cannabinoid signaling are more characteristic of the PFC (in mood, memory, feeding, etc.) [47][48][49] .This indicates that the distinct molecular composition of PFC is indeed tied to its specialized functions. Decoding the transcriptome-wide, spatially enriched, gene expression patterns also allowed us to investigate whether there is expression bias between subregions of the PFC.Indeed, we detected several genes (for example, Nnat, Fezf2, Nr4a1, Scn4b, etc.) that are preferentially expressed in certain subregions of the PFC (Fig. 4h and Extended Data Fig. 8d), which are also validated in Allen Brain ISH data (Extended Data Fig. 8d).Thus, subregion-specific functions of PFC are potentially enabled by discrete molecular compositions imparting specific electrical and signaling properties. Organization predicts subtype-specific interactomes in mouse PFC PFC integrates multilevel (thalamic, cortical and subcortical) inputs within local circuits for efficient cognitive processing [50][51][52] .We asked whether potential cell-cell interactions between neuron subtypes can be predicted by the organizational map revealed by MERFISH.We queried the cell subtype composition of the neighborhood of each cell and calculated the enrichments of paired subtype-subtype colocalizations.We also compared the interactions between the in-PFC and out-of-PFC regions, and presented them as two halves of a circle for each interaction (Fig. 5a).We found that enrichment of proximity was notable among many groups of cells (Fig. 5a).For example, first considering the inside PFC alone, the IT subtypes of L2/3 are closely apposed Genes analyzed by MERFISH are colored in black, and genes inferred by iSpatial are colored in yellow (two-sided Wilcoxon test, Bonferroni corrections for multiple comparison; genes with adjusted P < 0.01 and fold change > 1.2 defined significant) g, The gene ontology enrichment analysis of genes that are enriched or depleted in PFC (one-sided Fisher's exact test, Benjamini-Hochberg method for multiple comparison).h, Gene expression enrichment analysis of genes enriched in the different anatomical subregions of PFC and the adjacent cortical regions. Article https://doi.org/10.1038/s41593-023-01455-9 in the superficial layers and potentially engage in cortico-cortical interactions with sensory and association cortices (Fig. 5a).Interestingly, most of these subtypes have interactions with L4/5 IT subtypes (Fig. 5a) that receive exclusive inputs from thalamus or lower order cortex (because PFC has no clear L4), which are known to relay processed information to L2/3 (ref.52).Interestingly, our analysis also revealed specific interactions in the deeper layers that may not be apparent from the histological organization.For example, L6 IT neurons (like L6 IT 1) share proximity with specific ET neurons (L5 ET 2), revealing subtype selectivity (and, in turn, circuit selectivity) within L5-L6 communication (Fig. 
5a).Subtype selectivity is perhaps most important in excitatoryinhibitory coupling.Preferential pairing of many excitatory subtypes with one or few (but not all) specific Pvalb subtypes was detected (Fig. 5a).For example, L5 IT 3 scored the highest proximity with Pvalb 1, while L5 ET 2 (located within the same layer) has greater interaction probability with Pvalb 6 (Fig. 5a, highlighted boxes).Mapping cells onto a representative coronal section revealed the relative proximities of each of these two excitatory-inhibitory pairs, and also a different spatial enrichment of the Pvalb 1 and Pvalb 6 subtypes (Fig. 5b).While some proximities like Pvalb 1/L5 IT 3 and Pvalb 6/L5 ET 2 appeared similar both 'in PFC' and 'out of PFC' (Fig. 5a,b), many other subtypes show either weaker or altogether different cell-cell proximity features inside versus outside of PFC.For example, the L2/3 IT 3 neurons are close to L4/5 IT 2 neurons in the PFC region, but they are located far away outside the PFC region (Fig. 5a,a′,c).Thus, MERFISH allows the prediction of subtype-specific interactions based on spatial organization, which can be systematically studied in the future (by histology and physiology) to identify distinct circuits engaged by specific behaviors. Spatial and molecular organization of projections of mouse PFC It is well known that PFC's excitatory pyramidal neurons project to different subcortical targets including the striatum, NAc, thalamus, hypothalamus, amygdala, periaqueductal gray (PAG) or ventral tegmental area 16,53 .However, the spatial organization of projection neurons, and whether different neuron subtypes project to different targets, is not well characterized. A prior study performed scRNA-seq of PFC neurons retrogradely labeled from some of these major targets 2 .We integrated our PFC MERFISH data with this dataset to predict the PFC neuron subtypes and their spatial/layer location, which project to these different targets.Through joint embedding and supervised machine learning, we could assign respective projection identity to the molecular clusters organized in space within the PFC (Fig. 6a).An overlap of the MER-FISH and scRNA-seq clusters through UMAP visualization revealed a strong correspondence (Fig. 6b and Extended Data Fig. 9a).The receiver operating characteristic (ROC) curve for the prediction model independently predicted six different projection targets with high confidence, including contralateral PFC (cPFC), dorsal striatum (DS), hypothalamus, NAc, PAG and amygdala (Fig. 6c).Mapping these projection neurons onto a coronal slice of frontal cortex revealed the identity and spatial organization of neurons that project to each of these six targets within the PFC (Fig. 6d).Distinct spatial localization of each of these six groups of cells can be visualized when mapped individually on the coronal slice (Extended Data Fig. 9b).This analysis allowed us to associate different subsets of each neuronal type that project to different regions with their location within the PFC (Fig. 6e), which reveals that most of the target brain regions receive projection from more than one neuron subtypes.For example, the amygdala receives projections from all four subtypes of L6 CT neurons as well as L5 ET 1 neurons, but the majority comes from L6 CT 2. Likewise, the hypothalamus receives its projections from L5 ET 1 and L6 CT 1; DS from L6 CT 1, 2 and 3; and NAc gets mainly from L6 CT 1, L5 ET 1 and some from L6 CT 2. 
However, one exception is the PAG, which receives its projections almost exclusively from L5 ET, predominantly from L5 ET 1 (and some from L5 ET 2).Consistent with prior knowledge, superficial layer IT neurons project to the contralateral hemisphere of PFC 54 . To validate our computational model-based projection prediction, we injected retrograde labeling adeno-associated virus (AAV) (driving mCherry expression) into PAG and amygdala.Four weeks after the injection, single-molecule FISH (smFISH; RNAScope) showed that consistent with the prediction, all mCherry mRNA expressing PFC neurons, retro-traced from PAG, colabeled with Pou3f1, a selective marker for L5 ET 1 and L5 ET 2 (Fig. 6f).To further confirm, we also co-immunostained mCherry protein with Pou3f1 RNA-FISH showing that every mCherry-expressing neuron contained Pou3f1 mRNA (Fig. 6f).In amygdala, colocalization of mCherry was detected for both Pou3f1 (L5 ET 1) and Foxp2 (L6 CT) as predicted (Extended Data Fig. 9c).High-resolution confocal images revealed strong mCherry expression in subsets of Foxp2 + and Pou3f1 + neurons, respectively (Extended Data Fig. 9d).These data support the accuracy of our circuit predications. Identifying PFC neuron subtypes involved in chronic pain The role of PFC in cognition and executive function is most widely studied.However, PFC also has a pivotal role in autonomically modulating pain perception, and aberrations in this process are emerging as a major determinant in pain 'chronification' 5,12 .While chronic pain is escalating as a leading healthcare challenge 7 , molecular underpinnings of the dysfunction remain unknown.Chronic pain has been strongly associated with transcriptional adaptations across the PFC 5,55 ; however, the spatial or cell-type-specific resolution of these changes is less clear.Using MERFISH, we attempted to identify the specific PFC neuron subtypes that are impacted by chronic pain.To this end, we used the well-established spared nerve injury (SNI) model of chronic neuropathic pain in mice 56 where two of the three branches of the sciatic nerve are transected (Fig. 7a), which causes a state of chronic neuropathic pain in the hind paw that lasts for months.We performed SNI and sham (control: nerve exposed, but not transected) surgeries in adult male mice and conducted weekly von Frey tests to assess mechanical sensitivity.Strong mechanical allodynia characteristic of neuropathic pain developed in the SNI mice that persisted throughout the 6-week testing period (Extended Data Fig. 10a).Six weeks after surgery, brains from three pairs of sham and SNI mice were collected and characterized with MERFISH (Fig. 7a).UMAP visualization and overlap (Extended Data Fig. 10b), followed by transcriptomic cell-type comparisons (Extended Data Fig. 10c) affirmed high correlation and convergence of sham and SNI datasets ensuring a reliable comparison to reveal the effect of chronic pain.We employed Augur 57 to identify the cell type(s) that are transcriptionally most perturbed by chronic pain, which revealed L5 ET 1 as the most affected subtype followed by the L4/5 IT 1, L6 CT 2 and L5 ET 2 (Fig. 7b).The most number of DEGs were also detected in L5 ET 1 followed by L6 CT 2, L5 ET 2 and few others (Extended Data Fig. 
10d).No significant changes were detected in the other 30 clusters despite many of the excitatory neuron subtypes being highly abundant in PFC, suggesting that these clusters are minimally affected in chronic pain.Interestingly, the two most impacted clusters, respectively, project to PAG (L5 ET 1) and amygdala (L6 CT 2; Fig. 6e), the two major hotspots known to regulate sensory and affective aspects of pain 5,58 . Chronic pain is known to inflict strong and sustained hypoactivity across the PFC 9,10,12,59 .We asked whether this can be detected in the baseline expression of neuronal ARGs to identify prominently affected neuron subtypes.We calculated the ARG score using the mean expression of a panel of five ARGs (Arc, Junb, Fos, Npas4 and Nr4a1) and compared between sham and SNI groups (Methods).We observed a strong and widespread reduction of ARG score when it is plotted on representative coronal sections (Fig. 7c).A subregion-specific analysis revealed that the ACAd and PL are the most impacted PFC subregions (Fig. 7d).We next compared the differences of ARG score across the individual excitatory neuron clusters (Fig. 7e), and found that it is downregulated in several clusters, including those exhibiting transcriptional changes (for example, L5 ET 1 and L6 CT 3; Extended Data Fig. 10d).We also examined whether there is a biased decline in ARG score between the two PFC hemispheres of individual mice (Extended Data Fig. 10e).Despite some trends, no significant difference was detected, potentially indicating that prolonged chronic pain states (like 6 weeks here) can trigger more widespread impact in brain.Stronger ipsilateral versus contralateral bias may be evident during the early stages of pain 'chronification' and remains an interesting subject of future study. To validate chronic pain-induced hypoactivity across PFC, we performed smFISH to compare Fos expression between sham and SNI brain sections.Although sham shows a baseline Fos activity in PFC, a general decrease in Fos signal is obvious in the SNI (Extended Data Fig. 10f).Costaining Fos with Pou3f1, a selective marker for L5 ET 1, revealed significant Fos downregulation in this neuron subtype in the SNI brains (Fig. 7f-h).At high resolution, strong differences in RNA counts can be clearly visualized in single neurons (Fig. 7g).While Fos is commonly used, we employed two more ARGs for validation, Npas4 and Arc, both of which are reduced in PFC, particularly in L5 ET 1 neurons labeled by Pou3f1 (Extended Data Fig. 10g,h).Thus, the PAG projecting Pou3f1 neurons underwent one of the strongest reductions of ARG activity score, indicating potential impact on its baseline electrical activity. Despite the conventional knowledge that a PFC-PAG circuit is involved in the descending modulation of pain 5 , its cell-type identity or changes in chronic pain were unclear.Our findings revealed the molecular identity and spatial organization of this circuit-the L5 ET 1 neurons with PAG projection (Fig. 6e), which underwent strong reduction of ARG activity score in chronic pain (Fig. 7f,g) and also incurred the most transcriptional perturbation (Fig. 7b).Additionally, we also identified at least two CT subtypes in L6 (L6 CT 2 and 3) that project to limbic structures such as amygdala, NAc and hypothalamus (Fig. 6e) that may be involved in the affective response to pain. 
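The ARG score used above reduces to a gene-wise z-scoring followed by a per-cell average over the ARG panel (see Methods). A minimal Python sketch of that calculation is given below; the data-frame names, the example panel and the sham/SNI comparison at the end are illustrative placeholders rather than the code used in the study.

```python
import numpy as np
import pandas as pd

def arg_score(expr: pd.DataFrame, arg_panel) -> pd.Series:
    """Cell-wise activity-regulated gene (ARG) score.

    expr      : cells x genes expression matrix (normalized counts).
    arg_panel : list of ARG names present in expr.columns.
    Each ARG is z-scored across all cells, then the z-scores are averaged
    per cell, so genes with different baseline expression contribute comparably.
    """
    panel = expr[list(arg_panel)].astype(float)
    z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    return z.mean(axis=1)

# Hypothetical usage: compare sham vs SNI cells of one subtype.
# 'expr' and 'meta' stand in for the MERFISH expression matrix and
# per-cell metadata (condition, subtype labels).
# score = arg_score(expr, ["Arc", "Junb", "Fos", "Npas4", "Nr4a1"])
# delta = score[meta.condition == "SNI"].mean() - score[meta.condition == "sham"].mean()
```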
Discussion In this study, we present an account of how the PFC is distinctly organized at the molecular, cellular and projection levels relative to the adjacent regions within the frontal cortex.We exploit these features to reveal the molecular identity of key neuron subtypes that are engaged in chronic pain, and more broadly, we provide a foundation for systematic mapping of functional ensembles and circuits selectively engaged in various cognitive and executive functions of PFC.Spatial transcriptomics is a rapidly developing field 60 , and similar to recent studies 25,27,61 , MERFISH enabled systematic decoding of PFC's molecular and cellular organization. The cellular composition of a cortical area should be dictated by the input and output circuits associated with its function.We observed that a variety of neuronal subtypes are specifically enriched in PFC (Fig. 3a-c).This regional subtype specificity potentially underlies the characteristic properties of the PFC relative to other cortical regions.For example, the PFC is agranular and lacks a typical L4 (associated with thalamic input), it receives long-range inputs across all of its layers and projects to subcortical targets from different layers, and engages in reciprocal circuits with most of these targets 16,62 .The twofold enrichment of the superficial-most IT neurons (L2/3 IT 1) likely facilitates cortico-cortical communications, but the subsequent IT populations (L2/3 IT 4 or L4/5 IT 1) are markedly depleted to likely make room for more L5 IT 1 or L5 ET 1 that engage in long-distance subcortical projections.Enrichment of two CT subtypes (L6 CT 2 and 3) is consistent with the observation that CT neurons of PFC project to several subcortical targets (Fig. 6e), rather than thalamus alone.Notably, two of these enriched neuron subtypes (L5 ET 1 and L6 CT 2) emerge as key players in chronic pain, a function majorly assigned to the PFC within the cerebral cortex (Fig. 7).Inhibitory cell composition also has very substantial implications.For example, depletion of certain subtypes of Pvalb (Pvalb 1, 2 and 6), also resulting in an overall lower count of Pvalb neurons in the PFC (relative to the adjacent regions), suggests that feedforward inhibition (and hence excitatory/inhibitory balance) is differently organized in PFC.This is an important observation because functional imbalance of Pvalb neurons has been implicated in almost every PFC-associated disease, such as schizophrenia 63 , bipolar, depression and chronic pain 64 .Detection of all these regional differences would not be possible without spatial profiling techniques like MERFISH. Besides cellular composition, we detected strong transcriptional features (especially with ion channels or receptors) specific to the PFC compared to adjacent cortices (Fig. 4).It is generally appreciated that different cortical regions have different baseline electrical properties and qualitatively different activity patterns, which in turn is critical for their specific function 21 .Recording of electrical field potentials across cortical areas provides strong evidence supporting such regionally variable activity patterns 65,66 .However, the biological/molecular bases of such functional differences have been less clear.Our findings revealing preferential expression or repression, or even subtype switch of a wide range of ion channels, and key glutamate receptor subunits in PFC demonstrate potential mechanisms underlying regionally specific electrical properties in the cortex. 
We identified the key cell types in PFC that are specifically impacted by chronic pain (Fig. 7). Amidst the rising prevalence of chronic pain and the emerging consensus that the transition to chronic pain is centrally regulated, there has been little clarity about the cellular and circuit mechanisms underlying the 'chronification', which is key to therapeutic targeting. Previous studies have shown that transcranial stimulation of PFC could relieve chronic pain 9,11,13,67. Such studies, although they established a causal connection, provide limited long-term solutions for pain management owing to the potentially deleterious effects of broad nonspecific cortex-wide stimulations. Despite a long-standing general knowledge of putative PFC to PAG projections in descending inhibition of pain 5, the molecular identity of this circuit was unknown. In this regard, our finding of L5 ET 1 as a major neuron subtype exclusively projecting to the PAG, and undergoing transcriptional changes in the chronic pain state, is of particular relevance. While the reduced activity of L5 ET 1 can impair descending inhibition to potentiate physical pain, it remains to be determined whether it also contributes to the affective component of pain. However, L6 CT 2 and L6 CT 3, the other two implicated clusters, project to multiple limbic regions including amygdala, NAc and hypothalamus, and their dysfunctions may elicit the strong negative affect characteristic of chronic pain states 58,68. All these remain valuable prospects for future functional studies through targeted neuronal activity manipulation using genetically engineered animal models. Leveraging the resolution of MERFISH, this study revealed a wealth of information about the distinct cellular, molecular and circuit organization, as well as functional properties, of the mouse PFC. However, some limitations remain, which are as follows: (1) despite better resolution in clustering (than scRNA-seq), characterization of inhibitory neuron function remained limited. While the biological replicates per group (n = 3) in MERFISH are sufficient to support the current findings, a larger number of mice or more sensitive techniques in the future may resolve this issue and clarify the role of inhibitory neurons in chronic pain; (2) although MERFISH can achieve very efficient single-cell resolution, it is restricted to a fixed library of preselected genes. Accordingly, studies are limited to preformed hypotheses, and new transcriptome-wide changes cannot be discovered; other, lower-resolution spatial techniques can be integrated into future studies to address this; and (3) while ARGs provide an effective means to determine the activity history of neurons and help narrow down causal subtypes, physiological slice recordings of these cells (from sham and SNI mice) in future studies would be necessary to provide the ultimate proof for neuronal activity changes under chronic pain.
Mice and surgery All experiments were conducted in accordance with the National Institute of Health Guide for Care and Use of Laboratory Animals and approved by the Institutional Animal Care and Use Committee (IACUC) of Boston Children's Hospital and Harvard Medical School.Wildtype male C57BL6 mice of about 10 weeks old were used for all experiments in the study.Mice were maintained at 12-h light/12-h dark cycles with food and water ad libitum.For the SNI surgery, mice were anesthetized with ketamine.Hair was shaved above the knee on one side (usually left) and the skin was sterilized with iodine and isopropanol.The muscles were separated by blunt dissection to expose all three branches of the sciatic nerve.The tibial and common peroneal branches of the nerve that run parallel were tied tightly with two sutures and a piece between the two ties was transected and removed.Care was taken that the third branch (sural nerve) was untouched during the whole procedure.The retracted muscles were released, and the skin was stitched back.In the sham surgery group, identical steps were followed to expose the nerve, but no transection was performed, and skin was stitched back in position.Six weeks after the surgery, brains were collected to assess the impact of chronic pain.Mice were tested the day before being harvested to confirm ongoing mechanical allodynia.On the day of harvest, any acute handling was avoided, and SNI/sham mice were taken directly from their home cage to euthanasia and brains were immediately harvested and frozen (within 5-7 min).These brains were eventually sectioned at 14-µm thickness to collect samples along the A-P axis and subjected to MERFISH. MERFISH library design and encoding probes The MERFISH library of 416 genes belongs to the following three categories: (1) cell-type markers; (2) neuronal function regulatory genes and (3) neuronal ARGs.Cell-type markers were determined based on our previous bulk and single-cell sequencing of the PFC that were used to distinguish different cell subtypes and subtypes, with priority for neurons (major cell-type markers, subtype markers and cortical layer markers).The functional genes are comprised of genes regulating neural activity and function such as ion channels, receptors and neuropeptides.We started with a comprehensive list of all ion channels, receptors and peptides reported in cortex 21 and then refined to only those expressed in PFC 17 .Finally, based on previous studies 25 , we selected a panel of neuronal ARGs whose expression can report the activity history of neurons. A library of MERFISH encoding probes for all target genes was generated as described previously 25 .Briefly, a unique binary barcode was assigned to each gene based on an encoding scheme with 24 bits, a minimum Hamming distance of 4 between all barcodes, and a constant Hamming weight of 4. 
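To make the MHD4-style encoding concrete, the sketch below greedily assembles 24-bit binary words with a constant Hamming weight of 4 and a pairwise Hamming distance of at least 4. It is a toy illustration of the stated constraints only, assuming nothing about the actual codebook-generation procedure used for the probe library.

```python
from itertools import combinations

N_BITS, WEIGHT, MIN_DIST = 24, 4, 4

def hamming(a: int, b: int) -> int:
    """Number of bit positions in which two words differ."""
    return bin(a ^ b).count("1")

def build_codebook(max_codes=200):
    """Greedily collect weight-4 words that keep pairwise distance >= 4."""
    codebook = []
    for ones in combinations(range(N_BITS), WEIGHT):
        word = sum(1 << i for i in ones)
        if all(hamming(word, c) >= MIN_DIST for c in codebook):
            codebook.append(word)
            if len(codebook) >= max_codes:
                break
    return codebook

codes = build_codebook()
# Each code can be written out as a 24-character bit string and assigned to a
# gene; unassigned words can be held back as 'blank' barcodes.
print(len(codes), format(codes[0], "024b"))
```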
This barcoding scheme left 60 'blank' barcodes unused to serve as a measure of false-positive rates.For each gene, 50 to 70 30-nt-long targeting regions with limited homology to other genes and narrow melting temperature and GC ranges were selected, and individual encoding probes to that gene were created by concatenating two 20-nt-long readout sequences to each of these target regions.Each of the 24 bits was associated with a unique readout sequence, and encoding probes for a given gene contained only readout sequences for which the associated bit in the barcode assigned to that gene contained a '1'.Template molecules to allow the production of these encoding probes were designed by adding flanking PCR primers, with one primer representing the T7 promoter.This template oligopool was synthesized by Twist Biosciences and enzymatically amplified to produce encoding probes using published protocols 25 . MERFISH tissue processing and imaging Tissue was prepared for MERFISH as described previously 25 .Briefly, mice were killed under CO 2 and brains were quickly harvested and rinsed with ice-cold calcium and magnesium-free PBS.The brains were frozen on dry ice and stored at −80 °C till sectioning.The frozen brains were embedded in OCT on a mixture of ethanol and dry ice.Serial 14-µm-thick frontal cortex sections spaced about 150 µm apart were collected and placed on poly-d-lysine coated, silanized coverslips containing orange fiducial beads prepared as described previously 25 .The sections were allowed to briefly air dry and immediately fixed with 4% PFA for 10 min.Sections were washed in PBS and stored in 70% ethanol for at least 12 h to permeabilize.The sections were washed in hybridization buffer (2× SSC + 30% formamide) and then drained and inverted over parafilm in petri dish onto a 50 µl droplet of mixture containing encoding probes and a poly(A) anchor probe 25 in hybridization buffer (2× SSC, 30% formamide, 0.1% yeast tRNA and 10% dextran sulfate) and hybridized in a covered humid incubator at 37 °C for 2 d.Coverslips were then washed in hybridization buffer and the sections were embedded into a thin film of poly-acrylamide gel, as described previously.The embedded sections were then digested for 2 d in a 2× SSC buffer containing 2% SDS, 0.5% Triton X-100 and 1:100 proteinase K.The coverslips were washed and stored in 2× SSC at 4 °C until imaging.MERFISH imaging was performed on a custom microscope and flow system, as described previously 25 .In each imaging round, the volume of each slice was imaged by collecting a z stack at each field-of-view containing 10 images each spaced by 1 µ.Twelve imaging rounds using two readout probes per imaging round were used to read out the 24-bit barcodes.Readout probes were synthesized by Biosynthesis and contained either a Cy5 or Alexa750 conjugated to the oligonucleotide probe via a disulfide bond, which allowed reductive cleavage to remove fluorophores after imaging, as described previously.A readout conjugated to Alexa488 and complementary to a readout sequence contained on the polyA anchor probe was hybridized with readouts associated with the first two bits in the first round of imaging. 
Image processing, decoding and cell segmentation MERFISH data were decoded as previously described 25. Briefly, images of fiducial beads collected for each field-of-view in each imaging round were used to align images across imaging rounds. RNAs were detected using a pixel-based approach, in which images were first high-pass filtered, deconvolved and low-pass filtered. Differences in the brightness of different imaging rounds were corrected by an optimized set of scaling values, determined from an iterative process of decoding performed on a randomly selected subset of fields-of-view, and the intensity trace for individual pixels across all imaging rounds was matched to the barcode with the closest predicted trace as judged via a Euclidean metric and subject to a minimum distance. Adjacent pixels matched to the same barcode were aggregated to form putative RNAs. RNA molecules were then filtered based on the number of pixels associated with each molecule (greater than 1) and their brightness to remove the background. As described previously 25, the identification of cell boundaries within each field of view (FOV) was performed by a seeded watershed approach using DAPI images as the seeds and the poly(A) signals to identify segmentation boundaries. Following segmentation, individual RNA molecules were assigned to specific cells based on localization within the segmented boundaries. Preprocessing of MERFISH data The decoded data were preprocessed by the following steps: (1) segmented 'cells' with a cell body volume less than 100 µm³ or larger than 4,000 µm³ were removed; (2) cells with total RNA counts of less than 10 or higher than the 98% quantile, and cells with fewer than 10 total RNA features, were removed; (3) to correct for the minor batch fluctuations in different MERFISH experiments, we normalized the total RNA counts per cell to the same value (500 in this case); (4) doublets were removed by Scrublet 69 and (5) the processed cell-by-gene matrix was converted to a gene-by-cell matrix and then loaded into Seurat V4 (ref. 70) for downstream analysis. The matrix was log-transformed by the Seurat standard pipeline. Cell clustering Two rounds of cell clustering were used to identify cell types and subtypes. In the first round, we identified the following three major cell types: excitatory neurons, inhibitory neurons and non-neuronal cells. In the second round, each major cell type was further clustered. Excitatory neurons were further clustered into 18 subtypes, inhibitory neurons into 19 subtypes and non-neuronal cells into 15 subtypes. Then, we separated the excitatory subtypes into the following seven groups according to the neuronal projection: L2/3 IT, L4/5 IT, L5 IT, L6 IT, L5 ET, L5/6 NP and L6 CT. The inhibitory neurons were cataloged into the following five groups based on the main markers: Lamp5, Pvalb, Sncg, Sst and Vip. Non-neuronal cells were cataloged into the following six groups: endothelial cells, microglia, oligodendrocytes, OPC, astrocytes and VLMC. Each round of clustering follows the same workflow as described previously. First, all gene expression was centered and scaled via a z score, and PCA was applied to the scaled matrix. To determine the number of principal components (PCs) to keep, we used the same method described before 25,61. Briefly, the scaled matrix was randomly shuffled and PCA was performed based on the shuffled matrix. This shuffling step was repeated 10 times, and the mean eigenvalue of the
first principal component crossing the 10 iterations was calculated.Only the PCs derived from the original matrix that had an eigenvalue greater than the mean eigenvalue were kept.Harmony 71 was then used to remove the apparent batch effect among different MERFISH samples.The corrected PCs were used for cell clustering.The nearest neighbors for each cell were then computed by a K-nearest neighbor (KNN) graph in corrected PC space.Bootstrapping was used for determining the optimal k value for KNN as previously described 25,61 (k = 10 in the first round clustering.k = 50, 20 and 15 for excitatory neurons, inhibitory neurons and non-neuronal cells, respectively, in the second round).Leiden method was used for detecting clusters 72 .The resolution was set to 0.3 in the first round of clustering and was set to 2 for the second round.Finally, we manually removed the clusters representing doublets, which express high levels of the established markers of multiple cell types.Clusters located outside of the cortex were also removed. Correspondence between scRNA-seq and MERFISH clusters To compare the cell clusters identified by scRNA-seq and MERFISH, we first co-embedded the two datasets in a corrected PCA space using Harmony as described above.Then, all the cells from both scRNA-seq and MERFISH were used to build the KNN graph.The first 30 corrected PCs were inputted into Seurat::FindNeighbors to compute the KNN.For each cell cluster in MERFISH, we obtained the cell cluster's nearest 30 neighbor cells' information.Then, we calculated the percentages of the cell clusters derived from scRNA-seq that were near this MERFISH cluster, from which we obtained a correspondence matrix, where each row is a cluster from scRNA-seq, each column is a cluster from MERFISH and the element in the matrix indicates the similarity between the two clusters.Similarly, for each cell cluster in scRNA-seq, we inquired about the nearest clusters derived from MERFISH data to generate another correspondent matrix.The average of the two correspondent matrices was used to indicate the similarities between the cell clusters defined by scRNA-seq and MERFISH. Cortical depth measurement After MERFISH cell segmentation, the cells' spatial location was determined by their centroid coordinates.For each cell in PFC region, the cortical depth was measured as the shortest spatial distance between the cell location and the cortical surface line.VLMC cells are a monolayer located on the cortical surface and used to label the cortical surface in each MERFISH slice. Imputing transcriptome-wide expression by iSpatial MERFISH reveals gene expression and location at single-cell resolution but only targets 416 predefined genes.We used iSpatial (version 1.0.0) to infer the transcriptome-wide spatial expression.In short, iSpatial co-embed MERFISH and single-cell RNA-seq datasets by two sequential rounds of integration.The scRNA-seq data are from our prior PFC scRNA-seq data 17 .For each cell of the MERFISH dataset, iSpatial searches for the nearest neighbors from scRNA-seq data by a weighted k-nearest neighbors model.Then, the expression values of the neighbors are assigned to the MERFISH data, resulting in a transcriptome-wide spatial expression profile. 
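The neighbor-transfer step at the heart of this imputation can be sketched as a weighted k-nearest-neighbor average in the shared, batch-corrected embedding. The Python snippet below is a simplified stand-in for iSpatial, with inverse-distance weighting and placeholder variable names; it is not the package's actual implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def impute_expression(merfish_emb, scrna_emb, scrna_expr, k=30, eps=1e-8):
    """Assign transcriptome-wide expression to MERFISH cells.

    merfish_emb : (n_merfish, d) coordinates of MERFISH cells in the
                  batch-corrected co-embedding (e.g., corrected PCs).
    scrna_emb   : (n_scrna, d) coordinates of scRNA-seq cells in the same space.
    scrna_expr  : (n_scrna, n_genes) scRNA-seq expression matrix (ndarray).
    Returns an (n_merfish, n_genes) matrix of imputed expression.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(scrna_emb)
    dist, idx = nn.kneighbors(merfish_emb)
    w = 1.0 / (dist + eps)                   # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    # Weighted average of the neighbors' expression profiles.
    return np.einsum("ik,ikg->ig", w, scrna_expr[idx])
```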
Detecting neuronal subtype or gene expression enriched/depleted in the PFC For each brain slice, we first normalized the total number of cells detected for each neuronal subtype, as the selected PFC-adjacent regions for MERFISH imaging showed little difference across slices. The ratio of the normalized cell number in the PFC to that outside the PFC was calculated for each neuronal subtype. Then, the ratio was log2 transformed and used to indicate the subtype enrichment/depletion in the PFC. To detect the differentially expressed genes between the cells in the PFC and adjacent regions, the Wilcoxon rank-sum test was applied to each gene. Then Bonferroni correction was used to adjust the P value for multiple comparisons. Seurat FindAllMarkers was used for this analysis. Gene ontology enrichment analysis and ingenuity pathway analysis (IPA) Based on the transcriptome-wide spatial expression that was inferred by iSpatial 45, the DEGs between the PFC and adjacent regions were calculated and then used for gene ontology enrichment analysis and IPA. The compareCluster function from clusterProfiler (version 4.0.2) 73 was used for gene ontology enrichment analysis. All the inferred genes were chosen as the background list. IPA software (version Spring Release, April 2022) 74 was used for IPA. IPA not only identifies the most significant pathways but also the pathways predicted to be activated or inhibited based on the input gene list. IPA calculates the following two different statistics: (1) for the P value, IPA uses a Fisher's exact test to calculate the likelihood of the overlap between the input genes and the known pathways. The significance indicates the probability of association of input genes with the pathway by random chance alone; (2) for the z score, IPA considers the directional effect (activation or inhibition) of the genes' expression changes on a pathway. ARG score calculation The ARG score is calculated based on the expression of the following five neuronal activity-related genes 75: Arc, Junb, Fosb, Npas4 and Nr4a1. Because the baseline expression of these genes is different, we first z scored the expression by gene to standardize the expression. The z score is calculated from a gene's expression in each cell by subtracting the mean expression level and then dividing by the standard deviation of that gene across all cells. For each cell, we calculated the ARG score by averaging the z scores of the five neuronal activity-related genes. Cell-cell proximity For each cell, we first identified the nearest 30 neighbors based on spatial distance. Next, we derived the cell subtypes of these neighboring cells and obtained the cell subtype composition of the cells near the queried cell. After iterating over all cells in all subtypes, we could calculate the number of occurrences of each subtype-subtype pair and obtain the cell-cell proximity matrix (observed matrix). Because of the cell number differences for each subtype, we normalized the cell-cell proximity matrix by a randomly shuffled matrix (expected matrix). To derive the shuffled matrix, we first shuffled the cell identities by randomly assigning a subtype to each cell. Then, the random cell-cell proximity matrix was calculated by the same method as before. Finally, the normalized cell-cell proximity matrix was calculated by log2 (observed matrix/expected matrix). In addition, the P values were calculated by Wilcoxon rank tests (using wilcox.test in R) and then adjusted by the Benjamini-Hochberg method (using p.adjust in R, method = 'BH').
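A bare-bones version of the neighborhood-enrichment calculation just described might look as follows: count subtype pairs among each cell's 30 nearest spatial neighbors, repeat after shuffling the subtype labels, and take log2(observed/expected). The snippet assumes simple array inputs and uses a single shuffle and a pseudocount for brevity; the P value step is omitted.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def proximity_enrichment(xy, labels, k=30, seed=0):
    """log2(observed/expected) co-occurrence of subtype pairs among spatial neighbors.

    xy     : (n_cells, 2) spatial coordinates.
    labels : per-cell subtype labels (length n_cells).
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    types = np.unique(labels)
    t_index = {t: i for i, t in enumerate(types)}
    # k+1 neighbors, then drop the first column (the cell itself).
    idx = NearestNeighbors(n_neighbors=k + 1).fit(xy).kneighbors(
        xy, return_distance=False)[:, 1:]

    def pair_counts(lab):
        counts = np.zeros((len(types), len(types)))
        for cell, neigh in enumerate(idx):
            i = t_index[lab[cell]]
            for n in neigh:
                counts[i, t_index[lab[n]]] += 1
        return counts

    observed = pair_counts(labels)
    expected = pair_counts(rng.permutation(labels))
    return np.log2((observed + 1) / (expected + 1))   # pseudocount avoids log of zero
```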
Excitatory neuron projection prediction The scRNA-seq data (GEO: GSE161936) 2 were first preprocessed by standard Seurat pipeline.Only the cells from dorsomedial (dmPFC) and ventromedial (vmPFC) regions were used.We integrated the MERFISH and scRNA-seq data using Harmony, and all the cells derived from MERFISH/scRNA-seq were co-embedded on a corrected PCA space.The first corrected 30 PCs were selected as features to train a multiclass support vector machine (SVM) for predicting the neuronal projection.The cells from scRNA-seq were separated into training and test groups.Then, the SVM was trained on training data and validated on test data by using the radial basis function kernel.Gamma was set to 0.01, and cost was set to 10.The ROC curve was plotted to evaluate the performance using pROC package 76 in R. Finally, the model was applied to MERFISH cells to predict their projections, and the area under the curve (AUC) was equal to 0.913. Register MERFISH slice to Allen Brain Atlas To align MERFISH slices to the Allen Brain Atlas CCF v3, we leveraged the spatial distribution of cells identified by MERFISH in each slice as well as DAPI images of that slice.First, each brain slice was paired to the closest matching coronal section in CCF v3 with the help of DAPI image and spatial location of the cell types.Then, we modified the WholeBrain package 77 to align the MERFISH slice to the corresponding matching CCF coronal section.To ensure accurate alignment, we leveraged the MERFISH cell typing result at single-cell resolution and used certain cell types as anchors to help locate the anatomic features.VLMC cells are used for marking the surface of brain slice as follows: inhibitory neuron subtype, Lamp5 3, for locating layer 1, L2/3 IT neurons for locating layer 2, L6 CT neurons for locating layer 6 and oligodendrocytes for locating corpus callosum.Because some small slices do not have sufficient features to align, 45 of 60 slices are successfully registered to CCF v3, which allowed us to define the anatomic PFC and PFC subregions. Testing mechanical allodynia in SNI mice Mechanical allodynia due to SNI neuropathy was tested in sham and SNI mice using von Frey monofilaments.Animals were placed in testing room to habituate for 30 min.After 30 min, mice were placed in enclosures (Ugo Basile, 37000-006) on a perforated metal platform (Ugo Basile, 37450-005) with opaque walls separating mice for an additional 30 min to habituate.We employed the widely used up-down method for testing, as originally described in ref. 78.Starting with the base filament, higher or lower filaments were consecutively applied till a response was documented-followed by the series of four filaments to document response patterns, based on which the paw withdrawal threshold was calculated 78 . Mice were tested the day before surgery to ensure a uniform baseline sensitivity across cohorts.SNI surgery was performed the following day.After 7 d, the mechanical sensitivity was similarly tested every seventh day up to sixth week.On the day after the final test, mice were killed and brains were collected for MERFISH. 
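In outline, the projection-prediction model is a standard multiclass SVM trained on the shared, batch-corrected PC space of the retrogradely labeled scRNA-seq cells and then applied to the MERFISH cells. The sketch below uses scikit-learn with the stated RBF kernel, gamma = 0.01 and cost = 10; the split proportions and variable names are illustrative assumptions, and the original analysis evaluated ROC curves with the pROC package in R.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def fit_projection_classifier(pcs_scrna, targets, pcs_merfish):
    """Train on retrogradely labeled scRNA-seq cells, predict MERFISH projection targets.

    pcs_scrna   : (n_scrna, 30) batch-corrected PCs of scRNA-seq cells.
    targets     : projection labels for those cells (e.g., 'PAG', 'NAc', ...).
    pcs_merfish : (n_merfish, 30) PCs of MERFISH cells in the same space.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        pcs_scrna, targets, test_size=0.3, stratify=targets, random_state=0)
    clf = SVC(kernel="rbf", gamma=0.01, C=10, probability=True).fit(X_train, y_train)

    # One-vs-rest AUC on the held-out scRNA-seq cells.
    proba = clf.predict_proba(X_test)
    auc = roc_auc_score(y_test, proba, multi_class="ovr")

    # Assign a predicted projection target to every MERFISH cell.
    return clf.predict(pcs_merfish), auc
```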
DEGs between pain and control conditions To detect DEGs and correct the batch effects, we used a logistic regression framework.For each gene, we constructed a logistic regression model to predict the sample conditions C by considering the batch information S, C ~ E + S, and compared it with a null model, C ~ 1 + S, with a likelihood ratio test.Then, the Bonferroni correction method was applied to adjust for multiple comparisons.Here 'LR' method in Seurat FindAllMarkers was used for conducting this analysis. Sample sizes and statistical tests for all experiments in this manuscript were determined based on established literature in the field from us and others who have reported single-cell transcriptomics, viral tracing and pain assays.Moreover, samples were randomized, and data collection was blinded during experiments wherever appropriate to avoid any skewing or bias in data collection.No data points were excluded. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Fig. 1 | Fig. 1 | MERFISH reveals the molecularly diverse cell types and subtypes comprising the PFC and adjoining cortices.a, UMAP visualization of all cells identified by MERFISH.Cells are color-coded by their identities (number of cells = 487,224).b, Dendrogram showing the hierarchical relationship among all molecularly defined cell subtypes (number of cells = 487,224).The expression of marker genes is shown below.The color represents the normalized expression, and the dot size indicates the percentage of cells expressing each gene. Fig. 2 | Fig. 2 | Spatial organization of different neuron subtypes in PFC.a, Coronal MERFISH slices showing the spatial organization of neuron subtypes from anterior to posterior end in PFC and adjacent regions.The dotted lines indicate the PFC region.The color scheme is the same as in Fig. 1c.b, Heatmap showing the proportions of neuron subtypes within PFC from anterior to posterior (A to P) sections in excitatory (left) and inhibitory (right) neurons.c, Spatial organization of L4/5 IT 1 and L5 ET 1 as examples from A to P sections.d,e, Violin plots showing the cortical depth distributions of excitatory neuron subtypes (d) and inhibitory neuron subtypes (e) in PFC (cell number = 121,617).The maximum cortical depth is normalized to 1. f, Spatial location of five representative neuron subtypes (excitatory neuron subtypes: L2/3 IT 2, L5 ET 1, L5/6 NP; and inhibitory neuron subtypes: Lamp5 3 and Pvalb 3) in PFC on a coronal slice.Red dots mark the indicated cell types and gray dots mark the other cells. Fig. 3 | Fig. 3 | Distinct neuron subtypes are selectively enriched or depleted in PFC relative to the adjacent cortical regions.a, UMAP of all MERFISH cells colored by their spatial location whether in or out of PFC (in = 121,617 cells; out = 285,445 cells).b, Barplot showing the log 2 of the abundance ratio of subtype neurons in or out of PFC.c, Spatial location of excitatory neuron subtypes enriched (left), depleted (middle) and unbiased (right) distribution in PFC compared with Fig. 4 | Fig. 
4 | Genes with expression enriched or depleted in PFC.a, Volcano plot showing the DEGs that are enriched or depleted in PFC neurons relative to the neurons out of PFC (in = 121,617 cells; out = 285,445 cells).Expression of genes enriched, depleted in PFC, are colored in red and blue dots, respectively (two-sided Wilcoxon test, Bonferroni corrections for multiple comparison; genes with adjusted P < 0.01 and fold change > 1.2 defined significant).b, Spatial gene expression of Nnat (top) and Scn4b (bottom) in all excitatory neurons.The dotted line marks PFC region.c, ISH data from Allen Brain Atlas showing the spatial expression of Nnat and Scn4b in a coronal slice (right) with zoom-in (left).d, UMAP of all MERFISH cells (bottom-left) and excitatory neurons colored by the PFC signature, which is defined as the average expression of top ten enriched genes minus the average expression of top ten depleted genes.e, Aligning the PFC signature onto a representative slice to show the spatial distribution of PFC signature.f, Volcano plot showing the expressions of genes enriched or depleted in PFC after imputing by iSpatial.A total of 20,733 genes are analyzed.Genes analyzed by MERFISH are colored in black, and genes inferred by iSpatial are colored in yellow (two-sided Wilcoxon test, Bonferroni corrections for multiple comparison; genes with adjusted P < 0.01 and fold change > 1.2 defined significant) g, The gene ontology enrichment analysis of genes that are enriched or depleted in PFC (one-sided Fisher's exact test, Benjamini-Hochberg method for multiple comparison).h, Gene expression enrichment analysis of genes enriched in the different anatomical subregions of PFC and the adjacent cortical regions. Fig. 5 | Fig. 5 | Cell-cell proximity across all neuronal cell types.a, Enrichment of cell-cell proximity of different subtypes shown in dot plot (total number of neurons = 32,811).The left half of each dot indicates the cell-cell proximity in PFC and the right half dot indicates that outside the PFC.The color represents log 2 -transformed observation to expectation of colocalized frequency of the two clusters.The size of dots indicates the significance of the colocalization (a′ shows an enlarged inset highlighting examples of proximities that are different 'in' and 'outside' of PFC).b, A representative slice showing the cell locations of Pvalb 1 and L5 IT 3 neurons (left), and Pvalb 6 and L5 ET 2 neurons (right).c, A representative slice showing the cell locations of L2/3 IT 3 and L4/5 IT 2 neurons.The dotted lines mark the PFC region. Fig. 6 |Fig. 7 | Fig. 
6 | Spatial and molecular organization of PFC projection to the major subcortical targets.a, Schematics of the strategy for inferring neuronal projection of MERFISH clusters.The MERFISH and scRNA-seq data are integrated into a reduced dimensional space.An SVM is used to predict neuronal projection of the MERFISH neuron subtypes (Methods).b, UMAP visualization of cells derived from MERFISH (9,544 cells) and scRNA-seq (4,294 cells) data after integration.c, The ROC curves show the prediction powers of six projection targets; w/o represents the cells without projection information.d, A coronal slice showing in silico retrograde tracing from six injection sites, labeled by different colors as indicated.e, The inferred projection targets of molecularly defined excitatory neuron subtypes, represented by an alluvial diagram.f, PFC to PAG projection validation.Retrograde mCherry-expressing AAV was injected in PAG-injection scheme cartoon and injection site in PAG are shown (scale bar = 0.5 mm); brain slice of PFC was used for smFISH.mCherry (red) labeled neurons coexpressing the L5 ET 1 marker Pou3f1 (green), arrows in the enlarged image indicate double-labeled neurons.Co-immunostaining for mCherry protein with Pou3f1 RNA-FISH further confirmed extensive colocalization.(scale bars = 20 µm).Bar graph shows the percentage of mCherry positive cells that also express Pou3f1 (mean ± s.e.m., two-tailed t test, n = 4 biologically independent adult male mice, P < 0.001).Majority of mCherry + neurons are Pou3f1 + .Hypo, hypothalamus. Articlehttps://doi.org/10.1038/s41593-023-01455-9 Extended Data Fig. 1 | The workflow and quality control for MERFISH profiling.a, The workflow of MERFISH profiling of mouse PFC, including MERFISH imaging, decoding, segmentation and data analysis.b, Scatterplot showing the spearman correlation of the RNA counts per cell of individual genes measured by MERFISH in two independent experiments.c, Scatterplot of the RNA counts per cell of individual genes measured by MERFISH versus bulk RNAseq data.The counts are natural logarithms.d, Spatial gene expression of three representative genes detected by MERFISH.In situ hybridization (ISH) data from Allen Brain Atlas are shown at the bottom.Extended Data Fig. 2 | MERFISH and scRNA-seq based clusters are consistent.a, UMAP showing integration of cells from MERFISH or scRNA-seq data (GSE124952).b,c, UMAP showing the cell clusters defined by scRNA-seq (left) or MERFISH (right).d-f, Heatmap showing the correspondence between main cell types (d), excitatory (e) and inhibitory (f) subtypes defined by MERFISH and scRNA-seq.g, The cell proportions of the excitatory, inhibitory and non-neuronal cells from scRNA-seq or MERFISH.(Cell numbers for analyses in this figure, MERFISH = 121617, scRNAseq = 24822).Extended Data Fig. 3 | The marker gene expression in the different neuronal subtypes.a,b, Dot plot presentation for the top three marker genes for each of the excitatory (a) (Cell number = 247098) and inhibitory (b) (Cell number = 51794) neuron subtypes.Extended Data Fig. 4 | Single molecule FISH validates expression of inhibitory neuron markers and their overlap with several subtype-specific markers in PFC.a,b, smFISH for subtype-specific markers of Sst neurons.c,d, smFISH for subtype specific markers of Pvalb.e, Subtype specific marker that distinguishes the two identified Vip clusters.(Stains independently repeated 2 to 3 times).Scale bar for the figure in a = 20µm.Extended Data Fig. 
5 | Comparative analysis between our dataset and those of published MERFISH dataset.a, Proportions of major cells types in ours compared with those in published papers.b, Heatmap showing the gene-expression correlation between cell types and subtypes defined in our study and those in published studies.(Cell numbers = motor cortex 23868; PFC_aging 185216; human_MTG 4321; human_STG 4871).Extended Data Fig. 6 | Spatial distribution of molecularly defined excitatory neuron subtypes along the anterior to posterior axis.a, Schematics of coronal brain slices aligned to Allen Brain Atlas CCF-v3 from anterior to posterior sections.b, Spatial organization of the indicated representative excitatory neuron subtypes across anterior to posterior sections.c, Spatial organization of the indicated representative inhibitory neuron subtypes across anterior to posterior sections.Extended Data Fig. 8 | Specific gene expression signatures of PFC and PFC subregions.a,b, Spatial expression of representative genes enriched (Cacna1h, Cxcl12, Cdh13) or depleted (Abcd2) in PFC relative to adjacent cortical regions.Inferred expression by iSpatial is shown in b.Only excitatory neurons are shown.Corresponding ISH data from Allen Brain Atlas are shown on the right.Dotted line marks PFC region.c, Ingenuity pathway analysis (IPA) of the genes, identified after imputation, showing enriched or depleted in PFC.The red/ blue bars indicate the pathway more active in/out PFC, respectively (p-values were calculated by One-sided Fisher′s Exact Test without adjustments for multiple comparison).d, The inferred spatial gene expression by iSpatial of four representative genes enriched in PFC subregions.A diagram of anatomical subregions in PFC and adjacent regions is shown on the left.Only the excitatory neurons are shown.ISH data from Allen Brain Atlas are shown on the right.Dotted line marks PFC subregion.Extended Data Fig. 9 | Integrated MERFISH and scRNA-seq data to predict neuronal projections.a, UMAP showing integration of cells from scRNA-seq (left) and MERFISH (right).The colors represent the projection sites in scRNA-seq data and the excitatory subtype in MERFISH data, respectively.b, Spatial location of neurons projecting to six different brain regions.c, Amygdala projection validation: mCherry expressing retrograde AAV was injected in amygdala (injection site shown) (scale = 0.5mm).Brain slice of PFC were stained with DAPI and mCherry to image the labeled neurons.smFISH co-labeling of mCherry with Pou3f1 (L5 ET marker), or Foxp2 (L6 CT marker) reveal partial overlap with both neuron subtypes.(Stains independently repeated 3 times).Arrows in enlarged images indicate dual labeled cells (scale bar = 20µm).d, High resolution images showing smFISH co-labeling of mCherry with Foxp2 and Pou3f1 within individual neurons (scale bar = 20µm).Extended Data Fig. 
10 | Pain model: Behavior, MERFISH and smFISH measurements.a, Von Frey testing reveals chronic mechanical allodynia induced by SNI surgery during the 6 weeks testing period (n = 6 biologically independent adult male mice per group, sham and SNI, measured weekly for 6 weeks).b, UMAP visualization of cells from control and pain samples.Left panel: all cells from both groups combined; Right panel: separate view of the control and pain.c, Heatmap showing the gene expression correlation of the cell types between control and pain.d, The numbers of differentially expressed genes in the indicated neuron subtypes when compared pain and control samples.The numbers of up-regrated and down-regulated genes are colored in red and blue, respectively.e, Comparison of ARG scores between the two hemispheres of PFC in each mouse (two-tailed paired t-test is used to calculate the p-value; n = 3 biologically independent mice per group; the center is the median value, bounds of box indicate the first and third quantile; the minima are defined as the minimum values and the maxima are defined as maximum values within each group.Cell numbers, sham = 17873, SNI = 19392).f, Global overview of PFC in half coronal section with Fos smFISH (red) in Sham (Control) and chronic pain conditions (scale bar = 100µm).g, Npas4 and h, Arc expression is reduced across PFC in chronic pain relative to control and is significantly lower in Pou3f1+ neurons.(mean ± SEM; Mann-Whitney test; n = 8 biologically independent adult male mice per group of sham and SNI; Npas4, p = 0.0002; Arc, p = 0.004).(scale bar in g = 20µm, for g and h).
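Returning to the differential expression framework described in Methods (comparing the full model C ~ E + S against the null model C ~ 1 + S with a likelihood-ratio test), a minimal Python sketch of that per-gene test is shown below; it uses statsmodels rather than the Seurat 'LR' implementation actually employed, and all variable names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

def lr_test_per_gene(expr, condition, batch):
    """Likelihood-ratio test of C ~ E + S against C ~ 1 + S for each gene.

    expr      : cells x genes DataFrame of expression (E).
    condition : binary per-cell labels, 1 = SNI, 0 = sham (C).
    batch     : per-cell batch/sample identifiers (S), one-hot encoded below.
    """
    S = pd.get_dummies(pd.Series(batch), drop_first=True).astype(float).to_numpy()
    y = np.asarray(condition, dtype=float)
    null_fit = sm.Logit(y, sm.add_constant(S)).fit(disp=0)

    pvals = {}
    for gene in expr.columns:
        full_X = sm.add_constant(np.column_stack([expr[gene].to_numpy(), S]))
        full_fit = sm.Logit(y, full_X).fit(disp=0)
        stat = 2 * (full_fit.llf - null_fit.llf)   # likelihood-ratio statistic
        pvals[gene] = chi2.sf(stat, df=1)

    # Bonferroni correction across genes, as in the original analysis.
    return (pd.Series(pvals) * len(pvals)).clip(upper=1.0)
```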
Solving the conundrum of extensional folding in metamorphic core complexes Folds with axial traces parallel to the extension direction are a common feature in continental detachment systems and metamorphic core complexes. Yet, how they form has been puzzling for many decades. Here, we show that the key to solving the conundrum lies in revising the long-held single-scale view toward natural deformation and application of kinematic models. We demonstrate that extensional folding can result naturally from the partitioned stress field in competent layers in plate-scale extension and transtension deformations. Competent layers that develop extension folds should be regarded as rheological inclusions in the lithosphere rather than infinitely extending plates clamped at system boundaries and subjected to system boundary conditions. so is the western Grenville Province (Fig.1). The strains reflected by extension folds cannot be explained by the transtension model. Here, we demonstrate that the difficulty in understanding extensional folding is due to our single-scale approach toward lithosphere deformation. We think it is unrealistic to regard layers that develop extension folds as infinitely-extending elastic or viscous plates that are subjected to the system boundary conditions. It is more realistic to regard them as rheologically heterogeneous inclusions embedded in the lithospheric mass undergoing macroscale transtension. It is the partitioned stress and strain field in these layers, rather than the homogenized stress and strain field of the macroscale transtension, that is relevant for extensional folding. We show that partitioned stress field can produce extension folds in competent layers in extension and transtension settings. Partitioned stress field in a competent layer-like inclusion Eshelby's inclusion solution [14][15][16] relates the stress and strain field inside an ellipsoid inclusion to the uniform macroscale field by: Where both the inclusion and matrix are elastic (an all-elastic system), ε and E are elastic strains. Where both the inclusion and the matrix are Newtonian viscous (an all-viscous system), ε and E are strain rates. Where the matrix is viscous and the inclusion elastic, ε is elastic strain tensor of the inclusion and E the viscous strain rate tensor of the matrix. C is the elastic or viscous stiffness of the matrix depending on its rheology and S the Eshelby tensor for the inclusion. We use the fundamental interaction equation (1) and the equivalent-inclusion approach 14,17 to derive analytical solutions for the partitioned stress in competent layer-like elements under macroscale transtension. The following assumptions are made. First, we regard the competent layer as a flat oblate inclusion with principal semi-axes 1 2 3 a a a =  . By varying the aspect ratio 3 1 a a  = , the inclusion may represent different bodies including a uniformly-thick plate ( 0  → ). We believe that, on the macroscale suitable for continental transtension, no geological unit is rheologically continuous throughout the deformation zone and subjected to the system boundary conditions. Second, for simplicity we assume that the rheology of the lithosphere on the macroscale and that of the inclusion are linear, isotropic, and incompressible. 
Third, on the macroscale appropriate for transtension, the lithosphere is approximated by a Homogeneous-Equivalent Medium (HEM) whose rheology is represented by the homogenized rheology of all the rheological elements in the macroscale representative volume 15,16,18,19 . We use the convention of tensile stress being positive so that tensile normal stress corresponds to positive extension strain. To model extension folds in North America Cordillera metamorphic core complexes, we consider stress partitioning in an all-elastic system. Equation (1) leads to a strain partitioning 15 ) which, upon using Hooke's law, is rewritten in the following stress partitioning form: where r is the ratio of the elastic shear modulus of the element  to that of the HEM s  and To model extension folds in the high metamorphic grade rocks in the Grenville Province, we regard the HEM as a Newtonian viscous material. The competent layers that develop extension folds may also be viscous, with a higher viscosity than the HEM. In such a case, the above results (equations (3)) can be applied if r in the equations is taken as the viscosity ratio between the inclusion and the HEM. We consider an additional situation where the element is instantaneously elastic; infinitesimal elastic strains of the layer are converted to permanent strains continually to give rise to the observed folds. In such a scenario, we apply the equivalent-inclusion approach to equation (1) by substituting the elastic inclusion with a fictitious inclusion of the same rheology as the HEM but with the same internal stress as the elastic inclusion. The interaction equation (1) becomes ( ) , where, as in equation (2) Folding of a horizontal competent layer in transtension The macroscale stress tensor for transtension, expressed in the coordinate system xyz ( Fig.2a), is of the following form (see Supplementary Information): Taking the eigenvalues, the corresponding principal stresses for an all-elastic system are: and those for an elastic layer in a viscous HEM are: 6 Note equations (8) . The case of  =0 means that the competent layer is an infinitely-extending sheet, and the solution converges to the single-scale solution of a horizontal sheet under macroscale transtension. In reality, a competent layer is an inclusion which means that  is small but not zero. It can be readily confirmed that the principal partitioned stress axes are parallel to the macroscale principal stresses for both equations (7) and (8). Therefore, whether treated as an allelastic system, an all-viscous system, or as elastic layers in a viscous HEM, the partitioned principal stresses in a horizontal layer are parallel to the macroscale principal stresses but are increased in magnitude by approximately a factor of r or 1  − . Significantly, the horizontal 2  is always compressive and does not vanish (equations (7b) and (8b)) even if the system is in pure As there are no available analytical solutions for the buckling of a flat oblate element embedded in an elastic or viscous matrix, we use the theory of cylindrical buckling of a rectangular elastic plate under in-plate loading 21,22 to provide an approximate analysis of a horizontal elastic layer element in an elastic or viscous HEM under transtension. The tendency for a competent elastic layer to buckle is determined by its flexural rigidity 21 , which for an incompressible elastic plate is ( h the thickness of the plate). The analysis in refs. 21,22 shows that the critical load P for the layer to buckle is . 
The critical load gives an estimate of the critical σ2 for buckling instability to develop in the layer. According to equation (8b), σ2 increases in magnitude asymptotically as η decreases, so that a critical aspect ratio ηc exists for a horizontal layer. Similarly, in an all-elastic (or all-viscous) system, we can use equation (7b) to find the critical aspect ratio η'c; in the pure extension situation, a corresponding critical aspect ratio for a horizontal layer to develop buckling can also be obtained. Thus, all horizontal layer elements with aspect ratios η ≤ ηc or η ≤ η'c, depending on the rheologies considered, will eventually reach the critical compression for buckling and develop extension folds in transtension (Fig. 3). Folding of a layer increases its effective aspect ratio. The folding of a layer will continue until its effective aspect ratio is no longer below the critical value.

Folding of an inclined competent layer-like inclusion in transtension

Equations (3) and (5) show that the partitioned stress components normal to the layer remain at the same level as, or smaller than, the corresponding macroscale components. Therefore, in a strongly competent layer, σ3 may be negligible compared with the other components. This means that two of the three principal partitioned stress axes are always nearly parallel to the layer and the third principal stress is normal to the layer, regardless of the layer orientation. This stress state is consistent with the field observation that competent layers are more prone to buckling and boudinage instabilities and thus develop folds and boudinage structures. It also implies that classical elastic plate theory is a reasonable approximation for the deformation of thin competent layers in transtension.

We denote the strike and dip of an inclined competent layer by φ and δ, respectively; the strike is measured with respect to the −y axis (Fig. 2a). It turns out that the orientation of the partitioned principal stresses in a competent layer is independent of whether the system is all-elastic, all-viscous, or an elastic layer in a viscous HEM. If buckling folds initiate in the layer parallel to the σ1-axis, we can use equation (10) to obtain their orientation. Equation (2) suggests that the partitioned stresses σij in a mylonite shear zone are likely distinct from, and smaller in magnitude than, the macroscale stresses Σij of the bulk lithosphere, making it unjustified to assume that σij ≈ Σij or that the invariants of the two are approximately equal.

Acknowledgments: We are grateful to Lucy X. Lu, Ankit Bhandari, and Rui Yang for many discussions. We thank Lucy Lu for help with plotting Fig. 4.

Author contributions: DJ developed the theory, derived all the equations, and wrote the paper. CL conducted field structural analysis in the Grenville Front region and prepared Fig. 1. Both were involved with the interpretation of field data and discussion of the ideas of the paper.

Competing interests: Authors declare no competing interests.

Data and materials availability: All data is available in the main text or the supplementary materials.

Additional information: Supplementary Text, Figure S1, Supplementary References.

Fig. 2. a, Transtension geometry and coordinate system xyz used to define the deformation. The horizontal divergence vector v is at an angle α relative to the system normal (the x-axis). An initial square is deformed progressively into a parallelogram with its lower-right corner (p) moving along v (dashed line to the final position p'). The horizontal principal stretching (E1) axis is at α/2 relative to the x-axis and the horizontal principal shortening E2-axis is normal to the E1-axis.
They are also parallel to the two horizontal principal stresses σ1 and σ2 (not shown in the Figure but referred to in the text). The two horizontal principal finite strains, measured by stretch, are S1 ≥ S2; the S1 orientation lies between the E1-axis and v, depending on the magnitude of finite strain. b, When α is small, S2 remains close to 1 regardless of the finite strain, rendering the horizontal principal shortening (1 − S2) minor.

Transtension

The velocity gradient tensor for macroscale transtension, considered in coordinate system xyz (Fig. 2 in the text), is expressed in terms of v, the divergence velocity, d, the width of the transtension zone, and α, the angle of the divergence velocity measured relative to the transtension normal (the x-axis). The three principal stretches are obtained (see ref. 2 for more detail) by taking the eigenvalues of the finite deformation tensor. Fig. 2b in the text is based on Eq. S5b.

Inclined Competent Layer in Transtension

For an inclined inclusion with strike φ and dip angle δ, we set the inclusion coordinate system x'y'z' such that the x'-axis is along the dip line of the layer and pointing down, the y'-axis is parallel to the strike of the layer, and the z'-axis is normal to the layer and pointing up (Fig. S1). The rotation matrix Qij relating the xyz and x'y'z' coordinates is the standard strike-and-dip rotation matrix 3. The macroscale stress tensor expressed in the inclusion coordinate system is obtained by tensor transformation. The partitioned stress tensor components are obtained by inserting Σ'ij in place of Σij in equations (3) and (5). In either case, the orientation θ of the σ1-axis relative to the x'-axis is given by Eq. S8. With θ obtained from Eq. S8, the trend and plunge of the σ1-axis (equation (10) in the text) can be obtained readily from simple trigonometry.

Derivation of Equations (5)

For the shear stresses σij (i ≠ j), we expand equation (6).

Fig. S1. Coordinate system x'y'z' for an inclined layer-like inclusion. Coordinate system xyz is the one in which macroscale transtension is defined. The yellow plane is the inclined layer; φ and δ are respectively the strike and dip angles. The x'-axis is along the dip line of the layer and pointing down, the y'-axis is parallel to the strike of the layer, and the z'-axis (not shown) is normal to the layer and pointing up. θ is the angle of the partitioned principal σ1 axis relative to the x'-axis.
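Because the rotation matrix and the angle relation (Eq. S8) are illegible above, the sketch below shows one way to perform the same steps numerically: construct layer-frame axes from strike and dip, rotate an assumed macroscale stress tensor into that frame, and extract the in-plane principal direction. The axis and angle conventions (strike measured from the x-axis, dip direction 90° clockwise from strike) and the stress values are assumptions, not the paper's.

```python
import numpy as np

def layer_frame(strike_deg, dip_deg):
    """Right-handed x'y'z' frame for a plane: x' down-dip, y' along strike,
    z' normal and pointing up. Strike is measured from the x-axis in the
    horizontal plane (an assumed convention)."""
    phi, delta = np.radians(strike_deg), np.radians(dip_deg)
    y_p = np.array([np.cos(phi), np.sin(phi), 0.0])                     # strike direction
    h_dip = np.array([np.sin(phi), -np.cos(phi), 0.0])                  # horizontal dip direction
    x_p = np.cos(delta) * h_dip + np.array([0.0, 0.0, -np.sin(delta)])  # down-dip
    z_p = np.cross(x_p, y_p)                                            # upward normal
    return np.vstack([x_p, y_p, z_p])                                   # rows = new axes

# Assumed macroscale transtension-like stress tensor (illustrative values only).
Sigma = np.array([[-1.0, 0.6, 0.0],
                  [ 0.6, 0.0, 0.0],
                  [ 0.0, 0.0, -0.3]])

Q = layer_frame(strike_deg=30.0, dip_deg=50.0)
Sigma_p = Q @ Sigma @ Q.T               # stress components in layer coordinates

# In-plane (x'-y') principal direction: angle theta of the most tensile
# in-plane principal stress measured from the x'-axis (cf. Eq. S8).
in_plane = Sigma_p[:2, :2]
vals, vecs = np.linalg.eigh(in_plane)   # eigenvalues sorted ascending
v1 = vecs[:, np.argmax(vals)]
theta = np.degrees(np.arctan2(v1[1], v1[0]))
print(f"theta (sigma_1' from x'-axis) = {theta:.1f} deg")
```

Taking only the in-plane 2 × 2 block is consistent with the statement above that, in a strongly competent layer, the layer-normal partitioned stress is comparatively negligible, so the remaining trigonometry to trend and plunge follows directly from the layer's strike and dip.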
The sodium/proton antiporter is part of the pH homeostasis mechanism in Escherichia coli.

The Escherichia coli chromosome has been shown to bear at 89.5 min a locus designated phs which determines the Na+/H+ antiporter activity. The mutant DZ3 has previously been shown to be simultaneously impaired in Na+ extrusion capacity and in growth on the Na+-co-transported substrates, melibiose and glutamate (Zilberstein, D., Padan, E., and Schuldiner, S. (1980) FEBS Lett. 116, 177-180). This mutant, when mated with the wild type, yielded wild type-like recombinants that appeared at a map distance of 1.5 min from metB recombinants. Furthermore, genotypes containing a repressed glutamate operon, which cannot grow on this substrate, still bear the normal phs locus and served as donors for transduction of DZ3 to yield wild type-like transductants. The mutant DZ3 has also been shown (Zilberstein et al., see above) to be impaired in growth at alkaline pH. This fact allowed us to investigate the role of the Na+/H+ antiporter in pH homeostasis in E. coli. A pH-controlled growth system and rapid filtration technique were used to compare the wild type and the mutant DZ3 with respect to both internal pH and growth during transfers of logarithmically growing cells to different external pHs. Following the shift in external pH of the wild type, a transient state was initiated by reduction of ΔpH across the membrane. Subsequently, at a specific time course, pH homeostasis was re-established. Whereas the capacity of the pH homeostasis mechanism was found to be a function of both the span of the external pH shift and the rate at which the change occurs, inhibition of protein synthesis did not affect this process. After stepwise transfer of growing wild type cells from pH 7.2 to 8.3 to 8.6, and then to 8.8, or from 7.2 to 6.4, the ΔpH was initially zero at each step and growth ceased. Subsequently, within 6 min at the most, the ΔpH was built up to a magnitude that yielded an internal pH of 7.6-7.8, and thereafter growth resumed at the initial rate. However, if the shift was made abruptly from pH 7.2 to 8.6, the lag was longer and the buildup of the ΔpH was slower. The shift between pH 7 and 8.8 appeared to be the limit of the pH homeostasis capacity, since the wild type grew normally when allowed to adapt by step transfers over this range and failed to restore both normal internal pH and optimal growth if the transition was made in one step. Since the mutant DZ3 behaved like the wild type after transfer from pH 7.2 to 6.4, but exhibited progressive failure to control internal pH and to grow at the alkaline shifts, we conclude that the Na+/H+ antiporter is an absolute requirement for pH adaptability at alkaline pH. It is suggested that the collaborative functioning of this antiporter with the primary proton pumps extruding protons is the basis of the pH homeostasis mechanism at alkaline pH. In all cases, both in the wild type and the mutant, recovery of pH homeostasis always preceded initiation of growth, indicating a tight coupling between the two processes.
* This work was supported by a grant from the United States-Israel Binational Foundation (BSF). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

Most proteins, even those isolated from alkalophilic and acidophilic bacteria, have a narrow pH range of optimum activity and/or stability which falls at around neutrality (1). It is therefore not surprising that a mechanism maintaining the cytoplasmic pH constant at around pH 7.6 was found to form the basis of adaptability to pH in many bacteria (reviewed in Refs. 2 and 3). Furthermore, because in the prokaryotic cell, unlike the eukaryotic cell, the primary proton pumps are located at the cytoplasmic membrane in direct contact with the medium, these pumps have been suggested to be involved in the pH homeostasis mechanism (4, 5) in addition to their established role in energy conversion (6, 7). Thus, when the cell proton pumps linked to electron transport (4), ATP hydrolysis (5), or photochemical reaction (8) are inhibited, the protons equilibrate across the membrane. Upon resumption of the activity of these pumps, a ΔpH is built up to a magnitude dependent on the pHo, so that a constant pHi of 7.6 is maintained over a wide range of pHo.
Like the intact cell, membrane vesicles isolated from bacteria have the propensity to maintain a constant internal pH during changes in pHo (9, 10). This physiological facet of the primary proton pumps appears to be a general phenomenon of the prokaryotic cell, and many bacteria, both neutrophiles like Escherichia coli as well as acidophiles and alkalophiles, exhibit pH homeostasis in a similar pattern: the proton pumps maintain a ΔpH sensitive to pHo (reviewed in Refs. 2 and 3). It is evident that the mechanism by which the ΔpH across the cytoplasmic membrane of the prokaryotic cell varies with pHo forms an essential part of the pH homeostasis of bacteria. It has been suggested that the Na+/H+ antiporter is involved in this mechanism in E. coli (4). Evidence has been presented supporting the presence of the Na+/H+ antiporter in many energy-transducing membranes (11), including that of E. coli (12-14). Energy-dependent extrusion of Na+ from E. coli cells has long been observed (15) and has been shown to be dependent on the proton gradient maintained across the membrane both in intact cells (12, 17) and in right-side-out vesicles (13). In an anaerobic cell suspension of E. coli, after H+ ions have been translocated outward across the cytoplasmic membrane by a respiratory pulse (12) or inward by a lactose pulse (16), re-equilibration is catalyzed by the presence of Na+. Similarly, Na+ flux reduces the ΔpH (acidic inside) across right-side-out membrane vesicles (13). Furthermore, in line with its suggested role in the control of the ΔpH with pHo, the Na+/H+ antiporter has been shown to be pH-dependent. The system was found to be electrogenic, with a higher rate at high pH than at acidic pH, where it is electroneutral (13, 16). Further analysis of the function of the Na+/H+ antiporter of E. coli has recently become possible by the isolation of a mutant (DZ3) which is impaired in all the Na+/H+ antiporter-related activities (17, 18). This antiporter is the only system known in E. coli which extrudes Na+ and maintains a sodium gradient that is directed inward and acts as a driving force for the uptake of substrates that are co-transported with Na+. The mutant, therefore, was selected by its inability to grow on glutamate and melibiose, two substrates that are co-transported into the cell with Na+ (19-21). Accordingly, the mutant was found to be defective in its sodium extrusion capacity. In the present study, we have extended our preliminary genetic data (17) to show that, indeed, a single locus designated phs and mapped at 89.5 min is mutated in DZ3 and, therefore, it most probably determines the Na+/H+ antiporter activity in E. coli. Strikingly, the mutant DZ3 lost the capacity to grow at alkaline pH, but grew like the wild type up to pH 7.5 (17). This pH sensitivity of growth of DZ3 may be caused by an effect of the phs mutation on control of pHi. Evidently, in this case, this mutant may serve as a powerful experimental tool to demonstrate the role of the Na+/H+ antiporter in the pH homeostasis mechanism of E. coli. To test this possibility, the wild type and the mutant were compared with respect to adaptability to changes in pHo. Except for a very recent paper (28), previous studies of the control of pHi have been performed with resting cells.
Furthermore, with no exception, all studies were designed for determination of the final steady-state value of pHi maintained by the cells, and no attention was paid to the transient stages following a shift in pHo (reviewed in Refs. 2 and 3). For the study of the pH homeostasis mechanism and its relation to growth in both the wild type and the mutant, we therefore used a pH-controlled growth system and a rapid filtration technique for cell separation. This experimental procedure allowed us to monitor rapidly and continuously the changes in growth and in pHi induced in growing cells by changes in pHo of the culture and hence unraveled the details of the pH homeostasis mechanism of growing cells. It has been found that, following a change in pHo of growing cells, the ΔpH immediately diminishes across the membrane, and only then, at a specific time course and with no need of protein synthesis, is pH homeostasis re-established; the capacity and the time course of the pH homeostasis mechanism is a function of both the span of the change in pHo and the rate at which the change occurs; pH homeostasis is a prerequisite for growth; and, finally and most importantly, the Na+/H+ antiporter plays a primary role in the pH homeostasis mechanism of E. coli at alkaline pH.

EXPERIMENTAL PROCEDURES

Bacteria and Growth Media—The E. coli K-12 strains used are described in Table I. Cells were grown on minimal medium A (22) lacking citrate, supplemented with L-methionine (50 μg/ml) and containing 0.5% glycerol as the carbon source. Solid medium was prepared by the addition of 1.5% Difco agar. L broth used for transduction and conjugation contained KCl instead of NaCl.

Transduction—P1kc lysates of the donor bacteria were prepared and transduction was done as previously described (23, 24).

Mating Experiments—Mating experiments were performed as described elsewhere (25).

Growth under Controlled pH—Cells were inoculated into the growth medium in which the MgSO4 concentration was reduced to 0.001%. To control the pH of the growth medium, cells were grown in a BioFlo Model C30 chemostat (New Brunswick Scientific) as batch cultures at 37 °C. The pH of the medium was controlled by means of a Modcon (Kiryat Motzkin, Israel) pH titrator. KOH or HCl were added at a rate of 2.25 or 3.6 meq/min, respectively.

Determination of Intracellular pH under Growth Conditions—Intracellular pH was evaluated during growth from the distribution across the cell membrane of either [14C]DMO or [14C]methylamine (26). Since the aim of the work was to measure pHi at the logarithmic growth phase at different and well-defined external pH, it was essential to reduce experimental manipulation to the minimum to avoid distortions in the measurements. Ten ml of a cell suspension (0.1-0.17 mg of cell protein/ml) were immediately transferred from the chemostat into a prewarmed (37 °C) 100-ml flask containing 0.6 μM [14C]methylamine (68 Ci/mol) or 0.32 μM [14C]DMO (120 Ci/mol). The suspension was incubated for 1 min with continuous shaking at 37 °C and filtered through a glass fiber filter (GF/C Whatman, 25-mm diameter). The use of these filters allowed the use of large amounts of cell protein, which increased the sensitivity of the measurement, and washing could be avoided. The filters were transferred into toluene-Triton scintillation liquid and assayed for radioactivity in a Tricarb scintillation counter.
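The ΔpH calculation that this procedure feeds into (blank subtraction and the 5 μl of osmotic water per mg of cell protein conversion are described just below) can be illustrated with the standard weak-acid distribution relation for DMO. The sketch below is only an illustration: the pKa value, the function name and the example numbers are assumptions, not values from the paper.

```python
import math

def ph_in_from_dmo(cpm_filter, cpm_blank, cpm_per_ml_medium,
                   mg_protein, ph_out, pka=6.2):
    """Estimate intracellular pH from the equilibrium distribution of the
    weak acid [14C]DMO (Waddell/Rottenberg-type calculation).

    Assumptions not stated in the paper: pKa of DMO ~6.2 at 37 C, and that
    only the neutral species permeates. Internal water is taken as 5 uL per
    mg of cell protein, the conversion factor used in the text.
    """
    v_in_ml = 5e-3 * mg_protein                       # internal water volume, ml
    c_in = (cpm_filter - cpm_blank) / v_in_ml         # cpm per ml of cell water
    ratio = c_in / cpm_per_ml_medium                  # accumulation ratio in/out
    return pka + math.log10(ratio * (1.0 + 10.0**(ph_out - pka)) - 1.0)

# Illustrative numbers only: an accumulation ratio of ~3.5 at pH_out 7.2
# returns a pH_in of about 7.8, i.e. an alkaline-inside delta-pH of ~0.6.
print(ph_in_from_dmo(cpm_filter=850.0, cpm_blank=150.0,
                     cpm_per_ml_medium=40000.0, mg_protein=1.0, ph_out=7.2))
```

For the weak base methylamine, used when the gradient is acid-inside at alkaline external pH, the analogous relation with (pKa − pH) in the exponents would apply.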
The amount of radioactivity retained on the filters in the absence of ΔpH, namely at pHo 7.6 (4) or at pH 7 in the presence of carbonyl cyanide p-trifluoromethoxyphenyl hydrazone, was identical and served as the blank. The steady-state level of uptake of the ΔpH probe at all pH values was attained within 75 s. To calculate the concentration ratio, the volume of the filtered cells was estimated from the amount of cell protein retained on the filters using the factor 5 μl of osmotic water/mg of cell protein (4). Cell protein was determined by a previously described method (27).

RESULTS

The mutant DZ3 has been shown to be simultaneously defective in growth on glutamate and melibiose and at alkaline pH, and in its energy-dependent sodium extrusion capacity (17). It was therefore crucial to demonstrate that a single locus is involved in the mutation of strain DZ3. Table II summarizes the results of conjugation experiments between the mutant to which nalA was transduced (DZ31) and a derivative of the wild type (CS72). In these strains, metB maps exactly at half the distance between the mel and glt loci, and mel is expected to penetrate first (20). Recombinants capable of growth on melibiose were obtained and all were able to grow on glutamate, suggesting that a single locus (phs) is responsible for growth on both carbon sources. Accordingly, all the Glt+ recombinants were Mel+. The Mel+ recombinants tested in the presence of methionine appeared 1.5 min before the Met+ recombinants (growing on glycerol without methionine). Consistently, then, it was found that although not all the recombinants capable of growth on both melibiose and glutamate were Met+, all the Met+ recombinants were capable of growth on both substrates. Growth of one of the recombinants (Mel+ Met+ Glt+) was tested at different pH values and was found to be normal over the pH range 7 to 8.8. These results suggest that a single mutation causes all the defects characterizing the strain DZ3. The mutation of DZ3 maps in a locus at 89.5 min on the E. coli chromosome which we designate phs (Table II). This locus is far from the operons responsible for the utilization of both glutamate (81.7 min) and melibiose (92.5 min) (39). Hence, genotypes of the latter two operons, which determine phenotypes that cannot grow on the respective carbon sources (Glt− or Mel−), must have a normal phs allele and should therefore still serve as donors for transduction of DZ3 to yield wild type transductants. Such a transduction experiment is described in Table III. Whether selected for Mel+ or for Glt+, the transductants were simultaneously both Glt+ and Mel+ and grew at high pH, suggesting that transduction of the phs wild type allele had occurred, thus curing the pleiotropic effects of phs. The Lac+ phenotype determined by the y locus, which is well separated from phs, was not co-transduced with either Glt+ or Mel+. It is concluded that the locus phs determines the sodium/proton antiporter, as it is the common denominator for the functions affected pleiotropically in DZ3. It has been proposed that the Na+/H+ antiporter has a role in regulation of internal pH (4, 12). Hence, the finding that DZ3 does not grow at alkaline pH can be attributed to its lack of pH homeostasis at alkaline pH. Internal pH was measured in cells of both the wild type and the mutant following transfer between different pH values. Cells were grown in a pH-stat at pH 7.2 to the logarithmic phase, and a ΔpH of 0.55 units (alkaline inside, pHi = 7.75) was found.
Either KOH or HCl was then automatically added to yield a new preset pH, and both internal pH and growth were followed. For example, upon transfer of wild type cells from pH 7.2 to 8.3, the ΔpH dissipated immediately and growth ceased (Fig. 1A). Within a lag of about 10 min, a ΔpH of 0.5 was built up, yielding an internal pH of 7.8, and growth subsequently resumed at a rate practically identical with the initial rate (doubling time of 70 min). Upon further stepwise transfers to pH 8.8, or from pH 7.2 to 6.5, essentially the same pattern was observed: first the ΔpH was built up to reach pHi 7.8 and subsequently growth resumed at the original rate (Fig. 2A). It is evident that, following the shift in pHo, a perturbation of pHi was observed and the ΔpH immediately diminished across the membrane (Figs. 1, 3, and 4). The rise in ΔpH started only after a lag period of at least 3 min. To test whether this lag was due to a requirement for the synthesis of a new protein, chloramphenicol was introduced into the growing culture 3 min before the pH shift was done. It is shown in Fig. 4 that whereas both protein synthesis and growth were arrested in the presence of the inhibitor, the ΔpH was normally rebuilt and maintained for at least 45 min. As compared with the wild type, during the shift between pH 7.2 and 8.3, the mutant DZ3 showed a longer lag of 20 min before the ΔpH was built up, and the ΔpH established was smaller (Fig. 1B). Accordingly, growth of the mutant resumed only after 20 min and at a slower rate (100-min doubling time) than that of the control (Fig. 1B). Further stepwise transfer of the mutant to pH 8.6 and then to pH 8.8 progressively slowed down the growth to complete cessation. Remarkably, the pH homeostasis failed with increasing pH, and at pH 8.6 and higher, pHi was almost equal to the external pH. It is of interest that growth and pHi were normal both at pH 7.2 and 6.5 (Fig. 2B).

(Figure legend, partial): ... (A) and from 7.2 to 8.8 (B). The pH shift and the measurements were carried out as in Fig. 1.

Fig. 4. Effect of chloramphenicol on ΔpH, growth, and protein synthesis after shift of external pH from 7.2 to 8.3. The pH shift and measurements were carried out as described in Fig. 1 except that chloramphenicol (100 μg/ml final concentration) was added 3 min before the pH shift. Proline incorporation was measured using 0.1 μM [3H]proline (5 Ci/mmol). Symbols denote growth and ΔpH.

When the transfer from pH 7.2 was made directly to pH 8.6 (Fig. 3) without an "adaptation" period at pH 8.3, DZ3 completely failed to resume growth even during a period of 18 h. The pH homeostasis failed even in the wild type for the first minutes after such a pH jump. However, the ΔpH was eventually built up at 20 min, and shortly thereafter growth was resumed (Fig. 3A). This pH span may indeed be the limit of the homeostasis machinery capacity, since after a wider span (7.2 → 8.8) the normal pHi was not recovered and growth resumed at a very slow rate (Fig. 3B). A still larger shift (7.2 → 9.0) cannot be coped with at all, even by the wild type. It is clearly evident that, in the alkaline pH range which does not permit its growth (17), the mutant DZ3 is impaired in pH homeostasis. We may conclude, therefore, that the Na+/H+ antiporter, which is the primary site of lesion in this mutant, is required for pH homeostasis at alkaline pH, and that pH homeostasis is a prerequisite for optimal growth. Indeed, tight coupling was observed between normal pHi and growth, both in the mutant and the wild type.
Mutant cells maintained at the nonpermissive pH of 8.6 for at least 8 h were fully viable, and when transferred back to pH 7.2 they restored pHi to 7.6-7.8 and grew at the normal rate (not shown). It is therefore highly suggestive that it is the lack of pH homeostasis which hinders growth of DZ3 at alkaline pH. Accordingly, optimal growth of the wild type was observed when pH homeostasis was established. The wild type maintained a constant pHi of 7.6-7.8 and grew optimally up to pH 8.8 (Fig. 2A). At pH 8.8, it grew normally if allowed to adapt slowly by step transfers, but failed to restore both ΔpH (final pHi was 7.9) and normal growth if the transition was made in one step (compare Fig. 2 to Fig. 3). At pH 9, there was no ΔpH, growth ceased, and viability was reduced to 50% within 6 h. Both in the wild type and the mutant, recovery of pHi always preceded initiation of growth.

DISCUSSION

This study describes the use of a pH-controlled growth system and rapid filtration technique to investigate the pH homeostasis mechanism in growing E. coli cells. The bacteria were analyzed after exposure to different pH stresses, and our previous results with resting cells (4) were extended. Following a shift in pHo, a transient stage of perturbed pHi was clearly observed before pH homeostasis was restored. Thus, upon stepwise transfer of growing wild type cells from pH 7.2 to 8.3, then to 8.6 and 8.8, or from pH 7.2 to 6.4, the ΔpH was found to be zero initially at each step. Subsequently, after a lag of several minutes, the ΔpH reached a magnitude that yielded a constant internal pH of 7.6-7.8 (Figs. 1A and 2A). Most probably, during this transient period the pH homeostasis mechanism is readjusted to the new state. It is remarkable that this process was found not to involve the synthesis of a new protein (Fig. 4). Since equilibration of the protons across the membrane initiated the transient stage, it is suggested that pHo, pHi, or both may serve as the signal for the new adjustment of the pH homeostasis. The capacity of the pH homeostasis mechanism was found to be a function of both the span of the shift in pHo and the rate at which the change occurred. A change in pHo from 7.2 to 8.6 was followed by a longer lag for the ΔpH (20 min) than the sum of all the lags observed (about 12 min) during stepwise shifts of external pH to the same final value (7 → 8.3 → 8.6) (Fig. 2A). One possible explanation for these differences in the duration of the lag is very likely a greater leak of intracellular material that occurred after the more drastic shift, which could impede the rate of restoration of the ΔpH. The shift between pH 7 and 8.8 appeared to be the limit of the capacity of the pH homeostasis mechanism in the alkaline range, since pHi is not restored to the normal value following such a stress. The conjugation and transduction experiments conclusively showed that a single locus, phs, maps at 89.5 min on the E. coli chromosome, determines the Na+/H+ antiporter, and is mutated in DZ3. As this mutant was also found to be incapable of growth at alkaline pH (17), it was used here to study the role of the Na+/H+ antiporter in the pH homeostasis of E. coli. As compared with the wild type, mutant DZ3 behaved normally after the pH jump from 7.2 to 6.4. At the first alkaline transfer to pH 8.3, however, there was a marked increase in the lag, up to 20 min. The internal pH reached was only 8, and growth resumed at about half the rate of the wild type.
With further increase in pHo, the pH homeostasis progressively failed, and beyond pH 8.6 the protons equilibrated across the membrane and growth ceased. Hence, it was concluded that the Na+/H+ antiporter is indeed needed for pH homeostasis at alkaline pH. In view of these results, the pH homeostasis mechanism of E. coli can now be reanalyzed. It was previously shown that the pH homeostasis system is based on the primary proton pumps. These maintain the ΔpH, which changes with external pH so that the internal pH is kept constant at pHi 7.6-7.8 (4). Thus, below pH 7.6-7.8, the pH is alkaline inside, and above this pH, acidic inside, suggesting two patterns of control of the ΔpH, above and below pH 7.6-7.8. Indeed, the functionality of the Na+/H+ antiporter, affected by the mutation of DZ3, is not required at acidic pH. The outward directionality of the primary proton pumps is consonant with the inward alkaline orientation of the ΔpH observed up to pH 7.6. The rate of respiration remained unaltered over this entire external pH range, suggesting a constant rate of proton pumping (4), but this does not explain the drastic decrease in ΔpH observed. However, the observed increase in Δψ with increasing pH (16), which is also maintained by these pumps, is in accordance with the decrease in the ΔpH, since, as in many other systems, here too, Δψ limits the number of protons that can be pumped out (4). This change in Δψ with pH implies that electrogenic movement of ion(s) is pH-dependent in E. coli and forms part of the pH homeostasis, as previously suggested for Vibrio alginolyticus (30). Above pHo 7.6-7.8, it is clear that an increase in Δψ does not account for the changes in the ΔpH occurring in the alkaline range: the Δψ hardly changes and is still negative inside when the ΔpH reverses and becomes acidic inside (16). In the present study, we clearly show that the mechanism for the reversion of ΔpH above pHo 7.6 involves the Na+/H+ antiporter. The Na+/H+ antiporter must recycle back into the cell the protons extruded by the primary pumps. The antiporter has previously been shown to be electrogenic at alkaline pH (13, 40) as well as pH-sensitive (16). It is envisaged that the rate of translocation of protons by the antiporter, relative to the protons pumped out, is such as to yield an overall net influx of protons required for the ΔpH, while positive charges are still directed outward (2, 3, 13). Hence, it is concluded that the cooperative action of the proton pumps and the Na+/H+ antiporter constitutes the pH homeostasis mechanism in the alkaline pH range. Although a sodium requirement for growth has not been shown in E. coli, it should be emphasized that almost every commercially available compound as well as glassware is contaminated with Na+ and can yield up to 0.1 mM Na+ (31). For this reason, the possibility that bacteria require a low sodium concentration has not yet been ruled out. Nevertheless, the participation of more than one antiporter system in regulating internal pH is not mutually exclusive, and the relative contribution of each may change under different environmental conditions. Indeed, a K+/H+ antiporter has recently been suggested to play a role in pH homeostasis in E. coli (32-34). The capacity for maintaining a constant cytoplasmic pH is not unique to E. coli. Many other bacteria, neutrophiles as well as acidophiles and alkalophiles, maintain a constant pHi (reviewed in Refs. 2 and 3). Furthermore, the mechanisms of pH homeostasis seem common to all, with proton pumps playing the primary role.
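The balance between Δψ and ΔpH discussed above can be restated with textbook chemiosmotic arithmetic. The sketch below only evaluates Δp = Δψ − (2.303RT/F)·ΔpH at 37 °C; the Δψ values used in the example calls are assumptions, not measurements from this study.

```python
# Textbook proton-motive-force arithmetic at 37 C (sign conventions vary by
# author; here delta_pH = pH_in - pH_out and delta_psi = psi_in - psi_out, mV).
R, F, T = 8.314, 96485.0, 310.15
Z = 2.303 * R * T / F * 1000.0           # ~61.5 mV per pH unit at 37 C

def proton_motive_force(delta_psi_mv, ph_in, ph_out):
    return delta_psi_mv - Z * (ph_in - ph_out)

# With the 0.55-unit alkaline-inside gradient reported at pH_out 7.2 and an
# assumed delta_psi of -100 mV, the two terms add:
print(proton_motive_force(-100.0, 7.75, 7.2))   # about -134 mV
# At alkaline pH_out the gradient reverses (acid inside), so the delta-pH term
# now opposes delta_psi even though the membrane potential stays negative inside:
print(proton_motive_force(-100.0, 7.8, 8.6))    # about -51 mV
```

This is only the bookkeeping behind the statement that Δψ limits how many protons can be pumped out, and that at alkaline pHo the acid-inside ΔpH must be generated by proton recycling rather than by the pumps alone.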
In the case of the alkalophiles, similarly to E. coli, the Na+/H+ antiporter has been conclusively shown to take part in the pH homeostasis mechanism at alkaline pH (35, 36). Given the pH sensitivity of most enzymatic reactions and the high efficiency of the pH homeostasis mechanism in bacteria, it is suggested that changes in pHi may serve as regulatory signals of cell physiology. Indeed, in the present study, we observed a tight coupling between pHi and growth. Thus, in all cases, both in the wild type and the mutant DZ3, recovery of pHi always preceded initiation of growth. Furthermore, this mutant, which was found to be impaired in pH homeostasis at alkaline pH, remained viable for many hours at the nonpermissive pH, yet it did not grow. Only when pH homeostasis was restored at lower pH was growth resumed. Other recent observations indicate the possible physiological role of changes in pHi. It has been shown that, in two Bacillus species, the pH within the dormant spore is around 6.3 and it rises to 7.5 upon germination (37). Reversible perturbation of pHi has been shown to be involved in pH taxis in E. coli (29).
Industrial Symbiosis: A Sectoral Analysis on Enablers and Barriers

Industrial Symbiosis (IS) around the world over the last 20 years has been characterized through an extensive analysis of scientific papers on the IS emergence process, with a special focus on its early stages. The literature suggests that in this process there are key factors (enablers, barriers, triggers, and challenges) that play a critical role in Industrial Symbiosis. Among those factors, the enablers and barriers have been highlighted in most of the studies, in their different dimensions (social, economic, policy, technological, management, or geographical, amongst others). Several implementation cases suggest that the relevance of these factors relies on the dominant economic sectors involved. This study aims to reveal the key enablers and barriers in various economic sectors and their behaviour in each of them. To accomplish this objective, a comprehensive assessment methodology was designed and performed. This methodology is divided into two sequential phases: the first, sectoral analysis, focuses on the identification of the most relevant dimensions per economic sector; in the second phase, incidence analysis, the individual behaviour of the enablers and barriers per economic sector is identified. This new approach correlates the economic sectors and the factor incidence in order to provide new insights on the key barriers and enablers in different dimensions. The main result of this study consists of the identification of a set of recommendations that might be critical to reinforce the emerging synergies process and to help overcome the barriers in each economic sector analysed.

Introduction

Industrial Symbiosis (IS) is commonly associated with Industrial Ecology (IE) and constitutes a strategy to promote the circular economy, as it replicates or mimics nature in an industrial environment or industrial ecosystem [1]. This business model recreates an ecosystem where the elements or industrial actors actively share resources and wastes. One of the most accepted definitions establishes IS as the use by one company or sector of underutilised resources broadly defined (waste, by-products, residues, energy, water, logistics, capacity, expertise, equipment, and materials) from another, with the result of keeping resources in productive use for longer [2]. The firms involved in these kinds of synergies achieve economic, environmental and social benefits through these exchanges [3]. In practice, the companies may benefit from reduced operational costs [4], reduced taxes [5,6], job creation [7] and reduced emissions of CO2 [8]. Among the various successful implementation cases, the Kalundborg Eco-Industrial Park (EIP) is one of the best-known examples in the world [9], since it has been establishing exchanges and industrial cooperation for over 50 years, involving nine partners and 25 stream exchanges [10].

This study is based on a systematic and detailed literature review directed at identifying publications associated with IS implementation case studies (CS). The objective of this literature review is to reveal the enablers and barriers associated with Industrial Symbiosis. In this sense, a search was conducted in the search engines Scopus and Web of Science. This search was developed using the principal keyword "Industrial Symbiosis" and combining it with other keywords, as presented in Table 1. The main information source for the case-study characterization was scientific peer-reviewed journal articles.
In addition, complementary research was conducted through Internet searches for technical reports and technical documentation of European initiatives, such as European projects, sectoral clusters and IS networks. This complementary information enabled the identification of critical aspects that were not registered in the scientific literature. Figure 1 presents the research strategy and the analytical procedures considered in the development of this study. Through this structure, it was possible to obtain an initial sample of 126 references, which were analysed and resulted in the identification of 51 articles describing relevant symbiosis case studies. For the processing of the information, an analytical procedure consisting of the following steps was adopted: Step 1: case study characterization; Step 2: identification of the enablers and barriers involved in the case studies; Step 3: development of an interpretive approach on the behaviour between the enablers/barriers and economic sectors; Step 4: proposal of a set of generic recommendations for IS implementation.

Industrial Symbiosis Case Studies Identification

The analysis of the 51 papers allowed for the identification of 26 implementation case studies. These case studies represent diverse implementation approaches, such as synergies (internal or external), urban industrial symbiosis and EIP industrial symbiosis. It was also possible to identify other important aspects such as geographical distribution, economic sectors and the streams. Table 2 presents the 26 case studies and the characterization of their economic/industrial sectors.

Table 2. Characterization of case studies (case – denomination – economic sector – country – source):
CS1 – The Göta Älv region case – Energy – Sweden [54]
CS2 – The Nanning Sugar Co., Ltd., Nanning – Agribusiness, cement – China [33,55,56]
CS3 – The Kawasaki Eco-town – Metal, paper, waste management, manufacturing, chemical – Japan [57-59]
CS4 – The Eco-Industrial Park, Rizhao – Chemical, manufacturing, cement, metal, logistics, paper industry, energy, agribusiness – China [35]
CS5 – The Relvão Eco Industrial Park, Municipality of Chamusca – Paper industry, waste management, agribusiness, chemical – Portugal [55,60]
CS6 – The Liuzhou city case – Metal (iron and steel), cement, construction – China [8,61]
CS7 – The symbiotic industrial district of Guayama
Geographical Distribution and IS Approach

Regarding the alternative approaches to perform IS, in the sample there is a clear tendency for external IS, specifically referring to cases with two or more industries that are not necessarily located in an industrial park or EIP and that manage to develop streams [46]. The approaches less represented were internal IS and urban IS. The low representativity of internal IS can be associated with the fact that this type of approach usually requires a comprehensive restructuring of the company's business model [27,34] and strategic changes [11]; one of the best-known cases is British Sugar Plc (CS24) [27], whose process optimization took 30 years of development and significant economic investment. The urban IS approach is represented in the final sample by CS3 [57-59]. The EIPs require joint efforts of several different stakeholders (firms, industries, local government, agencies) and face various barriers in their development [77]. In terms of geographical distribution, the sample is mainly focused on European cases. Figure 2 represents (a) the IS approach, (b) the geographical distribution and (c) the number of published papers per year in the final sample.

Industrial Symbiosis Streams

There are several streams or exchanges that can take place within the scope of IS [2]; for the purpose of this study, the different streams identified in the final sample were categorized. Regarding the underutilized resources identified in the CS, two main aspects were targeted: the surpluses, defined as the materials produced by an industrial activity (wastes and by-products), and the new materials (raw material). On the other hand, the utilities were targeted, since some synergies involve the sharing of infrastructure to perform the stream. It is important to highlight that IS also considers the sharing of facilities and services [11,78,79], normally referred to as under-utilized capacity or sharing of over-capacities. Nevertheless, for the purposes of this study, only approaches concerning underutilized resources and utilities were identified. Figure 3 shows the categorization of the various IS streams and their distribution in the final sample.

It was observed that the type of exchange implemented is intrinsically associated with the economic sector and its activities. For instance, the sectors of energy production or high energy consumption are characterized by implementing energy streams, such as energy exchange and heat recovery, as observed in case CS1 [54].
On the other hand, sectors like waste management and agribusiness are characterized by the exchange of waste and by-products in an inter-company perspective.

Economic Sectors and Activities

The case studies were organized by economic sectors as presented in Table 3 and, although various authors suggest that industrial symbiosis initiatives are mainly concentrated in primary sectors and manufacturing [15,80], the sample obtained is very varied, including other sectors such as logistics, waste management, pharmaceutical, and others. This indicates that the opportunities for IS are not restricted to those sectors and activities.
Barrier: A factor that hinders or obstructs the development of symbiotic synergies. Enablers and barriers can be presented in various dimensions and levels [3,16,19,22,[82][83][84][85][86] and we suggest seven fundamental dimensions to be considered, namely social, economic, policy, management, technological, geographical and intermediaries. Any of these dimensions can be relevant in three levels of implementation [7,80,87,88]: (1) a local level that involves the most direct and close to industrial agents such as chambers, industrial park and local government; (2) a level of regional perspective that involves regional government and authorities; and (3) a national level that involves macro elements such as general government, agencies and others. Figure 4 represents the identification framework developed for processing the enablers and barriers. The identification of enablers and barriers was developed through a critical analysis of each paper in order to identify which were the enablers and barriers associated to each of the case study. It was possible to conclude that the majority of the studies identify the factors that had helped to overcome obstacles (enablers). However, the barriers identification was not so straightforward, but the analysis allowed for the identification of descriptors that can be used to describe the key enablers and barriers. This is presented in Tables 4 and 5, where the descriptors are grouped by dimensions. The abbreviations are defined as follows: social (S); economic (E); policy (P); management (M); technological (T); geographical (G) and intermediaries (I). The identification of enablers and barriers was developed through a critical analysis of each paper in order to identify which were the enablers and barriers associated to each of the case study. It was possible to conclude that the majority of the studies identify the factors that had helped to overcome obstacles (enablers). However, the barriers identification was not so straightforward, but the analysis allowed for the identification of descriptors that can be used to describe the key enablers and barriers. This is presented in Tables 4 and 5, where the descriptors are grouped by dimensions. The abbreviations are defined as follows: social (S); economic (E); policy (P); management (M); technological (T); geographical (G) and intermediaries (I). Table 4. Key enablers descriptors. 
Table 4. Key enablers descriptors (dimension – enabler (code): description).

Social
- Trust environment (S1): openness in the relations between companies, sharing information and promoting trust between the involved parties.
- Environmental awareness (S2): knowledge at the company level; concern for the impacts of industrial activities on the environment.
- Spontaneous and self-organized approach (S3): leaders, entrepreneurs and firms motivated by the implementation of concepts such as industrial ecology, willing to take the initiative.
- Internal and external networks of relations between companies (S4): networks that allow the creation of common spaces between companies, knowledge agents and government entities.
- Community awareness and education activity programs (S5): interfaces and programs that relate the industrial side and the local community for sustainable development.

Economic
- Operational cost reduction (E1): identification of savings in resources (mainly water, energy, raw material).
- New business opportunities (E2): incorporation of new revenues through the integration of the new products and services that a synergy involves.
- Identification of savings in waste management (E3): identification of savings, mainly in landfill tax, waste management costs, etc.
- National funding (E4): policy that promotes and allows national, regional and local funds to support the circular economy (such as operational programs and projects).
- Private contribution (E5): banks and entities promoting private funds through innovation projects and initiatives in order to support industries and firms.
Policy
- Promotion of industrial policy (P1): a disaggregated industrial policy framework (national, regional and local) that allows the implementation of synergies between industries through the simplification of waste declassification.
- Environmental tax policy (P2): policy increases, especially in landfill tax, CO2 emission control and waste management policies, that curb the environmental impacts of industrial activities.
- Promotion of networks and waste markets (P3): promotion instruments to commercialize industrial wastes in a simple manner.
- Promotion of a framework for CE (P4): plans and policies that allow the implementation of the circular economy in industrial activities.

Technological
- Technology improvement in industry allows better control of production processes, data availability, wastes and resources.

Geographical
- Geographical proximity (G1): short distances between the elements involved in the synergy.
- Logistic networks (G3): availability of infrastructure to improve the communication and transport of materials (highways, airports, and ports).

Intermediaries
- Involvement of R&D institutions and universities (I1): promoters of knowledge transfer to the industry in order to consolidate the initiatives.
- Government involvement (I2): increased participation of government as a promoter of industrial sustainability (at the local, regional and national levels), aiming at the transition to a less polluting industry.
- Anchor companies' involvement (I3): large companies with prestige and multinational presence involved in this type of initiative.
- Regional and national entities promoting synergies (I4): national/regional entities promoting IS practices at various levels (business, technological, strategic, etc.).

Table 5. Key barriers descriptors.
- Availability of funds to support the creation of synergies, especially for the acquisition of equipment, utilities and others.
- Market immaturity (E5): the market itself could be inadequately prepared for the incorporation of IS (economically and environmentally).
- Lack of participative network (I3): linking the different interested parties in the implementation of IS, such as policy actors, community, knowledge agents, and industries.

Enablers and Barriers Assessment

The identification of the most important enablers and barriers in each economic sector was performed in two phases. The first phase (sectoral analysis) evaluates the relevance of the dimensions (social, economic, policy, management, technological, geographical, and intermediaries) in the economic sectors. The second phase (incidence analysis) consists of the individual evaluation of barriers and enablers in each economic sector. Figure 5 represents this two-phase approach. The first phase (sectoral analysis) aims to identify which dimensions are more relevant in the economic sectors.
For the design of the sectoral analysis, the number of cases within the same economic sector (nc) was correlated with the number of presences per dimension (nd). The final results are presented in a heat diagram that allows for the visualization of the most important dimensions in each sector. In the second phase (incidence analysis), the barriers and enablers were separated from their dimensions, in order to evaluate the prevalence of each barrier/enabler individually. This separation of the factors from their dimensions is due to the fact that the first phase only allows us to obtain an overview by sector, whereas the second phase allows us to verify the individual behaviour of the enablers and barriers per sector.

Phase 1: Sectoral Analysis

For the purpose of phase 1, the relevance of the enablers and barriers is represented on a matrix basis. This heat diagram allows for the visualization of the relevance of barriers and enablers by economic sector. In this sense, the darkest green represents the dimension with the highest presence, with the colour grading down to the darkest red, which represents those with no presence. Figure 6 shows the heat diagram obtained for the purpose of the first phase of the factors' assessment. Regarding the enablers, the most relevant dimensions are the policy and intermediaries dimensions, followed by the geographical and economic enablers. Concerning the barriers identification, the social, technological and economic barriers are the ones represented with higher relevance in this first phase of the final sample.

Phase 2: Incidence Analysis

In phase 2, the incidence analysis was performed to identify the enablers and barriers with the highest prevalence and represent their importance in each sector. Figures 7 and 8 represent the results of the second phase regarding the enablers and barriers. Appendix A shows the final key enablers and barriers considered for the incidence analysis. Both phases amount to simple tallies over the case studies, as the sketch below illustrates.
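A minimal sketch of both tallies, assuming a hypothetical case-study table in which each factor code starts with its dimension letter (S, E, P, M, T, G, I); the sample records are illustrative, not the 26 case studies of this review:

```python
# A minimal sketch of the two-phase assessment over hypothetical case studies.
from collections import Counter, defaultdict

# Each case study: economic sector plus the codes of the factors observed
# (the dimension is the leading letter: S=social, E=economic, P=policy, ...).
cases = [
    {"sector": "energy", "factors": ["S1", "E1", "P4", "G1", "I2"]},
    {"sector": "energy", "factors": ["E4", "P1", "G1"]},
    {"sector": "agribusiness", "factors": ["S2", "E1", "E2", "I1"]},
]

# Phase 1 (sectoral analysis): presences per dimension (nd) normalised by
# the number of cases in the sector (nc), i.e. one row of the heat diagram.
nc = Counter(c["sector"] for c in cases)
nd = defaultdict(Counter)
for c in cases:
    for f in c["factors"]:
        nd[c["sector"]][f[0]] += 1          # dimension = leading letter

heat = {s: {dim: nd[s][dim] / nc[s] for dim in "SEPMTGI"} for s in nc}

# Phase 2 (incidence analysis): each factor counted individually per sector.
incidence = defaultdict(Counter)
for c in cases:
    incidence[c["sector"]].update(c["factors"])

print(heat["energy"])        # relative relevance of each dimension
print(incidence["energy"])   # prevalence of individual enablers/barriers
```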
In the enablers incidence analysis, the sectors with the highest number of enablers are energy, waste management, and chemical. The main reason for this high number of enablers in the mentioned sectors is that they are highly developed sectors, with advanced processes, regulations, and technologies, as in CS 7, 10 and 18. The agribusiness sector also has an important representation. A possible explanation for this important number of enablers may be the fact that it is a sector with modest profit margins and, therefore, a sector that is quite receptive to any opportunity to create new profits [23], such as the incorporation of synergies that can increase profits and reduce costs. This situation is present in CS 17 and 19. The cement sector is one of the best represented in existing symbiotic exchanges [89,90] and one of the sectors with the greatest potential for IS implementation [91,92], due to the fact that this sector can process diverse wastes as substitutes for raw material [89]. However, predominant enablers that met the phase 2 criteria of our analysis were not identified.

The enablers related to the geographical and intermediaries dimensions were the ones with the greatest incidence in general terms, mostly in sectors such as waste management, agribusiness and energy. Those sectors frequently process wastes with relatively low prices (waste management and agribusiness), and therefore significant transport costs could make synergies unfeasible [80]. Another premise is related to the thermal losses associated with synergies in those sectors, which impose geographical proximity. This was verified in energy synergies (e.g., steam and heat exchange), where long distances could make the synergies technically and economically unfeasible [93]. In regard to the enablers with less representation, in general terms, they were the management and technological ones. This result is due to the fact that both usually stand at the bottom of implementation priorities.
Frequently, the implementation of management strategies or technology optimization for synergies is supported by intermediaries with a strong background in IS implementation (knowledge and a historical background in the development of synergies). In fact, this was confirmed in CS 23, where aspects such as protocols for the implementation of I4.0 were promoted by intermediaries with a strong background [62].

Regarding the barriers incidence analysis, the sectors with the highest representation are agribusiness and chemical. Concerning the agribusiness sector, the technological limitations (due to the nature of the activities), social factors and reduced profit margins end up causing this great number of barriers. In the chemical sector, factors such as financial unviability due to the lack of commercially viable technologies, the lack of intermediaries, and social barriers have placed it as the sector with the highest characterization of barriers.

In regard to the individual incidence of barriers in the economic sectors, those with the greatest presence were the technological and economic barriers. The economic barriers appeared in sectors such as agribusiness because it usually has modest profit margins. The barriers with the lowest representation were the policy and management ones. It is important to note that the lack of policies and a legal framework for IS is often seen as a problem [15,68,94], even though an exception to this statement was found in CS17 [23,69], where firms use this absence to take advantage of the ambiguity and have the flexibility to build up an incentive to promote synergies. There are also other factors that have been represented transversally in some studies [3,15] and have also been identified in this analysis, reinforcing the importance of these factors.
For instance, the lack of intermediaries (knowledge agents, consultancy, and companies) was represented in almost all economic sectors of the sample. Additionally, the geographical factors re-emerge as a great barrier, especially geographical proximity. As previously mentioned, this barrier has a greater impact for low-value wastes or in cases with technical specificities that require close proximity, for instance, heat and steam recovery synergies [93]. It is important to highlight that for waste with low commercial value, the purchase of raw material often ends up being more economically viable for companies, since the distance travelled will affect the economic value of a synergy (logistics and transport) [95]. For wastes with greater market value, proximity might not be a relevant factor [24].

Generic Recommendations for IS per Economic Sector

The generic recommendations for symbiosis implementation are based on the key points and results of the factors' assessment. For their derivation, a triangulation methodology was performed, in which the results of the two phases (sectoral analysis and incidence analysis) and the literature review were triangulated in order to obtain the recommendations. This allows the main findings to be unveiled by means of data cross-checking across the different analyses considered. Hence, the generic recommendations of this study are obtained through our own methodology and results. These recommendations are provided in order to reinforce the emerging synergies process and to help overcome the barriers identified in this study. Table 6 presents the generic recommendations for symbiosis implementation per economic sector.

Table 6. Generic recommendations for symbiosis implementation per economic sector.

Waste management
• Strengthen the relationships between industries and waste management companies. For this purpose, participation in collaborative networks, clusters, and associations has proven to be an important facilitator
• Promotion of the policy framework is essential for waste management. Companies and policy actors should work together in order to simplify processes and improve negotiations (e.g., a waste market)
• The promotion of formal protocols and agreements will help to simplify the negotiation process in an initial phase, ensuring long-term commitment and overcoming conflicts of interest

Agribusiness
• Strengthening participation and partnerships with knowledge agents, such as R&D entities, will help the synergies planning process (promoting practice-oriented research)
• Promotion of sectoral financing programs could be a valuable tool to overcome barriers such as lack of financing, payback time, and initial investment
• Reinforce participation and long-term commitment through collaborative networks, clusters and associations in order to overcome social barriers
• Promotion of dissemination and training programs that bring the community, local government, and firms to work together
• Reinforce the negotiation process through formal protocols and agreements to simplify the initiatives in an initial phase

Energy
• Promote a trust environment between stakeholders, through collaborative networks, clusters, and associations, in order to overcome social barriers
• Promotion of financing funds (private or public) is a fundamental tool to overcome the economic barriers
• Geographical proximity is essential due to the technical specificities of energy synergies.
In the development of synergies, geographical proximity should be an important factor to consider in order to ensure the synergies' feasibility
• Encourage the participation of regional and government entities (e.g., development agencies and energy agencies)

Paper industry
• Promote sectoral funds, financing programs (private and public), R&D projects, and economic incentives in order to overcome economic barriers
• Geographical proximity should be an important factor to consider due to the volumes to be directed in the flows. The promotion of partnerships and agreements with logistics companies will also help consolidate this process
• Encourage the government's role as a driver for IS, through action plans, clustering actions, and awareness-raising (training and dissemination actions)

Chemical
• Promote policies that simplify the by-product classification process, which have proven to be an important mobilizer in this sector
• Promotion of strategic partnerships and formal protocols with logistics companies in order to overcome geographical limitations
• Encourage the government's role as a driver for IS, through action plans, clustering actions, and awareness-raising (training and dissemination actions)
• Promotion of sectoral financing programs could be a valuable tool to overcome barriers such as lack of financing, payback time, and initial investment
• Promote a trust environment between stakeholders through collaborative networks, clusters, and associations in order to overcome social barriers
• The distance travelled could make synergies economically unfeasible; in this regard, the development and promotion of mechanisms (methodologies and tools) that allow measuring the synergy value could be a useful tool
• Reinforcing the participation of intermediaries and promoters, such as government entities and companies, has proven fundamental. Encourage approximation between industries and logistics companies through strategic partnerships
• The promotion of collaborative and participatory networks will help to engage more stakeholders and promote interaction, overcoming social barriers

Cement
• Promotion of collaborative and participatory networks will increase awareness of the opportunities offered by the cement sector (incentivize collaboration between anchor companies and government)
• Promotion of sectoral financing programs and economic incentives for the achievement of synergies
• Promoting practice-oriented research and collaboration between knowledge agents can help to demonstrate the potential of the sector and create awareness
• The promotion of formal protocols and agreements will help to simplify the initiatives in an initial phase and ensure long-term commitment

Manufacturing
• Promotion of a framework supporting IS, with a special focus on simplifying the by-product classification process
• Promote sectoral funding or economic incentives to allow the acquisition of the infrastructure, utilities, and services required for developing synergies
• Boost partnerships and formal protocols between logistics companies and industries, in order to overcome geographical limitations
• Encourage the government's role as a driver for IS, through action plans, clustering actions, and awareness-raising (training and dissemination actions)

Metal
• The involvement of government entities is fundamental, not only as policy promoters but also as mobilizers of initiatives for the approximation between various industrial actors.
Collaborative networks have proven to be a valuable tool for this purpose
• Promotion of a framework supporting IS, with a special focus on simplifying the by-product classification process, and promotion of sectoral financing
• Promote participation and collaboration between knowledge agents (R&D units and universities) and industries to perform specialized studies that allow the standardization of sectoral synergies

Pharmaceutical
• Promotion of an internal organizational IS structure dedicated to exploring and driving synergistic opportunities
• The promotion of formal protocols and agreements will help to simplify the initiatives in an initial phase and ensure long-term commitment
• Promotion of partnerships with logistics companies in order to overcome geographical limitations

Results Discussion

Considering the first question, which refers to the enablers and barriers with the greatest presence, the main output on this point was the extensive characterization of the barriers and enablers. In general terms, this study suggests that the most important enabler dimensions are the intermediaries, geographical and policy dimensions. Concerning the intermediaries, government involvement [35,55,63] and regional/national entities promoting synergies are the most prominent in this dimension [28,29,60,70]. In policy terms, the most relevant enabler is the promotion of a framework supporting IS [3,9,15,16,41,96]. In various cases of the sample, the availability of a framework to promote industrial symbiosis was a defining factor for the realization of synergies, namely CS5, CS7, and CS22. The literature suggests that there are diverse levels of frameworks supporting IS: macro (e.g., the Waste Framework Directive [97], the Circular Economy Package [98], national plans for CE [99]); meso (e.g., the UK NISP [88], ENEA Italy [62]); and micro (e.g., the Relvão Eco Park [60]). Regardless of the level of governance, the framework should focus on strategic investment, promote regulatory instruments, promote incentives for IS, and increase awareness of the benefits and opportunities of IS. The taxation instruments also had an important role in this dimension; they can be separated into two main approaches: those that penalize environmental pollution or the excessive and inefficient use of resources, and those taxes that promote the use of alternative methods with less environmental impact. Another important dimension consists of the economic enablers, and we specifically refer to funding and access to finance support in order to tackle economic barriers, such as co-funding investment [7,81], R&D projects [100], and local and regional funding for IS [7]. Lastly, the geographical enablers, such as proximity and the availability of logistic networks [20-23], end up being the most prominent in this dimension.

On the other hand, the most important barrier dimensions are the technological, social, intermediaries, and geographical dimensions. Regarding the intermediaries and social barriers, conflicts of interest [22,67,68] and the lack of a trust environment [28,64] are the most important barriers in this dimension. In technological terms, the lack of appropriate investment and technical integration problems ended up being the most relevant in this dimension [54,55,72,75]. In social terms, the lack of interest and trust is a key barrier for IS implementation [22,76,101]. Lastly, the presence of long distances is the most important geographical barrier [9,33].
Answering the second question, about the behaviour of the barriers and enablers according to the economic sector, there is no transversal answer, since the behaviour of the enablers and barriers will depend on the nature of each sector, its stream exchanges, resources, and materials. Nevertheless, it can be concluded that primary sectors such as agribusiness, mineral extraction, and processing tend to prefer the implementation of direct exchanges of surpluses (raw material substitution) that allow them to complement their operations and reduce operational costs. Secondary sectors such as the manufacturing industry, steel, and waste management are usually characterized by high development levels and clearly stipulated processes and regulations. For this reason, those sectors tend to favour more ambitious and complex actions such as diversification of the business model, generation of new products and alteration of regulations.

In relation to the third question, which refers to the recommendations for the economic sectors, it is concluded that each economic sector has different realities, priorities and interests, and generic recommendations for each sector could be identified. It can be highlighted that we could also identify some key aspects for promoting industrial symbiosis. In this sense, the recommendations were addressed in order to reinforce these aspects (policy, economic, and social). In policy terms, the promotion of an industrial waste framework that effectively supports long-term IS implementation was recommended. This framework should incentivize the creation of synergies through strategic investment, policy promotion and the raising of awareness. In economic terms, the attribution of funds for the implementation of synergies, which will support overcoming cost barriers and uncertainties, was highlighted, since many of the companies do not have the funds required to implement synergies, especially for the purchase of infrastructure and utilities [60,68]. In the social aspect, the actions were directed at reinforcing critical aspects such as government participation as a driving agent [35,55,63], the creation of a collaborative approach [60,85] and a trust environment [16,22,68,76,101], and the reinforcement of strategic partnerships.

Conclusions and Recommendations

This paper has systematically reviewed the enablers and barriers for IS, in order to correlate their behaviour in the economic sectors based on their incidence. The methodology developed allowed us to extensively identify and synthesize the enablers and barriers in the case studies analysed. In a second phase of the study, it was possible to detail the analysis and to draw conclusions about the key enablers and barriers in the various economic sectors, and we could propose a set of generic recommendations for each economic sector. The methodology used for the assessment was based on the interpretation of the analysis provided in the papers that characterize the 26 case studies and the incidence of their enablers and barriers. As the main recommendation for future studies, we suggest the development of a more comprehensive methodology that may allow for addressing the economic sectors more directly and obtaining greater precision in the results.

Author Contributions: J.H. performed the literature review, the selection of references and the methodology, and wrote the initial versions of the paper. P.F., R.C. and J.A. contributed to developing the paper structure, improving the research approach and writing the paper.
All authors have read and agreed to the published version of the manuscript. Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
An Interdisciplinary Perspective on Elements in Astrobiology: From Stars to Planets to Life

Stellar elemental abundances directly impact planetary interior structure and mineralogy, surface composition, and life. However, the different communities that are necessary for planetary habitability exploration (astrophysics, planetary science, geology, and biology) emphasize different elements. If we are to make progress, especially in view of upcoming NASA missions, we must collectively broaden our communication regarding lists of useful elements, their impact on planetary systems, and how they can be observed in the near and long term. Geophysicists and mineral physicists often concentrate on processes that occur on Earth's surface and within its interior. Predominantly, this entails how gravity, heat flow, seismic waves, radioactivity, fluid dynamics, and magnetism have shaped the planet, influences that essentially ignore life. When considering exoplanets, it is possible to measure the stellar elemental abundances most important for planetary and rock formation, such as Fe, Mg, Si, C, O, Al, Ca and, to a lesser extent, Na, P, S, Ni, and then extrapolate the likely planetary interior structure and mineralogy using mass-radius models. However, when using stellar abundances to understand planetary interiors, the standard model is complicated by the fact that it does not matter whether an element is considered "volatile" or "refractory" from an astrophysical standpoint; it only matters what reactions these elements participate in and how they are redistributed in the planet. In addition, there is a scarcity of stellar spectroscopic measurements for key elements that are necessary for planetary formation and evolution. However, biology and the fundamental mechanisms of living systems, extending from more standard surface conditions to extremophiles, require a different suite of major (H, C, N, O, P, S), intermediate (F, Na, Mg, Cl, K, Ca, Mn, Fe), and minor (V, Cr, Co, Ni, Cu, Zn, Mo) elements. From the viewpoint of exoplanets and habitability, we must also consider that not all life, or even most life, exists on the surface of a planet.

Stellar Abundances of Planet Hosts

The iron-to-hydrogen ratio within stars, or [Fe/H], is one of the most important properties of a star: it indicates age and history, iron has hundreds of lines within a star's optical spectrum, and it is often used as a proxy for overall metallicity, i.e., the elements heavier than H or He. In Fischer & Valenti (2005), the "planet-metallicity" relationship was introduced, indicating that stars that host giant, gaseous planets are preferentially enriched in heavy elements compared with stars that do not host planets. An example of abundance exploration in stellar hosts is the Near InfraRed Disk Survey (NIRDS, Lisse et al. 2012) team, who observed the absorption lines of 16 Kepler host stars selected by Kane et al. (2016) as most likely to harbor habitable Earth-like planets. Three of the stars show unusual Fe/Si vs Mg/Si atomic abundance ratios, suggesting their Earth-like planets might have very differently sized cores and mantles than our own. Another has a very low C/Si ratio, suggesting that organic material needed for life might be rare on any terrestrial planets in its system. As a check, they performed the same study on 14 bright nearby G-star planet-hosting systems in the NIRDS database, and found a similar occurrence rate of unusual abundances and improper stellar characterizations.
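As a concrete reference for the bracket notation used throughout, a minimal sketch; the numerical abundances below are illustrative, not measured values:

```python
# A minimal sketch of the bracket notation: [Fe/H] is the logarithmic
# iron-to-hydrogen number ratio of a star relative to the Sun, in dex.
import math

def bracket_abundance(n_x_star, n_h_star, n_x_sun, n_h_sun):
    """[X/H] = log10(N_X/N_H)_star - log10(N_X/N_H)_sun."""
    return math.log10(n_x_star / n_h_star) - math.log10(n_x_sun / n_h_sun)

# Illustrative numbers only: a star with twice the solar Fe/H number ratio.
print(bracket_abundance(2.0e-5, 1.0, 1.0e-5, 1.0))  # ~0.30 dex
```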
However, while dozens of different studies have confirmed that giant planet-hosting stars are enriched specifically in [Fe/H], the same trend has not been found when looking at other individual elements, such as C, Mg, Si, or Ni. Instead of relying on statistical techniques that analyze the differences between known and unknown host stars one element at a time, Hinkel et al. (2019) employ a machine learning algorithm that allows multiple elements to be compared at the same time, as an ensemble. They are therefore able to define markers within the chemistry of the host star that may indicate the presence of a yet-undetected giant planet, as opposed to more standard planetary detection techniques, which rely on physical stellar properties. In order to take advantage of the variety of element abundances measured within nearby stars, they use the Hypatia Catalog of stellar abundances (Hinkel et al. 2014, www.hypatiacatalog.com; shown in Fig. 1).

Fig. 1. The number of stars for which particular elements have been measured, as per the Hypatia Catalog, i.e., the largest catalog of elemental abundances for stars near to the Sun (www.hypatiacatalog.com, Hinkel et al. 2014). Of note is the lack of measurements for biologically important, "interdisciplinary elements" such as N, F, P, Cl, and K.

Hinkel et al. (2019) utilized their machine learning algorithm to predict the likelihood that more than 4200 FGK-type stars host a giant exoplanet, implementing five different ensembles of elements composed of volatiles, lithophiles, siderophiles, and Fe. Across the ensembles they found that C, O, and Fe, as well as Na to a lesser extent, are the most important features for predicting giant exoplanet host stars. When they allowed the algorithm to predict on a hidden "golden set" of known host stars, the known hosts received an average prediction probability of ~75% of hosting a giant exoplanet, and more than half had a prediction probability of ~90%. In addition, they investigated archival HARPS radial velocities for the top 30 predicted planet-hosting stars and found significant trends indicating that HIP 62345, HIP 71803, and HIP 10278 host long-period giant planet companions with estimated minimum masses (Mp sin i) of 3.7, 6.8, and 8.5 MJ, respectively. They conclude that those stars with a high prediction probability are therefore likely to host a giant planet.
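A minimal sketch of the ensemble approach on mock data; the feature set, random labels, and hyperparameters are illustrative stand-ins, not the published Hinkel et al. (2019) pipeline:

```python
# A minimal sketch of an ensemble classifier over multi-element abundances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
elements = ["C", "O", "Na", "Mg", "Si", "Fe"]     # one ensemble of features

# Mock [X/H] abundances for 500 stars with known giant-planet-host labels.
X = rng.normal(0.0, 0.2, size=(500, len(elements)))
y = rng.integers(0, 2, size=500)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())    # baseline accuracy

clf.fit(X, y)
# predict_proba yields the "prediction probability" of hosting a giant planet
print(clf.predict_proba(X[:5])[:, 1])
# which elements the forest leaned on, e.g. C, O, Fe in the published study
print(dict(zip(elements, clf.feature_importances_)))
```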
Fig. 2. Interior phase diagrams of TRAPPIST-1 planets (Unterborn et al. 2018), as calculated by the ExoPlex mass-radius-composition calculator for the best-fit interiors using stellar abundances from the Hypatia Catalog (Hinkel et al. 2014).

Stellar Hosts to Planetary Interiors

Stars and planets are formed at the same time from the same bulk materials within the molecular cloud. Since it is not currently possible to directly measure the composition or surface materials of a planet, we must use stellar elemental abundances as a proxy for the interior makeup of the planet. For example, Thiabaud et al. (2015) analyzed whether the C, O, Mg, Si, and Fe ratios were the same within the planet as in the star, using a planet formation and composition model, and determined that these important rock-forming elements were the same in both the star and the planet. To this end, a variety of models have utilized stellar abundances to constrain the interiors of terrestrial exoplanets, e.g., ExoPlex, which determines the mineralogy and density of planetary interiors (Unterborn et al. 2016, 2018a/b, Hinkel & Unterborn 2018), and a Bayesian-based probabilistic inverse analysis (Dorn et al. 2015, 2017). For example, stellar abundances from the Hypatia Catalog (Hinkel et al. 2014) were used to determine the interior structure of the TRAPPIST-1 planets (Gillon et al. 2017) via ExoPlex (Unterborn et al. 2018). Figure 2 shows the phase diagram of TRAPPIST-1c, with 8 wt% water, versus -1f, with 50 wt% water. The Earth, by comparison, is 0.02 wt% water, meaning that these planets are not only water worlds, but likely do not have the important geochemical or elemental cycles that are absolutely necessary for life. The overall, holistic habitability of an exoplanet is dependent on its surface conditions, internal structure, mineralogy, and atmosphere (Foley & Driscoll 2016), where we must use host star measurements to understand the majority of these properties.

Interdisciplinary Elements

The discovery and subsequent drive to characterize nearby exoplanets has laid the foundation for collaboration between stellar astrophysics, planetary science, geology, and biology. As a result of these new interdisciplinary relationships, it has become clear that there is a dearth of data for many of these planetary and biologically important elements. The lack of measurements for these interdisciplinary elements, such as N, F, P, Cl, and K, is shown in Fig. 1, which is a histogram produced from the Hypatia Catalog, the largest database of stellar abundances for stars near to the Sun (Hinkel et al. 2014). The Hypatia Catalog is a comprehensive collection of literature data and encompasses the largest number of element abundances of any dataset or survey.

Table 1. Elements most influential to stellar spectroscopy (Hinkel et al. 2014) and some of those important for planet formation (red) and biology (green). Note: elements labeled "Uncommon in Stars" compare those important to rocks/planets and life versus the number of stars for which those elements have been measured within stars.

When going from stars to exoplanets, geophysics and geobiology are the next step after planet formation. We compare those elements most influential to stellar spectroscopy, mineral physics, and astrobiology (shown in Table 1), the physical and chemical processes that govern their important interactions, as well as the flow of information between disciplines. The details of these interdisciplinary fields need to be made accessible, such that observations and trends about elements that have been heretofore difficult to measure or underappreciated can be preferentially targeted ("Uncommon in Stars") in future observations and missions. Our ultimate goal is to understand and define holistic planetary habitability from star to rock to microbe.

Looking ahead, we will also need to understand how a protoplanetary disk fractionates, especially as it pertains to the distance from the star, since the condensation temperatures of the various elements will ultimately change their properties, and where they end up, within the disk. Multiplanetary systems also have a big influence on the elements available to comprise a planetary core and its outer layers. In addition, planetary heating sources need to be considered, especially radiogenic heating, which is produced by the decay of radioactive elements in the interior. The amount of heat produced by the radioactive decay of the radionuclides 235U, 238U, 232Th, and 40K is entirely dependent on the absolute abundances of these isotopes inherited from the host star upon formation. The radiogenic heat sources account for 30%-50% of the Earth's current heat budget and were present in larger fractions 4.5 billion years ago at the Earth's birth (Schubert et al., 1980; Huang et al., 2013).
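A minimal sketch of this radiogenic heating budget; the per-isotope heat-production rates and half-lives are standard literature values (e.g., Turcotte & Schubert), while the rock concentrations below are illustrative assumptions, not measured values:

```python
# A minimal sketch of radiogenic heat production from 238U, 235U, 232Th, 40K.
# isotope: (heat production, W per kg of isotope; half-life, Gyr)
ISOTOPES = {
    "238U":  (9.46e-5, 4.47),
    "235U":  (5.69e-4, 0.704),
    "232Th": (2.64e-5, 14.0),
    "40K":   (2.92e-5, 1.25),
}

def radiogenic_heat(conc_kg_per_kg, time_before_present_gyr=0.0):
    """Heat production in W per kg of rock at a given time in the past."""
    h = 0.0
    for iso, c in conc_kg_per_kg.items():
        rate, t_half = ISOTOPES[iso]
        # abundances were larger in the past: c(t) = c0 * 2**(t / t_half)
        h += c * rate * 2.0 ** (time_before_present_gyr / t_half)
    return h

# Illustrative bulk-silicate-Earth-like concentrations (kg isotope / kg rock).
conc = {"238U": 2.0e-8, "235U": 1.5e-10, "232Th": 8.0e-8, "40K": 3.0e-8}
print(radiogenic_heat(conc))        # present-day heating, ~5e-12 W/kg
print(radiogenic_heat(conc, 4.5))   # at Earth's birth: several times larger
```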
Of the three radiogenic heat-producing elements, Th and U are both refractory, such that their stellar abundance ratios relative to other refractory elements (e.g., Si) are likely mirrored in the resulting rocky planets (Unterborn et al., 2015).

Future Observations

With the discovery of ~4000 exoplanets with a variety of masses, radii, compositions, and geometric configurations with respect to their host star (and other planets), these sub-disciplines have found themselves asking important questions and looking to each other for answers:
• What are the elements most important to rocky planetary formation and to biology, and how much of them does an average star have?
• What is the cut-off between a rocky (Earth-like) and a gas giant (Jupiter-like) planet?
• What does the interior of a planet 2.5 times the radius of the Earth look like?
• What is the compositional difference between planets that form far away from the host star compared with those that form nearby? Or, is Earth H- and N-poor because of Jupiter?
• Is elemental distribution enough to model whether a planet is potentially habitable?
• What is the minimum amount of bio-essential elements (e.g., P) that could sustain life on an exoplanet?
• What is the most compositionally extreme planet (compared to the Solar System planets) that can be formed?
• Does life create predictable changes in elemental cycling, such that we can potentially detect these elements, and how do geophysical processes influence these changes?
We do not know the answers to these questions. But they are all important to the definition and characterization of habitability, and will be at the forefront of exploration with the launch of large upcoming missions such as NASA's JWST and WFIRST.
Theoretical research of the layout of shear studs on steel-concrete composite beams

This research is focused on the distribution of shear studs on steel-concrete composite beams. In practice, two different centre to centre spacings of coupling elements are commonly used, depending on the shear force. The main aim of this work is to find the best position on the beam to change the axial distance of the coupling elements, so that the smallest possible number of them can be used. Making the change at this point on the beam ensures the most economical solution for the number of shear studs used as well as their appropriate utilization. In this work, equations for the design of the number and the placement of the coupling elements on the beam were derived, which can speed up design in practice. In addition, the influence of the individual input parameters of the calculation on the distribution of shear studs was also investigated.

Introduction

Steel-concrete constructions use the favourable properties of the two materials: the high tensile strength of steel and, conversely, the high compressive strength of concrete. For the active interaction of the steel profile and the concrete slab, it is necessary to ensure their effective connection. Coupling elements, which can have different shapes, are used for this purpose. Eurocode 4 [1] for the design of composite structures only contains an assessment procedure for welded shear studs with a head, which are the most often used type of shear coupling. Their advantage over other types of coupling elements is their easy and quick installation and their identical properties in all directions due to their symmetrical shape.

Research focused on the optimization of coupling elements and the development of new types is in progress in many countries. Linear shear couplings are often used in bridge structures. A linear type is, e.g., a puzzle-like cut in the web of the steel profile, which ensures the connection with the concrete slab. Research [2] dealing with the shape of the individual dowels, specifically the radius of their rounding, was conducted at the Czenstochowa University of Technology in Poland. Experiments focused on the fatigue of rib shear connectors were carried out by a joint team from Germany and Belgium at the university in Aachen [3]. The researchers examined rib connectors with clothoid-shaped dowels and exposed them to cyclic loading. Data from the experiment should be used to predict the service life of the structures. Another research effort is to achieve the demountability of composite structures and the possibility of reusing their parts after the end of their service life. This is related to the development of removable coupling elements, e.g., blind bolts. Experimental measurements of the load-bearing capacity of various blind bolts and their comparison with shear studs were carried out at the University of Isfahan in Iran [4]. This research tested bolts of different diameters and strength classes. The results show that the initial stiffness of a beam with demountable coupling elements is higher than that of a beam with welded shear studs. On the other hand, a beam with blind bolts exhibits earlier slip and nonlinear behaviour. A similar experiment was performed at Guangzhou University in China [5]. Here, the researchers investigated the influence of the individual input parameters on the load-bearing capacity of blind bolts. Specifically, the parameters were the size of the preload, the diameter of the bolt, the strength class of the bolts, and the size
of the pre-drilled hole. The obtained data served to create a model for a more detailed parametric study. Joint research by a team from Iran and Australia [6] focused on the cyclic behaviour of bolted shear connectors. Retrofitted coupling elements were examined in a joint work of Australian universities [7]. This work compared classic shear studs and blind bolts. Retrofitted shear couplings can be used in the reconstruction of structures. Most research efforts deal with the optimization of the shape of the coupling element and do not pay attention to its distribution on the beam. The layout of shear couplings was examined at Kunming University of Science and Technology in China [8]. The researchers compared the magnitudes of force, slip and deformation on a two-span continuous beam with two different centre to centre spacings of shear studs. The results were compared with a beam with an equal distribution of the coupling elements.

The distance between the shear connectors is directly dependent on the shear force on the beam. The denser placement is therefore situated near the supports, where the greatest shear force occurs. This force decreases towards the centre of the span, and thus the centre to centre spacings of the coupling elements gradually increase. If all the distances were different from each other, the assembly would be very time-consuming. Thus, only two different spacings of shear studs are used in practice. The aim of this research is to follow up on the previous works [9] and [10] and to find out which parameters affect the distribution of coupling elements. At the same time, we also try to find the most suitable position on the beam to change the centre to centre spacing of the shear studs, so that the shear couplings are used effectively and the smallest possible number of them is needed.
Methods

There are two types of coupling calculation: elastic and plastic. The plastic calculation is possible for the 1st and 2nd cross-section class and allows the redistribution of the shear force to all coupling elements to be taken into account. This allows the shear studs to be placed equally along the length of the beam. Specifically, it means that the calculated number of shear studs is equally distributed between the support and the point on the beam with the maximum bending moment (e.g., for a simply supported beam loaded with a uniformly distributed load, this is exactly half the span). As was already said in the previous work [10], the actual value of the load is not included in the plastic calculation. Instead, the maximum load-bearing capacity is considered, which leads to frequent oversizing of the beam. This can be prevented by using partial coupling, where the number of shear couplings can be reduced. By modifying the relationship for the calculation of the partial coupling by substituting the actual value of the bending moment on the beam, the calculation gets closer to reality. On the other hand, the actual load enters the elastic calculation in the form of a shear force. Although the elastic calculation can be used for the 1st and 2nd cross-section class, it produces very conservative values. In fact, the coupling elements near the supports are utilized the most, because this is where the greatest shear force occurs. Their utilization towards the centre of the span gradually decreases along with the shear force. For this reason, the coupling elements are placed more densely near the supports. The spacing of the shear studs is proportional to the shear force on the beam. However, since the shear force does not figure in the plastic calculation, the elastic calculation must be used for the placement of the coupling elements.

When calculating in the elastic region, it is assumed that the shear force acting on one shear stud corresponds to the increase of the normal force in the concrete, which can be expressed as

P = VEd · Sc · sl / (n · Ii)   (1)

where VEd is the acting shear force, Sc is the first moment of the concrete slab, sl is the centre to centre spacing of the shear studs, n = Ea / Ec,eff is the working factor and Ii is the second moment of the ideal cross-section.

According to the reliability condition, the ratio of the shear force and the number of coupling elements in the transverse direction must be smaller than the design load-bearing capacity of one shear stud PRd. From this condition and from equation (1), a relationship can be derived for the calculation of the maximum possible centre to centre spacing of the shear couplings for a given force. The spacing corresponding to the maximum shear force on the beam will be marked as sl,min. As the shear force decreases, the centre to centre spacing of the coupling elements increases further. Therefore, the axial distance of the shear studs corresponding to the maximum shear force on the beam is calculated as

sl,min = PRd · nr · n · Ii / (Vmax · Sc)   (2)

where PRd is the design load-bearing capacity of the shear stud, nr is the number of shear studs in the transverse direction, n is the working factor, Ii is the second moment of the ideal cross-section, Vmax is the maximum shear force on the beam and Sc is the first moment of the concrete slab.

If the maximum shear force Vmax corresponds to the centre to centre spacing of the coupling elements sl,min, then the force V(x) at any point x on the beam corresponds to the axial distance sl,(x).
The above equations apply in general to all beams. For a simply supported beam with a uniformly distributed load, the maximum shear force is calculated as

Vmax = q · L / 2   (3)

where q is the value of the uniformly distributed load on the beam and L is the span of the simply supported beam. Similarly, the force V(x) at any point of the beam can be expressed as

V(x) = q · (L / 2 − x)   (4)

where q is the value of the uniformly distributed load on the beam, L is the span of the simply supported beam and x is the distance from the support to the point on the beam where the centre to centre spacing of the coupling elements changes.

The number of shear studs on a section of a beam can be found as the length of the given section divided by the size of the centre to centre spacing used on that section. If two different sizes of the axial distance of the coupling elements are used on the beam, one corresponding to the maximum shear force and the other for the mid-span section corresponding to the force V(x) at the point x where the axial distance of the studs has changed, then the total number of shear couplings is calculated as

n = 2 · [ x / sl,min + (L / 2 − x) / sl,(x) ]   (5)

where L is the span of the simply supported beam, x is the distance from the support to the point on the beam where the centre to centre spacing of the coupling elements changes, sl,min is the smallest axial distance of the coupling elements on the beam corresponding to the force Vmax and sl,(x) is the maximum possible axial distance of the coupling elements corresponding to the force V(x). By substituting equations (3) and (4), together with relationship (2) applied to V(x), into relationship (5), and after further adjustments, an equation is created for the number of shear studs depending on the distance from the support to the place where the centre to centre spacing of the shear couplings changes. It can be seen from the equation that it is a quadratic function in the form

n(x) = (4 · x² / L − 2 · x + L) / sl,min   (6)

where L is the span of a simply supported beam, x is the distance from the support to the point on the beam where the centre to centre spacing of the coupling elements changes and sl,min is the smallest axial distance of the coupling elements on the beam corresponding to the force Vmax.

The graph of a quadratic function is a parabola. Equation (6) can be easily converted into vertex form to obtain the coordinates of the vertex

[x0; n0] = [L / 4; 3 · L / (4 · sl,min)]   (7)

where the first coordinate indicates the position on the beam where it is most appropriate to change the axial distance of the shear studs, so that the smallest possible number of them can be used. The second coordinate indicates the corresponding minimum number of shear studs when the centre to centre spacing of the coupling elements is changed at the desired point.
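The conversion of equation (6) into the vertex form (7) follows by completing the square; writing out the intermediate step:

```latex
n(x) = \frac{1}{s_{l,\min}}\left(\frac{4x^{2}}{L} - 2x + L\right)
     = \frac{4}{L\, s_{l,\min}}\left(x - \frac{L}{4}\right)^{2}
       + \frac{3L}{4\, s_{l,\min}}
```

so the parabola's vertex lies at x0 = L / 4 with n0 = 3 · L / (4 · sl,min), which is exactly relation (7).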
It follows from the first coordinate of relation (7) that for a simply supported beam of any length loaded with a uniformly distributed load, it is always most advantageous to change the axial distance of the studs at the quarter of the span. This would of course be true under ideal conditions, where the maximum centre to centre spacing of the coupling elements can theoretically go to infinity when considering the shear force near the centre of the span. Normally, the axial distance of the studs is limited by design principles. The maximum centre to centre spacing should not exceed the smaller of the values 6·hc and 800 mm, where hc is the height of the concrete slab. By combining equations (2) and (4), where the axial distance is replaced by the maximum permissible value, we create the relationship

x = L / 2 − PRd · nr · n · Ii / (q · sl,max · Sc)   (8)

where L is the span of the simply supported beam, PRd is the design load-bearing capacity of the shear stud, nr is the number of shear studs in the transverse direction, n is the working factor, Ii is the second moment of the ideal cross-section, Sc is the first moment of the concrete slab, sl,max is the maximum allowable centre to centre spacing of the shear studs according to the design principles and q is the value of the uniformly distributed load on the beam. Beyond this point, the spacing is held at sl,max, so from equation (5) the number of studs becomes a linear function of x, whose graph is a straight line. Limiting the centre to centre spacing of the studs means that from a certain point on the beam the axial distance cannot be increased anymore. Graphically, it looks like a straight line that intersects the parabola at a certain point, as can be seen in Fig. 1. To find the minimum number of coupling elements on the beam, it is essential whether the straight line crosses the parabola before or after the vertex. If the intersection of the parabola and the straight line is beyond the vertex, the limit of the design principles is irrelevant and the best position to change the axial distance of the studs is at the quarter of the beam. On the other hand, if the intersection is before the vertex, the limit of the design principles has to be considered. Then the ideal position for the change of the axial distance of the studs is precisely at the intersection of these functions, which is the value calculated by equation (8). This means that the best place on the beam to change the centre to centre spacing of the coupling elements is the smaller of the values L / 4 and the result of equation (8); a small numerical sketch of this procedure is given below.
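A minimal numerical sketch of equations (2)-(8) for the simply supported case; the cross-section constants below are illustrative placeholders, not the computed properties of the IPE 300 beam used later in the parametric study:

```python
# A minimal sketch of the stud layout for a simply supported beam with UDL q.
import math

def stud_layout_simple(L, q, P_Rd, n_r, n, I_i, S_c, s_l_max):
    """Return (x_change, n_studs): the spacing-change point and stud count."""
    V_max = q * L / 2.0                                    # eq. (3)
    s_l_min = P_Rd * n_r * n * I_i / (V_max * S_c)         # eq. (2)
    # position where the spacing computed from the shear force reaches s_l,max
    x_cap = L / 2.0 - P_Rd * n_r * n * I_i / (q * s_l_max * S_c)  # eq. (8)
    x = min(L / 4.0, x_cap)                                # vertex vs. cap
    V_x = q * (L / 2.0 - x)                                # eq. (4)
    s_l_x = min(P_Rd * n_r * n * I_i / (V_x * S_c), s_l_max)
    n_studs = 2.0 * (x / s_l_min + (L / 2.0 - x) / s_l_x)  # eq. (5)
    return x, math.ceil(n_studs)

# Illustrative values only (SI units: m, N/m, N, m^4, m^3).
print(stud_layout_simple(L=10.0, q=15e3, P_Rd=29e3, n_r=1,
                         n=12.1, I_i=2.3e-4, S_c=7.2e-3, s_l_max=0.48))
```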
The calculation is a bit more complicated for continuous beams. Again, equations (1) and (2) apply. For simplicity, only beams with spans of equal length are considered; specifically, beams with 2 and 3 spans. The calculation here is complicated by the different magnitudes of the shear force at the outer and inner supports, as well as by more options for arranging the uniformly distributed load on the spans. For now, a uniformly distributed load along the entire length of the continuous beams is considered.

For a beam with two equally sized spans, the maximum shear force is at the internal support, with the value Vmax = (5/8) · q · L. The shear force at any point x on the beam is therefore calculated as Vmax − q · x, where x is the distance from the inner support. There is a different value of the shear force above the outer and the inner supports. Therefore, it is not possible to consider changing the centre to centre spacing of the studs at the same distance from the outer and inner supports. A place must be found on the beam with the same value of the shear force. So, the distance x is measured from the inner support, and the distance from the outer supports to the change of the centre to centre spacing of the coupling elements (the point on the beam with a shear force of value V(x)) is x − (1/4) · L, as shown in Fig. 2. From Fig. 2, the equation for the number of shear studs on a beam with two equal spans loaded with a uniformly distributed load can be derived. Its form is similar to equation (5) for a simply supported beam, only the lengths of the individual sections are different. The equation is therefore as follows:

n = 2 · [ (2 · x − L / 4) / sl,min + ((5/4) · L − 2 · x) / sl,(x) ]   (9)

where L is the length of the span of the two-span beam, x is the distance from the inner support to the point on the beam where the centre to centre spacing of the coupling elements changes, sl,min is the smallest axial distance of the coupling elements on the beam corresponding to the force Vmax and sl,(x) is the maximum possible axial distance of the coupling elements corresponding to the force V(x).

Fig. 2. A continuous beam with two spans of the same length loaded with a uniformly distributed load, with marked distances from the supports to the points on the beam with a shear force of value V(x).

By substituting the expressions for Vmax and V(x), equation (9) turns into

n(x) = 2 · (16 · x² / (5 · L) − 2 · x + L) / sl,min   (10)

where L is the length of the span of the two-span beam, x is the distance from the inner support to the point on the beam where the centre to centre spacing of the coupling elements changes, and sl,min is the smallest axial distance of the coupling elements on the beam corresponding to the force Vmax.

It is again a quadratic function, only with different parameters. Equation (10) can be converted into vertex form, where the first coordinate indicates the ideal position on the beam for the change of the axial distance of the studs under ideal conditions. For a beam with two spans, it is (5/16) · L from the inner support (i.e., (1/16) · L from the outer support). By introducing the limit of the design principles into equation (2) with the shear force V(x) applied, the equation can be obtained for determining the ideal position for the centre to centre spacing change in the form

x = (5/8) · L − PRd · nr · n · Ii / (q · sl,max · Sc)   (11)

where L is the length of the span of the two-span beam, PRd is the design load-bearing capacity of the shear stud, nr is the number of shear studs in the transverse direction, n is the working factor, Ii is the second moment of the ideal cross-section, Sc is the first moment of the concrete slab, sl,max is the maximum allowable centre to centre spacing of the shear studs according to the design principles and q is the value of the uniformly distributed load on the beam.

As with the simply supported beam, capping the spacing makes the stud count a linear function of x, graphically a straight line. And again, it is decisive whether it intersects the parabola before or after the vertex. The ideal position for the change of the axial distance of the coupling elements can then be determined as the smaller of the values (5/16) · L and the result of relation (11), where x is the distance measured from the inner support.
The same method can be used for a continuous beam with three equal spans loaded with a uniformly distributed load. In this case, the maximum shear force is at the inner supports, too. Its value is Vmax = (3/5) · q · L. Then the shear force V(x) at any point x of the beam is equal to Vmax − q · x, where x is the distance from the inner support measured towards the edge of the beam. Fig. 3 shows the lengths of the individual sections of the beam to the places with the same value of the shear force V(x). From Fig. 3, the equation for the number of shear studs can be derived; after substituting the expressions for Vmax and V(x), it takes the form

n(x) = (10 · x² / L − 6 · x + 3 · L) / sl,min   (12)

where L is the length of the span of the three-span beam, x is the distance from the inner support measured towards the edge of the beam to the point on the beam where the centre to centre spacing of the coupling elements changes, and sl,min is the smallest axial distance of the coupling elements on the beam corresponding to the force Vmax.

Fig. 3. A continuous beam with three spans of the same length loaded with a uniformly distributed load, with marked distances from the supports to the points on the beam with a shear force of value V(x).

As in the previous case, relation (12) is a quadratic function, where the first coordinate of the parabola's vertex gives the ideal point for changing the centre to centre spacing of the shear studs without the design principles taking effect. Specifically, it is a value of (3/10) · L from the inner support towards the edge of the beam. The distance from the outer support and in the middle span can be read from Fig. 3. Again, by introducing the limit of the design principles into equation (2) with the shear force V(x) for the three-span beam applied, the equation can be obtained for determining the ideal position for the centre to centre spacing change in the form

x = (3/5) · L − PRd · nr · n · Ii / (q · sl,max · Sc)   (13)

where L is the length of the span of the three-span beam, PRd is the design load-bearing capacity of the shear stud, nr is the number of shear studs in the transverse direction, n is the working factor, Ii is the second moment of the ideal cross-section, Sc is the first moment of the concrete slab, sl,max is the maximum allowable centre to centre spacing of the shear studs according to the design principles and q is the value of the uniformly distributed load on the beam. So, the ideal place on the beam to change the axial distance of the coupling elements is found as the smaller of the values (3/10) · L and the result of relation (13), where x is the distance from the inner support towards the edge of the beam.

As mentioned above, this does not always apply for continuous beams, as it depends on the distribution of the uniformly distributed load on the individual spans of the beam. For a two-span beam, if only one span is loaded, the position obtained from the vertex of the parabola can shift by (1/16) · L, i.e., 6.25% of the span length. For a three-span beam, considering different placement options for the uniformly distributed loads, the position of the ideal point on the beam can shift by a maximum of (1/20) · L, i.e., 5% of the span length. The distances of the places on the beam with the same value of V(x) marked for different configurations of loads can be seen in Fig. 4.
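The same comparison extends across the three beam types; a minimal sketch using the vertices of equations (7), (10) and (12) and the cap positions of equations (8), (11) and (13), again with illustrative section constants:

```python
# A minimal sketch of the optimal spacing-change point per beam type.
def optimal_change_point(beam, L, q, P_Rd, n_r, n, I_i, S_c, s_l_max):
    """x measured from the support carrying the maximum shear force."""
    K = P_Rd * n_r * n * I_i / S_c          # force-times-spacing constant
    vertex = {"simple": L / 4.0,            # eq. (7)
              "two-span": 5.0 * L / 16.0,   # vertex of eq. (10)
              "three-span": 3.0 * L / 10.0  # vertex of eq. (12)
              }[beam]
    v_max = {"simple": q * L / 2.0,
             "two-span": 5.0 * q * L / 8.0,
             "three-span": 3.0 * q * L / 5.0}[beam]
    x_cap = v_max / q - K / (q * s_l_max)   # eqs. (8), (11), (13)
    return min(vertex, x_cap)

# Illustrative values only (SI units); same placeholders as the sketch above.
for beam in ("simple", "two-span", "three-span"):
    print(beam, optimal_change_point(beam, L=10.0, q=15e3, P_Rd=29e3,
                                     n_r=1, n=12.1, I_i=2.3e-4,
                                     S_c=7.2e-3, s_l_max=0.48))
```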
Results The derived equations were inserted into a spreadsheet and compared with the result of a parametric study, where the number of the coupling elements on the beam was calculated.A parametric study was carried out for a simply supported beam and for continuous beams with two and three spans of the same length.The distance x was varied, which indicates the position on the beam, where the centre to centre spacing of the shear studs was changed.Specifically, the distance x was increased in units of percentages of span length. As basic samples for the parametric study, beams from IPE 300 profile made of steel S235 and concrete slabs of height 80 mm and effective width 2200 mm, made of concrete C25/30 were considered.The coupling was ensured by shear studs of length of 50 mm, shank diameter of 16 mm made from strength class 4.8.The beams were assumed with a length of spans of 10 m, loaded with a uniformly distributed load of 15 kN/m. For a simply supported beam, the individual input parameters were further changed and their effect on the position of point x was observed.The length of the span and the value of the load have a direct influence on the ideal position for changing the axial distance of the coupling elements.However, other parameters can also affect the position of point x.These are the variables that will affect the load-bearing capacity of the shear stud or the value of the maximum centre to centre spacing of the coupling elements, e.g., the strength of the concrete, the height of the concrete slab, the size of the steel profile or the strength class and the dimensions of shear studs. Finally, the minimum amount of the coupling elements obtained by the elastic calculation using the point x on the beam to change of centre to centre spacing of shear studs and the number of coupling elements from the plastic calculation considering the partial coupling were compared. Discussion The results of the parametric study prove the correctness of the derived relations for a simply supported beam as well as for the continuous beams with two and three spans.In the parametric study, the number of the coupling elements on the beam was calculated with different position of point x, where the axial distance of the coupling elements was changed.The position of point x was changed by units of percent of length of the beam span.The minimum amount of the shear studs was achieved by changing the centre to centre spacing of the coupling elements at the point of the beam, which is equal to the value from the above relationships, specifically, the value from relation (8) for the simply supported beam, from relation (11) for the two-spans beam and the relation (13) for the three-span beam.The correctness of derived equation can also be seen from the graphs in Fig. 5.The dependence of the amount of the shear studs on the position of point x for a simply supported beam is shown there.It is clear from the shape of the curves, that there are parabolas, but only up to the certain point, where the maximum axial distance of the shear studs according to the design principles is reached.From this point the parabola becomes a straight line.The shapes of the function are also similar in the case of the twospan and three-span continuous beam. In Fig. 
5 we can also see that the percentage increase in the length of the beam and the value of the uniformly distributed load causes a different increase in the number of the coupling elements.When the span of the beam changes the amount of the shear studs increases faster than when the load changes.However, regarding the ideal position for changing the centre to centre spacing of the coupling elements, for the same percentage increase in both quantities, the position on the beam is the same.(Locations with the minimum number of the coupling elements are marked in the graphs.)This dependence is better visible in the graph in Fig. 6, where the position of point x is shown at the percentage increase in the length of the beam and the value of load.The same dependence applies to both parameters.At the same time, the graph shows that increasing the distance x from the supports stabilizes at a value of 25 % of the span and does not increase further.This is due to the fact that here is an extremum of the function for a simply supported beam and there is no more advantageous position for changing the axial distance of the coupling elements further up the beam. The values of the input parameters were changing in the parametric study and their influence on the position of point x on the beam was investigated.The height of the concrete slab, the strength of the concrete, the strength of the steel, the size of the steel profile and the dimensions and strength of the shear studs were considered.These parameters have only a limited effect on the position of point x because they interact with each other.E.g., the load bearing capacity of shear studs PRd (which occurs in the calculation of the position of x) is calculated as the smaller of two values and depends on which one enters the calculation.In Fig. 7 there are graphs for the position of point x on the beam depending on the selected parameters.Moving the position for the ideal change of the axial distance of the coupling elements closer to the support will cause an increase in the strength of the concrete (Fig. 7a), the height or diameter of the shank of the shear studs (Fig. 7b) or an increase in the steel profile (Fig. 7c).A smaller value of x is also achieved by reducing the span of the beam, the value of the uniformly distributed load, the height of the concrete slab (Fig. 7d) and the effective width of the slab. At the end, a comparison of the minimum amount of the coupling elements in the elastic and plastic calculations was made.For the elastic calculations, the number of the shear studs was calculated using point x on the beam for the ideal the centre to centre spacing change.For plastic calculations, the partial coupling was considered.The number of the shear studs was calculated for the values of the bending moment on the beam corresponding to the percentage quotient of the plastic load-bearing capacity Mpl,Rd from 50 % utilization to the full capacity.This means that in some cases the elastic calculation would no longer comply here.A parametric study was carried out for different beams, where the height of the concrete slab, the strength class of concrete and steel and the size of the IPE profile was varied.For each configuration, the size of the bending moment (equal to the percentage value of the plastic load-bearing capacity) was determined, when the same number of shear studs is used in both calculations, as can be seen in the graph in Fig. 
8.These intersections of plastic and elastic calculations were found to be dependent on the degree of shear coupling.This dependence is shown in the graph in Fig. 9.In the graph we can see that with a degree of shear coupling from 0.79 to 0.99, the match of the number of shear studs from both calculations is achieved when the value of the bending moment is approx.92-99 % of Mpl,Rd.The beam no longer complies in the elastic calculations for this value of the bending moment.On the other hand, the elastic calculations to cross-section classes 1 and 2 are quite conservative.This dependence proves that even in plastic calculations, the coupling elements can be placed using the elastic area, because in both cases, similar numbers of the shear studs are used.This means that there is a more dense placement of the shear studs at the supports.By redistributing the shear studs, a better utilization is achieved than with an even distribution. Conclusion The results of the research proved the correctness of the derived relations for the ideal position on the beam of the changing the centre to centre spacing of the coupling elements.This was done for the simply supported beam as well as for the two-span and three-span continuous beam with spans of the same length.It is always the smaller of the two values, where the first is given by the extremum of the function, i.e., the vertex of the parabola, and the second follows from the result of the equation, according to the type of beam.The span of the beam and the value of uniformly distributed load have a direct effect on the determination of the position x.The location of load on the individual spans of a continuous beam can also influence the ideal position for the change of the axial distance of the shear studs.In this case, the point x on the beam moves by the maximum of 6.25 % of the span for the two-span beam and by the maximum of 5 % of the length of the span for the threespan beam.The ideal position for changing the centre to centre spacing of the coupling elements can also be affected to a limited extent by other parameters such as the strengths of the materials or the dimensions of the individual parts of the composite beam or the shear stud. Furthermore, the research shows that even in the case of cross-section classes 1 and 2, where the beam is commonly calculated by plastic calculations (the elastic calculation would not be comply), the coupling elements can be redistributed using the elastic calculation.This is possible due to the approximate matching of the number of the shear studs in the elastic and plastic calculation using partial coupling.The shear studs can be located more densely near the supports and with a greater axial distance in the middle of the span.In this way, we achieve a better use of the individual coupling elements than in the case of their equal distribution on the beam, when the force redistribution is assumed. This research was only concerned with beams loading with uniformly distributed load.The same principle can be probably also used for other types of beams with different type of load. The article was prepared as a part of the Specific University Research project at the Faculty of Civil Engineering of the Brno University of Technology No. FAST-S-22-8006 and No. FAST-S-23-8317. Fig. 1 . Fig. 
1. Graphs showing the dependence of the number of shear studs on the change of their centre to centre spacing at the distance x from the edge of the simply supported beam, without and with the limitation of the design principles; left for x < L/4 and right for x > L/4. Fig. 4. Options for distributing the uniformly distributed load on the individual spans of a continuous beam with two and three spans of the same length. Fig. 5. Graphs of the dependence of the number of coupling elements on the distance of point x from the edge of the simply supported beam for the percentage increase in the span length (left) and in the uniformly distributed load (right); minimum values are marked in the graphs. Fig. 6. Graph comparing the position of point x depending on the percentage increase in the span length and in the value of the uniformly distributed load. Fig. 7. Graphs of the position of point x on the beam depending on the selected parameters. Fig. 8. Graph comparing the number of shear studs depending on the utilization of the cross-section for the plastic and elastic calculations. Fig. 9. Graph showing the dependence of the degree of shear coupling on the utilization of the cross-section of the composite beam (the ratio of the bending moment to the plastic load-bearing capacity of the cross-section).
2023-11-01T15:09:38.759Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "7bdebe8dc57e7aaa9d6a97461ead53e4972619a1", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2023/12/matecconf_ys2023_01026.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b72c4a9f4679bffc51e3f71db8281c0476bba7c1", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [] }
17195758
pes2o/s2orc
v3-fos-license
Altered GABAA receptor density and unaltered blood–brain barrier [11C]flumazenil transport in drug-resistant epilepsy patients with mesial temporal sclerosis Studies in rodents suggest that flumazenil is a P-glycoprotein substrate at the blood–brain barrier. This study aimed to assess whether [11C]flumazenil is a P-glycoprotein substrate in humans and to what extent increased P-glycoprotein function in epilepsy may confound interpretation of clinical [11C]flumazenil studies used to assess gamma-aminobutyric acid A receptors. Nine drug-resistant patients with epilepsy and mesial temporal sclerosis were scanned twice using [11C]flumazenil before and after partial P-glycoprotein blockade with tariquidar. Volume of distribution, nondisplaceable binding potential, and the ratio of rate constants of [11C]flumazenil transport across the blood–brain barrier (K1/k2) were derived for whole brain and several regions. All parameters were compared between pre- and post-tariquidar scans. Regional results were compared between mesial temporal sclerosis and contralateral sides. Tariquidar significantly increased global K1/k2 (+23%) and volume of distribution (+10%), but not nondisplaceable binding potential. At the mesial temporal sclerosis side volume of distribution and nondisplaceable binding potential were lower in hippocampus (both ∼−19%) and amygdala (both ∼−16%), but K1/k2 did not differ, suggesting that only regional gamma-aminobutyric acid A receptor density is altered in epilepsy. In conclusion, although [11C]flumazenil appears to be a (weak) P-glycoprotein substrate in humans, this does not seem to affect its role as a tracer for assessing gamma-aminobutyric acid A receptor density. Introduction Flumazenil binds to the gamma-aminobutyric acid A (GABA A ) receptor, but has no agonistic or antagonistic actions on this receptor. The positron emission tomography (PET) radioligand [ 11 C]flumazenil is used widely for assessing changes in GABA A receptor density and to assist in determining the site of seizure onset prior to resective surgery in medically refractory epilepsy patients. 1,2 Recent ex vivo and in vivo data, however, suggest that [ 11 C]flumazenil is a P-gp substrate in rodents. [3][4][5] It has been hypothesized that P-glycoprotein (P-gp) is upregulated in areas with epileptic activity. 6 If [ 11 C]flumazenil is indeed a P-gp substrate in humans, upregulation of P-gp at the blood-brain barrier (BBB) due to epilepsy could lead to reduced cerebral uptake of [ 11 C]flumazenil, and thus to erroneous interpretation of GABA A receptor density changes. The impact of P-gp on cerebral uptake of [ 11 C]flumazenil can be investigated by pharmacological inhibition of P-gp with tariquidar. This compound is one of the most effective P-gp inhibitors, binding to P-gp with high selectivity and affinity. [7][8][9] The purpose of the present study was to assess whether [ 11 C]flumazenil is a P-gp substrate in humans and, if so, to what extent changes in cerebral [ 11 C]flumazenil uptake in drug-resistant patients are due to changes in P-gp activity rather than GABA A receptor density. To address these issues, scans were performed in drug-resistant patients with temporal lobe epilepsy (TLE) and evidence of unilateral mesial temporal sclerosis (MTS) on magnetic resonance imaging (MRI). This syndrome with partial seizures is one of the most prevalent and refractory types of epilepsy with only 20% of patients achieving seizure freedom on medication. 
[10][11][12] Seizures often originate in the areas identified on MRI, have a highly characteristic videoelectroencephalography (EEG) pattern, both in terms of ictal rhythms and with respect to the ictal semiology. 13 These areas may show stronger increase in P-gp activity than other brain regions and thereby present a suitable target to test the proposed hypothesis. Materials and methods Participants Eleven drug-resistant patients with TLE and unilateral MTS between 18 and 60 years of age were recruited from the outpatient clinics of the tertiary referral centre for epilepsy patients Stichting Epilepsie Instellingen Nederland (SEIN). Diagnosis of TLE and unilateral MTS was based on clinical evaluation, EEG, and MRI. All subjects underwent standard screening, including medical history, physical and neurological examination, screening laboratory tests, and brain MRI to exclude serious medical conditions, psychiatric illness, drug abuse, and coagulation problems. Patients with MRI abnormalities other than those suggesting presence of MTS, white matter changes, or an incidental small lacunar lesion were excluded. Other exclusion criteria were use of benzodiazepines, non-steroidal anti-inflammatory drugs, antithrombotics, acetylsalicylic acid, or drugs known to interfere with P-gp, [14][15][16] other than antiepileptic drugs. Written informed consent was obtained from each participant. The study was approved by the Medical Ethics Review Committees of VU University Medical Center and SEIN. MRI All patients underwent a structural MRI scan using a 3T scanner (Signa HDXt, General Electric, Milwaukee, USA) according to a fixed protocol, including T1-weighted 3D magnetization-prepared rapid acquisition gradient echo (MPRAGE) covering the whole brain, T2-weighted axial fluid-attenuated inversion recovery images, and T2 axial images. The coronal T1-weighted MPRAGE sequence was used for coregistration with the PET scan and for region of interest (ROI) definition. PET data acquisition All patients underwent two identical PET scans on the same day. Scans were performed on an ECAT EXACT HRþ scanner 17 (Siemens/CTI, Knoxville, USA), which enables acquisition of 63 transaxial planes of data over a 15.5 cm axial field of view, thus allowing the whole brain to be imaged in a single bed position. To minimize movement artifacts, the head was immobilized and, using laser beams, its position checked for movement during scanning. All patients received an indwelling radial artery cannula for blood sampling and a venous cannula for tracer administration. During the entire PET scanning day, patients were monitored using EEG and video to identify possible ictal events. Subsequently, video-EEG recordings were reviewed by two qualified neurophysiologists with experience in EEG-CCTV seizure monitoring (DNV and JZ). Before tracer injection, a 10-minute transmission scan was performed in 2D acquisition mode using three retractable rotating line sources. This scan was used to correct the subsequent emission scan for photon attenuation. After the transmission scan, a dynamic emission scan in 3D acquisition mode was started simultaneously with an intravenous injection of about 370 MBq [ 11 C]flumazenil by means of an infusion pump (Med-Rad, Beek, the Netherlands; injection rate 0.8 mLÁs À1 followed by a flush of 35 mL saline at 2.0 mLÁs À1 ). The emission scan consisted of 16 frames with increasing frame duration (4 Â 15, 4 Â 60, 2 Â 150, 2 Â 300, 4 Â 600 s) with a total duration of 60 minutes. 
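As a quick arithmetic check of the framing scheme just described (nothing below comes from the study's own acquisition software), the frame durations can be expanded and summed:

```python
# Hedged illustration: expand the [11C]flumazenil framing scheme described in
# the text (4 x 15 s, 4 x 60 s, 2 x 150 s, 2 x 300 s, 4 x 600 s) and verify
# that the 16 frames cover the stated 60-minute acquisition.
frame_scheme = [(4, 15), (4, 60), (2, 150), (2, 300), (4, 600)]
durations = [sec for count, sec in frame_scheme for _ in range(count)]

assert len(durations) == 16
assert sum(durations) == 60 * 60          # 3600 s = 60 minutes

# frame start/end times in seconds
starts = [sum(durations[:i]) for i in range(len(durations))]
frames = [(s, s + d) for s, d in zip(starts, durations)]
print(frames[:3], frames[-1])             # [(0, 15), (15, 30), (30, 45)] ... (3000, 3600)
```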
Using an online blood sampler (Veenstra Instruments, Joure, the Netherlands), arterial blood was withdrawn continuously at a rate of 5 mLÁmin À1 for the first 5 min and 2.5 mLÁmin À1 , thereafter until 30 minutes after tracer injection. Continuous withdrawal was briefly interrupted at 2.5, 5, 10, 20, and 30 minutes post tracer injection (p.i.) for collection of 10 mL manual blood samples. At 40 and 60 minutes p.i., additional 10 mL blood samples were obtained from the arterial cannula. Blood samples were used to calibrate the online blood sampler curve, and to measure plasma to whole blood radioactivity concentrations and parent [ 11 C]flumazenil fractions in plasma, enabling generation of a metabolite-corrected plasma input curve. The last 30 minutes of the input curve, when there was no continuous blood sampling, were extrapolated using the manual samples collected at 40 and 60 minutes. After a resting period of at least 3 hours to allow for decay of carbon-11, the scanning procedure was repeated, but this time after tariquidar administration at a dose of 2 mgÁkg À1 body weight administered as a 30 minutes intravenous infusion 110 minutes prior to the [ 11 C]flumazenil scan. For formulation of the tariquidar infusion, vials containing 7.5 mgÁmL À1 of tariquidar free base in 10 mL of 20/80 ethanol/propylene glycol (v/v) (AzaTrius Pharmaceuticals Pvt Ltd, London, UK) were diluted with aqueous dextrose solution (5%) to yield a final volume of 250 mL. During the post tariquidar [ 11 C]flumazenil PET scan one additional manual blood sample was taken to determine plasma concentration of tariquidar (T ¼ 5 minutes p.i.). Subjects were monitored for changes in blood pressure and heart rate during and after tariquidar administration. PET data analysis All PET sinograms were corrected for dead time, scatter, randoms, decay, and tissue attenuation, and were reconstructed with a standard filtered back-projection algorithm, using an image matrix size of 256 Â 256 Â 63, resulting in voxel sizes of 1.2 Â 1.2 Â 2.4 mm. After reconstruction a transaxial spatial resolution of $7 mm in the center of the field of view was obtained. Further data processing was performed using noncommercial software packages. The sum of all PET images was coregistered with the MRI image using Vinci software. 18 Thereafter, obtained transformation coefficients were applied to all PET frames, resulting in dynamic PET and MRI scans having the same orientation. ROIs were defined using PVElab, a software program that utilizes a previously validated probability map of 38 delineated grey matter ROIs. 19 In addition, an attempt was made to identify, in each patient, a region of decreased cerebral [ 11 C]flumazenil uptake of at least 0.5 mL by visual inspection of a summed image of the last 30 minutes of each baseline [ 11 C]flumazenil scan. Visual inspection was performed by a nuclear medicine physician (EC) with extensive experience in reading clinical flumazenil scans. Next, exactly the same ROI was defined on the contralateral side. Template ROI data were analyzed using two complementary methods, a single-tissue compartment (1TC) model, and the simplified reference tissue model (SRTM). 20 Typically, interpretation of the volume of distribution (V T ) as an index of receptorbinding density assumes that differences in V T estimates (V ND þ V S ) are due to differences in specific binding (V S ) and not to those in the nondisplaceable volume of distribution (V ND ¼ K 1 /k 2 ). 
Similarly, estimates of the nondisplaceable binding potential (BP ND ) from SRTM assume that K 1 /k 2 (V ND ) is constant across the brain. By combining both analyses, it is possible to look for changes in the ratio of the rate constants for [ 11 C]flumazenil transport across the BBB (K 1 /k 2 ), and hence [ 11 C]flumazenil efflux (k 2 ) across the BBB, with sufficient estimation to enable investigation of potential P-gp effects. For this purpose, first, V T and the rate constant K 1 were obtained using a 1TC model with metabolite-corrected plasma input function and blood volume as a fitting parameter. 20 V T represents the ratio of flumazenil concentrations in tissue and (metabolite-corrected) plasma under equilibrium conditions. Next, BP ND was determined using the SRTM with pons as reference tissue. 20 To this end, the pons was manually segmented on the MRI image and then projected onto the coregistered dynamic PET data. In order to reduce variability, the pons was delineated once and used for both (coregistered) PET scans. BP ND represents the ratio of GABA A receptor association over dissociation constants (k 3 /k 4 ) of flumazenil. In terms of a twotissue compartment (2TC) model, V T corresponds to K 1 /k 2 Á(1 þ BP ND ). Consequently, the ratio of the rate constants for [ 11 C]flumazenil transport across the BBB (K 1 /k 2 ) was calculated as V T /(1 þ BP ND ). Finally, to explore whether tariquidar has an effect on [ 11 C]flumazenil influx (K 1 ) or efflux (k 2 ) across the BBB, k 2 was calculated as K 1 divided by K 1 /k 2 . As the approach mentioned above is an indirect way to obtain K 1 /k 2 , data were also analyzed using a reversible 2TC model. 20 For each patient both ROIs were projected onto the dynamic PET scans and values of BP ND , V T , K 1 /k 2, K 1 , and k 2 were obtained as described above. Regional analyses were performed with all regions in the hemisphere where evidence of unilateral MTS on MRI was found at one side. This side was called ipsilateral, and all corresponding regions in the other hemisphere were named contralateral. Statistical analysis First, in order to test whether [ 11 C]flumazenil is a P-gp substrate, differences in whole brain values of V T , BP ND , K 1 /k 2 , K 1 , and k 2 between pre-and post-tariquidar scans were evaluated using nonparametric Wilcoxon signed-rank tests. For further exploration, regional values of the significant parameters of the whole brain analyses as well as pons V T , K 1 , and k 2 (1TC model) were compared between pre-and post-tariquidar scans. Second, baseline regional values of V T , BP ND , and K 1 /k 2 from standard ROIs at the side of MTS were compared to corresponding contralateral ROIs. Third, baseline values of V T , BP ND , and K 1 /k 2 from the manually defined epileptic foci were compared with corresponding regions on the contralateral side. p < 0.05 was considered significant. Spearman's rank-order correlation test was used to assess a correlation between tariquidar plasma concentration levels and V T , K 1 /k 2 , and k 2 changes in response to tariquidar. Finally, the nonparametric Wilcoxon signedrank test was used to assess whether there were significant differences between baseline and posttariquidar values of injected dose, specific activity, fraction of radio-labeled plasma metabolites, and plasma radioactivity concentrations. 
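The parameter combination described above reduces to a few lines. The sketch below is purely illustrative (the study used dedicated, non-commercial kinetic-modelling software, and the numbers here are made up), but it shows how K1/k2 and k2 follow from the 1TC and SRTM outputs via the 2TC identity V_T = (K1/k2)(1 + BP_ND).

```python
# Hedged sketch of the parameter combination described above: given V_T and K1
# from the 1TC plasma-input model and BP_ND from SRTM, derive K1/k2 and k2
# using the 2TC identity V_T = (K1/k2) * (1 + BP_ND).  Illustrative values only.
def combine_parameters(V_T, K1, BP_ND):
    K1_over_k2 = V_T / (1.0 + BP_ND)   # nondisplaceable distribution volume V_ND
    k2 = K1 / K1_over_k2               # efflux rate constant across the BBB
    return K1_over_k2, k2

# e.g. a whole-brain pre- vs post-tariquidar comparison (numbers are made up)
pre = combine_parameters(V_T=5.5, K1=0.35, BP_ND=4.0)
post = combine_parameters(V_T=6.0, K1=0.35, BP_ND=4.0)
print(f"K1/k2: {pre[0]:.2f} -> {post[0]:.2f}, k2: {pre[1]:.3f} -> {post[1]:.3f} min^-1")
```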
For all of the statistical tests described above, p < 0.05 was considered significant, except for the last two comparisons, for which Bonferroni correction for multiple (i.e., 7) comparisons was necessary (p < 0.007 was considered significant). Data are presented as mean ± standard deviation (SD) unless stated otherwise. Results [11C]flumazenil scans were performed in 11 patients with TLE and unilateral MTS. Due to technical problems with the online blood sampler in one patient and occlusion of the arterial line in another, scans from nine patients could be analyzed. Table 1 shows patient and scan characteristics. Three patients experienced events on the scanning day. One patient (subject 1) experienced a complex partial seizure approximately 90 minutes after tariquidar administration, which was documented on the video-EEG registration. The second patient (subject 3) was temporarily unresponsive, but it was not clear whether this was caused by epileptic activity, as no electrographic abnormalities were observed at that time; afterwards she was disoriented for a short while and experienced a headache. The third patient (subject 4) became nauseous near the end of the second PET scan, approximately 150 minutes after tariquidar administration. Video-EEG monitoring during the day revealed no seizure activity in any of the patients, except for subject 1. There were no differences between baseline and post-tariquidar scans with respect to injected dose (364 ± 33 and 366 ± 30 MBq, respectively; p = 0.91) or specific activity (131 ± 76 and 162 ± 37 GBq·mmol⁻¹, respectively; p = 0.27) of [11C]flumazenil. Furthermore, tariquidar had no effect on the level of tracer metabolism, nor on plasma activity concentrations. Tariquidar plasma concentrations 5 minutes after [11C]flumazenil injection were on average 278 mg·L⁻¹ (range 156–550 mg·L⁻¹). No correlation was found between tariquidar plasma concentration levels and changes in VT, K1/k2, and k2 in response to tariquidar. Whole brain analyses using the combination of a 1TC model and SRTM showed that K1/k2, VT, and k2 changed significantly by +23%, +10%, and −15%, respectively, after tariquidar (p = 0.008, p = 0.012, and p = 0.008, respectively; Table 2). Whole brain values of BPND and K1 were not significantly different between pre- and post-tariquidar scans (both p = 0.20; Table 2). Table 2. Whole brain VT, BPND, K1/k2, K1, and k2 of [11C]flumazenil before and after tariquidar administration derived using the 1TC model and SRTM. With respect to the K1/k2, VT, and k2 responses to P-gp inhibition, no differences were observed between ipsilateral and contralateral hemispheres, not even at a regional level. Pons VT increased significantly by 17% after tariquidar (p = 0.011; Table 3), and a trend was found for a decrease in k2 of 10.8% after tariquidar (p = 0.051). Pons K1 did not differ significantly between pre- and post-tariquidar scans (p = 0.57; Table 3). Using the same models for assessment of baseline differences between the ipsi- and contralateral ROIs, regional analyses revealed significantly lower VT and BPND in the ipsilateral hippocampus (−18%, p = 0.008 and −20%, p = 0.008, respectively), amygdala (−15%, p = 0.012 and −17%, p = 0.012, respectively), and medial inferior temporal gyrus (−4%, p = 0.020 and −5%, p = 0.020, respectively) as compared with the contralateral side. In all other ROIs no significant left–right differences were observed.
In addition, significantly lower K 1 /k 2 ratios in ipsilateral hippocampus (p ¼ 0.008), superior temporal gyrus (p ¼ 0.039) and thalamus (p ¼ 0.039) were found, as compared with the contralateral side, although these differences were not substantial (namely betweenÀ0.4 and À2%). In only three patients a region of decreased cerebral [ 11 C]flumazenil uptake could be identified by visual inspection. V T and BP ND were significantly lower in these regions than in the corresponding contralateral ROIs, whereas no differences in K 1 /k 2 between regions were observed (Table 4). In the reanalysis using the 2TC model, one patient (subject 4) had to be excluded because only 40 minutes of the posttariquidar scan data were available, resulting in unreliable fits. In the remaining eight patients, whole brain V T was 5.69 AE 0.71 and 5.96 AE 0.70 for pre-and post-tariquidar scans, respectively. These values were comparable with those obtained using the 1TC model, but for the 2TC model the difference in V T between preand post-tariquidar scans was not significant (p ¼ 0.20). It was not possible to fit regional data reliably using the 2TC model, as too many results had to be rejected because of nonphysiological parameter estimates with high standard errors. Discussion The main finding of this study was an increase of 23% in the K 1 /k 2 ratio of [ 11 C]flumazenil after partial P-gp blockade. As these rate constants are related to transport of [ 11 C]flumazenil across the BBB, this finding supports the notion that [ 11 C]flumazenil is indeed a P-gp substrate, as demonstrated previously in in vivo studies in rodents using both a genetic disruption model and the same pharmacological inhibition model. 3,21 It is also in line with results from an ex vivo study in mice. 4 On the other hand, the comparison between assumed site of seizure onset and contralateral side did not show differences in K 1 /k 2 , and therefore does not provide evidence that P-gp activity is altered at the site of seizure onset in TLE patients. Interestingly, regional analyses showed substantially lower V T and BP ND in hippocampus and amygdala at the ipsilateral side, but no corresponding change in K 1 /k 2 . This suggests that the reduction in [ 11 C]flumazenil uptake exclusively reflects a decrease in GABA A receptor density due to epileptic activity in these regions. In theory, the increase in K 1 /k 2 after partial P-gp blockade could be due to either increased influx (K 1 ) of [ 11 C]flumazenil from the circulation into the brain or decreased efflux (k 2 ) of this tracer from the brain to the blood, or both. If [ 11 C]flumazenil is a substrate of P-gp, administration of tariquidar, which results in P-gp inhibition, should lead to decreased efflux (k 2 ) rather than influx (K 1 ). 21 The present study confirmed that k 2 rather than K 1 was affected by tariquidar. The finding that V T was significantly affected by tariquidar, though to a lesser extent than K 1 /k 2 , is also in line with previous findings, as the brain-to-plasma ratio is expected to increase due to tariquidar. In addition, as pons is almost devoid of GABA A receptors, the significant increase of 17% in pons V T (1TC model) after P-gp inhibition also suggests that [ 11 C]flumazenil is a P-gp substrate. Altogether, these results are in line with the notion that flumazenil is a P-gp substrate in humans. 
In addition, BP ND was not affected significantly by P-gp inhibition, which is also in line with animal studies showing that tariquidar had no effect on [ 11 C]flumazenil binding to the GABA A receptor in both naı¨ve and kainate-treated rats. 21 Previous in vitro transport assay studies have reported that flumazenil is not transported by human P-gp, 22,23 which is in contrast with findings of previous in vivo rodent studies and the present study. These differences have been attributed to species differences in BBB transport of [ 11 C]flumazenil, a phenomenon that has also been observed for other PET radioligands. 24 It is more likely, however, that these contradicting results are due to the lower sensitivity of in vitro assays for detecting weak to moderate P-gp substrates. 22 There may be several reasons for the difference in degree of inhibition of flumazenil transport across the BBB after P-gp blockade with tariquidar in humans (23%) than in rodents ($70%). First, in the in vivo rodent studies full P-gp blockade was obtained, 3 whereas this was not possible in the present study in humans, 25 as it was considered unsafe to administer higher tariquidar doses than 2 mgÁkg À1 body weight 25 to patients who also use antiepileptic drugs. Second, species differences in BBB transport of several P-gp substrates have shown a more pronounced increase in cerebral uptake of these substrates after P-gp inhibition in rats than in higher species. 24 The finding that GABA A receptor density is focally altered due to epilepsy is in line with earlier studies. 5,21,26 However, the fact that no evidence for locally altered P-gp function was found needs further consideration. Perhaps flumazenil is too weak a P-gp substrate to detect regional alterations in P-gp function. Previous studies on resected brain tissue of refractory epilepsy patients, animal studies and an in vivo PET study with the P-gp substrate tracer (R)-[ 11 C]verapamil in humans have shown that both P-gp overexpression and P-gp upregulation play a role in drug resistance in epilepsy. 6,27 However, direct clinical proof for P-gp upregulation is scarce. Therefore, more human PET studies with both P-gp substrate and P-gp inhibitor tracers are needed to provide further insight into the presence and, if so, clinical relevance of altered P-gp functionality and expression in drug resistance in epilepsy. As [ 11 C]flumazenil is less affected by P-gp than substrate tracers such as (R)-[ 11 C]verapamil and [ 11 C]N-desmethylloperamide, 25,28 presence and severity of P-gp upregulation can better be assessed using one of those tracers. The principle analysis used in the current study was based on the combined use of 1TC and SRTM models. Ideally, results should be derived from a single model and, in theory, all kinetic parameters can be obtained using a reversible 2TC model. Unfortunately, the latter model did not provide reliable estimates of individual rate constants and BP ND , which is in line with a previous study showing that distinguishing the two compartments from each other is quite difficult, especially for higher noise levels (small ROIs). 29 Nevertheless, the similarity of whole brain V T values derived from 1TC and 2TC models, which is in agreement with previous studies, 20,29 indicates that possible bias by lumping the two compartments together in the 1TC model is small. Interestingly, the difference in whole brain V T between pre-and post-tariquidar scans was significant for the 1TC, but not for the 2TC model. 
This probably also is due to increasing uncertainty in parameter estimates with increasing number of parameters. The finding that no correlation was found between tariquidar plasma concentration levels and changes in V T , K 1 /k 2 , and k 2 in response to tariquidar probably is due to the fact that flumazenil is only a weak P-gp substrate. One of the limitations of the present study was the relatively small sample size. In addition, decreased focal cerebral [ 11 C]flumazenil uptake of at least 0.5 mL could be observed with certainty in only one-third of the patients. Therefore, further studies are needed to assess whether there really is no (effect due to) altered P-gp activity at the site of seizure onset. In addition, although tariquidar has been developed as a potent Pgp inhibitor, recently it has been shown that it also inhibits breast cancer resistance protein (BCRP), 7,30 which is another important efflux transporter at the BBB. On the other hand, BCRP inhibition is thought to occur only with pharmacological doses, which are much higher than the tariquidar dose of 2 mgÁkg À1 body weight administered in the present study. 31 Therefore, it is unlikely that BCRP inhibition played a role in the present study. Finally, full P-gp blockade at the BBB could not be obtained because of safety issues. Nevertheless, even partial P-gp blockade indicated that [ 11 C]flumazenil is a P-gp substrate. In conclusion, this study provides evidence that [ 11 C]flumazenil is a (weak) P-gp substrate in humans. Most importantly, although a P-gp substrate, this does not appear to affect its clinical use as a tracer of GABA A receptors for localizing the site of seizure onset. Funding The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 201380. and critically contributing to the manuscript, and approving the final content of the manuscript. E. Bakker and P. Schober contributed to acquiring PET data, critically contributing to or revising the manuscript, and approving the final content of the manuscript. R. Boellaard contributed to analyzing and interpreting PET data, critically contributing to and revising the manuscript, enhancing its intellectual content, and approving the final content of the manuscript. N.H. Hendrikse contributed to conception and design, quality control of pharmaceutical aspects, revising the manuscript enhancing its intellectual content, and approving the final content of the manuscript. E.F.I. Comans and J.J. Heimans both contributed to analyzing and interpreting medical data, critically contributing to the manuscript, and approving the final content of the manuscript. R.C. Schuit contributed to acquiring chemical data, analyzing and interpreting chemical data, critically contributing to the manuscript, and approving the final content of the manuscript. D.N. Velis and J. Zwemmer both contributed to conception and design, acquiring data, analyzing, and interpreting video-EEG data, critically contributing to the manuscript, and approving the final content of the manuscript. A.A. Lammertsma, R.A. Voskuyl, and J.C. Reijneveld contributed to conception and design, interpreting data, critically contributing to and revising the manuscript, enhancing its intellectual content, and approving the final content of the manuscript.
2018-04-03T05:11:11.003Z
2015-11-19T00:00:00.000
{ "year": 2015, "sha1": "958fbbd694ab974769fccae5969b2c28491789be", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0271678X15618219", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "958fbbd694ab974769fccae5969b2c28491789be", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4556632
pes2o/s2orc
v3-fos-license
Image texture and radiation dose properties in CT The aim of this study was to compare image noise properties of GE Discovery HD 750 and Toshiba Aquilion ONE. The uniformity section of a Catphan 600 image quality assurance phantom was scanned with both scanners, at different dose levels and with extension rings simulating patients of different sizes. 36 datasets were obtained and analyzed in terms of noise power spectrum. All the results prove that introduction of extension rings significantly altered the image quality with respect to noise properties. Without extension rings, the Toshiba scanner had lower total visible noise than GE (with GE as reference: FC18 had 82% and FC08 had 80% for 10 mGy, FC18 had 77% and FC08 74% for 15 mGy, FC18 had 80% and FC08 77% for 20 mGy). The total visible noise (TVN) for 20 and 15 mGy were similar for the phantom with the smallest additional extension ring, while Toshiba had higher TVN than GE for the 10 mGy dose level (120% FC18, 110% FC08). For the second and third ring, the GE images had lower TVN than Toshiba images for all dose levels (Toshiba TVN is greater than 155% for all cases). The results indicate that GE potentially has less image noise than Toshiba for larger patients. The Toshiba FC18 kernel had higher TVN than the Toshiba FC08 kernel with additional beam hardening correction for all dose levels and phantom sizes (120%, 107%, and 106% for FC18 compared to 110%, 98%, and 97%, for FC08, for 10, 15 and 20 mGy doses, respectively). PACS number(s): 87.57.Q‐, 87.57.nf, 87.57.C‐ I. INTRODUCTION Computed tomography (CT) is widely used for medical diagnostic purposes. The CT usage worldwide has increased rapidly in the last decades. (1,2) CT is the imaging modality with the highest radiation dose among medical radiography techniques, (3) and CT examinations may increase the additional risk of cancer for patients. (4,5,6) Thus, the radiation dose should be as low as possible while at the same time, obtaining the diagnostic information vital for patient safety. (7) Therefore, all the CT vendors have improved the CT technology to improve image quality and reduce radiation dose over the last decade. However, the different vendors have different scanner designs, different technological platforms, and different reconstruction algorithms, resulting in different image quality at equal dose level, even if the scan parameter settings are as similar as possible and the scanned object is the same. It is important to realize that details of the image reconstruction algorithms and reconstruction kernels are kept private by the vendors. The radiographers and radiologists receive reconstruction filter recommendations from the vendors, based on different indications and anatomical areas of interest. As new CT scanners are introduced in a radiological department in a hospital, optimizing the image quality can be challenging because the image texture is different for different scanners. Often, the image texture of the old scanner is familiar to the radiologists, and the new scanner represents a new image texture to which they must get accustomed. In addition, new reconstruction methods are introduced without all necessary information about the functionality, so it might be difficult to choose which type of reconstruction kernel is the most advantageous for a given specific examination. Image noise is one of the critical factors with respect to soft-tissue lesion detectability. 
The aim of this study was to compare image noise texture for different reconstruction kernels, different dose levels, and different phantom diameters for two different CT scanners to objectively evaluate differences in image noise properties between these two CT scanners. II. MATERIALS AND METHODS Two CT scanners were used in the study: GE Discovery HD 750 (GE Healthcare, Milwaukee, WI) and Toshiba Aquilion ONE (Toshiba Medical Systems, Tokyo, Japan). For short, they will be referred to as GE and Toshiba, respectively. The "Image uniformity module" (CTP 486) in a commercially available image quality phantom, Catphan 600 (The Phantom Laboratory, Salem, NY), (8) was used to measure image noise levels and image texture. Additional annuli (later referred to as rings) were mounted outside the phantom, to simulate patients of different sizes (Fig. 1). CT scans were performed at three different dose levels (CTDI vol 10, 15, and 20 mGy) on both scanners. On all dose levels, scans were performed for Catphan 600 and for Catphan 600 with additional rings. These rings were of oval shape, with following dimensions: CTP579 -25-35 cm oval OD uniformity material body annulus; CTP651 -30-38 cm oval OD uniformity material body annulus; CTP599 -45-55 cm oval OD uniformity material body annulus. These are referred to as the first ring, second ring, and third ring. All parameter settings are listed in Table 1. The scan parameters used, were as similar as possible for the two scanners, in order to compare the reconstruction kernels and noise properties. A noise power spectrum (NPS) presents the noise distribution over all spatial frequencies, and will indicate whether the noise texture is coarser or grainier structured. The formula used for NPS was: where FT represents two dimensional Fourier Transform, u and v are spatial frequency [mm -1 ] in X-Y directions, I is the mean pixel value across the ROI (subtracted from each pixel to remove dc component and reduce artifacts), and I(x,y) includes the CT number at pixel location (x,y). (9) The ROI was placed in the uniform, central part of the phantom. Figure 2 shows a comparison of ROI for the different scanners and with different rings for 20 mGy dose level. All images had a matrix of 512 × 512 pixels. As the displayed field of view (DFOV) increased with the addition of rings, the single pixel size was increasing. Thus, the dimensions of the ROIs matrix sizes were adjusted in order to keep them constant in object size (millimeters) (Fig. 3, Tables 1 and 2). Two cases of NPS were analyzed. In order to achieve removal of systematic noise, the NPS was calculated as follows. The NPS of four slices were calculated separately. These four NPS were averaged, resulting in one averaged NPS dataset. The two-dimensional NPS spectra were radially averaged to provide possibility to compare the shapes of the NPS spectra. An example is shown in Fig. 4 Furthermore, the one-dimensional NPS curves were filtered with a human visual response curve, (9,10) and adjusted according to display fields of view: where r is the spatial frequency in mm -1 , ρ is the radial spatial frequency seen by an observer (cycles per degree), FOV is the field of view in mm, R is the viewing distance, D is the displayed image in mm, η is a normalizing factor to ensure that V(ρ) is set to 1 at its maximum, and the parameters a 1 , a 2 , a 3 are set to 1.5, 0.98, and 0.68. A viewing distance R of 40 cm and display size D of 30 cm was used in this study. 
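The equation bodies themselves did not survive extraction in the two passages above. Written with the symbols defined there, the standard forms they describe are approximately the following; this is a reconstruction under common conventions, not a quotation of the paper, and the normalization by pixel size (Δx, Δy) and ROI dimensions (Nx, Ny) is an assumption.

```latex
% Reconstructed standard forms consistent with the symbols defined in the text
% (the study's exact normalization may differ).
\begin{align}
  \mathrm{NPS}(u,v) &= \frac{\Delta x\,\Delta y}{N_x N_y}\,
      \Bigl|\,\mathrm{FT}\bigl\{\,I(x,y)-\bar{I}\,\bigr\}\Bigr|^{2},\\[4pt]
  V(\rho) &= \frac{1}{\eta}\,\rho^{\,a_1}\exp\!\bigl(-a_2\,\rho^{\,a_3}\bigr),
  \qquad
  \rho \approx \frac{\pi\,R\,\mathrm{FOV}}{180\,D}\,r .
\end{align}
```

The second relation simply converts a spatial frequency r (mm⁻¹) in the object into the cycles per degree ρ seen by an observer viewing a display of size D at distance R, before weighting with the eye-response curve V(ρ).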
The resultant curves present the sensitivity of standard human observer for the noise present in the ROI (Fig. 5). Peak frequency of NPS eye , which is the frequency at which its peak value was, (9) is computed by fitting a third degree polynomial to NPS eye , and finding the point where the derivative is equal to zero. After obtaining the NPS eye , the root mean square difference (RMSD) was calculated for the datasets, using the equation described by Armstrong and Collopy: (11) where N is the number of samples, D 1 is the first dataset, D 2 is the dataset that is compared to D 1 . If the RMSD equals zero, the two datasets are equal. A higher RMSD indicates that there is a difference between D 1 and D 2 . Data from the Toshiba datasets with convolution kernels FC18 and FC08 (without and with compensation for beam hardening effect, respectively) were compared to the GE standard datasets. The higher the RMSD, the more different the Toshiba datasets are from GE datasets (and vice versa). All comparisons were performed at the same dose levels and with the same phantom size. The NPS eye presents the total visible noise spectra, and thereby gives an opportunity to compare the scanners with respect to visual noise. An example of NPS eye with a ring is shown in Fig. 6, and with the second ring in Fig. 7. In order to compare numerous curves, the area under curves was used as a measure representing the total visible noise (TVN): where f r is the spatial frequency. TVN gives simple one-value information about the amount of the noise in the dataset, and was used to organize the datasets. III. RESULTS The NPS peak frequencies for different phantom sizes and 20 mGy dose level for the GE scanner and the Toshiba scanner (see Table 3) indicates that, without extension rings, the noise texture for the GE scanner is grainier compared to that of the Toshiba scanner. As the phantom diameter increased, the image texture for both scanners became coarser. The change was larger for GE than Toshiba, and the Toshiba scanner had the grainiest noise for the largest ring. For the phantom without additional rings, the Toshiba scanner showed the smallest total amount of visible noise (TVN) compared to GE for all dose levels (with GE as a reference with 100%, FC18 had 82%, and FC08 had 80% for 10 mGy, FC18 had 77% and FC08 74% for 15 mGy, FC18 had 80% and FC08 77% for 20 mGy, as can be seen in Table 4). There was no observable difference in TVN between the reconstruction kernels with and without beam hardening correction for the Toshiba scanner without additional rings. For the phantom with the smallest additional extension ring, the GE scanner and the Toshiba scanner had similar TVN for 20 and 15 mGy, while Toshiba images had higher TVN than GE for the 10 mGy dose level (FC18 had 120% and FC08 had 110%). For this phantom diameter, the FC18 had higher TVN than the TC08 for the Toshiba scanner (120%, 107%, and 106% for FC18 compared to 110%, 98%, and 97% for FC08, for 10, 15, and 20 mGy dose, respectively). For the second and third ring, the GE images had lower TVN than the Toshiba images for all dose levels (Toshiba TVN is greater than 155% for all cases). In some cases, the TVN was even lower for lower dose GE images than for the higher dose Toshiba images, as can be seen in Fig. 8. The FC18 had generally higher TVN than FC08 also for these phantom diameters. Differences of percentages for the different scanners, doses, and rings are shown in Table 4, where GE is used as a reference. 
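Returning to the two summary measures defined above (RMSD between two NPS_eye datasets and TVN as the area under a curve), both reduce to a few lines of numerical code. The sketch below is illustrative only and assumes that the visually filtered NPS curves being compared are sampled on a common spatial-frequency grid.

```python
# Hedged sketch of the two summary measures defined above, assuming both
# visually filtered NPS curves are sampled on the same spatial-frequency grid.
import numpy as np

def rmsd(d1, d2):
    """Root mean square difference between two sampled NPS_eye curves."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    return np.sqrt(np.mean((d1 - d2) ** 2))

def tvn(freqs, nps_eye):
    """Total visible noise: area under the visually filtered NPS curve."""
    return np.trapz(np.asarray(nps_eye, float), np.asarray(freqs, float))

# illustrative use: compare a Toshiba kernel against the GE reference curve
# (measured NPS_eye values would be substituted here)
f = np.linspace(0.0, 1.0, 101)                       # spatial frequency grid, mm^-1
ge, fc18 = np.exp(-3 * f) * f, 1.2 * np.exp(-3 * f) * f
print(f"RMSD = {rmsd(fc18, ge):.4f}, TVN ratio = {tvn(f, fc18) / tvn(f, ge):.2f}")
```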
The comparison of Toshiba standard protocols FC18 and FC08, without and with compensation for beam hardening effect, respectively, was done both without and with human visual response filtering of the NPS. In the evaluation of NPS, one can realize that the differences between the reconstruction kernels were minimal. The noise magnitude increased with lower dose, and with larger phantom size, but the NPS curves had very similar shapes. The differences were negligibly small for small rings. The TVN was larger for FC18 than for FC08 for all phantom diameters except the smallest one, as seen in Fig. 8. On the other hand, the peak frequency was exactly the same for both kernels, indicating that the image texture is the same, independent of reconstruction kernel. This result indicates that the compensation for the beam hardening effect in the FC filters from Toshiba has some influence on the amount of image noise for large objects. The results can be seen in Table 3 and Fig. 9. For increasing phantom sizes, the NPS spectra for GE and Toshiba became increasingly deviant. Interestingly, the FC08 kernel corresponded better to the GE standard kernel as the phantom diameter was increasing, indicating that the additional beam hardening for this filter corresponds to the beam hardening correction in GE's standard kernel. The results are shown in Fig. 10. IV. DISCUSSION The NPS measurements show that the image noise texture varied between the two scanners tested. Toshiba had a more coarse noise pattern than GE for the small phantoms tested, which might also support the radiologist's impressions of the images as the Toshiba scanner was installed in the hospital. These findings are also supported by Singh et al. (12) For the smallest phantom diameter, the Toshiba scanner had the lowest TVN for all dose levels compared to the GE scanner, but as the phantom diameter increased, the TVN increased more for the Toshiba scanner compared to the GE scanner. These results indicate that the GE scanner is compensating better for increased patient sizes, and in fact, for the phantom with the second extension ring, GE has less visible noise for 15 mGy than Toshiba has for 20 mGy. Correspondingly, 10 mGy dose level for the GE scanner resulted in lower TVN than 15 mGy GE/Toshiba reconstruction filters similarity Toshiba FC18 Toshiba FC08 Fig. 9. Comparison of Toshiba standard protocols FC18 and FC08 (without and with compensation for beam hardening effect, respectively) shows that the difference is larger with increasing phantom diameter. for the Toshiba scanner. For the largest phantom diameter, GE outperforms Toshiba with respect to total visible noise for all dose levels. There is a small additional peak in the NPS in the higher frequency region in the Toshiba images. In those images a weak ring artifacts were present in the center. Toshiba has ring artifact corrections in their algorithms, but still there might have been some weak artifacts in the center of the images. The second peak of this image becomes more visible after filtering with the human visual response function, as the human visual system is more sensitive in this frequency region. The FC08 datasets have lower TVN than FC18. The results for FC08 and FC18 were favoring FC08, which was the expected result, because with increasing size, increased beam hardening would disrupt the image and FC08 should compensate for this effect better than FC18. This shows that the compensation for the beam hardening has some effect on the images of larger objects. 
Only the datasets without any extension ring were spatially symmetrical, due to ellipsoid shape of the rings. The datasets were compared within the same phantom size, and thereby had similar spatial distributions of NPS. Due to the decreasing number of pixels within the ROIs with increasing size of the rings, radial averaging was performed to preserve sufficient amount of data for averaging and obtaining smooth curves. For the largest phantom diameter, a different method of representing 1D NPS, like averaging from line/few lines of pixels would provide very little information. Although it is common in literature to use few ROIs from each slice, (9,13,14) this method was not used in this study in order to achieve smoothness of the NPS of phantom with increasing phantom diameter. With such division, the resultant NPS has less samples, and normally, more noise. This becomes more pronounced and problematic with increasing phantom diameters, since the number of samples decreases either way. The NPS eye presents the TVN, and thereby gives an opportunity to compare datasets with respect to visual noise. This is the information that could be directly related to the impression that the observers have about the quality of the image, and his/her ability to see the diagnostically significant features. In Fig. 5, the change of human visual response function for different phantom sizes can be seen. This occurred due to different FOV used for different rings, since FOV is one of the parameters of the equation for human visual response function. This study had limitations. First of all, only image noise properties were evaluated, not spatial resolution or other image quality parameters. To fully evaluate the image quality for different scanners and different reconstruction techniques, both noise properties and spatial resolution should be considered. The TVN calculations were only performed for 40 cm distance. The GE and Toshiba reconstruction kernels may be affected by FOV, and this might have affected the results. Still, the only possible way to compare the image quality for different CT scanners is by using as similar a reconstruction technique as possible, and therefore the abdominal filters recommended by the vendors were used in this study. For the TVN measurements, the distance is important. Other distances would have given other results. Still, in this study relative differences between different reconstruction filters and different scanners were performed. For all measurements, the distance was the same. For NPS analysis, introduction of extension rings decreased the number of pixels in the ROIs used for NPS calculation. Decreased numbers of pixels might have influenced the NPS analysis. Still, the same method was used for all measurements and the results were compared relatively to each other. V. CONCLUSIONS The results indicate that the noise texture is different for the two scanners in this study. For small objects, Toshiba had a more coarse noise pattern than GE, while GE had a coarser noise pattern than Toshiba for the largest objects. Toshiba's beam hardening correction filter improved the noise properties as the phantom size increased, compared to the filter without this correction. Overall, the GE scanner had less total visible noise compared to the Toshiba scanner, except for the smallest phantom diameter. 
This means that the GE scanner will produce CT images with less noise for normal to larger patient sizes compared to the Toshiba scanner, and thereby potentially give better diagnostic information. COPYRIGHT This work is licensed under a Creative Commons Attribution 4.0 International License.
2018-04-03T05:54:09.049Z
2016-05-01T00:00:00.000
{ "year": 2016, "sha1": "b6aae6fc071e491757b62b7bd879c98fb5f19e02", "oa_license": "CCBY", "oa_url": "https://aapm.onlinelibrary.wiley.com/doi/pdfdirect/10.1120/jacmp.v17i3.5900", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dfc5ef0d72d6d43000d219178bca1f4db55d0f4c", "s2fieldsofstudy": [ "Engineering", "Medicine", "Physics" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
56079994
pes2o/s2orc
v3-fos-license
Methodology for the comparative evaluation of the volumetric accuracy of multi-axis machine formats

The main characteristic of a machine tool system that provides the most efficient multi-axis machining is its volumetric accuracy, which is manifested in the machine working space. The development of methods and means for controlling volumetric accuracy is an important engineering and process task. Solving this problem can ensure the competitiveness of domestic fixtures and tools for multi-axis form making in machining with power tools. The early design stage is the most important one, as the success of the machine project largely depends on the right choice of arrangement. In this research, we present a design scheme of a machine-tool unit, state its initial and boundary conditions, and develop a model for the comparative assessment of elastic displacements in the working space. Volumetric accuracy is formed by the spatial field of errors of the machining system that arise in the forming zone under the influence of disturbances, and it can be determined by simulation and analytical modeling of the machine tool system. Within the framework of this research, a method was developed for the comparative assessment of the volumetric accuracy of multi-axis machine arrangements with at least five interpolated axes.

Introduction

The modern stage of technological system development for high-technology machining is characterized by increasing competitiveness in the manufacturing equipment sector. This is reflected in higher demands for the quality, reliability, and multitask functionality of technological machines, driven by the appearance and study of new high-precision and high-productivity technologies [1]. The development of methods and means for controlling forming precision is an important design and technological task for the multi-axis machining of parts with curved surfaces [7,8]. Solving it can ensure the competitiveness of domestic machine tools and increase the effectiveness of multi-axis form making. The low volumetric accuracy of machine tool systems, together with the absence of a comprehensive approach to ensuring multi-axis form making at the stage of preparing and introducing new design decisions for multi-axis machine tools, is the reason for the lack of effectiveness in multi-axis forming, which is certainly an engineering and manufacturing problem [2]. The main characteristic of the machine tool system that ensures the most effective multi-axis machining is its volumetric accuracy, which is displayed in its working space [6,10,14]. The aim of this research is to create a method for the comparative assessment of the volumetric accuracy of multi-axis machine arrangements (with at least five interpolated axes) in order to find the best design decisions "for technological tasks" under given manufacturing conditions.

Principal provisions of the comparative assessment method for the volumetric accuracy of multi-axis machine tools

The volumetric accuracy of a metal-working machine is an integral characteristic showing its ability to provide a certain level of form-making accuracy, as well as the law governing the technological system error within the limits of the working space [3]. The basis of the multi-axis machine volumetric accuracy modelling method is formed by the following approaches and assumptions:
1. The multi-axis machine working space is formed by the lengths of the linear axis motions, overlapped with the sectors swept by the rotations of the rotary kinematic pairs. Rotary kinematic pairs that do not change the coordinate position of the tool and workpiece coordinate systems are not considered in the configurations.
2. The multi-axis machine tool system may be decomposed into simple elements: units with kinematic pairs ("joints"), each with its own constructive volume. Each kinematic pair consists of two joint elements, movable and immovable, whose construction depends on certain design synthesis principles that are numerically formalized by a system of restrictions.
3. Machine-tool unit models are the basis for forming the volumetric accuracy in the working space. It is assumed that each machine tool unit has position deviations due to disturbances, and these are transferred into the working space for further integration, taking into account the machine's n degrees of movement distributed over the kinematic branches of the tool (IB) and the workpiece (WB). Following the typical scheme of unit fixation on guide surfaces, the movable machine-tool units are treated as fixed-end beams on the condition that the region from the load point to the fixing point (joint) is non-deformable [4].
4. Within the limits of the working space, all the functional properties of the machine tool are represented and expressed through its working characteristics: geometric and kinematic accuracy, stiffness, heat and vibration resistance, speed of operation, etc., which are subject to calculation and to analytical or experimental identification when assessing the technical level of the machine being produced.
5. The volumetric accuracy of the multi-axis machine is formed by integrating the set of characteristics of the resulting error over the whole set of points in the working space. The level of detail with which the working space is scanned by this multitude of points may be determined in advance according to the aims and tasks of the particular study. Average machine performance parameters are formed on the basis of the values at the central point of the working space.

Implementation of the assessment method for multi-axis machines and its graphic interpretation

The geometric image of the synthesized machine constructive arrangement consists of the combination of the separate movable unit constructions, located in global coordinates relative to the working space, according to the previously chosen scheme for distributing the movable unit degrees of freedom over the machine branches. In accordance with [13], a synthesis of machine constructive arrangements must be carried out. Each arrangement corresponds to a certain code that characterizes the distribution of the kinematic mobilities over the machine branches (IB and WB) relative to the stationary element (O). Each movable unit (rotary or translational) is characterized by a set of constructive parameters expressing its ability to ensure the necessary coordinate mobility and resilience to loads and disturbances (Fig. 1).
The latter is reflected in the sizes of the guideways and the contact characteristics of the joint elements. In addition, each unit has all the necessary movement parameters, which set the required form and size of the machine working space. Joint failures or distortions that may occur in the joints of the movable units are the result of the whole set of disturbances (force, heat, technological manufacturing and assembly defects, or others). Failures in the joints shift the machine end units (tool and workpiece) from their nominal state during forming. In order to assess the degree of influence of each unit's failures on the machining accuracy, they should be integrated or differentiated over the working space, estimating the instability. This may be carried out by transferring the discrepancies of the separate joints into the machining points in the working space. When setting the initial and boundary conditions of the machine unit calculation scheme, the following conditions and allowances are taken into account:
- the condition of unit form persistence, as for an absolutely rigid body, which is applied when calculating the contact flexibility of the joints, taking into account the form persistence of the unit between the joint and the forming point;
- the elasticity condition, according to which a movable machine unit fixed on the guideways has a certain elasticity in accordance with Hooke's law [9], and the joint does not open under the tipping moment from the cutting forces and the weight of the movable unit;
- the stability condition, according to which the inertia moments of the joint area oppose the tipping and twisting moments from the loads that move, push, and disturb the unit due to the cutting force and weight;
- the cutting force is applied at the calculation point of the working space (the action point), and the weight is applied at the centre of mass of the unit;
- during joint displacement, deviations of the joints carrying the tool and the workpiece from their nominal locations may occur within the working space, and these influence the geometric, static, and dynamic accuracy of the machine.
For the analytical assessment of the volumetric accuracy of the multi-axis machine, Euler's laws were used, which make it possible to form the set of resultant displacements within the working space caused by the cutting force and the weight of the movable units. So, for example, the developed model of the elastic displacement within the working space along the coordinate axes i, j, k (x-y-z), for the unit moving along axis i of the machine, is expressed in terms of: the displacements of the calculated point p along axis i caused by the rotations of the units moving along axes i, j, and k; the cutting force components acting on the tool along axes i, j, and k; the sizes of the guideways of the units moving along axes i, j, and k and parallel to axes j and k; and the offsets (along the length, width, and height of the unit) from the machining point p to the elastic centre of the unit, measured along axes i, j, and k. The displacements along coordinate axis i (δij) and axis j (δik) for the units moving relative to coordinate axes j and k are determined in the same way. During the modelling, the superposition principle was used, according to which the resultant displacement at a working point of the working space is determined by transferring all the discrepancies of the whole set into this point [9,12]. The displacement characterizing the accuracy of the machine arrangement as a whole was determined from the components δ∑x = δx(IB) - δx(WB), δ∑y = δy(IB) - δy(WB), δ∑z = δz(IB) - δz(WB).
Here δx(IB), δy(IB), δz(IB) are the displacements of the calculated point of the tool branch along the axes x, y, z, and δx(WB), δy(WB), δz(WB) are the displacements of the calculated point of the workpiece branch along the axes x, y, z. Scanning the discrete points of the working space makes it possible to determine the average displacement over the working zone as well as its spread. For a correct comparison of the arrangement variants, certain conditions of comparability must be ensured by specifying equal guideway sizes in the corresponding units and equal travel lengths along the axes for all arrangements. The constructive condition of joint associativity, which ensures the stability of the machine operation as a whole, was also considered [11,12,15]. The developed method for the comparative assessment of the volumetric accuracy of multi-axis machines includes the following stages: 1) creation of the initial geometric 3D image of the machine; 2) geometric synthesis of alternative arrangement variants; 3) calculation of the volumetric accuracy for the arrangement variants; 4) comparative analysis of the results and selection of the most appropriate machine arrangement according to its volumetric accuracy. Figures 2 and 3 show an example of the visualization of the 3D geometric synthesis of an arrangement (Fig. 2) and of the volumetric accuracy assessment (Fig. 3). The results of the comparative assessment make it possible to forecast the achievable accuracy of multi-axis machining and to choose the best design decision for the machine according to the criterion of volumetric accuracy.

Conclusion

The formalized results make it possible to establish the connections between the construction layout factors and the volumetric accuracy output parameters of the multi-axis machine. This allows the machine volumetric accuracy to be controlled at the early design stages when choosing the composition and parametrization. The calculation method is visualized by the synthesis of a parametric 3D geometric image of the machine arrangement and by plotting the volumetric accuracy over the working space discretized by a multitude of design points. The analysis of the obtained results makes it possible to justify the best constructive decisions in accordance with the stated technological requirements and restrictions, and also to make recommendations on the use of different machine arrangements for various manufacturing conditions. A further development of this study is the adaptation of the proposed methodology to the technologically conditioned synthesis of multi-axis machines in high-tech engineering industries for the manufacture of structurally complex parts for the aviation and defense industries.

Fig. 1. Machine unit model with translational movement, implementing the joint "kinematic couple P".
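The superposition step and the working-space scan described above can be illustrated with a short sketch. The per-branch displacement model below is a deliberately simplified stand-in (the paper's own Euler-based elastic model depends on guideway sizes, cutting force, and unit weight), and the resultant displacement is assumed here to be the Euclidean norm of the per-axis differences, since the paper's combining expression is not reproduced in the available text.

```python
import numpy as np

def branch_displacement(point, compliance, force):
    """Toy stand-in for the elastic model of one kinematic branch: displacement grows
    with the lever arm from the working-space centre. NOT the paper's model; it only
    mimics its inputs (applied force, a compliance factor, and point geometry)."""
    lever = 1.0 + 0.1 * np.linalg.norm(point) / 100.0
    return compliance * force * lever              # (dx, dy, dz) for this branch

def volumetric_accuracy(points, compliance_ib, compliance_wb, force):
    """Scan discrete working-space points; report the mean resultant displacement and its spread."""
    resultants = []
    for p in points:
        d_ib = branch_displacement(p, compliance_ib, force)   # tool branch (IB)
        d_wb = branch_displacement(p, compliance_wb, force)   # workpiece branch (WB)
        d_sum = d_ib - d_wb                                   # delta_sigma_x, _y, _z
        resultants.append(np.linalg.norm(d_sum))              # assumed resultant: Euclidean norm
    resultants = np.asarray(resultants)
    return resultants.mean(), resultants.std()

# Example: a 5 x 5 x 5 grid of design points over a 500 mm cubic working space,
# with a hypothetical cutting force vector in newtons.
grid = np.stack(np.meshgrid(*[np.linspace(-250.0, 250.0, 5)] * 3), axis=-1).reshape(-1, 3)
force = np.array([100.0, 50.0, 200.0])
mean_d, spread_d = volumetric_accuracy(grid, 2e-5, 1.5e-5, force)
```

Comparing `mean_d` and `spread_d` across alternative arrangements mirrors stage 4 of the method: choosing the arrangement with the smallest average displacement and scatter over the scanned working space.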
2018-12-12T23:13:21.622Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "0bed4398c682fde88ef6d32ec8da8ea6b7f0d9a8", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/43/matecconf_icmtmte2017_01046.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0bed4398c682fde88ef6d32ec8da8ea6b7f0d9a8", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Computer Science" ] }
248203054
pes2o/s2orc
v3-fos-license
Infant brain imaging using magnetoencephalography: Challenges, solutions, and best practices Abstract The excellent temporal resolution and advanced spatial resolution of magnetoencephalography (MEG) makes it an excellent tool to study the neural dynamics underlying cognitive processes in the developing brain. Nonetheless, a number of challenges exist when using MEG to image infant populations. There is a persistent belief that collecting MEG data with infants presents a number of limitations and challenges that are difficult to overcome. Due to this notion, many researchers either avoid conducting infant MEG research or believe that, in order to collect high‐quality data, they must impose limiting restrictions on the infant or the experimental paradigm. In this article, we discuss the various challenges unique to imaging awake infants and young children with MEG, and share general best‐practice guidelines and recommendations for data collection, acquisition, preprocessing, and analysis. The current article is focused on methodology that allows investigators to test the sensory, perceptual, and cognitive capacities of awake and moving infants. We believe that such methodology opens the pathway for using MEG to provide mechanistic explanations for the complex behavior observed in awake, sentient, and dynamically interacting infants, thus addressing core topics in developmental cognitive neuroscience. | INTRODUCTION The earliest phases of human development invoke a special fascination because they allow invaluable insights into the origins and functions of the human mind. The last decades have produced rapid advances in noninvasive brain imaging techniques that provide a window into infant brain function. Magnetoencephalography (MEG) measures the magnetic fields produced by neuronal currents in the brain (Hämäläinen, Hari, Ilmoniemi, Knuutila, & Lounasmaa, 1993). Unlike other noninvasive neural measures such as, electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS), MEG has both excellent temporal resolution (<1 ms), and advanced spatial resolution. MEG is noninvasive, silent, and generally does not require participant sedation. Setup is simple and quick, and minimally demanding on the participant. The participants can be easily monitored and optionally accompanied by a caregiver or assistant during the measurement. For these reasons, MEG makes an excellent tool to study infants and young children. | Challenges presented by infant MEG measurements Nonetheless, a widespread belief persists that collecting infant MEG data presents a number of limitations that are difficult to overcome (Azhari et al., 2020;Nevalainen, Lauronen, & Pihko, 2014). Due to this notion, many researchers either avoid conducting infant MEG research or believe that in order to collect high-quality data, they must impose limiting restrictions on the infant. Besides the many general challenges of collecting data from infants and young children, the two most prevalent technical challenges are: (a) compromised signal-tonoise ratio (SNR) due to increased scalp-to-sensor distance, and (b) signal distortion caused by participant movement. In mathematical terms, the leading term of the magnetic field, the dipolar component, decays as the square of the distance between the source and the point where the magnetic field is evaluated. The higher-order components, corresponding to more complex features of the magnetic field, decay even faster. 
Consequently, the magnitude and the information content of the detected MEG signal decays as the head moves farther away from the (noisy) sensors. The obvious solution to overcome such challenges is to request the participant to keep the head still and as close to the sensors as possible. In the case of infants, such a request would limit the MEG studies to sedated or sleeping infants. The smaller head size of infants also allows for a considerable range of movement inside adult-sized whole-head helmets. This leads to distortions of the spatial topography in the MEG signal distribution and errors in subsequent source localization if these distortions are not compensated for. Interestingly, from a physics point of view, head movements can be equivalently considered as movements of the sensor array around a static head. Provided that the distance between the head and the sensors remains reasonably short, this can actually lead to more comprehensive spatial sampling of the field, leading to increased information of the underlying neural currents (Medvedovsky et al., 2016). Thus, head movements do not necessarily deteriorate signal quality as long as their effects are taken into account mathematically. This requires that the head be transformed to a representation that is independent of the measurement device. Existing solutions can be divided into types of methods that either utilize a standard representation of the signal at the level of cortical sources, such as the minimum-norm estimate (MNE) (Uutela, Taulu, & Hämäläinen, 2001), or a series expansion of the magnetic field with minimal assumptions about the neural current configuration. The latter approach can be accomplished by signal space separation (SSS) Taulu, Simola, & Kajola, 2005), which is the method applied in this article. However, our suggested workflow could also utilize the MNE-based movement compensation in which the sensor-level signals are transformed into a source-level MNE estimate for a device-independent representation. The signals are then transformed back to a sensor-level representation corresponding to a specified target head position. In SSS, the sensor-level signals are transformed to device-independent magnetostatic multipole moments, followed by a transformation back to the target sensor-level representation. A benefit of SSS is that the effect of external interference signals can be compensated for in the same processing step that does the head movement compensation. In this article, we focus on the overall workflow of conducting infant MEG studies for a wide range of neuroscience questions while our recent methodological article (M. D. Clarke et al., 2022) describes the associated mathematical signal processing and source analysis in more detail. Many researchers have avoided some of these challenges (i.e., lack of compliance and movement) by only performing MEG experiments with infants or young children who are sleeping (Hartkopf et al., 2016;Pihko et al., 2004), or in some cases, sedated (Birg, Narayana, Rezaie, & Papanicolaou, 2013). While these measurements are appropriate for studying the brain during sleep, active forms of cognition such as language, visual perception, attention, memory, decision-making, social interaction, and theory-of-mind, can only be measured in awake infants. Furthermore, a common practice is to position a sleeping infant's head in a particular location in the helmet closest to a small number of sensors in a region of interest. 
Although this method reduces movement and ensures close head-tosensor distance, it also limits the scope of a study, and suggests the brain process in question is tied to activity exclusive to that brain region, which is not always the case. Using whole-brain imaging, studies have shown that even speech sound processing in infancy recruits a large network of brain regions (e.g., Bosseler et al., 2021), including bilateral frontal, auditory, and parietal cortices. Furthermore, the contribution of these different brain regions changes as a function of development and experience with language (e.g., Ferjan Ramírez, Ramírez, Clarke, Taulu, & Kuhl, 2017;Kuhl, Ramírez, Bosseler, Lin, & Imada, 2014). Considerable advances in MEG analysis methods and hardware designs in recent years have helped to address the issues listed above (Chen et al., 2019;Kao & Zhang, 2019). There are several review articles that provide guidelines for adult MEG studies. Gross et al. (2013) provided detailed guidelines for general MEG data acquisition and analysis suitable for use with adults. Several articles include comprehensive reviews on basic MEG physiology, general acquisition, and analysis of MEG signals and clinical applications (Bagi c et al., 2011;Bowyer, Zillgitt, Greenwald, & Lajiness-O'Neill, 2020;Hari et al., 2018;Pernet et al., 2020;Puce & Hämäläinen, 2017). Kao and Zhang (2019) and Chen et al. (2019) provided extensive reviews on infant paradigms and analyses for various protocols, and infantspecific systems and hardware. To date, there are no articles detailing methods specific to MEG measurements of awake infants. In the current article, we will discuss the various challenges unique to imaging awake infants and young children with MEG, and share general best-practice guidelines for data collection, acquisition, and preprocessing. These guidelines have been developed and refined over the roughly two decades of collecting infant data at the University of Washington's Institute for Learning & Brain Sciences (I-LABS) MEG Center and at collaborating institutions. We believe these methods can serve as helpful general guidelines for other researchers, and also serve as a basis for further discussion and development as MEG technology and software are improved. While the data acquisition guidelines are specific to awake infant measurements, our guidelines for data preprocessing and analysis can be applied to all infant protocols or any adult clinical populations where patients are unable to remain still during recordings. These improvements have the potential to yield insights into the dynamics of neural processes in the developing brain. | Data acquisition: Background For adults, the MEG data acquisition process typically involves a number of standard steps to obtain high-quality data (Gross et al., 2013) including system setup, experimental design, general acquisition setup, and preparation of the participant. We have adapted this process to accommodate for the technical and behavioral challenges that infants present. Most notably, infants have a limited time window while they are awake, alert, and compliant. A supplemental video demonstrating this process for infant data acquisition is available at https://youtu.be/WfKRQSjHOJ8. | Data acquisition: Recommendations Efficiency is critical when it comes to successful infant MEG data collection; however, it is important to strive for an environment that is not frantic or disruptive to an infant's calm, alert state. 
Here we provide recommendations for equipment, and suggest experimental design modifications to adult protocols. These equipment recommendations are made for use with a standard adult-sized MEG system with superconducting quantum interference device (SQUID) sensors, but can easily be modified for a system with optically pumped magnetometers (OPM) or an infant-sized helmet system.

| Equipment: Prior to data acquisition

Outside the magnetically shielded room (MSR), it is important to prepare equipment prior to the arrival of the family for head digitization and other processes. The digitization area can be set up with MEG-compatible toys and chairs for the infant and caregiver (Figure 1).

• Digitization device
As is the case for adult participants, head digitization is important for accurate co-registration and subsequent source localization. When choosing a digitization method for infants, the tolerance level for movement and speed of the digitization process must be considered. Our lab uses the Polhemus FASTrak system (Polhemus, Colchester, VT), which includes a stylus, a receiver attached to a wooden chair, and a sensor placed on the participant's head. The sensor-receiver set will adequately account for participant movement, and manual digitization with the stylus can be done quickly by experienced personnel. For infants, we use a wooden highchair with an attachment for the receiver, and the sensor is taped to the top of a soft cloth cap on the infant's head. In principle, other digitization devices may also be used, such as a 3D camera system.

• Digitization highchair
Most participants ages 5 months or older are able to sit upright in a high chair with five-point VELCRO safety straps and with an adult nearby. However, any infant unable to support their head or sit upright can be held in the arms of a caregiver or research assistant.

• Head position indicator coils
If a stationary sensor array (i.e., a typical SQUID whole-head system) is used, it is highly recommended to compensate for any movement of the head in reference to the sensors. Head position indicator (HPI) coils are used to continuously output sinusoidal signals that can be localized offline after the experiment to provide the head position (translation and rotation) relative to the MEG sensors with millisecond accuracy (Ahlfors & Ilmoniemi, 1989). These head positions will later be used for continuous head movement compensation.

FIGURE 1 Digitization setup for use with the Polhemus device. Left panel: A toy-waver is engaging with an infant as a researcher digitizes the anatomical landmarks, head position indicator (HPI) coil locations, and additional head points. Top right panel: An infant wears a soft cap equipped with the HPI coils. Bottom right panel: A researcher places the foam halo on the infant's head before the infant is positioned under the MEG helmet.

• Infant cap
An elastic infant-sized cap can be used in order to serve two purposes: (a) for temporary placement of the Polhemus sensor during head digitization, and (b) to adhere the HPI coils to the infant without having to tape the coils directly onto the infant's skin/hair. A soft and stretchy cap that fits snugly onto the infant's head helps to avoid movement of the coils after digitization (Figure 1, top right panel), which is essential for further signal processing and analysis. We recommend having a wide variety of cap sizes to accommodate different head sizes. The cap is secured to the head using a soft VELCRO chin strap that can be easily adjusted.
Additionally, pieces of soft medical tape can be used to secure the front of the cap to the infant's forehead to prevent the cap from sliding. Our caps also include holes for the ears to allow easy access to the anatomical points during digitization. The ear holes also ensure that the ears can be accessed throughout the appointment if insert earbuds are used.

• ECG/EOG electrodes
Electrodes for electrocardiography (ECG) and electrooculography (EOG) should be used to measure fields from the eyes and heart if the infant will tolerate them, as this assists with later artifact removal. The electrodes are placed on the infant during the preparation process. Using pre-prepped disposable electrodes allows the placement to happen quickly. Using small electrodes made specifically for infants ensures that the electrode adhesive does not cause discomfort during application or removal.

• Toys
Toys that entertain infants are essential equipment for the entire infant MEG process, including preparation. A large collection of toys appropriate to different stages of infant development can be set up for immediate access during digitization. All toys must be tested prior to use to ensure that they are nonferrous and do not interfere with MEG sensors. Examples of toys that can be used during the preparation process include: stacking cups, squishy bath toys, touch-and-feel board books, hand puppets, bubbles, masks, and balls.

• Run sheets
Documenting important information throughout the MEG process on run sheets is critical for subsequent data analysis. Refer to Figure 2 for an example.

2.2.2 | Equipment: During data acquisition

Both pediatric and adult MEG systems may be used to acquire functional imaging data from infants. For a fixed sensor noise level, a pediatric-sized helmet allows for a higher SNR because the sensors in the helmet are closer to the sources of electrical activity in the infant's head. An adult helmet, on the other hand, allows for a certain degree of movement from infants during data collection while keeping them comfortable and engaged. The ability to move may reduce anxiety in infants who do not like to feel confined. For awake infant protocols, keeping the MEG in an upright position increases success rate and reduces attrition. The infant is seated in a modified car seat; the modifications include:
- replacing all adjustable straps from the car seat with soft five-point double-sided VELCRO straps (the VELCRO straps must be long enough to reach over the torso to allow movement of the legs and arms, but keep the infant from falling out or slouching);
- adding an appropriately sized head rest to the top of the back of the car seat for head support;
- adding a booster cushion to the car seat so that the infant is seated high enough that their head is positioned above the top of the seat (having a variety of booster cushion sizes available is helpful to accommodate different ages and sizes);
- covering the seat with a soft blanket that can be removed and washed between participants;
- placing soft MEG-safe pillows around the sides of the seat for additional support and comfort.

• Toys
A second set of toys can be set up in the MSR so that they are immediately accessible to the assistant during MEG measurements. Toys can be placed in bins on MEG-safe rolling carts so they can be easily transported as needed. Avoid toys that interfere with the specific study design, for example, toys that rattle or squeak during an auditory paradigm.

• Videos
Infant-appropriate videos projected onto a screen in the MSR can be used to distract the infant during data acquisition.
• Video recording Video monitoring cameras inside the MSR give MEG researchers full view of the infant. We also recommend video recording the sessions using an MEG-safe video setup, which allows for coding behavior after data collection is complete. • Additional seating in MSR An MEG-safe chair can be placed inside the MSR next to the MEG system so that the caregiver can be seated nearby, but out of sight from the infant. The area in front of the MEG system is left clear to easily access the infant as needed. | "Toy-waving": The key to successful infant data acquisition Researchers must be trained to help regulate an infant's mood and behavior during a data collection session. Having skillful professionals focused on reducing infant discomfort in an unfamiliar environment helps to reduce high attrition rates (Bell & Cuevas, 2012). Using toys to produce visual stimulation is a cornerstone of effective technique (Ballard et al., 2017;Hoerneke & Schoch, 2019). This practice of facilitating infant participation in research using toys will be referred to in this section as "toy-waving" (Werner Olsho, Koch, Halpin, & Carter, 1987). The toy-waver's main goal is to maintain affect and arousal modulation, while ensuring that the infant's environment and position during the appointment complies with the experimental protocol (Kuhl, 1985). The target disposition for infant participants is a state of "pleased interest." Ideally the infant is highly attentive, neither laughing nor crying. Similarly, the ideal arousal level is calmness without drowsiness. Excited infants frequently wave their arms and kick their legs; a crying infant contracts muscles throughout the face, arms, and torso. In either state, the infant response causes muscle and movement artifacts in the MEG data. While efficient movement compensation methods are available, muscle artifacts, especially in the neck and head area, are problematic (Muthukumaraswamy, 2013). Depending on the experiment, a toy-waver can use auditory, visual, tactile, and social stimuli to engage or soothe the participant as needed. The toy-waver must rapidly assess participant preferences and in response, adjust their affect, proximity, and mode of stimulus presentation to evoke the desired infant response. | Head digitization The increased level of head movement makes the digitization process especially important for infants. Good representation of the head shape as well as accurate digitization of the cardinal points and HPI coils are essential for accurate source modeling. Once the infant cap is placed onto the head and fastened with the VELCRO strap, the toywaver uses toys to engage and distract the infant. We suggest collecting a minimum of 200 additional points on the head surface to ensure a comprehensive representation of the head shape. In addition, documenting the precise location of the cardinal points is critical for proper co-registration and subsequent source localization. External electrodes are placed on the infant at either the beginning or the end of the digitization process. Electrode placement is important in order to obtain a high-quality signal and to prevent the electrodes from being torn off during the measurement. We place two ECG electrodes on the infant to measure the electrical signal of the heart; one electrode placed on the chest slightly to the left of the sternum and the second electrode in a similar location on the back. 
We also place two EOG electrodes on the infant to measure the electrical signals produced by eye movements; one placed near the orbital rim slightly above the eye and the second placed slightly below the opposite eye. This electrode placement allows us to capture both blinks and saccades, while minimizing the number of electrodes that are used (Figure 3, bottom panel). As with many experimental design modifications directed to reduce infant attrition, researchers may choose to forgo the use of external electrodes if an infant will not tolerate them but would otherwise be successful.

FIGURE 2 An example of an MEG session run sheet. Documentation of MEG data acquisition parameters, for example, the integrity of the HPI coils and digitization parameters (LPA, RPA). Sketch of an infant head model to document the location used for anatomical landmark digitization.

| Infant data collection

Continuous head position tracking, channel saturation monitoring, and online averaging inform the researcher about the quality of the MEG measurement during data acquisition. Because infants can move erratically, continuous head position monitoring is necessary. Channels must be continuously monitored for sources of noise and for saturation. If too many channels become saturated through the course of the measurement, we determine and remove the source of noise before resuming data acquisition. This is important because the signal processing methods that are essential for successful infant data analysis, such as head movement compensation, suffer from channel saturation. Saturation is indicated by the sensor signals presenting as horizontal lines and is, in principle, straightforward to detect. However, continuous visual inspection of all channels in search of saturation is not feasible, and we therefore employ a specific saturation monitor that alerts the operator if too many channels become saturated (Nurminen, 2019).

| Data acquisition: Reporting

As with adult recordings (Gross et al., 2013), reporting for infant protocols is essential. Reports should include equipment diagrams and specifications with static information about the system setup and run sheets to document information about appointments, such as participant preparation, or details of the stimulus delivery. However, for infant data acquisition, a few extra items are worth reporting. Infant behavior during the MEG session, including periods of crying or excessive movement, should be documented in order to apply data-driven methods to suppress any residual artifacts in the data. Depending on the type of digitization method, it may be difficult to digitize points that are very close to the infant's eyes or ears (e.g., LPA/RPA), and therefore any deviations during digitization from the true locations should be documented on a run sheet and ideally also with a camera to ensure accurate co-registration between MEG data and the head model.

| Data preprocessing: Background

Data preprocessing is necessary to suppress noise in the data which contaminates the brain signal of interest. The measured MEG signal is made up of a combination of brain activity, environmental interference (e.g., power lines, electronics), physiological interference (e.g., heart, eye blinks, other muscle activity), and sensor noise (e.g., transducer or electronic noise). Magnetic fields from the brain are extremely weak (Hämäläinen et al., 1993) and the amplitude of interfering signals is often orders of magnitude larger in comparison.
With infants, several additional factors adversely affect the SNR. Infants can become fussy and irritable during data collection, resulting in fewer usable trials. Additionally, the smaller head size of infants can increase the distance of the head to the sensors, especially with adultsized helmets, lowering the SNR. Furthermore, infants tend to move much more than adults inside the helmet, which can lead to a loss of spatial information and potentially result in inaccurate localization of the brain activity, unless properly compensated for (Larson & Taulu, 2017). Below we provide recommendations for noise suppression methods, and suggest parameters to optimize the SNR of infant data. A sample example script demonstrating these stages of analysis for a single participant's data from (Mittag, Larson, Clarke, Taulu, & Kuhl, 2021) is available at https://github.com/ilabsbrainteam/2022-Best-Practices-Infant-MEG. | Software There are a number of packages available for MEG analysis. We use MNE-Python (Gramfort et al., 2013), an open-source Python software package for processing, visualizing, and analyzing human neurophysiological data, including MEG. Specifically for infant data, it contains implementations of the most recent advancements for signal quality enhancement using spatial filtering and movement compensation (M. Clarke, Larson, Tavabi, & Taulu, 2020;Helle et al., 2020). | Visual inspection Efficient signal processing is crucial for infant MEG. Modern automated signal processing methods efficiently achieve robust data quality even under very challenging data collection conditions. However, it is always good practice to visually inspect the data before and after applying preprocessing algorithms for artifacts, bad or flat channels, and bad segments. To ensure optimal data quality, we mark all bad channels or segments, and repair or remove these by subsequent processing methods. Any modifications to the data processing made on the basis of visual inspection should be explained in detail to ensure reproducibility. | External noise suppression SSS and its temporal extension, temporal signal space separation (tSSS) (Taulu & Hari, 2009;Taulu & Simola, 2006), are methods that compensate for external interference artifacts and are commonly used in MEG preprocessing. They are based on the vector spherical harmonic expansion of multichannel MEG signals under the quasi-static assumption of Maxwell's equations. While SSS is not effective against artifacts arising from sources very close to the sensors (roughly <50 cm), the tSSS method additionally suppresses the contribution of nearby artifact sources by utilizing temporal information. We recommend processing all infant data with tSSS because it is especially useful in cases where multiple people are moving close to the sensor array. When using tSSS with infant data, we recommend adjusting the following parameters based on the age and size of the participant: (a) correlation limit (CL), and (b) tSSS internal subspace. The effect of adjusting the tSSS correlation limit has been studied in detail (M. Clarke et al., 2020;Medvedovsky, Taulu, Bikmullina, Ahonen, & Paetau, 2009) and it is recommended that data with higher SNR use higher correlation limits, while data with lower SNR use lower correlation limits. The internal subspace truncation value for infant populations should be adjusted depending on the size and geometry of the sensor array. 
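As a concrete illustration, the following is a minimal MNE-Python sketch of this noise-suppression step: tSSS via `maxwell_filter`, with the two parameters discussed here (`st_correlation` and `int_order`) exposed, and continuous head-position estimation so that movement compensation happens in the same call. The file name, stimulus-channel layout, and numeric values are placeholders for illustration, not recommendations from this article; the specific parameter values are discussed next.

```python
import mne

# Placeholder file name; raw data recorded with cHPI (continuous head position) enabled.
raw = mne.io.read_raw_fif("infant_run01_raw.fif", allow_maxshield=True, preload=True)

# Estimate the continuous head position from the HPI coil signals.
chpi_amps = mne.chpi.compute_chpi_amplitudes(raw)
chpi_locs = mne.chpi.compute_chpi_locs(raw.info, chpi_amps)
head_pos = mne.chpi.compute_head_pos(raw.info, chpi_locs)

# tSSS with movement compensation. int_order and st_correlation are the two parameters
# discussed in the text; the values below are illustrative only.
raw_sss = mne.preprocessing.maxwell_filter(
    raw,
    int_order=6,                    # internal subspace truncation
    st_duration=10.0,               # temporal (tSSS) window, in seconds
    st_correlation=0.9,             # correlation limit
    head_pos=head_pos,              # continuous head movement compensation
    destination=(0.0, 0.0, 0.04),   # assumed target head position in device coordinates
)
```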
A value of 8 for the internal subspace has been recommended and was optimized for adult-sized heads, while six generally yields higher SNR data for infants due to head size and distance from the sensors. The reduced truncation value is justified based on the analysis of the cumulative signal power for different source-to-sensor distances as outlined in . | Movement compensation Head movements in MEG distort the magnetic field distribution measured by the sensors (Medvedovsky, Taulu, Bikmullina, & Paetau, 2007). Head movements are common in infants and can result in large errors in source localization. However, head movement compensation can restore localization accuracy even if infant data is collected using adult-sized helmets (Larson & Taulu, 2017). During movement compensation, the time-varying position of the head with respect to the sensors is estimated and the data are transformed to "virtual sensor" locations corresponding to a target head position, specified by the user or software. The recommended target position is the time weighted-average position. This achieves an effect as if the head had remained in a static spatial relationship to the sensors during the MEG measurement. | HPI coil SNR Given that movement compensation is essential, the accuracy of head position information is very important. Continuously measuring each HPI coil's SNR and location in reference to the other coils during acquisition ensures that the coils are functioning properly and have not moved on the head. The SNR calculation is based on estimating the amplitudes of the individual HPI signals oscillating at precisely specified frequencies and comparing these amplitudes to the sensor noise level. If fewer than three coils were functional or if the coils moved or during portions of data collection, then it is necessary to remove those segments of data to avoid biasing source localization. | Physiological noise suppression Heart artifacts are prominent in infants and young children due to their small body size and the closer proximity of the heart to the sensor array. These artifacts are problematic when compensating for head movements because of the time-varying spatial relationship between the brain and the heart. Multivariate signal processing techniques such as independent component analysis (ICA) and principal component analysis (PCA) (Uusitalo & Ilmoniemi, 1997) can identify and spatially remove noise sources that arise from the body, such as cardiac or blink artifacts. | Sensor noise suppression Intrinsic sensor noise is weaker than environmental and physiological noise. However, in cases where the SNR tends to be low, which is typical of infant data, suppressing sensor noise can improve both the data quality and the detectability of the signals of interest. Oversampled temporal projection (OTP) can effectively suppress sensor noise in MEG data (Larson & Taulu, 2018) and can be used in combination with existing methods, such as tSSS (M. Clarke et al., 2020) or other noise suppression algorithms. Parameters of subsequent noise suppression algorithms (e.g., tSSS) may need to be adjusted after the application of OTP (refer to M. Clarke et al., 2020 for details). | Source reconstruction: Background A growing number of MEG studies are focusing on source space analyses to directly assess neural generator activity as a function of development (Chen et al., 2019;Kao & Zhang, 2019). 
MEG source reconstruction consists of two components: computing a forward model that maps neural currents in the brain to MEG sensor values, and choosing a strategy for tackling the corresponding inverse problem that maps MEG data to brain currents. Modeling the sources in infant MEG data is generally the same as in adult data, but has unique challenges due to the immature structure of the infant brain and the lower SNR as compared with adult data.

| Conductor model for infants

For EEG analysis, the conductor model for infant heads can be particularly complex to construct because in infants, the skull is not fully formed. Sutures and fontanels are highly variable across participants, which makes modeling of the electric conductivity profile difficult and participant-specific. While the electric potential distribution is susceptible to the details of the spatial profile of the electric conductivity in the head, the magnetic field pattern is not as significantly affected by changes in the conductivity geometry. Thus, the infant skull may be modeled as a homogeneous conductor without significantly compromising source localization results (Lew et al., 2013). As with adults, the conductor model for infant MEG may be limited to a single homogeneous layer, formed either from a sphere or from the surface of the inner skull (using a BEM) as described by anatomical MRIs.

| Anatomical source models for infants

When anatomical data is used as a part of the forward model, whether derived from a template or individual MRIs, an additional process of MRI co-registration (of the MRI and head coordinate frames) must be performed (Chella et al., 2019). The resulting transformation matrix, together with the estimated head-to-MEG transformation, establishes the proper geometrical relationship between the sources and the sensors. When individual anatomical MRIs are available for infant participants, the co-registration process is much the same for infants as for adults. However, in addition to being expensive, individual MRIs for infants are typically much more difficult to obtain, so we recommend using suitable age-matched templates from O'Reilly, Larson, Richards, and Elsabbagh (2021), as they overcome many typical issues with infant source modeling (e.g., lack of surface and volumetric anatomical labels). Surfaces from anatomical templates should be warped to match the participant's head digitization. From there, a volumetric or surface source space can be constructed, as with adult data (Gross et al., 2013). In practice, this can be achieved in MNE-Python by using the "mne coreg" manual co-registration tool. In this way, the individual anatomy can be matched as closely as possible by the template. When using a surrogate MRI, free-orientation sources should be used even for a surface source space (dipoles along the gray-white matter cortical surface boundary) because the cortical folding of the surrogate is not precisely matched to that of the individual.

| Inverse modeling for infants

The inverse model combines measured MEG data with the forward model to estimate the amplitude of brain activity over time, while accounting for sensor noise structure. The steps for performing inverse calculations are largely the same for infants and adults; however, results must be checked when performing inverse calculations on infant data to determine that source localization errors due to low SNR or noise in the data are minimized. There are many potential inverse methods; one concrete possibility is sketched below.
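The following is a hedged MNE-Python sketch of the forward and inverse steps described above: a single homogeneous inner-skull BEM layer, a surface source space on an age-matched template, and a minimum-norm inverse with free source orientations. The template subject name, file names, stimulus channel, and numeric parameters are placeholders rather than values prescribed by this article.

```python
import mne

subjects_dir = "/path/to/subjects"        # placeholder FreeSurfer-style subjects directory
subject = "infant_template_6mo"           # placeholder name for an age-matched template subject
trans = "infant-trans.fif"                # co-registration produced with the "mne coreg" tool

# Preprocessed data (see the preprocessing sketch above); names are placeholders.
raw = mne.io.read_raw_fif("infant_run01_sss_raw.fif", preload=True)
events = mne.find_events(raw, stim_channel="STI101")   # stim channel name is an assumption
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.8, baseline=(None, 0.0), preload=True)
evoked = epochs.average()

# Single homogeneous conductor layer: inner-skull BEM.
bem_model = mne.make_bem_model(subject, ico=4, conductivity=(0.3,), subjects_dir=subjects_dir)
bem = mne.make_bem_solution(bem_model)

# Surface source space on the template cortex.
src = mne.setup_source_space(subject, spacing="oct6", subjects_dir=subjects_dir)

# Forward model: maps source currents to sensor signals.
fwd = mne.make_forward_solution(evoked.info, trans=trans, src=src, bem=bem, meg=True, eeg=False)

# Noise covariance from the pre-stimulus baseline, then a free-orientation (loose=1.0)
# minimum-norm inverse, as recommended when a surrogate MRI is used.
noise_cov = mne.compute_covariance(epochs, tmax=0.0)
inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, noise_cov, loose=1.0)
stc = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
```

As the text notes, any such pipeline should be checked against a known ground truth (for example, the primary auditory onset response) before being trusted for infant data.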
Regardless of the source localization method, almost all are tested and validated on adult data, not infant data. Therefore, source estimation must be closely examined. Researchers should iteratively produce evidence of the quality of source reconstruction steps and then adjust to minimize errors as needed. Ideally, some sort of known ground truth for localization (e.g., primary auditory onset response in A1) can be used to validate a given approach. | Data preprocessing and source reconstruction: Reporting Automated data quality reports, produced upon completion of source estimation, are an essential tool for infant researchers. These reports allow rapid visual inspection of the data and the results of both preprocessing and source estimation (Figure 4). These reports allow for inspection of the co-registration alignment, source space, forward model, noise covariance, SNR, and source estimates. | DISCUSSION In this manuscript, we have discussed the various challenges with infant MEG and proposed some basic best-practice guidelines for data collection, acquisition, and analysis. Using these techniques, we are able to reliably obtain high-quality, robust infant brain data from our adult-sized SQUID system. The goal of this article is to allow our existing pipeline and practices to be used as a foundation for other laboratories to adapt and build upon, and to improve standards for MEG data collection, analysis, and reporting. These guidelines will surely change and adapt as exciting new advances in MEG technology and hardware emerge, including OPM sensors and infantspecific systems. CONFLICT OF INTEREST The authors declare that there is no conflict of interest. AUTHOR CONTRIBUTIONS Maggie D. Clarke contributed to writing, conception, design, investigation, methodology, and the supplementary video. Alexis N. Bosseler contributed to writing, investigation, methodology, and the supplementary video. Julia C. Mizrahi contributed to writing, investigation, methodology, and the supplementary video. Erica R. Peterson contributed to writing and investigation. Eric Larson contributed to review and editing and created the sample data pipeline and analysis. Andrew N. Meltzoff contributed to review and editing. Patricia K. Kuhl contributed to review and editing of the manuscript and the supplementary video. Samu Taulu contributed to writing, the conception, and design of the manuscript and provided supervision. All authors contributed to manuscript revision, read, and approved the submitted version. DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request.
2022-04-17T06:22:54.200Z
2022-04-16T00:00:00.000
{ "year": 2022, "sha1": "e93513dbd95d6eb01ee0f4ef8f613396cd17f03d", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/hbm.25871", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "bf5586e299a778ee63ff6190436b5795c80e733e", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
245553818
pes2o/s2orc
v3-fos-license
Small Cell Neuroendocrine Carcinoma in Head and Neck

ABSTRACT

Introduction: Poorly differentiated neuroendocrine carcinomas (NECs) originating from the eye are rare and highly malignant diseases with a poor prognosis. Small cell NEC of the head and neck is a rare and highly aggressive disease. Early recognition and treatment are crucial for reducing morbidity and mortality.

Case presentation: A 19-year-old male visited our oncology surgery outpatient department due to a progressively enlarging neck mass originating from the eye. The patient had previously been diagnosed with invasive choroidal malignant melanoma of the left eye, which had metastasized to the lymph nodes of the left neck. He underwent surgical removal/exenteration of the left eye. The reported survival of patients with poorly differentiated tumors is about 14%, while patients with well-differentiated NEC have a survival rate of 34%. The prognosis of these tumors is very poor, with over 90% of patients having distant metastatic disease. Histopathological examination of the tumor tissue and its immunohistochemistry, with positive staining for CD56, NSE, Synaptophysin, and Ki67, suggested small cell NEC.

Conclusions: It is crucial to establish an early diagnosis of these tumors to reduce morbidity and mortality. No optimal treatment for this disease has yet been established.

The patient complained of a weight loss of 10 kg in one month. The remainder of his medical history was unremarkable. There was no history of cigarette smoking or alcohol intake. On physical examination, he had a Karnofsky score of 50 and appeared seriously ill but was conscious (compos mentis) and cooperative, with a blood pressure of 110/70 mmHg, heart rate of 80 beats/min, respiratory rate of 18 breaths/min, a temperature of 36.5 degrees Celsius, and a VAS score of 6. On examination of the head and neck region, there was a large lump on the left side of the face extending from the left neck to the left temple, measuring about three times the size of the patient's face. The lump appeared large and lobulated, with the skin surface partially shiny and ulcerated at the top of the lump, visible enlargement of the superficial veins, and partly blackish skin.
On palpation, the mass had a solid, hard consistency, was fixed, and measured 40 x 30 x 19 cm. The intraoral examination found trismus (mouth opening of 1.5 cm) and a protruding left buccal mass with intact mucosa. On examination of the right cervical lymph nodes, a lump measuring 7 x 6 x 4 cm was palpable. Laboratory tests were performed, showing hemoglobin of 9.95 g/dL, leucocytes of 9,950/mm3, platelets of 337/mm3, hematocrit of 48%, albumin of 3.5 g/dL, urea of 34 mg/dL, blood creatinine of 0.9 mg/dL, sodium of 136 mmol/L, potassium of 4.3 mmol/L, and chloride mmol/L. On the contrast-enhanced CT scan of the head and neck (Figure 3), a very large, inhomogeneous, isodense mass was found with a distinct lobulated edge, measuring 37.8 x 17.7 x 20 cm, accompanied by multiple calcifications and extending from the frontal region to the left superior mediastinum. The mass enhanced slightly after intravenous contrast administration. The mass appeared to extend into the frontal sinus and the left orbit, reaching the left ethmoid sinus, left maxillary sinus, left nasal cavity, and nasopharynx. The larynx was pushed to the right. The intracerebral structures were within normal limits. The thoracic CT scan showed no evidence of metastases. Abdominal ultrasound examination showed no evidence of metastasis to the liver or other intra-abdominal organs. Based on the histopathological examination and immunohistochemistry, the patient was diagnosed with small cell NEC, T4bN3bM0. The tumor was unresectable, and the patient was in a severe general condition with possible tumor compression of the trachea and esophagus. He was managed to improve his general condition by providing adequate nutritional intake via a nasogastric tube, with gastrostomy to be performed if the nasogastric tube could no longer be used, while preparations for palliative chemotherapy continued. If the tumor size decreased, radiation would be given. The patient's condition deteriorated, and the lumps in the head and neck grew rapidly, especially towards the left hemithorax. While being prepared for chemotherapy, the patient was declared dead.

DISCUSSION

NEC is a composite tumor of the nervous and endocrine systems that can release neuropeptides into the systemic circulation [1]. Histopathologically, small cell carcinoma is a malignant epithelial tumor consisting of small cells with scant cytoplasm, finely granular nuclear chromatin, indistinct cell boundaries, and absent or inconspicuous nucleoli. More than 90% of small cell carcinomas have neuroendocrine features [4]. The WHO has classified them into four neuroendocrine subtypes: typical carcinoid tumors, atypical carcinoid tumors, small cell neuroendocrine tumors, and paragangliomas [5]. Some reported risk factors are age > 50-65 years, smoking, male gender, history of alcohol consumption, and history of radiation. However, these risk factors are not established causes of small cell NEC. In this patient, no risk factors were found except gender [6]. Symptoms generally depend on the specific tumor site. Dysphagia, a globus sensation, and respiratory distress can occur with laryngeal lesions. Pain or a palpable mass suggests a diagnosis of salivary gland involvement. Facial nerve palsy is a more ominous sign that typically identifies a malignant rather than benign disease [7]. Conventional anatomical imaging and functional imaging using radionuclide scintigraphy and positron emission tomography/computed tomography can be complementary for the diagnosis, staging, and monitoring of treatment response.
Multislice CT facilitates rapid and detailed evaluation of the entire neck with the ability to produce multiplanar reformatted images. MRI has a superior soft-tissue resolution, which makes it an ideal technique for imaging head and neck masses. It is superior to CT in defining the intracranial extension of tumors [8]. NECs are classified by their mitotic count and Ki-67 index. The mitotic counts of poorly differentiated NECs, generally called high-grade or G3 NECs, are greater than 20 per 10 HPFs, and the Ki-67 level is greater than 20%. The angioinvasion of high-proliferation tumors with a Ki-67 level higher than 20% is extensive, and these tumors demonstrate a marked potential to produce metastatic disease [9]. In concordance with the Ki-67 positive result, this patient showed rapid progression and metastases. There is no consensus on the treatment of NEC. The treatment of extrapulmonary neuroendocrine tumors depends on whether the tumor is resectable, locoregionally advanced but unresectable, or metastatic [10]. Surgical operation is not always the first choice due to the postoperative impairment of quality of life; combined chemoradiotherapy aiming at organ preservation is preferred. Based on the results of the multidisciplinary tumor board meeting, preparations for chemotherapy and radiation were still being carried out, as there was no airway obstruction, in the hope that the tumor size would decrease and make tracheostomy and other surgical interventions easier [11]. In this patient, deglutition had been severely impaired due to tumor compression of the trachea and esophagus. The primary lesion was unknown, and appropriate surgery was possible because the metastases occurred in one organ at a time [11]. Radical surgery needs to be implemented rather than chemoradiotherapy. Kuan et al. divided NEC of the head and neck into sinonasal and non-sinonasal, according to the site of the primary tumor. They reported that patients with sinonasal primary tumors experienced improved survival with surgery, while those with non-sinonasal tumors had better survival with radiation therapy [12]. There is a higher risk of failed completion of chemoradiotherapy in patients whose deglutition has been severely compromised by advanced hypopharyngeal tumors [13]. Unfortunately, the general condition was not appropriate for surgery and the patient's condition continued to deteriorate. The survival rates of patients with head/neck NEC vary somewhat but are poor overall. The five-year survival of patients with poorly differentiated tumors is 14%, while patients with well-differentiated NEC have a 5-year survival rate of 34% [7]. The prognosis of these tumors is very poor, with over 90% of patients developing distant metastatic disease [14].
Conclusions
It is crucial to establish an early diagnosis of these tumors to reduce morbidity and mortality. No optimal treatment for such disease has yet been established. Because this case had never been encountered in our hospital, the management followed the results of the tumor board meeting.
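As a rough illustration of the grading cut-offs quoted in the discussion above, the following sketch is hypothetical and not part of the case report; it encodes only the two G3 criteria mentioned in the text (more than 20 mitoses per 10 HPFs or a Ki-67 index above 20%), and the function name and the handling of non-G3 cases are assumptions.

# Minimal sketch of the G3 cut-offs quoted above. Thresholds for G1/G2 are not
# given in the text, so anything below both cut-offs is simply reported as
# "not G3 by these criteria".
def is_high_grade_nec(mitoses_per_10_hpf: float, ki67_percent: float) -> bool:
    """Return True if either G3 criterion quoted in the discussion is met."""
    return mitoses_per_10_hpf > 20 or ki67_percent > 20

print(is_high_grade_nec(35, 60))   # True: both criteria exceeded
print(is_high_grade_nec(10, 15))   # False: not G3 by these criteria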
2021-12-30T16:04:11.331Z
2021-12-28T00:00:00.000
{ "year": 2021, "sha1": "bdec864ca15943463ed43bf75069a876f52104e5", "oa_license": "CCBYNC", "oa_url": "https://indonesianjournalofcancer.or.id/e-journal/index.php/ijoc/article/download/805/405", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "779b928570f3c0e97e878225871745ab769689c3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
21157207
pes2o/s2orc
v3-fos-license
Pharmacogenetics of MicroRNAs and MicroRNAs Biogenesis Machinery in Pediatric Acute Lymphoblastic Leukemia
Despite the clinical success of acute lymphoblastic leukemia (ALL) therapy, toxicity is frequent. Therefore, it would be useful to identify predictors of adverse effects. In the last years, several studies have investigated the relationship between genetic variation and treatment-related toxicity. However, most of these studies are focused in coding regions. Nowadays, it is known that regions that do not codify proteins, such as microRNAs (miRNAs), may have an important regulatory function. MiRNAs can regulate the expression of genes affecting drug response. In fact, the expression of some of those miRNAs has been associated with drug response. Genetic variations affecting miRNAs can modify their function, which may lead to drug sensitivity. The aim of this study was to detect new toxicity markers in pediatric B-ALL, studying miRNA-related polymorphisms, which can affect miRNA levels and function. We analyzed 118 SNPs in pre-miRNAs and miRNA processing genes in association with toxicity in 152 pediatric B-ALL patients all treated with the same protocol (LAL/SHOP). Among the results found, we detected for the first time an association between rs639174 in DROSHA and vomits that remained statistically significant after FDR correction. DROSHA had been associated with alterations in miRNAs expression, which could affect genes involved in drug transport. This suggests that miRNA-related SNPs could be a useful tool for toxicity prediction in pediatric B-ALL.
Introduction
Acute lymphoblastic leukemia (ALL) is the most common childhood cancer, accounting for 30% of all pediatric malignancies [1]. During the last decades, survival has been increased due to advances in chemotherapy for childhood ALL and cure rates now exceed 80% [2]. However, despite the clinical success of therapy, patients often suffer from toxicity, requiring a dose reduction or cessation of treatment. Therefore, it would be very useful to identify predictors of these adverse effects [3].
In the last years, several studies have investigated the relationship between genetic variation and treatment-related toxicity in ALL [4-10]. Nevertheless, most of these studies are focused in coding regions, which correspond only to about 1.5% of the entire genome. Nowadays, it is known that regions that do not codify proteins have an important regulatory function. MiRNAs are small noncoding RNAs that regulate gene expression at the post-transcriptional level [11]. They are transcribed in the nucleus, as double stranded pri-miRNAs, which are processed to form the pre-miRNAs. The pre-miRNAs are exported to the cytoplasm and cleaved to produce two strands of miRNA [12,13]. MiRNAs recognize their target mRNAs by binding to the 3′UTR of the target gene [14], which leads to an inhibition of translation or facilitated degradation of the target mRNA. All those steps are regulated by genes of the miRNA processing machinery.
MiRNAs can regulate genes involved in drug transport, metabolism and targets [15], affecting treatment response [16]. For example, upregulation of miR-125b, miR-99a and miR-100 was related to resistance to vincristine and daunorubicin, and downregulation of miR-708 with resistance to glucocorticoids in pediatric B-ALL [17,18]. These data indicate that changes in the expression or function of miRNAs could affect response to treatment.
Variations in miRNA expression and function may occur through genetic variations [12,19]. Consequently, miRNA-related SNPs interfering with miRNA levels or function may lead to drug resistance or to drug sensitivity [20]. In fact, response to methotrexate (MTX), one of the most important drugs in ALL treatment, has been associated with SNP 829C>T near the miR-24 binding site in the 3′UTR of DHFR, which causes increased DHFR expression [21]. Also, in our group we observed that a polymorphism that created a new miRNA binding site in ABCC4, and could reduce ABCC4 expression, was associated with increased MTX plasma levels [22].
According to all this evidence, the aim of the present study was to determine if miRNA-related polymorphisms could be useful as new toxicity markers in pediatric B-ALL.
Ethics statement
University of the Basque Country (UPV/EHU) ethics committee board (CEISH) approval was obtained. Written informed consent was obtained from all patients or their parents before sample collection.
Patients
The patients included in this retrospective study were 152 children all diagnosed with B-ALL from 2000 to 2011 at the Pediatric Oncology Units of 4 Spanish hospitals (University Hospital Cruces, University Hospital Donostia, University Hospital Vall d'Hebrón and University Hospital La Paz).
Treatment and toxicity evaluation
All patients were homogeneously treated with the LAL-SHOP 94/99 and 2005 protocols. The induction phase consisted of treatment with daunorubicin, vincristine, prednisone, cyclophosphamide, asparaginase and triple intrathecal therapy. The consolidation phase consisted of high-dose methotrexate, mercaptopurine, cytarabine and triple intrathecal therapy [22].
Toxicity data were collected objectively, blinded to genotypes, from the patients' medical files. Toxicity was graded according to the Spanish Society of Pediatric Hematology and Oncology (SHOP) standards, adapted from the WHO criteria (grades 0-4). The highest grade of toxicity observed for each patient during the induction and consolidation therapy period was recorded.
Genes and polymorphisms selection
We selected 21 genes in the pathway of miRNA biogenesis and processing after literature review and using the Patrocles database [23] (Table 1). In each gene, we covered all the SNPs with potentially functional effects using the F-SNP, FastSNP, PolymiRTS [24,25] and Patrocles [23] databases. We considered functional effects those causing amino acid changes, alternative splicing, located in the promoter region in putative transcription factor binding sites, or disrupting/creating miRNA targets. We also selected SNPs previously included in association studies in the literature. All SNPs were selected with a minor allele frequency greater than 5% (MAF ≥ 0.05) in European/Caucasoid populations. We searched for pre-miRNAs that had as putative targets genes involved in the pathways of the drugs used in the LAL/SHOP protocol, using the PharmGKB and miRWalk databases, and selected all the SNPs that had been described at the moment of the selection with a MAF > 0.01 in European/Caucasian populations, using the Patrocles and Ensembl databases and literature review.
Genotyping
Genomic DNA was extracted with the phenol-chloroform method as previously described [8] from remission peripheral blood.
SNP genotyping was performed using TaqMan OpenArray Genotyping technology (Applied Biosystems, Life Technologies, Carlsbad, USA) according to the published Applied Biosystems protocol. The preliminary list of SNPs was filtered, using as criteria, suitability for the TaqMan OpenArray platform.
Data were analyzed with TaqMan Genotyper software for genotype clustering and calling. Duplicate samples were genotyped across the plates. In order to assess the Hardy-Weinberg equilibrium (HWE) status of each SNP, we genotyped in parallel 348 healthy adult individuals of Spanish origin.
Statistical analysis
The χ² or Fisher's exact test were used for HWE and toxicity analyses. The effect sizes of the associations were estimated by the odds ratios (ORs) from univariate logistic regression. The most significant test among dominant and recessive genetic models was selected. The results for each toxicity parameter were adjusted for multiple comparisons by the False Discovery Rate (FDR) [26]. In all cases the significance level was set at 5%. Analyses were performed by using R v2.11 software. Linkage disequilibrium analysis was performed with Haploview software v4.2.
Patients' baseline characteristics
In this study, we have analyzed 152 B-ALL patients. Clinical data about MTX plasma concentration 72 h after infusion were available for 141 patients. Clinical data about other therapy-related toxicity in induction were available for 137 patients and in consolidation for 130 patients (Table 2).
Genotyping Results
We selected a total of 131 SNPs. After filtering for suitability for the TaqMan OpenArray platform, a final number of 118 SNPs (72 in 21 genes involved in miRNA biogenesis and 46 in 42 pre-miRNAs) was included in a TaqMan OpenArray plate (Applied Biosystems) (Tables S1 and S2).
A successful genotyping was obtained in 145 DNA samples (95.39%). In the genotyping process, 106 SNPs out of 118 were genotyped satisfactorily (89.83%). The failures were due to no PCR amplification, insufficient intensity for cluster separation, or poor or no cluster definition. The average genotyping rate for all SNPs was 97.81%. Of those 106 SNPs, 14 were not in HWE in a population of 348 healthy controls and were not considered for further analysis. In total, 26 SNPs were excluded from the association study (Table S3). The other 92 SNPs were used in the association studies.
Analysis of the association with toxicity
In order to investigate if genetic variation may influence treatment toxicity, we tested the association between the 92 polymorphisms successfully genotyped that were in HWE in the control population and 15 different toxicity and pharmacokinetic parameters in the induction and consolidation phases (Table 2) (Tables S4 and S5).
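A schematic re-implementation of the per-SNP association test and multiple-testing correction described above is sketched below in Python; the study itself used R and Haploview, so this is not the authors' code. The SNP identifiers are taken from the paper, but all counts, the dominant-model coding, and the variable names are invented for illustration.

# Each SNP is tested against a binary toxicity outcome with Fisher's exact test
# under a dominant model, and p-values are then adjusted with the
# Benjamini-Hochberg false discovery rate.
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# 2x2 tables: rows = carrier / non-carrier of the minor allele (dominant model),
# columns = toxicity present / absent. All counts below are made up.
tables = {
    "rs639174":   [[30, 20], [15, 72]],
    "rs12894467": [[22, 28], [25, 62]],
    "rs56103835": [[18, 32], [30, 57]],
}

pvals, odds = [], []
for snp, table in tables.items():
    oddsratio, p = fisher_exact(table)
    odds.append(oddsratio)
    pvals.append(p)

reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for snp, oratio, p, p_adj, sig in zip(tables, odds, pvals, p_fdr, reject):
    print(f"{snp}: OR={oratio:.2f}, p={p:.4f}, FDR-corrected p={p_adj:.4f}, significant={sig}")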
In the genes of the miRNA biogenesis machinery, the most significant association was found between rs639174 in DROSHA and vomits in consolidation (p-value = 0.0003). This association remained statistically significant after FDR correction (p-corrected = 0.028). Interestingly, in the DROSHA gene, a total of 8 SNPs out of 14 analyzed were associated with toxicity. Rs639174, rs2287584, rs10035440, rs4867329 and rs3805500 in DROSHA were among the top 10 associated SNPs in the biogenesis machinery (Table 3). Those SNPs are located along the whole gene and, in general, were not in high linkage disequilibrium among them (r² < 0.8) (Figure 1). Among the pre-miRNAs, the most significant SNPs were rs12894467 in mir-300, associated with hepatic toxicity and hyperbilirubinemia in induction, and rs56103835 in mir-453, associated with vomits and MTX plasma levels in consolidation (Table 4).
Table 4. Most significant associations between polymorphisms in pre-miRNAs and toxicity parameters.
Discussion
It is known that ALL treatment can cause toxicity and toxicity predictors are needed. It has been proposed that miRNA-related SNPs interfering with miRNA function may lead to drug resistance or to drug sensitivity [20]. However, there are very few studies analyzing the role of polymorphisms in miRNA biogenesis genes and in miRNAs, and none of them had been performed in pediatric ALL.
In this study, it is worth noting that we have found for the first time an association between rs639174 in DROSHA and vomits, and this association remained statistically significant after FDR correction. In DROSHA, rs639174 is an intronic SNP with a putative role in transcriptional regulation (TR). This SNP had been previously associated with head and neck cancer recurrence, suggesting that in some way this SNP may have a functional effect on the gene [27]. Interestingly, other 7 polymorphisms in DROSHA with a putative role in splicing and transcriptional regulation were associated with toxicity in induction and consolidation (rs10035440, rs2287584, rs4867329, rs3805500, rs6877842, rs10719 and rs7735863), although they did not remain statistically significant after FDR correction. The SNP rs3805500, which is associated with hepatic toxicity and vomits in our study, is in LD with the SNP rs640831, previously associated with reduced DROSHA mRNA expression and with expression changes in 56 miRNAs out of 199 analyzed [28]. This can be understood knowing that DROSHA (RNASEN) encodes an RNase III enzyme, involved in pri-miRNA maturation into pre-miRNAs [29]. This general alteration of miRNA expression could lead to changes in the expression of genes involved in response to treatment, which could explain the effect we have observed on toxicity during pediatric ALL treatment.
As far as we know, this is the first time that polymorphisms in miRNA processing genes have been associated with toxicity after treatment in cancer patients. Knowing that literature about the function of these genes and their implication in pharmacogenetics is scarce, our results indicate that these genes and polymorphisms could be of relevance in the study of drug response.
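For reference, the r² and D′ values referred to above (and shown in the Haploview plot) derive from the standard pairwise linkage disequilibrium definitions. The sketch below is not from the paper and uses made-up frequencies; it only illustrates how D, D′ and r² are obtained from haplotype and allele frequencies.

# Illustrative computation of pairwise LD measures for two biallelic SNPs,
# given the frequency of the A-B haplotype and the two allele frequencies.
def ld_measures(p_ab: float, p_a: float, p_b: float):
    """Return D, D' and r^2 for alleles A (freq p_a) and B (freq p_b)."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max if d_max > 0 else 0.0
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d_prime, r2

d, d_prime, r2 = ld_measures(p_ab=0.40, p_a=0.50, p_b=0.60)  # invented frequencies
print(f"D={d:.3f}, D'={d_prime:.2f}, r^2={r2:.2f}")  # D=0.100, D'=0.50, r^2=0.17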
We also found associations between SNPs in pre-miRNAs and toxicity. Interestingly, SNPs associated with toxicity in induction were different from those associated with toxicity in consolidation, in which different drugs are given. This may mean that each miRNA regulates specific drug pathways. Although these associations did not remain significant after FDR correction and are currently of uncertain significance, we still consider that it is interesting to discuss them due to their putative roles in the regulation of drug pathways.
The most significant association between SNPs in pre-miRNAs and toxicity in induction was with rs12894467 in the premature mir-300, which could affect the structure and processing of this miRNA. Interestingly, among the predicted targets of mir-300, we can find the transporters ABCC1 and ABCB1, with a role in vincristine detoxification, and the enzyme ALDH5A1, involved in cyclophosphamide inactivation. If the rs12894467 T risk allele caused an upregulation of mir-300, this could explain a downregulation of its targets, leading to an increased effect of the drugs used in the induction phase.
The most significant association in consolidation was between the SNP rs56103835 in the premature mir-453 (also known as mir-323b-5p) and both MTX plasma levels and vomits. This miRNA has as putative target genes ABCC1, ABCB1, ABCC2 and ABCC4, which are involved in MTX transport. The SNP rs56103835, in which the G allele is associated with higher risk of toxicity, is in the pre-miRNA, and thus could influence miRNA biogenesis and levels of mature mir-453. If mir-453 is up-regulated, it would decrease the activity of the ABCC1, ABCB1, ABCC2 and ABCC4 genes, and the higher MTX plasma levels and toxicity observed could be explained. In fact, in a previous study carried out by our group, we showed the relevance of genetic variation in the ABCC2 and ABCC4 genes for MTX toxicity [22].
In conclusion, we have found for the first time an association between rs639174 in DROSHA and vomits, and other more uncertain associations between polymorphisms in genes involved in miRNA biogenesis and in pre-miRNAs and toxicity during pediatric ALL treatment. These results suggest that miRNA-related SNPs, which can be important in drug pharmacokinetics and dynamics, could be useful as toxicity markers in pediatric ALL. We open a new promising field of investigation, involving the study of miRNA-related polymorphisms in pediatric ALL treatment. Further studies are needed in order to assess the relevance of these SNPs in ALL pharmacogenetics.
Figure 1. Linkage disequilibrium plot of the SNPs analyzed in DROSHA. White: r² = 0; shades of grey: 0 < r² < 1; black: r² = 1. Numbers in squares are D' values. Block definition is based on the Gabriel et al. method. The SNPs associated with toxicity are squared. Those among the top 10 associated SNPs are squared in black. doi:10.1371/journal.pone.0091261.g001
Table 2. Characteristics of the study population. (a) MTX levels were considered high if the concentration was over 0.2 mM at 72 h. doi:10.1371/journal.pone.0091261.t002
Table 3. Most significant associations between polymorphisms in biogenesis machinery and toxicity parameters.
2017-05-03T01:33:24.142Z
2014-03-10T00:00:00.000
{ "year": 2014, "sha1": "49780dc4c7e49761ba12ce61317de5a3e45408bf", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0091261&type=printable", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "49780dc4c7e49761ba12ce61317de5a3e45408bf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
64103
pes2o/s2orc
v3-fos-license
How Does Language Change Perception: A Cautionary Note The relationship of language, perception, and action has been the focus of recent studies exploring the representation of conceptual knowledge. A substantial literature has emerged, providing ample demonstrations of the intimate relationship between language and perception. The appropriate characterization of these interactions remains an important challenge. Recent evidence involving visual search tasks has led to the hypothesis that top-down input from linguistic representations may sharpen visual feature detectors, suggesting a direct influence of language on early visual perception. We present two experiments to explore this hypothesis. Experiment 1 demonstrates that the benefits of linguistic priming in visual search may arise from a reduction in the demands on working memory. Experiment 2 presents a situation in which visual search performance is disrupted by the automatic activation of irrelevant linguistic representations, a result consistent with the idea that linguistic and sensory representations interact at a late, response-selection stage of processing. These results raise a cautionary note: While language can influence performance on a visual search, the influence need not arise from a change in perception per se. INTRODUCTION Language provides a medium for describing the contents of our conscious experience. We use it to share our perceptual experiences, thoughts, and intentions with other individuals. The idea that language guides our cognition was clearly articulated by Whorf (1956) who proposed that an individual's conceptual knowledge was shaped by his or her language. There is clear evidence demonstrating that language directs thought (Ervin-Tripp, 1967), influences concepts of time and space (e.g., Boroditsky, 2001), and affects memory (e.g., Loftus and Palmer, 1974). More controversial has been the claim that language has a direct effect on perceptual experience. In a seminal study, Kay and Kempton (1984) found that linguistic labels influence decisions in a color categorization task. In the same spirit, a flurry of studies over the past decade has provided ample demonstrations of how perceptual performance is influenced by language. For example, Meteyard et al. (2007) assessed motion discrimination at threshold for displays of moving dots while participants passively listened to verbs that referred to either motion-related or static actions. Performance on the motion detection task was influenced by the words, with poorer performance observed on the perceptual task when the direction of motion implied by the words was incongruent with the direction of the dot display (see also, Lupyan and Spivey, 2010). Results such as these suggest a close integration of perceptual and conceptual systems (see Goldstone and Barsalou, 1998), an idea captured by the theoretical frameworks of grounded cognition (Barsalou, 2008) and embodied cognition (see Feldman, 2006;Borghi and Pecher, 2011). There are limitations with tasks based on verbal reports or ones in which the emphasis is on accuracy. In such tasks, language may affect decision and memory processes, as well as perception (see Rosch, 1973). For example, in the Kay and Kempton (1984) study, participants were asked to select the two colored chips that go together best. 
Even though the stimuli are always visible, a comparison of this sort may engage top-down strategic processes (Pinker, 1997) as well as tax working memory processes as the participant shifts their attentional focus between the stimuli. To reduce the contribution of memory and decision processes, researchers have turned to simple visual search tasks to explore the influence of language on perception. Consider a visual search study by Lupyan and Spivey (2008). Participants were shown an array of shapes and made speeded responses, indicating if the display was homogeneous or contained an oddball (Figure 1A). The shapes were the letters "2" and "5," rotated by 90˚. In one condition, the stimuli were described by their linguistic labels. In the other condition, the stimuli were referred to as abstract geometric shapes. RTs were faster for the participants who had been given the linguistic labels or spontaneously noticed that the shapes were rotated letters. Lupyan and Spivey concluded that ". . . visual perception depends not only on what something looks like, but also on what it means" (p. 412). Visual search has been widely employed as a model task for understanding early perceptual processing (Treisman and Gelade, 1980; Wolfe, 1992). Indeed, we have used visual search to show that the influence of linguistic categories in a detection task is amplified for stimuli presented in the right visual field (Gilbert et al., 2006, 2008). While our results provide compelling evidence that language can influence performance on elementary perceptual tasks, the mechanisms underlying this interaction remain unclear. Lupyan and Spivey (2008; Lupyan, 2008) suggest that the influence of language on perception reflects a dynamic interaction in which linguistic representations sharpen visual feature detectors. By this view, feedback connections from linguistic or conceptual representations provide a mechanism to bias or amplify activity in perceptual detectors associated with those representations (Lupyan and Spivey, 2010), similar to how attentional cues may alter sensory processing (e.g., Luck et al., 1997; Mazer and Gallant, 2003). While there is considerable appeal to this dynamic perspective, it is also important to consider alternative hypotheses that may explain how such interactions could arise at higher stages of processing (Wang et al., 1994; Mitterer et al., 2009; see also, Lupyan et al., 2010). Consider the Lupyan and Spivey task from the participants' point of view. The RT data indicate that the displays are searched in a serial fashion (Treisman and Gelade, 1980). When targets are familiar, participants compare each display item to an image stored in long-term memory, terminating the visual search when the target is found. With unfamiliar stimuli, the task is much more challenging (Wang et al., 1994). The participant must form a mental representation of the first shape and maintain this representation while comparing it to each display item. It is reasonable to assume that familiar shapes, ones that can be efficiently coded with a verbal label, would be easier to retain in working memory for subsequent use in making perceptual decisions (Paivio, 1971; Bartlett et al., 1980). In contrast, since unfamiliar stimuli lack a verbal representation in long-term memory, the first item would have to be encoded anew on each trial. We test the memory hypothesis in the following experiment, introducing a condition in which the demands on working memory are reduced.
EXPERIMENT 1 For two groups, the task was similar to that used by Lupyan and Spivey (2008): participants made speeded responses to indicate if a display contained a homogenous set of items or contained one oddball. For two other groups, a cue was present in the center of the display, indicating the target for that trial. Within each display type, one group was given linguistic primes by being told that the displays contained rotated 2's and 5's. The other group was told that the stimuli were abstract forms. The inclusion of a cue was adopted to minimize the demands on working memory. By pairing the search items with a cue of the target, the task is changed from one requiring an implicit matching process in which each item is compared to a stored representation to one requiring an explicit matching process in which each item is compared to the cue. If language influences perception by priming visual feature detectors, we would expect that participants who were given the linguistic labels would exhibit a similar advantage with both types of displays. In contrast, if the verbal labels reduce the demands on an implicit matching process (e.g., because the verbal labels provide for dual coding in working memory, see Paivio, 1971), then we would expect this advantage to be eliminated or attenuated when the displays contain an explicit cue. Participants Fifty-three participants from the UC Berkeley Research Participation pool were tested. They received class credit for their participation. The research protocol was conducted in accordance with the procedures of the University's Institutional Review Board. Stimuli The visual search arrays consisted of 4, 6, or 10 white characters, presented on a black background. The characters were arranged in a circle. The characters were either a "5" or "2," rotated 90˚clockwise. The characters fit inside a rectangle that measured 9 cm × 9 cm, and participants sat approximately 56 cm from the computer monitor. For the no cue (NC) conditions, a fixation cross was presented at the center of the display. For the Cue groups, the fixation cross was replaced by a cue. Procedure The participants were randomly assigned to one of four groups. The two NC groups provided a replication of Lupyan and Spivey (2008). They were presented with stimulus arrays ( Figure 1A) and instructed to identify whether the display was composed of a homogenous set of characters, or whether the display included one character that was different than the others. One of the NC groups was told that the display contained 2's and 5's whereas the other NC group was told that the displays contained abstract forms. For the two Cue groups, the fixation point was replaced with a visual cue ( Figure 1B). For these participants, the task was to determine if an array item matched the cue. As with the NC conditions, one of the Cue groups was told that the display consisted of 2's and 5's and the other Cue group was told that the display contained abstract forms. Each trial started with the onset of either a fixation cross (NC groups) or cue (CUE groups). The search array was added to the display after a 300-ms delay. Participants responded on one of two keys, indicating if the display contained one item that was different than the other display items. Following the response, an accuracy feedback screen was presented on the monitor for 1000 ms. The screen was then blanked for a 500-ms inter-trial interval. Average RT and accuracy were displayed at the end of each block. 
The experiment consisted of a practice block of 12 trials and four test blocks of 60 trials each. At the beginning of each block, participants in both the NC and Cue groups were informed which character would be the target for that block of trials, similar to the procedure used by Lupyan and Spivey (2008). Each character served as the oddball for two of the blocks. The oddball was present on 50% of trials, positioned on the right and left side of the screen with equal frequency. At the end of the experiment, the participants completed a short questionnaire to assess their strategy in performing the task. We were particularly interested in identifying participants in the abstract groups who had generated verbal labels for the rotated 2's and 5's, given that such strategies produced a similar pattern of results as the Cue group in the Lupyan and Spivey (2008) study. Three participants in the NC group and two participants in the Cue group reported using verbal labels, either spontaneously recognizing that the symbols were tilted 2's and 5's, or creating idiosyncratic labels (one subject reported labeling the items "valleys" and "mountains"). These participants were replaced, yielding a total of 12 participants in each of the four groups for the analyses reported below.
RESULTS
Overall, participants were correct on 89% of the trials and there was no indication of a speed-accuracy trade-off. Excluding incorrect trials, we analyzed the RT data (Figure 2) in a three-way ANOVA with two between-subject factors, (1) task description (linguistic vs. abstract) and (2) task set (NC vs. Cue), and one within-subject factor, (3) set size (4, 6, or 10 items). The effect of set size was highly reliable, consistent with a serial search process, F(2, 88) = 289.35, p < 0.0001. Importantly, the two-way interaction of task description and task set was reliable, F(1, 44) = 4.96, p < 0.05, and there was also a significant three-way interaction, F(2, 88) = 6.23, p < 0.005, reflecting the fact that the linguistic advantage was greatest for the largest set size, but only for the NC group.
FIGURE 2 | Reaction time data for Experiment 1, combined over target present and target absent trials. Confidence intervals in the figure were calculated using the three-way interaction (Loftus and Masson, 1994).
To explore these higher-order interactions, we performed separate analyses on the NC and Cue groups. For the NC groups, the data replicate the results reported in Lupyan and Spivey (2008). Participants who were instructed to view the characters as rotated numbers (linguistic description) responded much faster compared to participants for whom the characters were described as abstract symbols. Overall, the RT advantage was 303 ms, F(1, 22) = 10.12, p < 0.001. We used linear regression to calculate the slope of the search functions, restricting this analysis to the target present data. The mean slopes for the linguistic and symbol groups were 112 and 143 ms, respectively. This difference was not reliable (p = 0.10). However, there was one participant in the symbol group with a negative slope (−2 ms/item), whereas the smallest value for all of the other participants in this group was at least 93 ms/item. When the analysis was repeated without this participant, the mean slope for the symbol group rose to 155 ms/item, a value that was significantly higher than for the linguistic group (p = 0.03).
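The search slopes reported above (ms per display item) are simply the slopes of a line fit to RT as a function of set size. A minimal sketch of that computation follows; the RT values are invented rather than taken from the study's data.

# Regress mean correct RT on display set size and take the slope (ms/item).
import numpy as np

set_sizes = np.array([4, 6, 10])             # display sizes used in Experiment 1
mean_rts  = np.array([1250., 1480., 1920.])  # hypothetical target-present mean RTs (ms)

slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"search slope = {slope:.1f} ms/item, intercept = {intercept:.0f} ms")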
In summary, consistent with Lupyan and Spivey (2008), the linguistic cues not only led to faster RTs overall, but also yielded a more efficient visual search process. A very different pattern of results was observed in the analysis of the data from the two Cue groups. Here, the linguistic advantage was completely abolished. In fact, mean RTs were slower by 46 ms for participants who were instructed to view the characters as rotated numbers, although this difference was not reliable F (1, 22) = 0.072, ns. Similarly, there was no difference in the efficiency of visual search, with mean slopes of 126 and 105 ms/item for the linguistic and symbol conditions, respectively. Thus, when the demands on working memory were reduced by the inclusion of a cue, we observed no linguistic benefit. The results of Experiment 1 challenge the hypothesis that linguistic labels provide a top-down priming input to perceptual feature detectors. If this were so, then we would expect to observe a linguistic advantage regardless of whether the task involved a standard visual search (oddball detection) or our modified, matching task. A priori, we would expect that with either display, the linguistic description of the characters should provide a similar priming signal. In contrast, the results are consistent with our working memory account. In particular, we assume that the linguistic advantage in the NC condition arises from the fact that participants must compare items in working memory during serial search, and that this process is more efficient when the display items can be verbally coded. Mean reaction time was faster and search more efficient (e.g., lower slope) when the rotated letters were associated with verbal labels. In this condition, each item can be assessed to determine if it matches the designated target, with the memory of the target facilitated by its verbal label (especially relevant here given that each target was tested in separate blocks). When the rotated letters were perceived as abstract symbols, the comparison process is slower, either because there is no verbal code to supplement the working memory representation of the target, or because participants end up making multiple comparisons between the different items. The linguistic advantage was abolished when the target was always presented as a visual cue in the display. We can envision two ways in which the cue may have altered performance on the task. First, it would reduce the demands on working memory given that the cue provides a visible prompt. Second, it eliminates the need for comparisons between items in the display since each item can be successively compared to the cue. By either or both of these hypotheses, we would not expect a substantive benefit from verbal labels. RTs increase with display size, but at a similar rate for the linguistic and abstract conditions. Mean RTs were slower for the Cue group compared to the NC group when the targets were described linguistically. This result might indicate that the inclusion of the cues introduced some sort of interference with the search process. However, this hypothesis fails to account for why the slower RTs in the Cue condition were only observed in the linguistic group; indeed, mean RT was faster in the Cue condition for the abstract group. One would have to posit a rather complex model in which the inclusion of the cue somehow negated the beneficial priming from verbal labels.
Alternatively, the inclusion of the cue can be viewed as changing the search process in a fundamental way, with the task now more akin to a physical matching task rather than a comparison to a target stored in working memory. A priori, we cannot say which process would lead to faster RTs. However, the comparison of the absolute RT values between the Cue and NC conditions is problematic given the differences in the displays. One could imagine that there is some general cost associated with orienting to the visual cue at the onset of the displays for the Cue groups. Nonetheless, if the verbal labels were directly influencing perceptual detectors, we would have expected to see a persistent verbal advantage in the Cue condition, despite the slower RTs. The absence of such an advantage underscores our main point that the performance changes in visual search for the NC condition need not reflect differences in perception per se. EXPERIMENT 2 We take a different approach in Experiment 2, testing the prediction that linguistic labels can disrupt processing when this information is task irrelevant. To this end, we had participants make an oddball judgment based on a physical attribute, line thickness. We presented upright or rotated 2s and 5s, assuming that upright numbers would be encoded as linguistic symbols, while rotated numbers would not. If language enhances perception, performance should be better for the upright displays. Alternatively, the automatic activation of linguistic codes for the upright displays may produce response conflict given that this information is irrelevant to the task. Participants Twelve participants received class credit for completing the study. Stimuli Thick and thin versions of each character were created. The thick version was the same as in Experiment 1. For the thin version, the stroke thickness of each character was halved. Procedure Each trial began with the onset of a fixation cross for 300 ms. An array of four characters was then added to the display and remained visible for 450 ms (Figure 3). Participants were instructed to indicate whether the four characters had the same thickness, or whether one was different. The characters were either displayed in an upright orientation or rotated, with the same orientation used for all four items in a given display. Upright and rotated trials were randomized within a block. Each participant completed four blocks of 80 trials each. All other aspects were identical to Experiment 1. RESULTS Participants were slower when the characters were upright compared to when they were rotated, F (1, 11) = 7.67, p < 0.01. The mean RT was 375 ms for the upright displays and 348 ms for the rotated displays, for an average cost of 27 ms (SE diff = 5.6 ms). Participants averaged 92% correct, and there was no evidence of a speed accuracy trade-off. We designed this experiment under the assumption that the upright displays would produce automatic and rapid activation of the lexical codes associated with the numbers, and that these task-irrelevant representations would disrupt performance on the thickness judgments. We can envision at least two distinct ways in which linguistic codes might disrupt performance. Perceptually, linguistic encoding encourages holistic processing. If parts of a number are thick, there is a tendency to treat the shape in a homogenous manner, perhaps reflecting the operation of categorization (Fuchs, 1923;Prinzmetal and Keysar, 1989;Khurana, 1998). 
This bias may be reduced for the less familiar, rotated shapes, which may be perceived as separate lines. Alternatively, the linguistic codes could provide potentially disruptive input to decision processes (e.g., response selection). This hypothesis is similar to the theoretical interpretation of the Stroop effect (MacLeod, 1991). In the classic version of that task, interference is assumed to arise from the automatic activation of the lexical codes of word names when the task requires judging the stimulus color, at least when both the relevant and irrelevant dimensions map onto similar response codes (e.g., verbal responses). In the current task, this interference would be more at a conceptual level (Ivry and Schlerf, 2008). Given that the four items in the display were homogenous, we would expect priming of the concept "same", relative to the concept "different", and that this would occur more readily for the upright condition where the items are readily recognized as familiar objects. DISCUSSION In the current study, we set out to sharpen the focus on how language influences perception. This question has generated considerable interest, reflecting the potential utility for theories of embodied cognition to provide novel perspectives on the psychological and neural underpinnings of abstract thought (Gallese and Lakoff, 2005;Feldman, 2006;Barsalou, 2008). An explosion of empirical studies have appeared, providing a wide range of intriguing demonstrations of how behavior (reviewed in Barsalou, 2008) and physiology (Thierry et al., 2009;Landau et al., 2010;Mo et al., 2011) in perceptual tasks can be influenced by language. We set out here to consider different ways in which language might influence perceptual performance. As a starting point, we chose to revisit a study in which performance on a visual search task was found to be markedly improved when participants were instructed to view the search items as linguistic entities, compared to when the instructions led the participants to view the items as abstract shapes (Lupyan and Spivey, 2008). The authors of that study had championed an interpretation and provided a computational model in which over-learned associative links between linguistic and perceptual representations allowed top-down effects of a linguistic cue to sharpen perceptual analysis. While this idea is certainly plausible, we considered an alternative hypothesis, one that shifts the focus away from a linguistic modulation of perceptual processes. In particular, we asked if the benefit of the linguistic cues might arise because language, as a ready form of efficient coding, might reduce the burden on working memory. We tested this hypothesis by using identical search displays, with the one addition of a visual cue, assumed to minimize the demands on working memory. Under these conditions, we failed to observe any performance differences between participants given linguistic and non-linguistic prompts. These results present a challenge for the perceptual account, given the assumption that top-down priming effects would be operative for both the cued and non-cued versions of the task. Instead, the working memory hypothesis provides a more parsimonious account of the results, pointing to subtle ways in which performance entails a host of complex operations. Our emphasis on how language might influence performance at post-perceptual stages of processing is in accord with the results from studies employing a range of tasks. In a particularly clever study, Mitterer et al. 
(2009) showed that linguistic labels bias the reported color of familiar objects. When presented with a picture of a standard traffic light in varying hues ranging from yellow to orange, German speakers were more likely to report the color as "yellow" compared to Dutch speakers, a bias consistent with the labels used by each linguistic group. Given the absence of differences between the two groups in performance with neutral stimuli, the authors propose that the effect of language is on decision processes, rather than by directly influencing perception. It should be noted, however, that participants in the Mitterer et al. (2009) study were not required to make speeded responses; as such, this study may be more subject to linguistic influences at decision stages than would be expected in a visual search task. However, numerous visual search studies have also shown that RT in such studies is influenced by the degree and manner in which targets and distractors are verbalized (Jonides and Gleitman, 1972; Reicher et al., 1976; Wang et al., 1994). Consistent with the current findings, RTs are consistently slower when the stimuli are unfamiliar, an effect that has been attributed to the more efficient processing within working memory for familiar, nameable objects (e.g., Wang et al., 1994). We recognize that language may have an influence at multiple levels of processing. That is, the perceptual and working memory accounts are not mutually exclusive, and in fact, divisions such as "perception" and "working memory" may in themselves be problematic given the dynamics of the brain. Nonetheless, we do think there is value in such distinctions since it is easy for our descriptions of task domains to constrain how we think about the underlying processes. Indeed, this concern is relevant to some work conducted in our own lab. In a series of studies, we have shown that the effects of language on visual search are more pronounced in the right visual field (Gilbert et al., 2006, 2008). We have used a simple visual search task here, motivated by the goal of minimizing demands on memory processes and strategies. Our results, showing that task-irrelevant linguistic categories influence color discrimination, can be interpreted as showing that language has selectively shaped perceptual systems in the left hemisphere. Alternatively, activation of (left hemisphere) linguistic representations may be retrieved more readily for stimuli in the right, compared to left, visual field, and thus exert a stronger influence on performance. While the answer to this question remains unclear - and again, both hypotheses may be correct - the visual field difference disappears when participants perform a concurrent verbal task (Gilbert et al., 2006, 2008). This dual-task result provides perhaps the most compelling argument against a linguistically modified structural asymmetry in the perceptual systems of the two hemispheres. Rather, it is consistent with the post-perceptual account promoted here (see also Mitterer et al., 2009) given the assumption that the secondary task disrupted the access of verbal codes for the color stimuli, an effect that would be particularly pronounced in the left hemisphere.
We again used a visual search task, but one in which participants had to determine if a display item had a unique physical feature (i.e., font thickness). For this task, linguistic representations were irrelevant. Nonetheless, when the shapes were oriented to facilitate reading, a cost in RT was observed, presumably due to the automatic activation of irrelevant linguistic representations. While linguistic coding can be a useful tool to aid processing, the current findings demonstrate that language can both facilitate and impede performance. Language can provide a concise way to categorize familiar stimuli; in visual search, linguistic coding would provide an efficient mechanism to encode and compare the display items (Reicher et al., 1976; Wang et al., 1994). However, when the linguistic nature of the stimulus is irrelevant to the task, language may also hurt performance (Brandimonte et al., 1992). These findings provide a cautionary note when we consider how language and perception interact. No doubt, the words we speak simultaneously reinforce and compete with the dynamic world we perceive and experience. When language alters perceptual performance, it is tempting to infer a shared representational status of linguistic and sensory representations. However, even performance in visual search reflects memory, decision, and perceptual processes. We must be vigilant in characterizing the manner in which language and perception interact.
2014-10-01T00:00:00.000Z
2012-03-20T00:00:00.000
{ "year": 2012, "sha1": "df5eeaa6ceb462f236d7d2bbd1057fff2281413d", "oa_license": "CCBYNC", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2012.00078/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "df5eeaa6ceb462f236d7d2bbd1057fff2281413d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
257654342
pes2o/s2orc
v3-fos-license
Auranofin Targeting the NDM-1 Beta-Lactamase: Computational Insights into the Electronic Configuration and Quasi-Tetrahedral Coordination of Gold Ions Recently, the well-characterized metallodrug auranofin has been demonstrated to restore the penicillin and cephalosporin sensitivity in resistant bacterial strains via the inhibition of the NDM-1 beta-lactamase, which is operated via the Zn/Au substitution in its bimetallic core. The resulting unusual tetrahedral coordination of the two ions was investigated via the density functional theory calculations. By assessing several charge and multiplicity schemes, coupled with on/off constraining the positions of the coordinating residues, it was demonstrated that the experimental X-ray structure of the gold-bound NDM-1 is consistent with either Au(I)-Au(I) or Au(II)-Au(II) bimetallic moieties. The presented results suggest that the most probable mechanism for the auranofin-based Zn/Au exchange in NDM-1 includes the early formation of the Au(I)-Au(I) system, superseded by oxidation yielding the Au(II)-Au(II) species bearing the highest resemblance to the X-ray structure. Introduction Bacterial resistance regularly hampers the efficacy of therapy with subsequent grave repercussions, most notably in old or seriously sick persons. It is driven by inadequate empiric antibacterial treatment, characterized as the early employment of an antibacterial medicine to which the microorganism is not vulnerable and spuriously protracted treatment with antimicrobials [1]. One of the most immediate menaces is carbapenem-resistant Enterobacteriaceae, the presence of which in human bloodstream results in death in almost half of cases [2]. These microorganisms bear metallo-β-lactamases (MBLs), for example, New Delhi metallo-βlactamases (NDMs), which give the resistance against the β-lactams including cephalosporins, penicillins, and carbapenems, i.e., the most frequently used class of antibiotics, particularly for the treatment of serious Gram-negative bacterial infections [3]. Resistance to β-lactams is provided by plasmids bearing MBLs; moreover, the ease with which these plasmids are transferred between various species results in their universal spreading [4]. That is why the synthesis of broad-spectrum MBL inhibitors is of utmost importance. MBLs are zinc metalloenzymes, which feature one or two zinc ions Zn(II) and the nucleophilic hydroxyl (OH − ) in between, which plays a crucial role in the hydrolysis of the β-lactam ring, thus disrupting the antibiotic's action [5]. MBLs are characterized by a great structural diversity; hence, the formulation of a broad-spectrum inhibitor for all of them is problematic. The key part played by Zn(II) ions makes them a perfect target for the tentative inhibitors. Indeed, it was shown in numerous studies that Zn(II) ions are crucial for the resistance of MBLs to antibiotics since these metal cofactors are involved both in the Figure 1. Typical active-site metal coordination geometry for B1 MBLs. The metal coordination and Zn-Zn distance are indicated with dashed lines. The typical distances between Zn1, Zn2, and OH are denoted in Å. The PDB code for the represented structure is 5zgz [12]. Obviously, the chelation of zinc by ligands such as aspergillomarasmine A [13], or its replacement by another metal, for instance, by Bi(III) from bismuth citrate [14] or Au(I) from auranofin [15], incapacitates the complex NDM-1 machinery directed at the β-lactam breaking. 
Auranofin (AF, 1-thio-β-D-glucopyranosatotriethylphosphine gold-2,3,4,6-tetraacetate) was the first orally administered gold-based metallodrug developed specifically to remedy rheumatoid arthritis, certified by the US Food and Drug Administration (FDA) in 1985 (Figure 2) [16]. Its therapeutic effect is ascribed to the [AuPEt3]+ cation yielded after the detachment of the thiosugar in the biological milieu [17]. This cation has an augmented preference to thiol- and selenol-based proteins [18], thus causing the disruption of cellular metabolism pathways and leading to the intended medicinal impacts. In a recent study, auranofin was recognized as an inhibitor of MBL NDM-1 that completely and irreversibly blocks the enzyme function, substituting zinc ions in the enzyme's active site [15]. The authors demonstrated that the administration of auranofin resensitizes carbapenem- and colistin-resistant bacteria to antibiotics and slows down the development of β-lactam and colistin resistance. It was found that both Zn(II) ions are substituted by the Au(I) ions delivered by auranofin and that two gold ions replace both Zn1 and Zn2 in the active site of NDM-1 by assuming a quasi-tetrahedral coordination geometry [15]. The X-ray structure of the NDM-1 active site with Zn1/Zn2 replaced by two gold ions is shown at Figure 3. The coordination geometries of gold ions are rather unusual.
Figure 3. ((b), chain A at PDB 6lhe) and without a water molecule in the first coordination sphere ((c), chain B at PDB 6lhe). Zn1/Au1 is always displayed at the left, and Zn2/Au2 is displayed at the right. The numeration of protein residues is taken from corresponding PDBs. All the distances are in Å and typed in italic font. The PDB codes for the represented structures are 5zgz [12] and 6lhe [15].
Although the preferred geometry of gold(I) complexes is linear with two-coordination at the gold center, instances of tetrahedral coordination have been reported [19]. Solution NMR studies have demonstrated that the addition of excess phosphine to [AuL2]+ (L = phosphine) can result in the formation of the [AuL3]+ and [AuL4]+ species, but unexpectedly, few of these complexes have been structurally characterized [20]. Furthermore, the majority of the four-coordinate species that have been identified comprise either bidentate phosphines [21] or thiolate ligands [22]. Taking into account that the gold is presented in the form of Au(I) in auranofin, with a distinct linear bicoordination, the detected tetrahedral coordination of both gold ions deserves to be further clarified. One explanation might be the occurrence of either oxidative or reductive processes accompanying the Zn(II)/Au(I) exchange. On the other hand, the NDM-1 fold itself might template the quasi-tetrahedral coordination and constrain the two Au(I) ions to adopt the tiled coordination geometry. The concomitance of both protein constraint and redox processes would be another reasonable explanation for the observed gold coordination in the NDM-1 active site. In this frame, we thus emphasize two aspects of extreme importance that need to be elucidated. On one hand, an in-depth understanding of the mechanism of the Zn/Au exchange, particularly the chemical steps underlying the decomplexation of the strongly coordinated PEt3 ligand from the [AuPEt3]+ cation, would shed more light onto the biological activity of auranofin. Indeed, although there is a general consensus that the activation of auranofin yields the cationic complex [AuPEt3]+, the mechanism of its action in the active sites of metalloenzymes is not yet fully understood [23,24]. The majority of authors agree that the sulfhydryl and selenohydryl selectivity for gold controls the process of auranofin's gold dismantling its ligands via the substitution reactions with either Cys and Sec protein residues or with the free thiols in the cytoplasm [25]. That is why there exists a long-standing debate on whether auranofin loses firstly its ligands in the cytoplasm via substitution by various thiols and only after a series of ligand exchanges reaches its intended biomolecular target, or whether auranofin's activation happens only after it reaches its target. Therefore, the experimental investigation of the chemical processes occurring after the administration of auranofin is very challenging due to the high chemical complexity impacted by this metallodrug either before or after its entry in the targeted cell. Nevertheless, the theoretical chemists have not yet succeeded in responding to this question either, despite numerous computational articles focused on the disentanglement of the mechanistic process of auranofin's ligand exchange reactions [17,26-28].
Moreover, to the best of our knowledge, all the available computational studies of auranofin have, for the sake of simplicity, addressed the behavior of a single auranofin complex with its intended targets. The assessment of plausible mechanisms of the Zn/Au exchange driven by the attack of the [AuPEt3]+ cation on the active site of the NDM-1 protein is currently the focus of our ongoing theoretical investigation. Indeed, computational studies have often been successfully used to analyze the interaction of metal ions and metallodrugs with proteins [29,30]. Another important aspect emerging from the X-ray detection of the Zn/Au exchange in the NDM-1 active site is the unusual tetrahedral coordination of the two gold ions. Indeed, although the oxidation of Au(I) to Au(III) is accompanied by an increase of the coordination number from two to four, the resulting coordination geometry is expected to change from linear to square planar, so the quasi-tetrahedral coordination of the gold centers in NDM-1 appears puzzling. In this paper, we address the coordination of the Au ions observed in the NDM-1 active site using density functional theory approaches. We extracted a reduced system modelling the coordination of the gold bimetallic scaffold, incorporating the two gold centers, their ligands, and the bridging hydroxyl from the available X-ray structure [15], and assumed all possible combinations of charge and multiplicity for this system. The optimization of this system in various charge-multiplicity states and the comparison of the obtained geometries with the experimental results [12,15] allowed us to infer the oxidation states of the gold cations in NDM-1 and eventually expand our understanding of the chemical structure of NDM-1 upon the reaction with auranofin. Moreover, we performed the optimization of the gold-bound NDM-1 active core by either freezing or not freezing the Cartesian coordinates of the atoms resembling the alpha-carbon atoms of the real enzyme. This strategy allowed us to better assess the role of the NDM-1 backbone in shaping the coordination of the bimetallic scaffold. In addition to providing an interpretation of the detected X-ray data, this theoretical study delivered preliminary insights into viable mechanisms of the Zn/Au exchange. Computational Details All calculations were performed with the Gaussian 09 A.02 quantum chemistry package [31]. Geometry optimizations were carried out in solution using ωB97X [32] in combination with the def2SVP basis set [33,34]. The input geometries of NDM-1 with Zn or Au ions were obtained from the PDB entries 5zgz [12] and 6lhe [15], respectively, by modelling the metal-bound residues with the corresponding side chains, capped with a methyl group resembling the alpha-carbon atoms. To take into account the anchoring of these groups to the NDM-1 backbone, the C atoms of the terminal methyl groups were kept frozen during the geometry optimization. Frequency calculations were performed to verify the correct nature of the stationary points and to estimate the zero-point energy (ZPE) and thermal corrections to thermodynamic properties. Despite the presence of artificial restraints during optimization, the computed frequency spectra did not produce any imaginary frequencies. DFT gives a good description of geometries and reaction profiles for complexes formed by transition metals [35,36] and gold in particular [37].
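For readers who want to reproduce this kind of setup, the following is a minimal, hypothetical sketch (not the authors' actual input) of how such a cluster-model partial optimization could be written out as a Gaussian 09 input from Python. It combines the ωB97X/def2SVP level and frozen capping carbons described above with the PCM water solvation noted in the next paragraph; the atomic freeze codes (0 = relax, -1 = frozen) are Gaussian's standard mechanism for partial optimizations, and the coordinates, charge, and multiplicity below are placeholders.

```python
# Sketch: write a Gaussian 09 input for a partial optimization of a cluster model.
# Assumptions (not taken from the paper's supporting information): placeholder
# coordinates, charge/multiplicity, and the use of atomic freeze codes
# (0 = optimized, -1 = frozen) to pin the capping methyl carbons.

def write_gaussian_input(path, atoms, charge, multiplicity,
                         route="#P wB97X/def2SVP Opt Freq SCRF=(PCM,Solvent=Water)"):
    """atoms: list of (element, frozen, (x, y, z)) tuples in Angstrom."""
    lines = [route, "", "NDM-1 active-site cluster model (illustrative only)", "",
             f"{charge} {multiplicity}"]
    for element, frozen, (x, y, z) in atoms:
        code = -1 if frozen else 0
        lines.append(f"{element:<2s} {code:2d} {x:12.6f} {y:12.6f} {z:12.6f}")
    lines.append("")  # Gaussian inputs must end with a blank line
    with open(path, "w") as handle:
        handle.write("\n".join(lines))

# Toy example: one frozen capping carbon and one relaxed gold center.
example_atoms = [
    ("C",  True,  (0.000, 0.000, 0.000)),   # capping methyl carbon, frozen
    ("Au", False, (2.100, 0.300, -0.150)),  # metal center, free to relax
]
write_gaussian_input("ndm1_cluster.gjf", example_atoms, charge=1, multiplicity=3)
```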
Moreover, the ωB97X functional is known to reach high accuracy in the calculation of electronic energies [38,39]. The NBO electron spin densities were calculated as the difference of the alpha and beta natural electron configurations [40]. The PCM continuum solvent method was used to describe the solvation [41]. Water was used as the implicit solvent because the NDM-1 active site is located on the surface of the protein. The solvent-accessible surface (SAS) of NDM-1 was assessed by means of the sas tool in Gromacs [42]. Results and Discussion The active site of NDM-1 is characterized by the presence of two zinc centers, Zn1 and Zn2, which lie 3.62 Å apart and are connected through a hydroxyl bridge (Figure 3a). The other ligands at Zn1 are three histidines, 120, 122, and 189, that complete the tetrahedral coordination of this center. On the other hand, Zn2 reaches a trigonal bipyramidal geometry through the coordination of Asp124, Cys208, His250, and a water molecule, at distances in the 2.2-2.5 Å range. This coordination arrangement corroborates previous studies [9-11] that have indicated Zn1 as the metal center that affixes the OH in the correct position, whereas the labile water on Zn2 can be replaced by carboxylate substrates, for instance, the C3/C4 carboxylate of the β-lactam antibiotic [43]. The substitution of Zn(II) by Au ions [15] induces appreciable modifications of the NDM-1 active site (Figure 3b,c). X-ray studies have revealed two coordination variants of the bimetallic gold moiety. In one case, two oxygens are bound to the Au1-Au2 scaffold; one oxygen is bound at a short distance (1.95 Å) to Au1, thus presumably being a hydroxyl group, whereas another oxygen is coordinated at Au2 at a longer distance (2.43 Å), thus more likely corresponding to a water ligand. The distance between the two Au centers is 3.76 Å (Figure 3b), which is longer than those detected in various crystallographic studies, in the range 2.47-3.49 Å (Table S1) [44-51]. These data suggest that the two gold ions are not directly bonded to each other but are instead held in place by the surrounding protein residues. Indeed, metallophilic Au(I)...Au(I) interactions (also denoted as closed-shell d10-d10 interactions) have been indicated to occur in the range 2.75-3.25 Å [52], so the two gold centers observed in the NDM-1 enzyme are not directly bonded. Besides the two oxygens bound at the Au1 and Au2 metal centers, Au1 is also bonded to the three histidines, 122, 189, and 120, at distances of 2.00, 2.05, and 2.50 Å, respectively, while Au2 is bound to Asp124, Cys208, and His250 at distances of 2.23, 2.47, and 2.27 Å, respectively. Hence, both metal centers present a tetrahedral coordination, which is rather unusual for this transition metal [53]. We can conclude that, in the substitution of the Zn(II) ions with Au, Au1 resembles Zn1 in the coordination of the hydroxyl ligand, even though the latter group does not form a bridge, as detected in the native NDM-1 structure. Notably, the coordinative bond between Au1 and His120 is appreciably elongated compared to those with His122 and His189, thus distorting the tetrahedral-like coordination of Au1. In the X-ray variant with just one oxygen (Figure 3c), the Au2-O distance of 2.38 Å is consistent with the coordination of a water molecule, while no hydroxyl group is coordinated to the bimetallic system.
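The metal-ligand distances quoted above can be checked directly against the deposited coordinates. Below is a small, self-contained sketch (plain PDB-text parsing, no external libraries) that measures distances from the metal atoms to nearby atoms; the file names, chain choice, and 3.0 Å cutoff mirror the description in the text, but the script itself is an illustration, not the authors' tooling.

```python
# Sketch: measure metal-ligand distances in a PDB file (e.g., 5zgz or 6lhe,
# assumed to be downloaded locally). Plain-text parsing of ATOM/HETATM records.
import math

def read_atoms(pdb_path, chain="A"):
    atoms = []
    with open(pdb_path) as handle:
        for line in handle:
            if line.startswith(("ATOM", "HETATM")) and line[21] == chain:
                atoms.append({
                    "name": line[12:16].strip(),
                    "resname": line[17:20].strip(),
                    "resseq": int(line[22:26]),
                    "element": line[76:78].strip(),
                    "xyz": (float(line[30:38]), float(line[38:46]), float(line[46:54])),
                })
    return atoms

def distances_to_metal(atoms, metal_element="AU", cutoff=3.0):
    metals = [a for a in atoms if a["element"].upper() == metal_element]
    contacts = []
    for metal in metals:
        for atom in atoms:
            if atom is metal or atom["element"].upper() == metal_element:
                continue
            d = math.dist(metal["xyz"], atom["xyz"])
            if d <= cutoff:
                contacts.append((metal["resseq"], atom["resname"],
                                 atom["resseq"], atom["name"], round(d, 2)))
    return contacts

# Example usage (file assumed to be present locally):
# for contact in distances_to_metal(read_atoms("6lhe.pdb", chain="A"), "AU"):
#     print(contact)
```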
Such a reorganization is rather important because of the role played by the metal-coordinated hydroxyl in the NDM-1 catalysis mechanism. On the other hand, we noticed that the coordination of Au1 shown in Figure 3c also features a further elongation (by 0.27 Å) of the His120-Au1 bond compared to the system with the bound hydroxyl (Figure 3b). The loose coordination to one of the histidines, i.e., His120, and the lack of coordinated OH- in one X-ray structure (Figure 3c) suggest that the gold centers eventually bound to the NDM-1 enzyme may assume different oxidation states; in particular, we envision that at least Au1 may be found in two oxidation states, the higher one able to coordinate the hydroxyl ligand (Figure 3b) and the lower one lacking this anionic ligand and almost unbound to His120. Based on this evidence, we carried out density functional theory calculations with the aim of better characterizing the bonding structure of the Au1-Au2 bimetallic moiety and providing preliminary insight into the mechanism of the Zn(II)/Au(I) exchange. For this purpose, we assigned various combinations of charge and multiplicity to the system, incorporating the two gold cofactors and all the ligands found within 3.0 Å of the metal centers, i.e., OH, water, Asp, Cys, and four histidines. In particular, we assumed gold oxidation states of 0, I, II, and III, with multiplicities assigned consistently with the configurations reported in Table 1. With respect to the +I oxidation state of gold in the [AuPEt3]+ cation, the selected values cover the possibilities that oxidation, reduction, or neither of these processes accompanies binding at NDM-1. The exploration of different multiplicity values in some instances (Tables 1 and 2, Figure 4) was instead performed in consideration of the rather unusual tetrahedral coordination of both the Au1 and Au2 centers, as well as the presence of elongated distances, i.e., His120-Au1, that could originate from unpaired electrons on the metal center. Afterwards, we conducted optimizations starting from the same X-ray input geometry, obtained from either chain A or chain B of the PDB entry 6lhe, in which the positions of the carbon atoms resembling the Cα of the NDM-1 residues were kept frozen. This setup allowed us to model the rigid arrangement of the metal-coordinated ligands imparted by the NDM-1 backbone. A total charge of -3 corresponds to the Au(0)-Au(0) system (A, Table 1). For this bimetallic system with a multiplicity of 1, the calculated intermetallic distance is 4.11 Å, and the coordination of Au1 with two of the three histidines disappears, while all the Au2-ligand distances elongate, thus making such a combination of charge and multiplicity rather unlikely. Moreover, the Au(0)-Au(0) system B with a multiplicity of 3 was found to lose the coordination picture of the active site detected in the X-ray structure and, in addition, showed the formation of a short Au-Au bond of 2.84 Å that is not experimentally detected. The geometry optimization of the Au(I)-Au(I) systems C-E was carried out by exploring three different multiplicity configurations, 1, 3, and 5, corresponding to each of the two gold centers bearing 0, 1, or 2 unpaired electrons (Table 1). Our calculations show that systems C and D do not reproduce the bimetallic gold arrangement detected in the crystal structure well, with both Au1 and Au2 reducing their coordination numbers.
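The total charges explored for these models follow from simple bookkeeping: if the hydroxide, the aspartate carboxylate, and the cysteine thiolate are taken as -1 each, with the histidines and the water neutral (an assumption inferred from the model description rather than stated verbatim), the ligand set contributes -3, and the overall model charge is just the sum of the two assumed gold oxidation states minus 3. The short sketch below reproduces the -3/-1/0/+1 values quoted in the text.

```python
# Sketch: bookkeeping of total model charge for assumed Au oxidation-state pairs.
# Assumption (inferred, not stated verbatim in the paper): OH-, the Asp carboxylate,
# and the Cys thiolate each carry -1; the four histidines and the water are neutral.
LIGAND_SET_CHARGE = -1 - 1 - 1  # OH- + Asp- + Cys- = -3

def model_charge(ox_state_au1, ox_state_au2):
    """Total charge of the cluster model for a given pair of Au oxidation states."""
    return ox_state_au1 + ox_state_au2 + LIGAND_SET_CHARGE

assumed_pairs = {
    "Au(0)-Au(0)   (systems A, B)": (0, 0),
    "Au(I)-Au(I)   (systems C-E)": (1, 1),
    "Au(I)-Au(II) / Au(0)-Au(III) (system F)": (1, 2),
    "Au(I)-Au(III) / Au(II)-Au(II) (systems G, H)": (1, 3),
}

for label, (q1, q2) in assumed_pairs.items():
    print(f"{label}: total charge = {model_charge(q1, q2):+d}")
# Expected output: -3, -1, 0, +1, matching the charges explored in the text.
```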
System E, on the other hand, with a multiplicity of 5, resembles the experimental structure much better, as shown by the RMSD value of only 0.83 Å (Table 2). The Au-Au system F, with a charge of 0 and a multiplicity of 2, leaves two possible oxidation state combinations, i.e., Au(I)-Au(II) and Au(0)-Au(III) (Table 1). In fact, DFT calculations showed that this configuration of the bimetallic system disrupts the experimentally detected coordination of both gold centers; the Au-Au distance is greatly elongated, to 7.02 Å, while other coordinative bonds are lost (Table 2, Figure 4).

Figure 4. Optimized structures of the A-H models, corresponding to the charge and multiplicity combinations in Table 1 (vide infra). The modelled NDM-1 residues are also reported. All the distances are in Å.

Table 2. Comparison of the metal-metal and metal-ligand distances between the optimized structures of the A-H models (columns 2-8) and the experimental data (last column). RMSD values of the computed distances with respect to the experimental data are also reported. The data computed for models C-E and H, showing the minimum RMSD with respect to the experimental data, are reported in the format "restrained optimization data/unrestrained optimization data". * Retrieved from the X-ray structure with Au2 coordinated to a water molecule (PDB 6lhe, chain A) [15].

The DFT optimization was also performed for systems G and H, with a charge of +1 and multiplicities of 1 and 3, respectively, corresponding to the possible combinations Au(I)-Au(III) and Au(II)-Au(II) (Table 1). The latter led to the geometry that most closely resembles the X-ray structure of the NDM-1 active site (chain B of PDB 6lhe), with an RMSD value of 0.74 Å (Table 2). In this case, the coordination pattern of the two gold ions resembles the experimental data well, whereas system G additionally lacks the coordination between Au2 and Asp124 (Table 2). The Au-Au distance for this system was calculated to be about 5 Å; although this value is considerably larger than in the X-ray structure, it correctly corroborates the absence of any Au-Au bond or interaction. Nevertheless, the fact that the coordination of the Au-Au scaffold is correctly reproduced by assigning a charge of +1 and a multiplicity of 3 in system H, but not with the multiplicity of 1 in system G (which featured an RMSD value of 2.12 Å, Table 2), agrees well with the experimental results and suggests that the gold ions, delivered from two auranofin complexes to the metallo-β-lactamase NDM-1, are more likely to be in the Au(II)-Au(II) oxidation state. The geometry optimization of the bimetallic systems with charges of -1 and +1 was then repeated at the same level of theory after removing any constraints. These calculations helped to elucidate the role played by the protein backbone, on which the metal-coordinating side chains are installed, in shaping the detected coordination of Au1 and Au2; the unconstrained optimizations performed on systems C-E and H were found to most closely resemble the experimental architecture (Table 2). In the absence of any geometrical constraint, the Au(I)-Au(I) systems C and D, i.e., singlet and triplet, respectively, are both optimized to structures with coordinative bond patterns better resembling the X-ray structure, with the corresponding RMSD values being only 0.91 and 0.80 Å, respectively. On the contrary, the coordination pattern of the more elusive quintet configuration in system E was found to deviate markedly in the absence of geometrical constraints.
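The RMSD figures in Table 2 are straightforward to reproduce once the same set of metal-metal and metal-ligand distances is tabulated for the optimized model and for the crystal structure. The sketch below shows the arithmetic; the experimental reference values are those quoted in the text for the gold-bound site (chain A of 6lhe), while the "computed" values are placeholders, not numbers from the paper.

```python
# Sketch: RMSD between computed and experimental coordination distances (in Angstrom).
# Experimental reference values are those quoted in the text for Au-bound NDM-1
# (chain A of PDB 6lhe); the "computed" values below are illustrative placeholders.
import math

experimental = {
    "Au1-Au2": 3.76, "Au1-OH": 1.95, "Au2-H2O": 2.43,
    "Au1-His122": 2.00, "Au1-His189": 2.05, "Au1-His120": 2.50,
    "Au2-Asp124": 2.23, "Au2-Cys208": 2.47, "Au2-His250": 2.27,
}
computed = {key: value + 0.3 for key, value in experimental.items()}  # placeholder model

def rmsd(calc, ref):
    keys = sorted(set(calc) & set(ref))
    return math.sqrt(sum((calc[k] - ref[k]) ** 2 for k in keys) / len(keys))

print(f"RMSD = {rmsd(computed, experimental):.2f} A")  # 0.30 for this toy offset
```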
Taken together, these outcomes seem to corroborate the formation of the Au(I)-Au(I) bimetallic system in the NDM-1 enzyme, in either a singlet or a triplet configuration, and indirectly indicate that the replacement of the Zn(II) ions by [AuPEt3]+ may require no preliminary redox step. On the other hand, the unconstrained DFT optimization of the Au(II)-Au(II) bimetallic system H also led to a substantial improvement of the coordination pattern, yielding an RMSD of only 0.23 Å (Table 2). Hence, our DFT calculations indicate that the Au(I)-Au(I) and Au(II)-Au(II) configurations are equally representative of the X-ray structure of NDM-1 treated with auranofin. The possible relationships between these two configurations remain to be ascertained; however, we tentatively propose that Au(I)-Au(I) may be formed initially and that a two-electron oxidation of the bimetallic scaffold may eventually lead to the Au(II)-Au(II) configuration. In this view, the Zn/Au exchange process, at least in its initial steps, is regarded as a non-redox process in which the supposed active species derived from auranofin, i.e., the [AuPEt3]+ cation, may react with the active site of NDM-1 via ligand substitutions only. Such a mechanistic hypothesis is the focus of ongoing studies. The distribution of the electron spin densities over the atoms forming the first coordination sphere of the Au-Au scaffold in the active site of NDM-1 (systems B, D-F, and H) has been studied by means of Mulliken and NBO analyses (Table 3 and Table S2). The close agreement between the two analyses allows us to limit the discussion to the NBO results (Table 3). In the case of system B, a triplet with a charge of -3, we found one unpaired electron at the Au2 center, whereas the other one was detected half on the Au2-bound sulphur atom and half on the Au1 center (Table 3). On the other hand, both systems D and E, triplet and quintet, respectively, locate most of the unpaired electrons on the Au1 metal center (Table 3). In all these systems B, D, and E, the S and hydroxyl O atoms present significant spin densities; these atoms are therefore expected to be readily exposed to attack by radical species produced in the biological milieu. Hence, the spin density analyses suggest that systems B, D, and E should be rather poorly representative of the bimetallic scaffold of Au-bound NDM-1. In the neutral doublet system F, the unpaired electron was detected mostly (0.82) at the Au1 center and two coordinated nitrogen atoms, whereas a residual (0.18) spin density was found on the bridging hydroxyl (Table 3). In the triplet system H, the Au1 center and the bridging hydroxyl localize approximately one unpaired electron, with almost the same distribution found in system F, whereas the other unpaired electron was detected on the Au2 center, mostly on the metal and S atoms (Table 3). These calculations showed that systems F and H host the largest share of spin density within the bimetallic scaffold, mostly on the metal centers and the sulphur atom, with only a residual amount on the bulk-exposed hydroxyl ligands. We believe that these data point to the higher redox stability of the F and H systems, which are less susceptible to, or better protected from, attack by radical species, and suggest that, in principle, Au(I)-Au(I) and Au(II)-Au(II) are the most representative configurations of the Au-bound NDM-1 enzyme.
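As stated in the computational details, the NBO spin density on each atom is simply the difference between its alpha and beta natural populations, and the per-atom values must sum to the number of unpaired electrons (multiplicity minus one). The snippet below illustrates that bookkeeping with made-up populations; it does not reproduce the paper's Table 3.

```python
# Sketch: per-atom spin density as (alpha - beta) natural populations.
# The populations below are illustrative placeholders, not values from Table 3.
alpha_populations = {"Au1": 39.90, "Au2": 39.80, "S(Cys208)": 8.20, "O(OH)": 4.10}
beta_populations  = {"Au1": 39.05, "Au2": 39.05, "S(Cys208)": 7.95, "O(OH)": 3.95}

spin_density = {atom: round(alpha_populations[atom] - beta_populations[atom], 2)
                for atom in alpha_populations}
total_unpaired = round(sum(spin_density.values()), 2)

print(spin_density)                                 # per-atom spin densities
print("multiplicity approx.", total_unpaired + 1)   # 3.0 here, i.e. a triplet
```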
To better corroborate our conclusions and analyze the effect of the Zn/Au exchange on the solvent exposure of the NDM-1 active site, we calculated the per-residue solvent-accessible surface (SAS) for the Au- and Zn-bound NDM-1 (Table 4). The metal substitution induces only a slight overall decrease of the SAS of the catalytic site of NDM-1, more pronounced in one chain than in the other (-8.3% versus -2.3%). Interestingly, an appreciable increase of the SAS of the hydroxyl ligand was instead detected in the structure of chain A compared to the Zn-bound NDM-1. These data, together with the appreciable spin density on the hydroxyl oxygen atom detected in systems D and E, corroborate the lower stability of these configurations compared to the Au(I)-Au(I) and Au(II)-Au(II) configurations modelled by systems C and H. In the frame of the presented computational data, we hypothesize that the experimentally detected chains A and B of the Au-bound NDM-1 may in fact be chemically related; the hydroxyl-coordinated chain A structure is more likely Au(II)-Au(II), a triplet, as represented by system H, and is yielded by the oxidation of the chain B-like structure, in which the bimetallic scaffold is presumably Au(I)-Au(I), a singlet, as represented by system C. Another aspect that we attempted to address in this study was providing an explanation for the tetrahedral coordination of the gold metal centers. The conclusions about the higher consistency of the Au(I)-Au(I) and Au(II)-Au(II) configurations, which typically display coordination numbers lower than four, reinforce the templating role of the protein environment in determining the coordination geometries retrieved in the chain A and B structures of gold-bound NDM-1. The positional constraint exerted by the NDM-1 backbone on the gold-bound residues probably shapes the coordination geometry and stabilizes specific configurations of the bimetallic scaffold, i.e., singlet Au(I)-Au(I) or triplet Au(II)-Au(II). The Zn/Au exchange yielded by the reaction of auranofin with the NDM-1 enzyme can thus be envisioned as a process resulting in a metal-for-metal replacement with no significant structural perturbation of the enzyme core.

Table 4. Per-residue solvent-accessible surface (SAS) of the portion within 4.0 Å around the bimetallic system of the gold-bound and zinc-bound NDM-1 protein, extracted from the PDB entries 6lhe and 5zgz, respectively. The M1/M2 residues correspond to either the Au1/Au2 or the Zn1/Zn2 metal centers; the OH and H2O residues correspond to the hydroxyl ion and the water molecule coordinated to the bimetallic system. The total SAS and the per-atom SAS, i.e., the total SAS divided by the number of non-hydrogen atoms, of the analyzed NDM-1 portions are reported in the last two rows. All values are in Å².

Summary Density functional theory calculations were carried out to investigate the unusual tetrahedral coordination assumed by the gold ions in the bimetallic core of the gold-bound NDM-1 enzyme, which is produced upon treatment of the zinc-dependent β-lactamase with the metallodrug auranofin. By testing several charge and multiplicity schemes, both with and without positional constraints on the coordinating residues, we showed that the Au(I)-Au(I) and Au(II)-Au(II) moieties most closely resemble the experimental X-ray structure of the bimetallic scaffold of gold-bound NDM-1.
The most plausible scenario for the auranofin-based Zn/Au exchange in NDM-1 would see the early formation of the Au(I)-Au(I) system, followed by an oxidation step affording the Au(II)-Au(II) species, which showed the closest resemblance to the X-ray structure. The backbone fold of the NDM-1 enzyme exerts a template effect on the coordination geometry of the bimetallic Au-Au scaffold by favoring the tetrahedral coordination, and it probably also underlies the higher stability of the triplet Au(II)-Au(II) configuration relative to the other configurations considered. Building on the computational data presented here, a theoretical investigation of the mechanism of the Zn/Au exchange initiated by the reaction of the [AuPEt3]+ cation with NDM-1 is currently ongoing.
High Levels of Within-Host Variations of Human Papillomavirus 16 E1/E2 Genes in Invasive Cervical Cancer Human papillomavirus type 16 (HPV16) is the most common HPV genotype found in invasive cervical cancer (ICC). Recent comprehensive genomics studies of HPV16 have revealed that a large number of minor nucleotide variations in the viral genome are present in each infected woman; however, it remains unclear whether such within-host variations of HPV16 are linked to cervical carcinogenesis. Here, by employing next-generation sequencing approaches, we explored the mutational profiles of the HPV16 genome within individual clinical specimens from ICC (n = 31) and normal cervix (n = 21) in greater detail. A total of 367 minor nucleotide variations (167 from ICC and 200 from the normal cervix) were detected throughout the viral genome in both groups, while nucleotide variations at high frequencies (>10% abundance in relative read counts in a single sample) were more prevalent in ICC (10 in ICC versus 1 in normal). Among the high-level variations found in ICC, six were located in the E1/E2 genes, and all of them were non-synonymous substitutions (Q142K, M207I, and L262V for E1; D153Y, R302T, and T357A for E2). In vitro functional analyses of these E1/E2 variants revealed that E1/M207I, E2/D153Y, and E2/R302T had reduced abilities to support viral replication, and that E2/D153Y and E2/R302T failed to suppress the viral early promoter. These results imply that some within-host variations of E1/E2 present at high levels in ICC may be positively selected for and contribute to cervical cancer development through dysfunction or de-stabilization of viral replication/transcription proteins. INTRODUCTION Cervical cancer is the fourth most frequent cancer in women worldwide and is etiologically linked to persistent infections with high-risk genotypes of human papillomaviruses (HPVs) (zur Hausen, 2002). Of the approximately 15 high-risk HPVs, Human papillomavirus type 16 (HPV16) is most often detected in cervical cancer, indicating its strong potential for triggering cervical cancer development (Munoz et al., 2003). Incident infections with high-risk HPV in cervical mucosa clinically manifest as low-grade cervical intraepithelial lesions, which are generally eliminated within 1-2 years by the host immune response. However, in a subset of infected women, if the infection persists, the lesions can progress to a precancerous state, and eventually evolve into invasive cervical cancer (ICC) after more than a decade of viral persistence (Schiffman et al., 2007). Epidemiological evidence indicates that high-risk HPV infections can also be detected in healthy women with normal cervical cytology, reflecting a state of asymptomatic infection (de Sanjose et al., 2007). Recent advances in next-generation sequencing technologies have facilitated a detailed understanding of viral genetic diversity within and between infected individuals (Cullen et al., 2015;de Oliveira et al., 2015;Mirabello et al., 2017;van der Weele et al., 2017;Dube Mandishora et al., 2018;Hirose et al., 2018;Lagstrom et al., 2019;Mariaggi et al., 2018). Although genomic sequences of DNA viruses are generally considered to be relatively stable compared to RNA viruses, HPV genomes within individual clinical specimens harbor a large number of minor genetic variants, so-called within-host genomic variability. 
In the HPV16 genome, these are dominated by C-to-T or C-to-G substitutions, clearly implying the involvement of cellular APOBEC3 cytosine deaminases in their generation (Mirabello et al., 2017;Hirose et al., 2018). Such signature mutations are more frequently detected in normal or low-grade specimens of the cervix, suggesting that APOBEC3 mediates viral clearance by introducing deleterious mutations into the HPV16 genome (Hirose et al., 2018;Zhu et al., 2020). In contrast, in precancer/cancer specimens, some nucleotide variations in the HPV16 genome were found to be enriched to relatively high levels within a specimen (de Oliveira et al., 2015;Hirose et al., 2018;van der Weele et al., 2019). However, it is not clear whether such high-level variants are positively selected for, and whether they contribute to cervical cancer progression. Here, by employing deep sequencing techniques, we focused our analysis on those within-host nucleotide variations in the HPV16 genome detected at high levels within individual specimens. Such high-level variations were more frequently found in ICC than in the cytologically normal cervix. Of these, some non-synonymous variants in the E1/E2 genes resulted in a decreased ability to support viral replication and transcription. Based on these results, we propose that selective pressure is being exerted on these E1/E2 variants during cervical carcinogenesis. Clinical Specimens Our study subjects consisted of HPV16-positive women with normal cytology (NILM: negative for intraepithelial lesion or malignancy, n = 21) and ICC (n = 31), who visited the Keio University Hospital, Tsukuba University Hospital, or Showa University Hospital for cervical cancer screening or treatment of cervical diseases. Cervical smears were classified according to the Bethesda system. Histological diagnoses of ICC were made using hematoxylin and eosin sections of cervical biopsy specimens according to the World Health Organization classification. HPV16 positivity was determined by HPV genotyping as described previously (Azuma et al., 2014). The average age (±standard deviation) in each category was 37.1 (±11.4) for NILM and 41.6 (±12.9) for ICC (Supplementary Table). Using a cytobrush, exfoliated cervical cells were collected in ThinPrep (Hologic, Bedford, MA, United States) from the patients. Total cellular DNA was extracted from the cells on a MagNA Pure LC 2.0 (Roche Diagnostic, Indianapolis, IN, United States) using the MagNA Pure LC Total Nucleic Acid Isolation kit (Roche Diagnostic) and used for deep sequencing analyses of the HPV16 genome. The study protocol was approved by the ethics committees at each hospital and the National Institute of Infectious Diseases, and written informed consent for study participation was obtained from each patient. Detection of Nucleotide Variations in the HPV16 Genome Overlapping PCR was performed with PrimeSTAR GXL DNA polymerase (Takara, Kusatsu, Japan) to cover the whole-genome sequence of HPV16. The sequences of the PCR primers were as follows: HPV16-1744F (5′-TGT CTA AAC TAT TAT GTG TGT CTC CAA TG-3′) and HPV16-5692R (5′-GAT ACT GGG ACA GGA GGC AAG TAG ACA GT-3′); HPV16-5531F (5′-GGG TCT CCA CAA TAT ACA ATT ATT GCT G-3′) and HPV16-1980R (5′-TAT CGT CTA CTA TGT CAT TAT CGT AGG CCC-3′) (Hirose et al., 2018). The amplified DNA was subjected to agarose gel electrophoresis and purified using the Wizard gel purification kit (Promega, Madison, WI, United States).
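Because the two overlapping amplicons span a circular genome, their lengths and overlaps are easiest to check with modular arithmetic. The sketch below does that calculation under two assumptions that are not stated in the text: the ~7,906-bp HPV16 reference length (consistent with the 236-bp probe from positions 7,791-120 described later in the Methods) and the reading of each primer name as the genomic coordinate of its 5′ end, so the numbers are approximate illustrations rather than exact amplicon sizes.

```python
# Sketch: length of an amplicon on a circular genome, given start/end coordinates.
# Assumptions: HPV16 reference length ~7,906 bp, and primer names taken as the
# genomic coordinate of each primer's 5' end (an approximation for illustration).
GENOME_LENGTH = 7906

def circular_span(start, end, genome_length=GENOME_LENGTH):
    """Number of positions from `start` to `end` inclusive, allowing wrap-around."""
    if end >= start:
        return end - start + 1
    return (genome_length - start + 1) + end

amplicon_1 = circular_span(1744, 5692)   # HPV16-1744F / HPV16-5692R -> 3949 bp
amplicon_2 = circular_span(5531, 1980)   # HPV16-5531F / HPV16-1980R -> 4356 bp (wraps origin)
probe      = circular_span(7791, 120)    # pulldown probe region -> 236 bp, as in the Methods

print(amplicon_1, amplicon_2, probe)
print("overlaps:", circular_span(5531, 5692), "bp and", circular_span(1744, 1980), "bp")
```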
The purified DNA was converted to a DNA library using the Nextera XT DNA sample prep kit (Illumina, San Diego, CA, United States), followed by size selection with SPRIselect (Beckman Coulter, Brea, CA, United States). The multiplexed libraries were analyzed on a MiSeq (Illumina) with the MiSeq reagent kit v3 (150 cycle). Complete genomic sequences of HPV16 were assembled de novo from the total read sequences using the VirusTAP pipeline (Yamashita et al., 2016). The accuracy of the reconstructed whole-genome sequences was verified by read mapping with Burrows-Wheeler Aligner (BWA) v0.7.12 and subsequent visual inspection with Integrative Genomics Viewer (IGV) v2.3.90. Nucleotide mismatches compared to the assembled reference genome and the positions of variations in each sample were identified using BWA and SAMtools v1.3.1 with in-house Perl scripts (available upon request). Variation positions were extracted from the read sequences using a confidence threshold of Phred quality score >30 (error probability <0.001), and a position was defined as heterogeneous if the relative read abundance of a variant base was >0.5% (Kukimoto et al., 2013). The presence of nucleotide substitutions was confirmed by manual inspection of mismatched read sequences using IGV. Transient Replication Assay The HPV16 origin-containing plasmid and expression plasmids for N-terminal FLAG-tagged HPV16 E1 and E2 were described previously (Kukimoto et al., 2013). Expression plasmids for the E1/E2 variants were constructed using the QuickChange Lightning Multi Site-Directed Mutagenesis Kit (Agilent Technologies, La Jolla, CA, United States). HPV-negative, cervical cancer C33A cells were plated 24 h before transfection in a 24-well plate at a density of 40,000 cells/well and transfected with 10 ng of the origin-containing plasmid for firefly luciferase expression and 10 ng of pGL4.75 (Promega) for Renilla luciferase expression together with 100 ng of the E1 expression plasmid and 50 ng of the E2 expression plasmid using the FuGENE HD reagent (Promega). The total quantity of transfected plasmid DNA was adjusted to 220 ng with the empty plasmid p3xFLAG-CMV10 (Sigma-Aldrich, St. Louis, MO, United States) as carrier DNA. At 72 h after transfection, firefly and Renilla luciferase activities were measured using the Dual-Glo Luciferase assay system (Promega) on an ARVO MX luminescence counter (PerkinElmer, Waltham, MA, United States), and the level of replication was quantified as the ratio of the two luciferase activities. Promoter Reporter Assay The reporter plasmid containing the HPV16 early promoter, pGL3-P97, was described previously (Kukimoto et al., 2006). HPV18-positive, cervical cancer HeLa cells were plated 24 h before transfection in a 24-well plate at a density of 16,000 cells/well and transfected with 200 ng of pGL3-P97 or pGL3-Basic (Promega) and 5 ng of pGL4.75 with or without 40 ng of the E2 expression plasmid using the FuGENE6 reagent (Promega). The total quantity of transfected plasmid DNA was adjusted to 405 ng with p3xFLAG-CMV10. At 48 h after transfection, firefly and Renilla luciferase activities were measured as described above, and the level of transcription was quantified as the ratio of the two luciferase activities. Coimmunoprecipitation Assay Human embryonic kidney 293 (HEK293) cells (2 × 10⁶ cells) were transfected with 5 µg of the E1 expression plasmids using FuGENE HD. At 48 h after transfection, total cell extracts were prepared in RIPA buffer as described above.
N-terminal 6xHis-tagged HPV16 E2 (His-E2) was bacterially expressed and purified as previously described (Kusumoto-Matsuo et al., 2011). The cell extracts were incubated with 20 µg of His-E2 at 4 °C for 2 h while mixing with anti-FLAG M2 magnetic beads (Sigma-Aldrich). The beads were washed three times with RIPA buffer, and the bound proteins were eluted by boiling the beads in SDS-PAGE sample buffer. The recovered proteins were analyzed by western blotting with anti-FLAG and anti-6xHis (HIS.H8; Abcam, Cambridge, United Kingdom) antibodies. DNA Pulldown Assay Biotinylated DNA probes containing the HPV16 genomic region from 7,791 to 120 (236 base pairs in length) were prepared by PCR using the HPV16 whole-genome plasmid as a template with the following primers: HPV16-bio7791F (5′-biotin-TAC ATG AAC TGT GTA AAG GTT AGT CA-3′) and HPV16-120R (5′-TGT GGG TCC TGA AAC ATT GCA GTT CTC TTT-3′). The biotinylated DNA probes were coupled to Dynabeads M-280 streptavidin (Dynal Biotech, Oslo, Norway) at room temperature for 20 min in coupling buffer (5 mM Tris-HCl, pH 7.5, 0.5 mM EDTA, and 1 M NaCl). Total cell extracts were prepared from HEK293 cells that had been transfected with the E2 expression plasmids as described above and incubated with the DNA-coupled or uncoupled magnetic beads at 4 °C for 2 h. The beads were washed three times with RIPA buffer, and the bound proteins were eluted by boiling the beads in SDS-PAGE sample buffer and analyzed by western blotting with anti-FLAG antibody. Statistical Analysis All statistical analyses were performed using R version 3.6.3. The Mann-Whitney U test was used to evaluate the average variation number between the NILM and ICC samples. Welch's t-test was used to evaluate differences in variant frequency or reporter activity between groups. A value of p < 0.05 was regarded as statistically significant. Data Availability Short-read sequencing data are available from the DNA Data Bank of Japan, Sequence Read Archive, under accession number DRA009226. Within-Host Variations of HPV16 Genome Sequences in ICC and NILM Using our bioinformatics pipeline to detect minor nucleotide variations relative to the viral reference sequence dominantly present in each sample, we identified a total of 367 nucleotide substitutions in the HPV16 genome as within-host variations: 167 from ICC samples (n = 31) and 200 from NILM samples (n = 21) (Table 1). As we previously reported for another set of clinical samples (Hirose et al., 2018), non-synonymous substitutions far outnumber synonymous substitutions across all viral genes except E4. The number of variations per sample ranged from 1 to 18 in ICC (average, 5.4) and from 0 to 46 in NILM (average, 9.5) (Figure 1A). There was no significant difference in the average number of variations between the two groups (p = 0.20, Mann-Whitney U test). As shown in Figure 1B, the distribution pattern of within-host frequencies of individual variations was slightly different between ICC and NILM; nucleotide variations at relatively high frequencies were more apparent in ICC than in NILM. The average of the variant frequencies was significantly different between the two groups (p = 0.02, Welch's t-test). These nucleotide variations were almost evenly distributed throughout the viral genome in both the NILM and ICC samples (Figure 1C), although a non-coding region between E5 and L2 (NC) showed the highest density of nucleotide variations (Table 1).
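The two group comparisons reported here (variation counts per sample and per-variant frequencies, NILM versus ICC) map directly onto the Mann-Whitney U test and Welch's t-test named in the statistics section. A Python equivalent of that analysis is sketched below with scipy; the input arrays are placeholders, not the study's per-sample values.

```python
# Sketch: the two group comparisons described in the text, re-expressed with scipy.
# (The paper used R 3.6.3; the arrays below are invented placeholders.)
from scipy.stats import mannwhitneyu, ttest_ind

variations_per_sample_icc  = [1, 3, 4, 5, 6, 6, 8, 10, 18]    # placeholder counts
variations_per_sample_nilm = [0, 2, 4, 7, 9, 11, 14, 20, 46]  # placeholder counts

# Average number of variations per sample: Mann-Whitney U test.
u_stat, p_counts = mannwhitneyu(variations_per_sample_icc, variations_per_sample_nilm,
                                alternative="two-sided")

variant_frequencies_icc  = [0.6, 1.2, 2.5, 11.0, 35.0]   # placeholder % abundances
variant_frequencies_nilm = [0.6, 0.8, 1.0, 1.5, 2.0]     # placeholder % abundances

# Average variant frequency: Welch's t-test (unequal variances).
t_stat, p_freq = ttest_ind(variant_frequencies_icc, variant_frequencies_nilm,
                           equal_var=False)

print(f"counts: p = {p_counts:.3f}; frequencies: p = {p_freq:.3f}")
```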
When the variant positions were ranked according to viral genomic regions (E1, E2, E4, E5, E6, E7, L1, L2, LCR, and NC), it became apparent that the E1, E2, and L2 regions contained more nucleotide variations at high frequencies in ICC than in NILM ( Figure 1D). The E7 and L1 regions also harbored one exceptionally high-level variation in ICC and NILM, respectively. We next compared the distribution of within-host frequencies of nucleotide variations between ICC and NILM based on three categories of nucleotide substitutions: non-synonymous, synonymous, and non-coding region substitutions. As shown in Figure 1E, high-level, non-synonymous substitutions were found more frequently in ICC than in NILM (p = 0.03, Welch's t-test), whereas no such differential trend was apparent for synonymous and non-coding region substitutions (p = 0.56 for synonymous substitutions, and p = 0.43 for non-coding region substitutions, Welch's t-test). The distributions of frequencies of non-synonymous versus synonymous substitutions were also examined according to individual gene regions. As shown in Figure 1F, the E1, E2, and L2 regions were more enriched for non-synonymous substitutions at high frequencies (p = 0.16 for E1, p = 0.03 for E2, and p = 0.12 for L2, Welch's t-test) compared to the other regions. The one E7 and one L1 variation detected at high levels ( Figure 1D) were also non-synonymous substitutions. Patterns of Nucleotide Substitutions of Within-Host Variations Recently, comprehensive genomics studies of HPV16 have documented that nucleotide variations in the HPV16 genome within individual clinical specimens were mostly C-to-T or G-to-A substitutions, which are believed to be mediated by cellular APOBEC3 cytosine deaminases as a host defense response to virus infection (Hirose et al., 2018;Zhu et al., 2020). We therefore examined the mutational spectrum in our clinical samples, based on six types of substitutions, i.e., C-to-A, C-to-G, C-to-T, T-to-A, T-to-C, and T-to-G (all substitutions are referred to by the pyrimidine of the mutated Watson-Crick base pair). As shown in Figure 2A, C-to-T substitutions were the most common in both ICC and NILM samples, followed by C-to-A substitutions. In contrast, the number of nucleotide variations at relatively high frequencies (>10%) exhibited a different pattern, and C-to-A substitutions were prevalent in ICC and NILM ( Figure 2B). Overall, the C-to-T substitutions were evenly distributed across the HPV16 genome without any preference for specific viral regions regardless of sample histology ( Figure 2C). No enrichment in particular regions was evident for the C-to-A substitutions as well as other minor types of substitutions. Biological Activities of E1/E2 Variants The E1/E2 proteins play essential roles in viral replication; E2 recruits E1 to the viral origin DNA and E1 then unwinds the origin to initiate DNA replication. The enrichment of highly abundant, non-synonymous substitutions in the E1/E2 genes prompted us to test whether the resulting E1/E2 variants had altered activities in supporting viral replication. To this end, the variant and prototype E1/E2 proteins were transiently expressed from expression plasmids in HPV-negative C33A cells, together with a viral origin-containing plasmid that included the firefly luciferase gene. Three days after transfection, replication levels of the origin plasmid were evaluated as a readout of elevated reporter activities. 
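In this reporter setup, the "replication level" is simply the firefly/Renilla ratio, and the values reported for the variants are normalized to the prototype protein set to 1.0, as described in the Methods and figure legends. A minimal sketch of that normalization, with invented raw luminescence counts, is shown below.

```python
# Sketch: relative replication level from dual-luciferase readings.
# Raw luminescence counts below are invented placeholders.
raw_counts = {
    # condition: (firefly, Renilla)
    "prototype E1": (152_000, 9_800),
    "E1 M207I":     (54_000, 10_200),
    "E1 Q142K":     (149_000, 9_500),
}

ratios = {name: firefly / renilla for name, (firefly, renilla) in raw_counts.items()}
reference = ratios["prototype E1"]
relative = {name: round(value / reference, 2) for name, value in ratios.items()}

print(relative)   # prototype normalized to 1.0; variants expressed relative to it
```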
Although the transient replication assay does not reflect a replication mode of viral persistence, it has often been used to evaluate inherent activity of E1/E2 to induce viral replication. Two E1/E2 variants, E1 K483A (Nakahara et al., 2015) and E2 K111R (Thomas and Androphy, 2018), previously shown to be defective for viral replication, were included as negative controls. Because all E2 proteins so far reported have a threonine residue at position 357, we examined E2 T357A instead of A357T. Among the E1 variants tested, M207I had a significantly reduced ability to support replication of the origin plasmid relative to the prototype E1, whereas Q142K and L262V yielded similar levels of replication ( Figure 3A). Western blot analysis revealed comparable levels of protein expression for the prototype and variant E1s (Figure 3B). In a titration experiment of the E1 expression plasmids, M207I showed a diminished ability to induce origin-dependent replication under the saturated amounts of the transfected plasmids when compared to the prototype ( Figure 3C). Coimmunoprecipitation assay with FLAG-tagged E1s and 6xHis-tagged E2 demonstrated that all the E1 variants retained a capability to interact with E2 as the prototype E1 did (Figure 3D). Regarding the E2 variants, D153Y and R302T exhibited a severely impaired ability to induce virus replication ( Figure 3E). Transfection of increasing amounts of the E2 expression plasmids for D153Y and R302T also resulted in significantly reduced levels of replication compared to the prototype E2 (data not shown). Western blot analysis of the E2 variants showed considerable variability of protein levels ( Figure 3F). In C33A cells, R302T was less efficiently expressed than the prototype E2, whereas the level of T357A was similar to that of the prototype. Although D153Y was almost undetectable on short exposure of the blot, longer exposure allowed the visualization of high molecular mass aggregates of this variant, which could not be resolved in the gel (data not shown), suggesting instability of D153Y in the cell. The defect of D153Y in viral replication was thus explained by a low expression level of this variant in C33A cells. The E2 variants were further evaluated for their potential to regulate the viral early promoter in reporter assays, which was of interest because E2 is a known transcriptional repressor of the early promoter that drives E6/E7 expression. HeLa cells were transfected with a reporter plasmid containing the HPV16 LCR upstream of the luciferase gene with or without the E2 expression plasmids, and 2 days after transfection, the reporter activity was measured. As shown in Figure 3G, while the prototype E2 completely suppressed the early promoter, D153Y and R302T failed to repress the viral promoter activity. In contrast, T357A suppressed the promoter activity as efficiently as the prototype E2 did. Western blot analysis showed comparable levels of protein expression for the prototype and R302T, and lower expression of T357A, whereas D153Y was almost undetectable as observed in C33A cells (Figure 3H). Spatial Location of D153 and R302 in the E2 Protein Finally, we determined the location of the variable amino acid residues of the E2 variants in the three-dimensional structure of the protein. The E2 protein is composed of three distinct domains: an N-terminal transactivation domain, a C-terminal DNA-binding domain, and a hinge-region connecting these two domains (McBride, 2013). 
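Mapping the variant positions onto this three-domain layout is a one-line lookup once approximate domain boundaries are chosen. The sketch below does so, taking the transactivation domain as roughly the first 200 residues (consistent with the surface projection discussed next) and using assumed, approximate boundaries for the hinge and DNA-binding domain, which are not quoted from the paper.

```python
# Sketch: assign HPV16 E2 variant positions to protein domains.
# Domain boundaries are approximations for illustration (transactivation domain
# taken as roughly residues 1-200, hinge ~201-285, DNA-binding domain ~286-365);
# they are not taken from the paper.
E2_DOMAINS = [
    ("transactivation domain", 1, 200),
    ("hinge region", 201, 285),
    ("DNA-binding domain", 286, 365),
]

def domain_of(residue_number):
    for name, start, end in E2_DOMAINS:
        if start <= residue_number <= end:
            return name
    return "outside E2"

for variant in ("D153Y", "R302T", "T357A"):
    position = int(variant[1:-1])
    print(variant, "->", domain_of(position))
# D153Y -> transactivation domain; R302T and T357A -> DNA-binding domain
```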
Figure 4 shows a surface projection of the HPV16 E2 transactivation domain. Of its 201 amino acid residues, 176 are exposed at the surface, while 25 are buried inside the molecule. D153, which is included within this domain, is exposed at the molecular surface. Interestingly, V152 and E162, which were also shown to be mutated at high levels in a recent study (Mariaggi et al., 2018), are positioned spatially near D153. The other two residues, R302 and T357, are included in the DNA-binding domain. Of note, R302 is highly conserved among E2 proteins from reported HPV genotypes and constitutes a key residue in the DNA recognition helix of E2 (McBride, 2013). In the DNA-binding domain of HPV18 E2 (Kim et al., 2000), the corresponding arginine residue makes direct contact with the phosphate backbone of DNA in the E2-binding sequence through hydrogen bonding (Figure 5A), indicating its direct role in interacting with DNA. Indeed, a DNA pulldown assay using the viral origin DNA revealed that R302T completely lost binding activity to the origin, whereas T357A retained this activity similarly to the prototype E2 (Figure 5B).

FIGURE 3 | Biological activities of within-host variants of E1 and E2 proteins. (A) Replication activities of E1 variants. Expression plasmids (100 ng) for FLAG-tagged prototype or variant E1 proteins (Q142K, M207I, L262V, and K483A) were transfected into C33A cells together with the prototype E2 expression plasmid (50 ng), the HPV16 origin-containing firefly luciferase reporter plasmid (10 ng), and the Renilla luciferase plasmid (10 ng), and the levels of replication were measured 72 h after transfection. Relative replication levels compared to that of the prototype E1, which is set to 1.0, are shown. Error bars represent the standard deviation of three independent experiments. Statistically significant differences (Welch's t-test, p < 0.05) are indicated with *. (B) Western blot analysis of E1 variants. FLAG-tagged prototype or variant E1 proteins expressed in C33A cells were detected with anti-FLAG antibody. Tubulin, loading control. (C) Titration of the E1 expression plasmids in the transient HPV16 replication assay. Increasing amounts of the E1 expression plasmid for the prototype E1 or M207I were transfected into C33A cells, and the levels of replication were measured 72 h after transfection. Relative light units of firefly luciferase activity divided by those of Renilla luciferase activity are shown. Error bars represent the standard error of two independent experiments. *Paired t-test (p < 0.05). (D) Coimmunoprecipitation of 6xHis-tagged E2 (His-E2) with the prototype and variant E1s. FLAG-tagged E1s were transiently expressed in HEK293 cells, and total cell extracts were prepared and mixed with recombinant His-E2, followed by immunoprecipitation with anti-FLAG magnetic beads. Immunoprecipitated proteins were analyzed by western blotting with anti-6xHis and anti-FLAG antibodies. Input, 10% of His-E2 used. (E) Replication activities of E2 variants. Expression plasmids (50 ng) for FLAG-tagged prototype or variant E2 proteins (D153Y, R302T, T357A, and K111R) were transfected into C33A cells together with the prototype E1 expression plasmid (100 ng), the HPV16 origin-containing firefly luciferase reporter plasmid (10 ng), and the Renilla luciferase plasmid (10 ng), and the levels of replication were measured 72 h after transfection. Relative replication levels compared to that of the prototype E2, which is set to 1.0, are shown. Error bars represent the standard deviation of three independent experiments. Statistically significant differences (Welch's t-test, p < 0.05) are indicated with *. (F) Western blot analysis of E2 variants. FLAG-tagged prototype or variant E2 proteins expressed in C33A cells were detected with anti-FLAG antibody. Tubulin, loading control. (G) Transcription activities of E2 variants. Expression plasmids for FLAG-tagged prototype or variant E2 proteins (D153Y, R302T, T357A, and K111R) were transfected into HeLa cells together with the pGL3-P97 reporter plasmid, and transcription was measured 48 h after transfection. The promoter activity of pGL3-P97 without E2 is set to 1.0. Error bars represent the standard deviation of three independent experiments. Statistically significant differences compared to the prototype E2 (Welch's t-test, p < 0.05) are indicated with *. (H) FLAG-tagged prototype or variant E2 proteins expressed in HeLa cells were detected with anti-FLAG antibody. Tubulin, loading control.

FIGURE 5 | (A) The DNA-binding domain of HPV18 E2 bound to DNA (Kim et al., 2000) was visualized in MOE. The spatial locations of R303, which is homologous to R302 in HPV16 E2, and the DNA recognition helix are shown. Hydrogen bonding between R303 and the DNA backbone is indicated as red dotted lines. (B) DNA pulldown assay for the prototype and variant E2 proteins. Total cell extracts prepared from HEK293 cells transfected with the E2 expression plasmids were mixed with magnetic beads coupled or uncoupled with the HPV16 origin DNA, followed by washing the beads. Bound proteins were analyzed by western blotting with anti-FLAG antibody. Input, 6% of cell extracts used.

DISCUSSION Accumulating evidence indicates that HPV genomes often undergo mutagenesis in infected individuals, and APOBEC3 is a prime candidate for the host proteins that generate such within-host viral genomic variability (Mirabello et al., 2017;Hirose et al., 2018;Mariaggi et al., 2018). APOBEC3 signature mutations in the HPV16 genome are more often detected in low-grade cervical lesions than in precancer/cancer samples, implying that APOBEC3 is involved in defending against HPV infections (Zhu et al., 2020). In the current study, we extended viral genomic analysis to the asymptomatically HPV16-infected normal cervix and found that C-to-T substitutions, a typical pattern of APOBEC3 mutagenesis, were also dominantly detected in cytologically normal samples. This observation suggests that the host immune response mediated by APOBEC3 is operative even in asymptomatic infections and potentially contributes to viral clearance. To further explore the biological significance of intra-host variability of the HPV16 genome, we focused on viral variations that were enriched at high levels within individual samples. Such high-level, single-nucleotide variations were more common among the ICC samples than the normal samples. This suggests that some selection process was responsible for their high level within a specimen, because cervical cancer development requires a long period of persistent HPV infection. Consistent with such a selection scenario, the high-level nucleotide substitutions observed in the ICC samples were more enriched for non-synonymous substitutions, which strongly implies positive selection for particular intrahost variants of viral proteins. Interestingly, the non-synonymous substitutions detected in ICC were more prominent in the E1 and E2 regions than in other regions of the viral genome. These two regions comprise relatively long open-reading frames (1,950 bp for E1 and 1,098 bp for E2), but the length of a reading frame alone cannot explain the enrichment of E1/E2 mutations, because the late L1 and L2 regions of similar lengths (1,596 bp for L1 and 1,422 bp for L2) harbored only one high-frequency non-synonymous substitution, in L1. The enrichment of high-level, non-synonymous substitutions in E1/E2 among the ICC samples prompted us to explore any functional changes in these gene products.
Indeed, we found that, of the six E1/E2 variants tested, three (E1 M207I, E2 D153Y, and E2 R302T) had a reduced ability to regulate viral replication or transcription. During cervical cancer progression, E2 is often disrupted by integration into the host genome or transcriptionally silenced by epigenetic modifications (McBride and Warburton, 2017). Because the E2 protein negatively regulates the viral early promoter that drives E6/E7 oncogenesis, functional loss of E2 is thought to be strongly associated with the development of ICC via upregulation of E6/E7. Our finding that the two intrahost variants of E2 are defective for viral replication/transcription is consistent with the prevailing consensus on the role of E2 in HPV-induced carcinogenesis, implying the possibility that cells expressing D153Y and R302T had been positively selected for their contribution to cervical cancer development through enhanced expression of E6/E7. A recent study including genomic analysis of HPV16 on serial cervical samples from precancer/cancer cases revealed that 56% of the women tested had an identical viral genomic sequence in two consecutive samples (the median time between sampling was 24 months) (Arroyo-Muhr et al., 2019). Moreover, the estimated substitution rate was almost zero substitutions/site/year, suggesting that HPV16 genomic sequences are extremely stable in most cases of persistent infections. However, another study with longitudinal sampling from primary and recurrent CIN2/3 lesions reported that, of 14 paired samples, 10 had exactly the same sequences in consecutive samples, but 4 harbored relatively high-level nucleotide variations (5-50% abundance) in either the initial or follow-up samples (van der Weele et al., 2019). Interestingly, among six nucleotide positions detected as showing high-level variations in the CIN2/3 samples, four were located in the E2 gene, and all of them were non-synonymous substitutions for the E2 protein (van der Weele et al., 2019). Enrichment of minor nucleotide variants in the E2 gene was also demonstrated for HPV16-positive cervical specimens in another study (Mariaggi et al., 2018). These observations are consistent with our finding that the E2 sequence is enriched in high-level, non-synonymous substitutions in cervical cancer samples and suggest that the enrichment of such E2 variations is not a byproduct of cancer generation but precedes progression to ICC. Among the E2 variants found in this study, D153Y seems to be less stably expressed in cells, which likely leads to the failure of this variant to regulate viral replication and transcription. Three-dimensional structural projection of the transactivation domain of E2 indicates that the D153 residue is exposed on the surface of the molecule, positioned spatially near residues that were previously reported to be subject to intrahost variation (Mariaggi et al., 2018).
The cellular transcription factor Brd4 interacts with and stabilizes HPV16 E2 (Zheng et al., 2009), thereby supporting its transcriptional function. Although the Brd4-binding region, which includes R37 and I73, resides in the same transactivation domain of E2, it does not overlap at the surface with D153 (Figure 4). Because proteomics analysis of E2 revealed that a variety of cellular protein complexes interact with E2 (Jang et al., 2015), cellular proteins other than Brd4 might be involved in stabilizing E2 through interaction with the region around D153. Consistent with the important role in DNA interaction suggested by structural inspection, R302T completely lost binding activity to the viral origin DNA containing three E2-recognition sites, and this defect nicely explains the inability of R302T to regulate viral origin-dependent replication and early promoter-driven transcription. Regarding the E1 variants, M207I showed a reduced ability to support viral origin-dependent replication, although it was expressed at a similar level to the prototype E1 and retained the ability to interact with E2. Such a replication defect may be attributed to some change in DNA interaction during DNA unwinding by M207I, because this residue is included in the DNA-binding domain of E1 (Auster and Joshua-Tor, 2004). Our previous study reported another within-host E1 variant, Q381E, which was present at a relatively high abundance (5.42%) in one ICC sample. This variant also exhibited a reduced ability to support HPV16 origin-dependent replication (Kukimoto et al., 2013). Expression of E1 induces a host DNA-damage response and causes growth arrest of the host cell (Fradet-Turcotte et al., 2011;Sakakibara et al., 2011). The E1 level is kept low through proteasomal degradation induced by E1 itself, allowing for virus persistence (Nakahara et al., 2015). Based on these observations, we hypothesize that functional attenuation of E1 may also favor the survival of infected cells and confer a selective growth advantage to those harboring such E1 variants. Although our results suggest that the E1/E2 genes are a hotspot for high-level within-host variations, it is also clear that not all the variations of E1/E2 cause functional changes or reduced expression. Because HPV-infected cervical lesions generally undergo two-way processes of progression and regression during a long period of persistent infection, it is very likely that a small viral population in the regression phases experiences a bottleneck favoring random selection of a minor variant genome (i.e., random genetic drift). Enrichment of variant viral genomes in cervical cancer specimens may reflect such neutral selection processes. In this regard, the presence of enriched nucleotide variations in the HPV genome is not a prerequisite for individual cases to eventually progress to ICC but rather an episodic event, as recently demonstrated for APOBEC3 mutagenesis in a range of human cancer cell lines (Petljak et al., 2019). Nevertheless, the highly abundant, non-synonymous variations in the E1/E2 genes of cervical cancer specimens are reminiscent of the fact that most HPV-induced cancers present with viral integration into the host genome together with a breakpoint in E1 or E2 (McBride and Warburton, 2017). While our current study cannot distinguish between episomal and integrated forms of the viral genome, functional defects of E1/E2 caused by within-host variations may substitute for such integration events that are required for the development of ICC.
Thus, consecutive monitoring of intra-patient E1/E2 variations in precancerous lesions, such as CIN2/3, may contribute to clinical assessment of whether the lesions will progress if untreated. Further large-scale studies with longitudinal clinical samples will be needed to address this issue in more detail. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ddbj.nig.ac.jp/, DRA009226. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Institutional Review Boards at Keio University Hospital, Tsukuba University Hospital, Showa University Hospital, and the National Institute of Infectious Diseases. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS IK conceived and designed the study, and wrote and revised the manuscript. YH and YT obtained the sequencing data. MY-N, KT, and IK performed in vitro experiments. YH, YT, MY-N, and IK analyzed and interpreted the data. MO, NT, TS, TI, AS, and KM collected the clinical specimens. All authors read and approved the final manuscript.
PATHFINDER-CHD: prospective registry on adults with congenital heart disease, abnormal ventricular function, and/or heart failure as a foundation for establishing rehabilitative, prehabilitative, preventive, and health-promoting measures: rationale, aims, design and methods

Background
Adults with congenital heart defects (ACHD) globally constitute a notably medically underserved patient population. Despite therapeutic advancements, these individuals often confront substantial physical and psychosocial residua or sequelae, requiring specialized, integrative cardiological care throughout their lifespan. Heart failure (HF) is a critical challenge in this population, markedly impacting morbidity and mortality.

Aims
The primary aim of this study is to establish a comprehensive, prospective registry to enhance understanding and management of HF in ACHD. Named PATHFINDER-CHD, this registry aims to establish foundational data for treatment strategies as well as the development of rehabilitative, prehabilitative, preventive, and health-promoting interventions, ultimately aiming to mitigate the elevated morbidity and mortality rates associated with congenital heart defects (CHD).

Methods
This multicenter survey will be conducted across various German university facilities with expertise in ACHD. Data collection will encompass real-world treatment scenarios and clinical trajectories in ACHD with manifest HF or at risk for its development, including those undergoing medical or interventional cardiac therapies, cardiac surgery, inclusive of pacemaker or ICD implantation, resynchronization therapy, assist devices, and those on solid organ transplantation.

Design
The study adopts an observational, exploratory design, prospectively gathering data from participating centers, with a focus on patient management and outcomes. The study is non-confirmatory, aiming to accumulate a broad spectrum of data to inform future hypotheses and studies.

Processes
Regular follow-ups will be conducted, systematically collecting data during routine clinical visits or hospital admissions, encompassing alterations in therapy or CHD-related complications, with visit schedules tailored to individual clinical needs.

Assessments
Baseline assessments and regular follow-ups will entail comprehensive assessments of medical history, ongoing treatments, and outcomes, with a focus on HF symptoms, cardiac function, and overall health status.

Discussion of the design
The design of the PATHFINDER-CHD Registry is tailored to capture a wide range of data, prioritizing real-world HF management in ACHD. Its prospective nature facilitates longitudinal data acquisition, pivotal for comprehending disease progression and treatment impacts.

Conclusion
The PATHFINDER-CHD Registry is poised to offer valuable insights into HF management in ACHD, bridging current knowledge gaps, enhancing patient care, and shaping future research endeavors in this domain.

State of research and scientific background
Adults with congenital heart defects (ACHD) constitute a profoundly medically underserved patient population on a global scale [1][2][3]. Approximately 50 million adults worldwide live with CHD, with over 360,000 individuals in Germany alone, and this number is projected to escalate in the coming decades [4,5]. Within Germany and Europe, the number is further augmented by a significant, albeit indeterminate, cohort of refugees, asylum seekers, or migrants, among whom the incidence and prevalence of CHD are even more elevated.
Despite therapeutic advances, ACHD are chronically ill, characterized by significant residua or sequelae, which manifest both physically and psychosocially. Nearly all necessitate specialized, integrated care throughout their lifespan, differing markedly from the management of acquired heart disease.

Heart failure in ACHD
Globally, heart failure (HF) affects approximately 64 million patients and presents a burgeoning public health concern due to its attendant morbidity and mortality. In Germany, HF is among the top three causes of mortality, with a 5-year survival rate of 50% and nearly 35,000 deaths annually. Each cardiac decompensation and hospitalization worsens the prognosis and increases the mortality risk.

In ACHD, HF from chronic pressure or volume overloads, intracardiac scars, valvular heart disease, arrhythmias, pulmonary hypertension, or ischemia due to congenital coronary anomalies is the main reason for increased morbidity and mortality. Variables such as the duration and severity of cyanosis, type of cardiac surgery, late effects of heart-lung machine operation, cardioprotection during operative treatment, age at treatment, and time since the procedure additionally modulate risk. Approximately 25% of afflicted individuals succumb to HF, with selected CHD witnessing mortality rates soaring to 50% [13][14][15].

The risk profile for HF is particularly pronounced in cohorts with univentricular hearts (post-Fontan operation), with systemic right ventricle after atrial redirection in transposition of the great arteries, with severe pulmonary vascular disease (Eisenmenger syndrome), or with profound heart valve dysfunction subsequent to repair of tetralogy of Fallot [8,16]. Notably, robust data pertaining to HF management in CHD remain scarce, as these entities, typified by their complexity, have hitherto been interrogated only within limited cohorts, and frequently constitute an exclusion criterion in heart failure studies [17,18]. Management recommendations from the corpus of acquired heart disease, where a wealth of evidence-based therapies exists, are only indirectly admissible.

Special considerations and long-term care challenges for HF in ACHD
For individuals exhibiting systolic ventricular dysfunction of a morphologically left systemic ventricle, therapeutic strategies typically involve blockers of the renin-angiotensin system (RAS) such as ACE inhibitors and angiotensin-receptor blockers (ARB, sartans), angiotensin receptor neprilysin inhibitors (ARNIs), beta blockers, mineralocorticoid receptor antagonists (Aldactone, Eplerenone), as well as diuretics (loop diuretics, thiazides, metolazone), and digitalis glycosides [18,19]. Emerging therapeutic avenues such as empagliflozin, dapagliflozin, or vericiguat are encumbered by a dearth of data.

For the treatment of heart failure with preserved systolic function (HFpEF), our understanding of therapeutic modalities in the realm of CHD remains rudimentary [20].
The data on the treatment of ventricular systolic dysfunction of a morphologically right systemic ventricle (e.g. after atrial redirection in transposition of the great arteries, in congenitally corrected transposition, and in univentricular hearts of the right ventricular type) is also scant.This also applies to individuals with univentricular hearts and surgically created passive pulmonary blood flow (Fontan circulation) [21]. In refractory heart failure scenarios, where conservative therapy has been exhausted, resynchronization therapy, mechanical assist devices, and heart or heart-lung transplantation are fundamentally available options [14,22].However, the efficacy of these treatments is currently ambiguous, and long-term results are also lacking [23][24][25].Heart or heart-lung transplantation is limited by the often complex anatomy of CHD and the shortage of donors [26]. In addition to inadequate, evidence-based treatment recommendations, knowledge about preventive, prehabilitative, and health-promoting measures that could positively influence the course of the disease, is virtually non-existent. Current guidelines addressing cardiac rehabilitation in ACHD offer scant directives.Only small studies confirm the feasibility of such interventions and provide only cursory recommendations [27].As a consequence, today a paltry proportion of patients avail themselves of rehabilitative or prehabilitative measures. There is a marked deficit in patient education regarding their condition and the relevance and necessity of healthrelevant behaviors [3,28].There is a tremendous ignorance among patients regarding available entitlements, responsible cost bearers, clinic selection, appeals processes, and similar domains [28,29].As a result, the utilization of rehabilitative or prehabilitative services remains suboptimal among affected individuals. Objectives and goals The primary objective of this research project is to ameliorate the inadequate data situation regarding the care of ACHD.This ambition will be achieved by the establishment of a prospective, epidemiological-clinical registry dedicated to the management of heart failure in ACHD and abnormal ventricular function and/or anatomy. The project intends to analyze the collected data to develop concepts for the evidence-based optimization and safe implementation of heart failure therapy. Additionally, this initiative will lay the foundation for the integration of preventive, rehabilitative, prehabilitative, and health-promoting initiatives, tailored to the unique needs of individuals with ACHD. Through a nationwide registry, gathered real world data will answer open questions in ACHD care, particularly elucidating the treatment and counseling imperatives across various stages of the disease. A comprehensive delineation of key focal points is encapsulated in Table 1. Further Goals Include: 1. Enhancing the health status of ACHD: The primary aim of the registry is to optimize current HF therapy, thereby fostering an amelioration in the well-being and health status of ACHD in Germany and potentially extending its impact beyond national borders. 
Ethical considerations and data protection measures
This registry operates in alignment with universally accepted ethical standards, encompassing the Declaration of Helsinki, as well as relevant local regulations and national laws. Before initiating data collection as per the protocol, written informed consent is obtained from the patient, or their legally authorized representative, using the approved Informed Consent Form (ICF).

Comprehensive information regarding the registry, including its voluntary nature, will be clearly communicated to the patient through a direct conversation. Sufficient time will be provided for the patient to deliberate on their participation in the registry. The signed ICF will be stored in the registry's records. The patient will receive a copy of the ICF, duly signed and dated. The patient, or their guardian, reserves the right to withdraw from the registry at any point. Those who choose to withdraw will not be substituted.

Registration
The registry has been included in the German Clinical Trials Registry, DRKS (Ref. Nr.: DRKS00030508). An English version of the dataset has been submitted to the WHO Study Registry.

Confidentiality and safeguarding data
Adherence to data protection regulations, including compliance with the General Data Protection Regulation (GDPR), is a priority. Neither patient initials nor exact birth dates will be entered into the database. Patient information is gathered using pseudonyms.

When the baseline assessment is logged (at the inclusion visit), the Electronic Data Capture (EDC) system automatically generates a distinct and sequential Subject Identification Code. All registry-related documents, such as printed electronic case report forms and informed consent forms, are marked with that code. The investigator will keep a record for identifying patient records in response to inquiries. Data are transferred exclusively in an encrypted format.

Design
PATHFINDER-CHD is a multicenter, prospective observational study. The registry allows for structured, non-interventional collection of data. There is no specified end date or minimum duration for the registry.

Setting
The participating centers are among the largest specialist clinics in Germany for the care of ACHD. Physicians participating in the study retain full autonomy in diagnosing and treating their patients. Any examinations conducted are at the physician's discretion and based on their clinical practice.

Patients
The designated population for the registry includes ACHD. There is no formalized process for screening potential patients. Physicians are encouraged to evaluate every patient with ACHD to determine their suitability for inclusion in the registry.

Patients qualify for inclusion in the registry documentation if they:
- have any form of CHD
- are aged 18 years or older
- have given informed consent (can also be provided by guardians)
- can be documented over a long-term follow-up period

Patients are not eligible if they participate in an interventional study, as this would interfere with real-world evidence generation. Patients retain the flexibility to switch their medications and treatments as needed, even multiple times, throughout the documentation period.
Data collection and quality control
The operation of the Electronic Data Capture (EDC) system, including the website and database, is managed by an institution with extensive experience in managing registries. Data are entered by sites through a Hypertext Preprocessor (PHP)-based user interface into a MySQL database. The raw data collected from the ACHD centers are securely stored in their original form in a central database. The data are backed up daily.

The raw data preparation and transfer to statistical analysis programs follow documented standards. These standards, as well as the data security and backup concept, are documented in the Standard Operating Procedures (SOPs) of the evaluating institute. During data collection, the documentation forms are transmitted to a central database by the study centers via direct electronic data capture over the Internet.

Assessments
The registry is structured to include a fundamental set of variables (mandatory data) crucial for all patients enrolled. Additionally, there is a secondary set of variables (optional data) that may be solicited from participating facilities, though their provision is voluntary. The system is tailored to support sub-studies, which involve the incorporation of additional variables, at specific centers. This facilitates research collaboration among different institutions.

Baseline assessments and regular follow-ups are conducted whenever a patient visits the facility, undergoes therapy changes, or experiences events or complications related to their CHD. A list of all collected variables can be found in Table 2.

Data analysis and compliance
The analysis will mainly employ standard methods of descriptive and inferential statistics as well as machine learning techniques. Results based on samples will be reported with 95% confidence intervals. Additionally, time-to-event and predictor analyses are planned. The study is in accordance with the current version of the Declaration of Helsinki. All steps of the analyses will be conducted in compliance with Good Practice Secondary Data Analysis (GPS) [30].

The analysis of the registry data will be conducted globally across the entire registry and, if requested, can also be specific to individual institutions. The analysis includes age- and gender-specific evaluations. The results will be described using descriptive, exploratory, and inferential statistical methods, including the calculation of statistical epidemiological measures and the determination of confidence intervals. Data visualization will be achieved through histograms, scatter plots, cross-tabulations, and dimension reduction methods such as principal component analysis, t-distributed stochastic neighbor embedding (t-SNE), or uniform manifold approximation and projection (UMAP), with gender-specific breakdowns where appropriate. A wide range of statistical and machine learning methods will be used for data analysis, including t-tests, analyses of variance (ANOVA), correlation analyses, regression analyses, latent variable/latent class models, latent growth models, and time-to-event analyses. These methods will enable a comprehensive and detailed understanding of the data, facilitating the identification of patterns, relationships, and trends within the ACHD population and allowing robust conclusions and insights to be drawn from the registry.

Results
At the present time, there are no results available.
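To make the planned analysis style concrete, the following minimal sketch in Python illustrates two of the elements named in the data-analysis section above: a sample-based estimate reported with a 95% confidence interval and a simple product-limit (Kaplan-Meier) time-to-event estimate. All variable names and values are hypothetical, since the registry dataset and its column names are not specified here; the registry's actual analyses would run on the exported MySQL data and the much richer variable set of Table 2.

import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical follow-up data: time to first HF hospitalization (months);
# event = 1 if the hospitalization was observed, 0 if the patient was censored.
df = pd.DataFrame({
    "time_months": [6, 14, 14, 22, 30, 35, 41, 41, 48, 60],
    "event":       [1,  0,  1,  1,  0,  1,  0,  1,  0,  0],
})

# Sample mean with a 95% confidence interval (t-based), as planned for
# sample-based results in the registry.
x = df["time_months"]
ci_low, ci_high = stats.t.interval(0.95, len(x) - 1, loc=x.mean(), scale=stats.sem(x))
print(f"Mean follow-up: {x.mean():.1f} months (95% CI {ci_low:.1f}-{ci_high:.1f})")

# Minimal Kaplan-Meier (product-limit) estimator as one example of a
# time-to-event analysis.
def kaplan_meier(time, event):
    time, event = np.asarray(time, float), np.asarray(event, int)
    surv, rows = 1.0, []
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)                  # subjects still under observation at t
        events = np.sum((time == t) & (event == 1))  # events occurring exactly at t
        surv *= 1.0 - events / at_risk               # product-limit update
        rows.append((t, surv))
    return pd.DataFrame(rows, columns=["time_months", "survival"])

print(kaplan_meier(df["time_months"], df["event"]))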
Discussion
The PATHFINDER-CHD registry will compile data from a minimum of 1,500 ACHD.

A key expected outcome is the improved early detection of complications through targeted risk monitoring, including follow-up care, based on specific criteria. This proactive approach promises to mitigate severe health consequences through earlier interventions. The data collected will also facilitate the development of more effective treatment and follow-up strategies, expected to enhance the prognosis for ACHD, thereby enhancing their quality of life and reducing mortality risks.

The research emphasizes the creation of preventive, prehabilitative, and rehabilitative measures grounded in the latest findings. These measures are expected to be directly implementable in clinical settings, offering a holistic and effective approach to patient care. The interdisciplinary care approach, informed by the registry data, is expected to significantly enhance patient safety by addressing the complex needs of ACHD in a tailored manner.

An integral part of the project is the establishment of interdisciplinary forums for continuous care, involving general practitioners and specialists. These forums will facilitate the exchange of knowledge and development of best practices for lifelong care of ACHD. By improving health outcomes, the project could delay or prevent the need for early retirement among ACHD, significantly impacting the economic burden on health care and pension systems.

Additionally, one of the foundational goals is to reduce disease-related costs for both statutory health insurance and pension insurance systems. Effective care strategies and early interventions can lead to substantial cost savings. Importantly, the insights gained from this patient cohort are likely to be applicable to other patient groups with rare diseases, extending the impact of this research beyond ACHD and paving the way for improved care strategies across a spectrum of less common conditions.

In conclusion, this project has the potential to contribute substantially to the care of ACHD, offering not only improved clinical outcomes but also significant societal and economic benefits. The successful integration of these findings into clinical practice will be a crucial step in realizing these extensive benefits.

Table 1. Objectives of the PATHFINDER Registry and future perspectives
Table 2. List of variables collected in PATHFINDER-CHD
Net Nutrient Uptake in the White River, Northwest Arkansas, Downstream of a Municipal Wastewater Treatment Plant Wastewater treatment plays a crucial role in preserving water quality in receiving streams; however, continuous nutrient enrichment can diminish the retention capacity of rivers. The objectives of this study were to evaluate the effects of wastewater treatment plant effluent and river discharge on water chemistry and determine the retention efficiency of nutrients added in the effluent along a 6.1-km reach of a 5th-order stream in the Ozark Highlands of northwest Arkansas. From 2006 through 2007, effluent discharge increased river nitrite, soluble reactive P (SRP), and total organic C (TOC) and conductivity. As river discharge increased, dissolved oxygen (DO) and turbidity increased, but water temperature, conductivity, and TOC decreased. Net nutrient uptake lengths were inconsistent for NO3-N, NH4-N, and SRP. Results indicated that the fluvial channel acted as both a sink and a source of NO3-N and SRP, but the channel always acted as a sink for NH4-N with a significantly positive retention coefficient that indicated only 12% of added NH 4 -N was retained in the study reach. The effluent discharge increased the concentrations of seven water quality parameters and it appears the long-term enrichment has rendered the immediate-downstream reach ineffective as a nutrient sink. Nutrients added in the effluent were generally transported with little to no uptake or transformation, thus river chemical concentrations beyond the study reach have likely been influenced by this effluent discharge. Introduction Water quality issues in the Ozark Highlands region of northwest Arkansas, southwest Missouri, and northeast Oklahoma include sedimentation and mineral and nutrient enrichment.Numerous stream segments do not support the designated uses for aquatic life and/or as a municipal and industrial water supply [1].The causes of these impairments include surface erosion, urban nonpoint source pollution, and the effluent from municipal wastewater treatment plants (WWTP) [1].Even so, at least the last two decades of water quality research in the Ozark Highlands have focused primarily on nutrient fluxes in surface runoff in response to animal manure application [2][3][4][5].A need exists to evaluate the impact of treated wastewater on in-stream processes, focusing on how effluent discharges influence stream nutrient retention. In the 2000s, numerous studies evaluated the effects of effluent discharges on nutrient dynamics within the stream channel [6][7][8][9].Impacts of the effluent discharge in relatively small streams demonstrated the stream's inability to retain added phosphorous (P) and nitrogen (N); added nutrients were traveling kilometer-scale distances before being significantly retained.These streams provided short-term N storage through partial N cycling and nitrification of ammonium (NH 4 -N) to nitrate-N (NO 3 -N).However, NO 3 -N often showed a net increase in transport downstream from the effluent discharge or traveled long distances before retention within the fluvial channel. Nutrient studies evaluating impacts of WWTP effluent addition in other regions of the world have reported differing results.For example, a river near Berlin, Germany was studied with a two-reach approach that showed little to no effects on stream water chemistry from a mod-ern-day WWTP [10].Gücker et al. 
[10] reported diminished rates of P and ammonium uptake, but increased nitrate uptake efficiency downstream of the WWTP. Gücker et al. [10] attributed the difference in their findings, as compared to previous studies, to the modern tertiary treatment of wastewater. Thus, it is clear that the effects of effluent discharges on nutrient dynamics and water chemistry vary with the treatment capacity of the WWTPs. Treese et al. [11] even suggested that clogging of the streambed may occur in effluent-dominated streams due to increased physical, chemical, and biological processes from elevated nutrients, rendering the stream unstable and reducing its capacity to recharge groundwater.

Most studies on the effects of effluent discharges on stream nutrient retention have focused on smaller streams, where the effluent discharge often has a profound effect on physio-chemical properties and makes up a large portion of discharge. Relatively few studies have focused on large rivers, where the effluent discharge is greatly diluted even during seasonal base-flow conditions. The Chattahoochee River, a large urban river near Atlanta, Georgia, exhibited great variation in nutrient patterns downstream of multiple effluent discharges due to large fluctuations in river discharge and subsequent dilution of the effluents [12]. Thus, the dilution of effluent discharges plays a large role in the impact on water chemistry and nutrient transport downstream.

The objectives of this study were to evaluate the effects of WWTP effluent and river discharge on water quality and determine the retention efficiency of nutrients added in WWTP effluent in a 5th-order stream in the Ozark Highlands of northwest Arkansas. It was hypothesized that 1) there would be no difference in water quality upstream and downstream of the WWTP effluent due to a large dilution effect, 2) dilution-corrected nutrient concentration differences would not be observed among downstream sample sites due to the relatively short study reach, 3) nutrient retention coefficients would not differ from zero, indicating that nutrient transport with neither retention nor export was occurring, and 4) retention coefficients and net nutrient uptake lengths for N fractions would be unrelated to certain water quality parameters, whereas those for P fractions would be related to them, particularly turbidity.

The Study Area
The Ozark Highlands ecoregion covers parts of Kansas, Missouri, Oklahoma, and Arkansas [13] and is characterized by karst topography and high-gradient, riffle-pool, clear-flowing streams. Stream base flows throughout the dry summer months are maintained by springs and seeps. The ecoregion is known for its rich aquatic diversity. Bedrock in the Ozark Highlands is typically limestone, dolomite, and chert. Historically, land cover was oak (Quercus spp.)-hickory (Carya spp.)
forest with intermittent tallgrass prairie. Most of the tallgrass prairie has been converted to agriculture [14]. Approximately 20% of the Ozark Highlands is used for pasture, 10% for cropland, and 70% is forestland [15]. The Ozark Highlands is also an area of concentrated poultry production [16]. Arkansas' broiler production is concentrated in the northwestern counties of Benton, Washington, Carroll, and Madison, all of which are located within the Ozark Highlands ecoregion. Poultry litter is rich in N, P, and potassium (K) and is a cost-effective way of fertilizing soils [17]. Between 1.3 million and 1.8 million Mg of litter is generated in Arkansas annually. A large fraction of this litter is concentrated in northwest Arkansas [18]. This application of litter has resulted in high soil-test P levels where pastures have been fertilized long-term [19] and numerous surface water quality issues throughout the region. Over the last 20 years, the northwest Arkansas portion of the Ozark Highlands has experienced a high rate of urbanization. From 2000 to 2007, the population within Washington and Benton counties increased by 28% from 311,121 to 397,399 [20]. The increasing population has placed greater demands on regional water resources, which rely on Beaver Lake within the White River Basin.

The White River in northwest Arkansas is the largest tributary to Beaver Lake, and over 250,000 residents of northwest Arkansas use water from Beaver Lake as their source of drinking water. Three WWTPs discharge treated wastewater within the Beaver Lake-White River watershed. The Paul R. Noland WWTP in Fayetteville, AR is the largest contributor of treated wastewater to receiving waters within the watershed. The Paul R. Noland WWTP discharges effluent into the White River, which is classified as an impaired waterbody because of the lack of support for aquatic life due to excessive siltation and/or turbidity [1].

The White River is composed of three major branches: the West Fork, the Middle Fork, and the main fork, which is simply referred to as the White River (Figure 1). The three branches of the White River originate in the Boston Mountains ecoregion and flow north to the Ozark Highlands ecoregion. The Middle Fork of the White River and the White River combine to form Lake Sequoyah, a small, shallow reservoir. The outflow of Lake Sequoyah combines with the West Fork of the White River and eventually flows into Beaver Lake.

This study was performed on a 6.1-km reach of the White River located between the confluence of the three forks of the White River and the headwaters of Beaver Lake. The entire reach examined in this study was in the Ozark Highlands. In 2004, the White River was designated to have an impaired ability to support aquatic life due to siltation and/or turbidity, where the source was likely surface erosion. The causes of surface erosion were agricultural activities, unpaved road surfaces, and in-stream erosion mainly from unstable stream banks [21]. The White River was categorized as a high priority for development of a total maximum daily load for the indicated pollutants [1].

A United States Geological Survey (USGS) stream discharge monitoring station, station 07048600, was located in the study reach at Wyman Bridge, just east of Fayetteville, AR (Figure 2). Six sites were selected for sampling during the study; one site was located upstream (~2 km) of the Paul R.
Noland WWTP just south of Wyman Bridge and five sites downstream were chosen at riffles, so that the water column would be mixed by the turbulence of the water moving over the shallow riffles.The only major water inflow between Sites 1 and 2 was the effluent discharge from the WWTP; there were no tributary inflows.The sites downstream were from ~0.4 to ~4 km below the WWTP discharge into the White River. For the 30-year period from 1971 to 2000, Fayetteville, AR experienced an average annual air temperature of 14.2˚C and average annual precipitation of 117 cm [22].During the study period of 2006 and 2007, annual precipitation at the USGS station 07048600 totaled 86 and 72 cm, 26% and 38%, respectively, below the 30-year average [23].The White River at Wyman Bridge has a total drainage area of 1036 km 2 [24] and is 74% forested, 15% pasture, and 4% developed or urban. The Wastewater Treatment Plant At the time of this study, the Paul R. Noland WWTP was a Class IV, activated-sludge treatment plant with ultra-violet disinfection.The WWTP's National Pollution Discharge Elimination System (NPDES) permit allowed the WWTP to discharge a maximum of 27,710 m 3 •d −1 into the White River, and the effluent quality was regulated by the Arkansas Department of Environmental Quality (ADEQ).Daily to hourly discharge flow data and effluent water quality records for days that sampling occurred were obtained directly from the WWTP (personal communication, Tim Luther, Operations Manager, CH2M HILL OMI).Effluent water quality data obtained included: daily averages of temperature, dissolved oxygen (DO), pH, total suspended solids (TSS), soluble reactive phosphorous (SRP), total P (TP), and NH 4 -N.Other forms of N (eg., NO 3 -N, NO 2 -N, and organic N) were not routinely measured or reported for this effluent discharge, thus were unavailable to use and report in this study. Water Sample Collection, Processing, and Analyses Water sampling was conducted monthly, excluding December and February, for two consecutive years from January 2006 through 2007.Flow conditions in the White River below the 40-year median flow of 10.5 m 3 •s −1 were targeted as sampling dates, because higher flows presented some personnel safety considerations.At each of the six sampling sites, pH, electrical conductivity, DO, and temperature were measured in-situ with a Thermo Orion 5 Star portable meter (Beverly, MA) at three points within the thalweg (i.e., left, middle, and right).A 1-L water sample was also collected at each of the three points within the thalweg at each sampling site.In the event of split flow resulting from channel morphological changes, both channels were measured for discharge (see below).If the secondary channel accounted for more than 20% of the total discharge, one or more of the three water samples were taken from its thalweg based on its estimated contribution to discharge.A cross section was surveyed with 11 equally spaced survey points across the river channel for determining river discharge (Q).The distance of the cross section was measured with a fiberglass measuring tape.Channel depth was determined with a Marsh McBirney measuring rod and flow velocity was measured electromagnetically with a Flo-Mate 2000 (Marsh McBirney, Fredrick, MA).River discharge was estimated using the product of the water velocity (m•s -1 ) and cross-sectional area (m 2 ) for each area between survey points.The equal-interval discharges were then summed to estimate total river discharge at each sampling site. 
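As a concrete illustration of the velocity-area calculation described above, the short Python sketch below sums the products of sub-area and velocity across the surveyed cross section. The trapezoidal averaging between adjacent survey points and all numerical values are assumptions made for illustration; the paper does not state the exact interpolation scheme used between survey points.

import numpy as np

def cross_section_discharge(station_m, depth_m, velocity_ms):
    """Velocity-area estimate of river discharge (m^3/s).

    station_m: distance of each survey point from the bank (m)
    depth_m: channel depth at each survey point (m)
    velocity_ms: flow velocity at each survey point (m/s)
    """
    x = np.asarray(station_m, float)
    d = np.asarray(depth_m, float)
    v = np.asarray(velocity_ms, float)
    dx = np.diff(x)                      # widths of the intervals between survey points
    area = 0.5 * (d[:-1] + d[1:]) * dx   # trapezoidal sub-areas (m^2)
    vel = 0.5 * (v[:-1] + v[1:])         # mean velocity assigned to each sub-area (m/s)
    return float(np.sum(area * vel))     # summed equal-interval discharges

# Example with 11 equally spaced points across a hypothetical 20-m wide riffle.
stations = np.linspace(0, 20, 11)
depths = [0.0, 0.2, 0.4, 0.6, 0.7, 0.8, 0.7, 0.5, 0.4, 0.2, 0.0]
velocities = [0.0, 0.1, 0.3, 0.5, 0.6, 0.7, 0.6, 0.4, 0.3, 0.1, 0.0]
print(f"Estimated discharge: {cross_section_discharge(stations, depths, velocities):.2f} m^3/s")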
Following collection, water samples were stored on ice in a dark cooler. Within 24 hrs after collection, sample bottles were shaken and a well-mixed, 40-mL aliquot was removed and preserved to a pH of ~2 with two drops of concentrated HCl per 40 mL of solution for subsequent total organic carbon (TOC) and total N (TN) analyses. A 100-mL, well-mixed aliquot was then removed from the 1-L bottle and preserved to a pH of ~2 with two drops of 12 N sulfuric acid per 100 mL of solution for subsequent TP analysis. Turbidity was measured on a 20-mL aliquot using a HACH 2100N Turbidimeter (Loveland, CO) according to the SM 2130 B method [25]. Turbidity was reported in nephelometric turbidity units (NTU). The remaining portion of the initial 1-L sample was then vacuum-filtered through a 0.45-µm filter. The filtered aliquot was used for subsequent SRP, nitrite (NO2-N), NO3-N, NH4-N, and chloride (Cl-) analyses.

Chloride concentrations were determined according to the SM 4500-Cl-C mercuric-nitrate titration method [25]. Total organic carbon and TN were determined using a Shimadzu TOC-VCSH TOC analyzer with an added THM-1 TN measuring unit (Shimadzu, Kyoto, Japan) using the SM 5310 B [25] and ASTM D 5176-91 methods [26], respectively. Determinations of NO3-N, NO2-N, NH4-N, SRP, and TP were conducted using a HACH DR 4000 spectrophotometer (HACH, Loveland, CO). Nitrate was reduced to NO2-N using the SM 4500-NO3-E cadmium-copper reduction method [25]. The resulting reduced sample was colorimetrically analyzed for determination of the NO2-N concentration. The difference between the reduced-sample NO2-N concentration and the previously determined NO2-N concentration was taken as the NO3-N concentration [25]. Ammonium was determined by the HACH Nessler method 8038 [27]. Nitrite was determined by the HACH diazotization method 8507 [27]. Soluble reactive P was determined by the HACH ascorbic acid method 8048 [27]. Preserved TP water samples were digested according to the persulfate digestion method (SM 4500-P B) and determined colorimetrically by the HACH ascorbic acid method [27]. All analyses were conducted before recommended holding times had expired [25].

Nutrient Retention, Export, or Net Uptake
Nutrients added to an aquatic system are retained in, transported through, or exported from the system (i.e., added to the water column) [28]. The fraction of nutrients retained within the study reach (i.e., the retention coefficient, RC) was calculated using the nutrient loads from Sites 2 (S2) and 6 (S6) with Equation 1, where N was the mean measured nutrient concentration (mg·L-1) and Q was the measured river discharge (m3·s-1) for the respective Site 2 (S2) or 6 (S6). Since NO2-N made up such a small percentage of the inorganic N fraction in the water column, the combined NO2-N + NO3-N concentration was used in this analysis. Calculating nutrient export or retention in this way is a general approach that examines only reach-level inputs and outputs, which rely on measured Q at the sites. Streams and rivers in the Ozark Highlands often have relatively large subsurface Q flowing through the gravel alluvium within the fluvial channel.
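The displayed form of Equation 1 does not survive in this copy of the text. A reconstruction consistent with the definitions above (mean concentration N and measured discharge Q at Sites 2 and 6) and with the sign convention used later (positive values indicating retention, negative values indicating export) is the standard load-balance form:

RC = \frac{N_{S2}\,Q_{S2} - N_{S6}\,Q_{S6}}{N_{S2}\,Q_{S2}} \qquad (1)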
The WWTP effluent was used as the nutrient source for determining net nutrient uptake length (S NET). The S NET approach evaluates longitudinal changes in nutrient concentration throughout the entire study reach, and is a more quantitative approach to examining nutrient dynamics within a study reach than examining only nutrient inputs and outputs. The mean concentration (based on three sub-samples) at each sampling site was corrected for dilution downstream of the effluent discharge (relative to Site 2) using Equation (2), where N D was the dilution-corrected concentration (mg·L-1) for the nutrient of choice, N X was the mean nutrient concentration (mg·L-1) at sample site x, Cl 0 was the mean chloride concentration (mg·L-1) from Site 2 (i.e., the sample site immediately downstream of the WWTP), and Cl X was the mean chloride concentration (mg·L-1) at sampling site x. The proportion of nutrient remaining in the water column was then calculated using Equation (3), where P was the proportion of the dilution-corrected nutrient (N) concentration remaining in the water column at site X. The proportion remaining in the water column (P) was natural-log transformed, and the slope of the linear relationship between the natural log of the proportion remaining in the water column and the distance from the WWTP discharge represented K. When K (i.e., the slope) was significant (i.e., different from 0) at p < 0.1, S NET was calculated with Equation (4).

Net nutrient uptake length (S NET) was expressed in km and was calculated for SRP, NO3-N, and NH4-N for each sampling date. Negative S NET values represented net release of the nutrient through the study reach, while positive distances demonstrated net retention, with longer distances suggesting less efficient retention than shorter ones (Newbold et al., 1981). An alpha value of 0.1 was used to judge significance due to the large scale (i.e., 5th-order stream) of the White River [see also 12].

The net mass transfer coefficient (V F-NET) was calculated using S NET, Q, and the average wetted width of the river (W) with Equation (5) and was expressed in m·s-1. The V F-NET is the velocity at which nutrients travel from the water column to the stream substrate, and removes some hydrologic effects for across-site and across-date comparisons.

Net nutrient uptake rate (U NET) was then calculated by Equation (6) and was expressed in mg·m-2·s-1. This parameter considers changes in concentrations downstream from the effluent discharge to estimate net uptake rates.

Water quality parameters (i.e., TN, NO3-N, NO2-N, NH4-N, TP, SRP, turbidity, TOC, Cl, pH, conductivity, temperature, and DO) upstream of the municipal WWTP discharge (Site 1) were compared to those at the first site immediately downstream (Site 2) to evaluate the immediate effect of WWTP effluent on water quality. This was accomplished by conducting a two-factor analysis of variance (ANOVA) using SAS (version 9.1, SAS Institute, Inc., Cary, NC) to evaluate the effect of site (upstream and downstream) and flow regime (i.e., low, medium, and high) on water quality parameters. In addition, paired t-tests were performed separately within each flow regime comparing parameters upstream and downstream to further evaluate the effect of the WWTP effluent discharge on river water quality (Minitab 13.31, Minitab Inc., State College, PA).
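The displayed forms of Equations (2) through (6) likewise do not survive in this copy of the text. The following reconstructions are consistent with the surrounding definitions and with standard nutrient-spiraling formulations (e.g., Newbold et al., 1981), although the authors' exact notation may have differed:

N_D = N_X \left( \frac{Cl_0}{Cl_X} \right) \qquad (2)

P_X = \frac{N_{D,X}}{N_{D,S2}} \qquad (3)

S_{NET} = -\frac{1}{K}, \quad \text{where } \ln P_X = K\,x + b \qquad (4)

V_{F\text{-}NET} = \frac{Q}{W\,S_{NET}} \qquad (5)

U_{NET} = V_{F\text{-}NET}\,C \qquad (6)

where x is the distance downstream from the effluent discharge and C is the dilution-corrected nutrient concentration in the water column.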
On dates in which S NET was significant, simple correlation analyses using Minitab were performed to evaluate the relationship between the S NET of individual nutrients and other water quality parameters.An alpha level of 0.1 was decided a priori to use to judge the significance of S NET values due to the expected large spatial variability with the measured parameters.An alpha level of 0.05 was used to judge significance for all correlations conducted.The parameters that were analyzed included: Site 2 nutrient concentrations (SRP, NO 3 -N, NH 4 -N, TP, and TN), TOC, turbidity, conductivity, temperature, pH, DO, and the mean Q averaged across all six sites.Site 2 was chosen because the water quality parameters downstream would show how the effluent discharge might influence nutrient dynamics.An average Q was calculated and used instead of Q measured at Site 2 because of the fluctuations from site to site due to interflow within the gravel streambed. River and WWTP Discharge White River discharge varied over the 20 sampling months from 0.1 m 3 •s -1 in August 2006 to 14.3 m 3 •s -1 in January 2007 in response to local precipitation (Figure 3).Average discharge was 4.2 m 3 •s -1 on days the river was sampled.Based on the <2, 2 to 6, and >6 m 3 •s -1 discharge thresholds, there were a total of 7, 7, and 6 sampling dates that represented the low, medium and high flow categories, respectively (Figure 3).The 42-year (1964 to 2006) average river discharge for the study reach was 15.3 m 3 s -1 and included storm-flow as well as base-flow discharge [29].White River discharge was below the 42-year average on all sample dates in this study.Thus, the flow-regime categories that were assigned for this study do not represent the total variation in White River discharge. The WWTP discharge ranged from 0.1 m 3 •s -1 in August 2006 and September 2007 to 0.6 m 3 •s -1 in March and October 2007 (Figure 3), averaging 0.3 m 3 •s -1 over the 20 sampling months.Effluent discharge was less variable compared to river discharge on days sampled.The WWTP discharge contribution to river discharge at Site 2 ranged from 2 to almost 100% of streamflow, averageing 19% of the total river discharge on the days sampled.During August 2006, the low-flow conditions coupled with gravel streambed material could explain the reported WWTP discharge being larger than the measured river discharge as flow through the gravel alluvium was likely occurring.The variation in the degree to which dilution occurred immediately after the WWTP effluent discharge was part of the reason that river discharge was qualitatively categorized for purposes of this study. 
Water Quality Upstream of the WWTP Discharge
Stream water quality was measured upstream of the WWTP on all 20 sampling dates (Table 1). Turbidity varied greatly across sampling dates, ranging from 1.9 NTU in August 2006 to 45.2 NTU in March 2006. A total maximum daily load (TMDL) was in place for turbidity, set by the ADEQ as required for impaired waterbodies. Since no stream load data could be assessed for turbidity (i.e., there is no concentration associated with NTU because it is an optical measurement), total suspended solids (TSS) was used as a surrogate for turbidity to develop the TMDL. A target base-flow TSS concentration of 11 mg·L-1 was reported to correspond with a turbidity level of 10 NTU, while a storm-flow TSS target of 12 mg·L-1 corresponded with a turbidity level of 17 NTU [30]. Turbidity at Site 1 exceeded the base-flow TMDL on 50% of the sample dates. All forms of N measured (i.e., NO3-N, NO2-N, NH4-N, and TN) had maximum concentrations <1.0 mg·L-1 during the sampling dates, and maximum SRP and TP concentrations were ≤0.05 mg·L-1 (Table 1).

WWTP Effluent Characteristics
As was expected, some effluent characteristics varied seasonally, while others did not. Effluent temperature varied seasonally from a low of 13˚C. Effluent SRP and TP concentrations were both <0.4 mg·L-1, except in April, May, and June 2006. Total P and SRP were greatest in the effluent during May 2006, at 1.9 and 1.4 mg·L-1, respectively. These concentrations were much greater than any observed P concentrations from the White River. Effluent ammonium concentrations were <0.5 mg·L-1 on all sampling dates except in the months of March and April 2006 and in April 2007. April 2007 had the greatest observed NH4-N concentration (1.8 mg·L-1). The exact reason for the three months of elevated SRP, TP, and NH4-N concentrations in the effluent discharge is unknown.

When wastewater effluent NH4-N, SRP, and TP concentrations were compared with river concentrations at Sites 1 and 2, effluent nutrient concentrations were often related to those observed downstream. Site 2 river SRP (r = 0.67, p < 0.01) and NH4-N (r = 0.53, p < 0.05) concentrations were significantly positively correlated with the WWTP effluent concentrations. These correlations indicate that the WWTP effluent was a major factor influencing downstream dissolved P and NH4-N concentrations in the White River.
Upstream-Downstream Comparison With the exception of NH 4 -N, pH, and temperature, all other water quality parameters measured in this study were affected by site (i.e., upstream or downstream), flow regime (i.e., low, medium, or high), or both (Table 2).Based on the two-factor ANOVA, measured Cl -, TN, TP, and NO 3 -N concentrations were greater downstream than upstream of the WWTP discharge during low-flow (p < 0.01), but did not differ between sites during medium-or high-flow conditions (Figure 4).Measured Cl -, TN, TP, and NO 3 -N concentrations at Site 2 ranged from 14 to 76 mg Cl -•L -1 , 0.9 to 10.6 mg TN•L -1 , 0.04 to 0.16 mg TP•L -1 , and 0.6 to 11.7 mg NO 3 -N•L -1 across sample dates during low-flow conditions.These same sites ranged from 5 to 14 mg Cl -•L -1 , 0.3 to 2.7 mg TN•L -1 , 0.01 to 0.13 mg TP•L -1 , and 0.2 to 2.4 mg NO 3 -N•L -1 across sample dates during medium-and high-flow conditions.During low-flow, the relatively high concentrations of Cl -, TN, TP, and NO 3 -N in the WWTP effluent affected river water chemistry due to less dilution in the river when compared to higher base flows (i.e., medium and high flows in this study).This supports the assumption that the degree of dilution, based on river discharge, plays an important role in the nutrient enrichment of the White River.Based on paired t-tests that were conducted separately by flow regime, the concentrations of Cl -and TP were always greater (p < 0.05) downstream from the WWTP effluent discharge than upstream, further indicating the significant impact that the WWTP effluent discharge has on stream water chemistry.Nitrate accounted for 91% of TN across both sites and all sample dates; thus results for nitrate and TN were similar.Nitrogen and Cl -concentrations have been shown to be elevated below a WWTP discharge in other point-source-receiving streams in the Ozark Highlands [6,7,32], therefore, it was not surprising that the WWTP effluent affected downstream stream concentrations most when diluting flows (i.e., high discharge flow rates) were not present in the White River. Nitrite, SRP, TOC, and conductivity were greater (p < 0.04) downstream than upstream when averaged across all flow regimes (Table 2).The mean downstream NO 2 -N concentration was more than double that of the upstream concentration (Table 3).Nitrite is an intermediate form of N during nitrification and is not stable in the environment [33].Soluble reactive P is biologically important because it is often the limiting nutrient for primary production in White River tributaries [34], but concentrations were generally low (<0.1 mg SRP L -1 ) on all sample dates throughout the study.The mean river SRP concentration in the White River was four times greater downstream than upstream of the WWTP when averaged across flow regimes.The TOC concentration was 35% greater downstream from the WWTP effluent discharge compared to upstream (Table 3).Carbon added from the WWTP effluent provides more substrate for microorganisms in the river which can lead to more heterotrophic production, which could influence microbial processes and reach-level retention capacity.Stream conductivity was always greater, on average 62% greater, downstream than upstream of the WWTP effluent discharge (Table 3) because of the added solutes in the effluent. 
Based on the ANOVA, turbidity and DO did not differ between Sites 1 and 2 (Tables 2 and 3). However, based on a paired t-test within each flow regime, DO was greater downstream than upstream of the WWTP effluent discharge during low flow and was similar when flow exceeded 2 m3·s-1. Conductivity, TOC, DO, and turbidity varied among flow regimes (p < 0.015) when averaged across sites (Table 2). Both TOC and conductivity were greatest during low-flow conditions and did not differ between medium- and high-flow conditions (Table 4). Conductivity during medium- and high-flow conditions was less than one half that observed during low-flow conditions (as defined in this study). Total organic carbon showed a decrease similar to that of conductivity as the flow regime increased. Dilution of the WWTP effluent was likely the mechanism responsible for these changes when discharge exceeded 2 m3·s-1.

Dissolved oxygen varied among all three flow regimes and increased as the flow regime increased (Table 4). The increased mixing and aeration from more turbulent flow during increasingly greater discharge rates were likely responsible for increasing DO concentrations. Though water temperature was statistically unaffected by either site or flow regime (Table 2), water temperature numerically decreased from the low- to the high-flow regime, while the DO concentration significantly increased (Table 4), which was expected.

Similar to DO, turbidity was also greater during high- than low-flow conditions, but turbidity during medium-flow conditions was similar to that during both low- and high-flow conditions (Table 4). The amount of suspended sediment in the water column is typically directly proportional to the water velocity, thus it was not surprising that turbidity was greatest during high-flow conditions. However, the relationship between exposure to and actual biological impairment from suspended sediment, which characterizes numerous streams in the Ozark Highlands, is poorly understood [35].

Neither site nor flow regime affected (p > 0.05) NH4-N concentrations, water temperature, or pH based on the ANOVA (Table 2). Averaged across sites and flow regimes, the mean ammonium concentration was 0.1 mg·L-1, mean pH was 7.3, and mean water temperature was 18.6˚C. However, based on a paired t-test within each flow regime, water temperature was slightly greater downstream than upstream of the WWTP effluent discharge when flows exceeded 6 m3·s-1.
Water Quality Downstream of the WWTP Discharge
White River water quality measured at the five sites downstream of the WWTP effluent discharge varied widely. Turbidity ranged from 5 to 50 NTU across all downstream sample sites and dates during this study (Table 5). The average turbidity for Sites 2 through 6 was above the TMDL NTU limit on 45% of the sampling dates. The WWTP's point-source-pollution effect was apparent based on increased nutrient concentrations, conductivity, and Cl-. The mean NO3-N concentration at Sites 2 through 6 averaged across sample dates was 3.2 mg·L-1, which was more than three times the mean NO3-N concentration at Site 1 upstream of the WWTP discharge (Table 1). River TP averaged 0.10 mg TP·L-1 across downstream sample locations and dates, but exceeded the EPA-recommended reference P concentration for Ecoregion XI of 0.01 mg·L-1 [31], with a maximum observed concentration of 0.32 mg TP·L-1. Chloride concentrations ranged from 5 to 77 mg·L-1 and averaged 30 mg·L-1. The mean chloride concentration for Sites 2 through 6 was more than five times greater than that at Site 1 upstream of the WWTP (Table 1). Mean conductivity for Sites 2 through 6 (330 μS·cm-1) was two times greater than that of Site 1 (Table 1) and ranged from 95 to 1118 μS·cm-1.

Nutrient Retention, Export and Net Uptake
The White River showed variable retention or export of nutrients across sampling dates and between constituents when reach-level inputs and outputs were evaluated using the retention-coefficient approach. The various forms of N showed retention coefficients ranging from a low of -2.42 to a high of 0.96 for NH4-N, NO3-N + NO2-N, and TN. Only NH4-N had an average retention coefficient that was significantly different (i.e., greater) than zero (p = 0.04), suggesting NH4-N was generally retained or transformed through the study reach. The other forms of N were, on average, simply transported downstream without retention or transformation. The retention coefficients for NO3-N + NO2-N and TN were highly correlated (r = 0.99, p < 0.001), which is not surprising since NO3-N made up a large portion of the TN pool. However, NH4-N retention coefficients were not correlated (p > 0.10) with the retention coefficients of other N forms within the White River.

Phosphorus retention coefficients within the study reach were just as variable as those of the N forms, ranging from -1.19 to 0.94 for SRP and -0.92 to 0.94 for TP. On average, retention coefficients did not differ from zero, suggesting that minimal retention was occurring. Retention coefficients for SRP and TP were significantly correlated (r = 0.64, p < 0.01), likely because SRP made up a large portion of TP in the White River. Total N and P retention coefficients were also correlated (r = 0.52, p = 0.02), suggesting that retention of these two nutrients might be coupled within this study reach.
The calculations of net uptake lengths were not biased by flow through alluvial gravel within the study reach, as may have been the case for retention coefficients that were based on reach-level inputs and outputs.Calculated S NET values showed trends (increasing, decreasing, or no significant change) in the downstream direction.Net uptake lengths for SRP were significant (p < 0.10) on five sample dates within the study period, ranging from -8.7 to 7.9 km.Overall, little retention of SRP was occurring within the fluvial channel of the White River, suggesting that the study reach was not a consistent sink for SRP.Across these five sampling dates, SRP S NET was positively correlated to Site 2 SRP concentration (r = 0.927, p = 0.02) suggesting that as the concentration of SRP at Site 2 increased, S NET also increased.The study reach acted as a source of SRP when the effects of the effluent discharge were minimal and observed concentrations at Site 2 were 0.06 mg•L -1 or less.Net uptake lengths for SRP were not correlated with any other physio-chemical property measured in the White River.Table 6 summarizes V F-NET and U NET values for SRP within the White River. Net uptake lengths for NO 3 -N were significant on 10 sampling dates, ranging from -22.1 to 13.1 km.Similar to SRP S NET , NO 3 -N S NET had some sampling dates showing net retention within the study reach and others suggesting net export from the study reach.The net export could be explained by nitrification of reduced N forms within the fluvial channel, whereas the net retention occurred when biological uptake and denitrification exceeded nitrification rates.Net uptake lengths for NO 3 -N were only correlated with turbidity at Site 2 (r = 0.65, p = 0.04), whereas no other measured physio-chemical property was related to NO 3 -N S NET .Table 6 summarizes V F-NET and U NET values for NO 3 -N across the sampling dates. Net uptake lengths for NH 4 -N displayed less variation than that for SRP or NO 3 -N S NET across the sampling dates, ranging from 5.0 to 14.8 km.When S NET was significant, uptake lengths were long, but positive, suggesting that NH 4 -N was retained, albeit not efficiently, within the White River downstream from the effluent discharge.Net uptake lengths for NH 4 -N were not significantly correlated to any physio-chemical property measured downstream from the effluent discharge during this study.Table 6 summarizes V F-NET and U NET values for NH 4 -N across the sampling dates. Comparison to Other Studies Effluent chemistry often differs greatly from that in receiving aquatic systems [36], and the effluent discharge at the White River near Fayetteville, Arkansas had a significant influence on water chemistry and nutrient transport.Despite the large size of the White River (i.e., 5 th order), this effluent discharge at times made up a substantial portion of flow within the study reach during relatively dry summers.Overall, the influence of the effluent discharge on water chemistry was observable across all flow regimes as defined in this study, but was most profound during low-flow conditions (<2 m 3 •s -1 ).Other studies have shown that effluent discharges influence stream water chemistry when the stream flow is dominated by WWTP inputs [7][8][9]32]. 
Phosphorus generally travels long distances downstream from effluent discharges before significant retention occurs, and this observation is consistent across streams receiving effluent discharge in the Ozark Highlands [6,7,32] and others throughout the USA [12] and the world [8,9]. When significant net retention occurs, S NET distances can reach up to 85 km [12], but most SRP S NET lengths are less than 20 km [6,7,12,32]. The effects of effluent discharges on SRP concentrations and transport likely vary with how much the effluent dominates a receiving stream and how much the effluent changes concentrations in the receiving stream. At the White River, TP concentrations and transport were similar to SRP, because TP was largely in the soluble-reactive form. However, some consistencies occur across systems, from streams that are effluent dominated to larger rivers where effluents are not a major proportion of discharge within the fluvial channel. For example, both the White River (this study) and other effluent-dominated streams [7,12,32] showed net release of SRP from within the study reaches. Haggard et al. [7] suggested that SRP release occurs when effluent P concentrations are relatively low, and the SRP concentration in the receiving stream is less than that associated with the sediment equilibrium P concentrations (EPC0). Ekka et al. [32] showed that sediment EPC0 are strongly influenced by effluent P inputs, and that dramatic changes in EPC0 may occur with changes in effluent P concentrations. It is likely that something similar is happening within the White River downstream of the WWTP input. However, sediment-P interactions might be more complex in the White River because this stream is more turbid relative to other Ozark streams. Thus, dissolved inorganic P (i.e., SRP) transport, retention and release through the White River might be more complex, for a variety of reasons, than that observed in less turbid streams within the Ozark Highlands. The White River was less efficient at NH4-N retention compared to other smaller streams receiving effluent discharge, because NH4-N S NET was 5 km or longer at the White River compared to less than 1.5 km in smaller systems (e.g., Columbia Hollow; Figure 5) [7]. However, the observation that these stream reaches were a sink for NH4-N (i.e., S NET was positive on all sampling dates) was consistent across small to large river systems. It is likely that biological transformation (i.e., nitrification) was the mechanism responsible for NH4-N retention, but suspended and stream-bed sediments can also adsorb NH4-N from the water column. In contrast, Gibson and Meyer [12] showed that NH4-N release occurred within the Chattahoochee River downstream from multiple effluent discharges (Figure 5).
The transport of NO3-N downstream from effluent discharges is complex, because nitrification of reduced N forms within the effluent and the fluvial channel can result in increasing NO3-N concentrations with downstream distance [7,8]. In the White River, NO3-N was significantly retained on half of the sampling dates, while the other dates showed increases in dilution-corrected NO3-N concentrations downstream. The observed NO3-N dynamics in the White River match those observed at many other streams receiving effluent discharge (Figure 5), where net NO3-N release occurs as often as net NO3-N retention [8,12]. The observation that the White River downstream from this effluent discharge does not efficiently retain nutrients, either SRP or NO3-N, is important because the end of this study reach is the headwaters of Beaver Lake. Thus, nutrient inputs from this WWTP essentially travel kilometer-scale distances downstream to the reservoir providing drinking water for northwest Arkansas. The effluent discharge might actually be influencing primary productivity in the headwaters of Beaver Lake because sestonic chlorophyll-a concentrations generally increase with N and P supply [37]. However, the WWTP effluent discharge contributes less than 10% of the annual inputs of TN or TP to Beaver Lake from its watershed [38]. Nutrient transport in streams downstream of effluent discharges often depends on drought conditions [39], and the relative contribution of annual inputs from this WWTP to Beaver Lake will likely be greater during years when annual discharge is lower.
Conclusions
The WWTP discharge into the White River made up a small fraction of the total river discharge, and the immediate dilution of the effluent was apparent from the observed changes in water quality during low river discharge. This effluent discharge had a significant impact on nutrient concentrations, despite its relatively low contribution to river discharge. However, longitudinal patterns in nutrient concentrations downstream from the effluent discharge were not as consistent as reported previously for smaller-order rivers where the effluent made up a relatively larger proportion of river discharge. Nutrient retention coefficients were highly variable, and suggested that NO3-N + NO2-N and SRP were, on average, not retained within the study reach. However, NH4-N was significantly retained within the study reach, on average, when evaluating reach-level inputs and outputs. Since little nutrient retention occurred in the White River downstream from this effluent discharge, the headwaters of Beaver Lake are likely directly influenced by the WWTP evaluated in this study. The WWTP has relatively low nutrient concentrations in its effluent discharge, but its continual discharge of nutrients to the White River has resulted in little retention within the study reach. Thus, any changes to the effluent nutrient concentrations or loading would likely influence the headwaters of Beaver Lake.
Figure 1. Map of the major rivers within the Beaver Lake Watershed in Northwest Arkansas. The Paul R. Noland municipal wastewater treatment plant (WWTP) discharges into the White River and was used as the nutrient input source for this study. The study reach stretches from 2.2 km upstream of the WWTP discharge to 3.9 km downstream of the WWTP discharge.
Figure 2. Map of the study reach with sampling sites and wastewater treatment plant (WWTP) discharge to the White River, northwest AR.
Figure 3. White River discharge throughout a 20-month sampling period from January 2006 to December 2007. Also plotted are the wastewater treatment plant (WWTP) discharge into the White River and the 40-yr average White River discharge. River discharge was quantitatively divided into three flow regimes (Low, Medium, and High); horizontal lines at 2.0 and 6.0 m3 s-1 indicate the thresholds separating the three flow regimes.
Figure 4. Flow regime (i.e., Low, Medium, and High) and site location (i.e., upstream and downstream of the wastewater treatment plant) effects on water quality parameters in the White River, Fayetteville, AR. Different letters above bars for the same parameter indicate differences at the 0.05 level.
Figure 5. Comparison of net nutrient uptake lengths (S NET) for ammonium-nitrogen (NH4-N), nitrate-nitrogen (NO3-N), and soluble reactive phosphorus (SRP) from two previous studies examining wastewater treatment plant (WWTP) effluent-receiving streams to that from the current study. Data are presented for Columbia Hollow (CH), Arkansas [7] and the Chattahoochee River upstream study reach (CHAT-U) and downstream study reach (CHAT-D), Georgia [12]. The standard error about the mean and the number of observations (n) in each of the studies are also reported.
Effluent pH varied between pH 7 in January 2006 and 7.9 in September 2007. Since oxygen solubility is known to be inversely related to water temperature, this observed variation was expected. Effluent TSS concentrations also varied, but not seasonally, ranging from a low of 0.5 mg•L-1 in September 2007 to a high of 8.5 mg•L-1 in January 2007 and averaging 2.7 mg•L-1 across the study period.
Table 6. Summary statistics for mass transfer coefficients (V F-NET) and uptake rates (U NET) for soluble reactive phosphorus (SRP), ammonium-nitrogen (NH4-N), and nitrate-nitrogen (NO3-N) on sampling dates that demonstrated significant net nutrient uptake or release in the study reach of the White River, AR, downstream of the wastewater treatment plant. † The number of sampling dates in which nutrient uptake length (S NET) was significant at p < 0.1.
Peripheral giant cell granuloma Peripheral giant cell granuloma or the so-called “giant cell epulis” is the most common oral giant cell lesion. It normally presents as a soft tissue purplish-red nodule consisting of multinucleated giant cells in a background of mononuclear stromal cells and extravasated red blood cells. This lesion probably does not represent a true neoplasm, but rather may be reactive in nature, believed to be stimulated by local irritation or trauma, but the cause is not certainly known. This article reports a case of peripheral giant cell granuloma arising at the maxillary anterior region in a 22-year-old female patient. The lesion was completely excised to the periosteum level and there is no residual or recurrent swelling or bony defect apparent in the area of biopsy after a follow-up period of 6 months. Introduction Peripheral giant cell granuloma (PGCG) is the most common oral giant cell lesion appearing as a soft tissue extra-osseous purplish-red nodule consisting of multinucleated giant cells in a background of mononuclear stromal cells and extravasated red blood cells. This lesion is probably not present as a true neoplasm, but rather may be reactive in nature. The initiating stimulus has been believed to be due to local irritation or trauma, but the cause is not certainly known. It has been termed a peripheral giant cell "reparative" granuloma, but whether it is in fact reparative has not been established and its osteoclastic activity nature appears doubtful. Its membrane receptors for calcitonin demonstrated by immunohistochemistry and its osteoclastic activity when cultured in vitro are evidences that the lesions are osteoclasts, [1][2][3][4][5] whereas other authors have suggested that the lesion is formed by cells of the mononuclear phagocyte system. [6] The PGCG bears a close microscopic resemblance to the central giant cell granuloma, and some pathologists believe that it may represent a soft tissue counterpart of the central bony lesion. [7] Case Report A 22-year-old female patient reported to the Department of Oral and Maxillofacial Surgery with the complaint of swelling in the left upper jaw since 1 year. History revealed that the swelling started as a small one and progressively increased to the present size over a period of 1 year. It was associated with intermittent pain. There was no history of trauma, neurological deficit, fever, loss of appetite, loss of weight. There was no similar swelling present in any other part of the body. The patient was systemically healthy. On extraoral examination, a single, diffuse swelling was seen on the left side of the face in the region of anterior maxilla. The swelling measured about 2 × 1.5 cm. The surface of the swelling was lobulated and present in relation to 11 21 22. The swelling was firm in consistency and bluish in color, and the overlying mucus membrane was intact [ Figure 1]. Orthopantomogram, intraoral periapical radiographs, and maxillary occlusal radiograph showed no bone resorption. The fine needle aspiration cytology (FNAC) features showed numerous giant cells in a hemorrhagic background. Spindle cells/inflammatory cells were not seen. Surgery (excisional biopsy) was planned under local anesthesia (LA). The overlying mucosa was incised and undermined. Lesion was separated from the adjacent tissue by blunt dissection and removed in one piece [ Figure 2]. Primary closure was done with 3-0 silk suture [ Figure 3]. The specimen was sent for histopathologic examination. 
Sutures were removed after 1 week. There was no evidence of recurrence until 5 months of follow-up [Figure 4].
Histopathology
Histopathologic examination of the biopsied specimen revealed it to be whitish in color, oval in shape, firm in consistency and measuring about 2 × 1 cm in dimension [Figure 5]. The connective tissue stroma was highly cellular, consisting of proliferating plump fibroblasts. Numerous giant cells of various shapes and sizes, containing 8-15 nuclei, were seen with proliferating and dilated endothelial-lined blood capillaries with extravasated red blood cells (RBCs). Few giant cells were also seen inside the vascular spaces. Numerous ossifications were also seen in the stroma [Figure 6].
Discussion
The etiology and nature of PGCG (giant cell epulides) still remains undecided. In the past, several hypotheses had been proposed to explain the nature of the multinucleated giant cells, including the explanation that they were osteoclasts left from physiological resorption of teeth or a reaction to injury to the periosteum. There is strong evidence that these cells are osteoclasts, as they have been shown to possess receptors for calcitonin and were able to excavate bone in vitro. The PGCG occurs throughout life, with peaks in incidence during the mixed dentition years [8] and in the age group of 30-40 years. [7,9] It is more common among females (60%). [7,9] The mandible is affected slightly more often than the maxilla. [7,9] Lesions can become large, some attaining 2 cm in size. The clinical appearance is similar to that of the more common pyogenic granuloma, although the PGCG often is more bluish-purple compared with the bright red color of a typical pyogenic granuloma. Recently, the PGCG associated with dental implants has also been reported. [10] Although the PGCG develops within soft tissue, "cupping" superficial resorption of the underlying alveolar bony crest is sometimes seen. At times, it may be difficult to determine whether the mass is a peripheral lesion or a central giant cell granuloma eroding through the cortical plate into the gingival soft tissues. [11,12,13] The extra-osseous lesions of cherubism involving the gingiva appear very similar to giant cell epulides. However, the other distinctive clinical and radiographic features of cherubism will indicate the correct diagnosis. [14] Histologically, PGCG is composed of nodules of multinucleated giant cells in a background of plump ovoid and spindle-shaped mesenchymal cells and extravasated RBCs. The giant cells may contain only a few nuclei or up to several dozen of them. Some have large, vesicular nuclei; others demonstrate small, pyknotic nuclei. The origin of the giant cell is unknown. Ultrastructural and immunological studies [2][3][4][5][6] have shown that the giant cells are derived from osteoclasts. [15] There is also a growing body of opinion that giant cells may simply represent a reactionary component of the lesion, are derived via the blood stream from bone marrow mononuclear cells, and may be present only in response to an as yet unknown stimulus from the stroma. This concept is based on the results of some more recent studies using cell culture and transplantation, [16,17] in which the giant cells have been found to be short lived and to disappear early in culture, in contrast to the active proliferation of the stromal cells. A study by Willing et al. [18] revealed that the stromal cells secrete a variety of cytokines and differentiation factors, including monocyte chemoattractant protein-1 (MCP-1), osteoclast differentiation factor (ODF), and macrophage-colony stimulating factor (M-CSF). These molecules are monocyte chemoattractants and are essential for osteoclast differentiation, suggesting that the stromal cells stimulate blood monocyte immigration into the tumor tissue and enhance their fusion into osteoclast-like, multinucleated giant cells. Furthermore, the recently identified membrane-bound protein family, a disintegrin and metalloprotease (ADAM), is considered to play a role in the multinucleation of osteoclasts and macrophage-derived giant cells from mononuclear precursor cells. [19] In the most recent study by Bo Liu et al., [5] in situ hybridization was carried out to detect the mRNA expression of the newly identified receptor activator of nuclear factor (NF)-kappaB ligand (RANKL), which is shown to be essential in osteoclastogenesis, its receptor, receptor activator of NF-kappaB (RANK), and its decoy receptor, osteoprotegerin (OPG). They concluded that RANKL, OPG and RANK expressed in these lesions may play important roles in the formation of multinucleated giant cells.
A rapid and simple quantitative method for specific detection of smaller coterminal RNA by PCR (DeSCo-PCR): application to the detection of viral subgenomic RNAs RNAs that are 5′-truncated versions of a longer RNA but share the same 3′ terminus can be generated by alternative promoters in transcription of cellular mRNAs or by replicating RNA viruses. These truncated RNAs cannot be distinguished from the longer RNA by a simple two-primer RT-PCR because primers that anneal to the cDNA from the smaller RNA also anneal to—and amplify—the longer RNA-derived cDNA. Thus, laborious methods, such as northern blot hybridization, are used to distinguish shorter from longer RNAs. For rapid, low-cost, and specific detection of these truncated RNAs, we report detection of smaller coterminal RNA by PCR (DeSCo-PCR). DeSCo-PCR uses a nonextendable blocking primer (BP), which outcompetes a forward primer (FP) for annealing to longer RNA-derived cDNA, while FP outcompetes BP for annealing to shorter RNA-derived cDNA. In the presence of BP, FP, and the reverse primer, only cDNA from the shorter RNA is amplified in a single-tube reaction containing both RNAs. Many positive strand RNA viruses generate 5′-truncated forms of the genomic RNA (gRNA) called subgenomic RNAs (sgRNA), which play key roles in viral gene expression and pathogenicity. We demonstrate that DeSCo-PCR is easily optimized to selectively detect relative quantities of sgRNAs of red clover necrotic mosaic virus from plants and Zika virus from human cells, each infected with viral strains that generate different amounts of sgRNA. This technique should be readily adaptable to other sgRNA-producing viruses, and for quantitative detection of any truncated or alternatively spliced RNA. INTRODUCTION Many positive sense RNA viruses generate 3 ′ coterminal subgenomic RNAs (sgRNAs) in infected cells. These include many pathogens such as human norovirus, chikungunya, Zika, and dengue viruses, and important plant pathogens such as barley yellow dwarf (BYDV) and maize chlorotic mottle viruses. Most viral sgRNAs, including those of the above viruses, are simply 5 ′ -truncated versions of the viral genome, usually being less than half the length of the full-length genomic RNA Sztuba-Solinśka et al. 2011). sgRNAs can serve as mRNAs for translation of open reading frames (ORFs) located downstream from the 5 ′ -proximal ORF(s) that are translated from genomic RNA (Sztuba-Solinśka et al. 2011). More recently, sgRNAs have been found that are derived from the 3 ′ untranslated region (UTR) of the viral genome, and thus function as noncoding sgRNAs (ncsgRNAs) (Iwakawa et al. 2008;Pijlman et al. 2008;Peltier et al. 2012). For plant viruses in the Tombusviridae, Luteoviridae, Solemoviridae, Bromoviridae, Virgaviridae, Benyviridae families, and the order Tymovirales, and animal viruses in the Togaviridae (e.g., chikungunya virus), Caliciviridae (e.g., human norovirus), Astroviridae (human astrovirus) families, ORFs encoding the RNA-dependent RNA polymerase and associated replicase proteins, located in the 5 ′ half of the genome, are translated from the viral genomic RNA (gRNA). However, for translation of 5 ′ distal ORFs that encode proteins required at middle or late stages of infection, such as structural proteins, one or more sgRNAs are generated (Monroe et al. 1993;Koev and Miller 2000;Miller and Koev 2000;Sztuba-Solinśka et al. 2011;Royall and Locker 2016;Contigiani and Diaz 2017). 
For example, the nonstructural polyprotein ORF (including the replicase) of members of Togaviridae is translated from gRNA, while the polyprotein ORF encoding structural proteins is translated from a sgRNA that is 3 ′ coterminal with the gRNA (Strauss and Strauss 1994). Certain viruses in the Luteoviridae (Shen and Miller 2004;Shen et al. 2006;Miller et al. 2015), Tombusviridae (Scheets 2000;Iwakawa et al. 2008) and Benyviridae (Peltier et al. 2012;Flobinus et al. 2016Flobinus et al. , 2018 families, and all viruses in the Flavivirus genus (Pijlman et al. 2008;Roby et al. 2014) generate ncsgRNAs from the 3 ′ UTR that play an important role in regulating virus gene expression, virus movement and transmission, with major effects on pathogenicity and symptom development. However, their mechanisms of action are only just beginning to be understood. For example, (i) BYDV sgRNA2 regulates translation of gRNA and sgRNA1 (Shen et al. 2006;Miller et al. 2015), (ii) beet necrotic yellow vein virus sgRNA3 is required for long-distance movement in plants (Peltier et al. 2012), and (iii) subgenomic flavivirus RNAs (sfRNA) interfere with the innate immune systems of mammalian and insect hosts (Schnettler et al. 2012;Roby et al. 2014;Manokaran et al. 2015;Donald et al. 2016;Miller et al. 2016;Finol and Ooi 2019). In this study, we detected sgRNAs of red clover necrotic mosaic virus (RCNMV) and Zika virus (ZIKV). RCNMV (Family: Tombusviridae, Genus: Dianthovirus, Fig. 1A) is a bipartite plant virus with positive-sense single-stranded gRNA1 and gRNA2 (Gould et al. 1981;Hiruki 1987). During infection, a coding sgRNA generated from the 3 ′ end of gRNA1 serves as the mRNA for viral coat protein translation (Sit et al. 1998). RCNMV also generates a ncsgRNA, SR1f, as a stable degradation product formed by incomplete degradation of gRNA and coat protein sgRNA by a plant 5 ′ to 3 ′ exonuclease (Iwakawa et al. 2008;Steckelberg et al. 2018). SR1f is not required for infection of the highly susceptible host plant, Nicotiana benthamiana, as an RCNMV mutant that is unable to generate SR1f accumulates substantial levels of the viral genomic RNAs and the coat protein sgRNA (Iwakawa et al. 2008). However, this mutant is unable to accumulate substantially in Arabidopsis thaliana. ZIKV (Family: Flaviviridae; Genus: Flavivirus; Fig. 1B) usually causes an acute, mild febrile illness, but in the 2015 South and Central American epidemic was found to cause neurological disorders such as microcephaly in infants born to infected mothers and Guillain-Barre syndrome in adults Ferraris et al. 2019). One of the molecular determinants of pathogenicity of ZIKV and other flaviviruses is the sfRNA, which, like SR1f, is an incomplete degradation product of gRNA by a host 5 ′ to 3 ′ exonuclease (Pijlman et al. 2008;Silva et al. 2010). RCNMV SR1f and the sfRNAs of ZIKV and other flaviviruses are not required for viral replication but increase virus titer and disease severity (Iwakawa et al. 2008;Pijlman et al. 2008;Moon et al. 2012Moon et al. , 2015Schnettler et al. 2012;Schuessler et al. 2012;Roby et al. 2014;Akiyama et al. 2016;Göertz et al. 2016;Lee et al. 2019). For example, dengue virus disease severity appears to correlate positively with sfRNA level in infected cells. Screening viral mutants that vary in level of sgRNA accumulation is crucial to the understanding of the role of these sgRNAs in viral infection. 
In order to (i) decipher the role of ncsgRNA, (ii) identify cis-or trans-acting RNA elements in a sgRNA, (iii) understand the function of proteins encoded by sgRNAs, (iv) identify promoters required for sgRNA synthesis, (v) undertake field surveys for viral strains with particularly severe symptoms controlled by sgRNA levels, etc., rapid detection of sgRNA and measurement of expression is important. While gRNA can be measured by a simple two-primer based RT-PCR with PCR primers that can hybridize to any region across the gRNA, detection of sgRNAs as distinct from gRNA currently requires more cost-and time-intensive methods, usually northern blot hybridization (Kessler et al. 1990;Amiss and Presnell 2005). In addition, northern blot hybridization is less sensitive compared to RT-PCR and requires several micrograms of total RNA as input. Indirect ways of estimating sgRNA levels include quantitative RT-PCR (qRT-PCR) in which abundance of gRNA, as calculated by gRNA-specific qRT-PCR, is subtracted from total abundance of gRNA and sgRNA, as calculated by qRT-PCR using primers that anneal to their coterminal region , or deep sequencing (e.g., Illumina) of total RNA in an infected cell and simply comparing the number of reads that map to the sgRNA region vs the upstream gRNA. However, this too is expensive, time-consuming and requires much bioinformatics analysis post-sequencing. Also, Illumina read counts can vary significantly across a viral genome in the absence of sgRNA (Xu et al. 2019). To overcome the difficulties and costs associated with the above methods, an RT-PCR approach would be preferable. However, as mentioned above, a simple two-primer based RT-PCR cannot distinguish sgRNA-derived cDNA (sgRNA cDNA) from gRNA-derived cDNA (gRNA cDNA). For an RT-PCR reaction with coterminal RNAs, any primer-pair designed to amplify the sgRNA cDNA will also anneal to the gRNA cDNA, owing to their coterminal ends, resulting in amplification from both, making RT-PCR futile for specific detection of sgRNA. To prevent amplification from gRNA cDNA and enable selective amplification from sgRNA cDNA, we have developed a three-primer based RT-PCR approach, which we name DeSCo-PCR (detection of smaller coterminal RNA by PCR). This method is easy to optimize, relatively simple, quick and inexpensive for specific detection of sgRNAs. Overview of the DeSCo-PCR method DeSCo-PCR utilizes a nonextendable blocking primer (BP) with two amplification primers to prevent amplification of gRNA under conditions that permit amplification of the sgRNA (Fig. 2). Firstly, cDNA to be used as template for DeSCo-PCR is prepared from total RNA, using a virus sequence-specific reverse primer ( Fig. 2A). DeSCo-PCR uses three primers ( Fig. 2B): (i) a reverse primer (RP) that anneals to gRNA cDNA and sgRNA cDNA at their coterminal 5 ′ end (complementary to the 3 ′ -coterminal ends of the viral RNAs), (ii) a forward primer (FP), containing the sequence of the 5 ′ end of the sgRNA, that can anneal to both gRNA cDNA and sgRNA cDNA, and (iii) a long (∼50-nt) forward nonextendable BP containing a contiguous gRNA sequence upstream and downstream from the sgRNA 5 ′ end followed by a tract of nonviral bases at its 3 ′ end, which makes it nonextendable by the polymerase (explained in detail below). Under PCR conditions, BP out-competes FP for annealing to gRNA cDNA because it has more bases that can anneal to gRNA cDNA. However, BP is nonextendable and hence, amplification cannot occur from gRNA cDNA. 
For annealing to sgRNA cDNA, FP outcompetes BP because FP has more bases that can anneal to sgRNA cDNA, resulting in amplification of sgRNA cDNA. Thus, in the presence of all three primers, only sgRNA cDNA is amplified but not the gRNA cDNA (Fig. 2C).
Blocking primer design for DeSCo-PCR
Blocking primer (BP) is a DeSCo-PCR-specific primer that is 50-60 nt long and has three regions (Fig. 2B; dashed box): (i) competitive region (CR), the first ∼40 nt of the primer that can anneal only to gRNA sequence (just upstream of the 5′ end of sgRNA sequence) but not to sgRNA sequence; (ii) blocking region (BR), the ∼10-nt middle region of the primer that can anneal to both gRNA and sgRNA sequences at the 5′ end of the sgRNA (the entire sequence of BR is present in the FP); (iii) nonextendable region (NER), the 3′-terminal ∼6 nt of the primer with any nontemplate bases that ensure that the 3′ end of the primer cannot anneal to either the gRNA or sgRNA sequence. Because the 3′ end of the primer cannot anneal, the polymerase cannot extend and hence cannot amplify from the template. FP and the CR-BR sequences of BP can anneal to gRNA sequence. The melting temperature (Tm) of CR-BR should be significantly higher than that of FP so that BP will out-compete FP for annealing to gRNA sequence during PCR. FP and BR can anneal to sgRNA sequence. The Tm of FP should be higher than that of BR so that FP will out-compete BP for annealing to sgRNA sequence. The NER should not be included for any Tm calculations. It is preferable to calculate Tm according to the buffer conditions of the PCR reaction. For example, if Promega GoTaq master mix is used, Tm should be calculated using the "Tm for Oligos" tool on its website (https://www.promega.com/resources/tools/biomath/) with the appropriate master mix specified.
General guidelines for optimizing DeSCo-PCR
For optimizing DeSCo-PCR conditions, either in vitro transcribed gRNA and sgRNA can be reverse transcribed and the resulting first-strand cDNA product can be used as template for PCR, or one can use DNA templates with (i) the sequence of the sgRNA and (ii) at least partial gRNA sequence that includes the sgRNA sequence and ∼100 nt upstream of the sgRNA. All DeSCo-PCR reactions should be conducted with a low ramp rate for the annealing step of PCR. The main determinant of PCR parameters is the template concentration. Therefore, the in vitro transcribed (IVT) viral RNA concentration or the dilution of cDNA that gives band intensity by RT-PCR similar to that from infected tissues should be determined to serve as a positive control. Next, a gradient PCR with ∼25 cycles should be performed with FP plus RP to determine the maximum annealing temperature (Tm) that results in amplification from gRNA cDNA (or sgRNA cDNA). At this Tm, DeSCo-PCR should be carried out with an increasing molar ratio of BP to FP to determine the ratio at which there is amplification predominantly from sgRNA cDNA but not (or only faintly) from gRNA cDNA. A positive control with FP plus RP, and a negative control with BP plus RP, should be used with both gRNA cDNA and sgRNA cDNA templates to ensure that any lack of amplification is not because of a failed PCR reaction and that any successful amplification is not from BP, respectively. Next, the Tm can be finely tuned if required, with the selected BP:FP ratio at which sgRNA cDNA is amplified but amplification from gRNA cDNA is completely blocked. Finally, DeSCo-PCR should be conducted with twofold dilutions of sgRNA to determine the lower level of detection of sgRNA, and the Tm can be further fine-tuned accordingly.
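The Tm design rules described above (Tm of the BP's CR+BR region well above that of the FP; Tm of the FP above that of the BR alone) can be checked computationally before ordering oligos. The sketch below uses Biopython's nearest-neighbor Tm estimate as a stand-in for the Promega tool named in the text, and the primer sequences are hypothetical placeholders, not the published RCNMV or ZIKV primers.

```python
# Quick computational check of the DeSCo-PCR Tm design rules, using Biopython's
# nearest-neighbor Tm estimate (a stand-in for the Promega "Tm for Oligos" tool).
# All sequences below are hypothetical placeholders, not the study's primers.
from Bio.SeqUtils import MeltingTemp as mt

fp = "GGTAACGTTAGGCATCTCAGT"                     # forward primer (= 5' end of sgRNA)
cr = "ACCTGATTGCACGGTTAACCTGTCAGGATCCATTGACGTA"  # BP competitive region (~40 nt, gRNA only)
br = "GGTAACGTTA"                                # BP blocking region (~10 nt, = 5' of FP)
ner = "TTTTTT"                                   # nontemplate tail; never included in Tm

def tm(seq: str) -> float:
    # Salt and primer concentrations should be adjusted to match the master mix used.
    return mt.Tm_NN(seq, Na=50, dnac1=250, dnac2=0)

tm_fp, tm_cr_br, tm_br = tm(fp), tm(cr + br), tm(br)
print(f"Tm(FP)    = {tm_fp:5.1f} C")
print(f"Tm(CR+BR) = {tm_cr_br:5.1f} C  (should be well above Tm of FP)")
print(f"Tm(BR)    = {tm_br:5.1f} C  (should be below Tm of FP)")

# Design rule: BP (CR+BR) must out-compete FP on gRNA cDNA, while FP must
# out-compete the BR alone on sgRNA cDNA.
assert tm_cr_br > tm_fp > tm_br, "adjust region lengths/GC content"

blocking_primer = cr + br + ner  # full 5'->3' blocking primer sequence
```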
To use DeSCo-PCR as a quantitative assay for measuring the relative expression of sgRNA, twofold serial dilutions of sgRNA cDNA can be used as templates for simple PCR and DeSCo-PCR with varying numbers of PCR cycles, to determine the optimal number of cycles at which DeSCo-PCR reflects the expected sgRNA cDNA dilution.
FIGURE 2. Schematic diagram of DeSCo-PCR. (A) First-strand cDNA synthesis (red line) using template-specific reverse primer (RP) annealed to viral positive-strand RNA (bold black line). (B) Primer schematics indicating annealing of BP mostly upstream but extending downstream from the 5′ end of sgRNA sequence, and annealing of FP to a longer tract starting exactly at the 5′ end of sgRNA sequence. This allows BP to win the annealing competition for gRNA and FP to win the annealing competition for the 5′ end of sgRNA. The dashed box shows the sequences of BP and FP primers and the partial cDNA sequences of RCNMV RNA1 and SR1f to which the primers anneal. (C) Primer competition at the annealing step and subsequent extension step of DeSCo-PCR. Vertical lines represent base-pairing between the primers and the cDNA template. Circled X indicates a primer that does not anneal in the presence of the competing primer. (gRNA) genomic RNA, (sgRNA) subgenomic RNA, (FP) forward primer, (BP) blocking primer, (CR) competitive region (blue letters), (BR) blocking region (red letters), (NER) nonextendable region (green letters).
Proof-of-concept using in vitro transcribed (IVT) gRNA and sgRNA
To test the concept of DeSCo-PCR, 0.5 pmol each of in vitro transcribed (IVT) RCNMV RNA1, RCNMV SR1f, ZIKV gRNA-mimic1 and ZIKV sfRNA1 were reverse transcribed using either RCNMV reverse primer (RRP) for the RCNMV RNAs or ZIKV reverse primer (ZRP) for the ZIKV RNAs. ZIKV gRNA-mimic1 (Fig. 1B) is a 5′-truncated version of the genomic RNA consisting of the 3′-terminal 1009 nt, to serve as a convenient stand-in for full-length ZIKV RNA for initial RT-PCR experiments. cDNA reaction products were diluted fivefold and 2 µL of these diluted cDNA reaction products were used as template for PCR. Simple PCR with RRP plus RCNMV forward primer (RFP) as a positive control amplified both the cDNA from RNA1 (RNA1 cDNA) and cDNA from SR1f (SR1f cDNA), demonstrating successful amplification under PCR conditions (Fig. 3A, lanes 1,4). PCR with RRP plus RCNMV blocking primer (RBP) did not amplify from either RNA1 cDNA or SR1f cDNA, demonstrating that RBP is nonextendable under these PCR conditions. It is noteworthy that an unexpected, very low molecular weight band appeared in the PCR reactions containing BP (Supplemental Fig. S1). To determine whether it is BP-derived primer-dimer, or if it is a nonspecific amplification product, we conducted PCR with BP plus RP, and FP plus BP plus RP, using sfRNA1 cDNA or water as template. The low molecular weight product appeared, even in the absence of a template, indicating that it is a BP-derived "primer-dimer" (Supplemental Fig. S2, lanes 2,3,5,6). In spite of the presence of primer-dimer, detection of sgRNA and measurement of its relative abundance (below) was not affected. Additionally, there is a small but reproducible increase in mobility of the DeSCo-PCR product compared to the FP-RP PCR product, even though both products result from amplification by the FP-RP primer pair (Fig. 3). We found that this difference was due to the presence of the abundant primer-dimer formed only in DeSCo-PCR.
We showed this by conducting PCR using ZIKV sfRNA1 cDNA as template with FP plus RP (which yields only the band of interest) and PCR with BP plus RP (which yields only the primer-dimer), mixing these PCR products, and then loading the mixture in a single well for agarose gel electrophoresis. Mobility of the band of interest from the FP-RP PCR, in the presence of the BP + RP primer-dimer, was identical to that from DeSCo-PCR (Supplemental Fig. S2, lanes 1-4). The reason for the slight mobility change due to the primer-dimer is unclear, but it does not affect the utility of DeSCo-PCR.
Quantitative analysis for measuring relative amounts of sgRNA by DeSCo-PCR
To test if DeSCo-PCR can be used as a quantitative assay for measuring relative amounts of sgRNA, we first tested whether PCR of sgRNA-derived cDNA (in the absence of full-length viral cDNA) was quantitative in the presence of the three primers. In vitro transcribed RCNMV SR1f and ZIKV sfRNA1 were reverse transcribed using RRP and ZRP, respectively, and twofold serial dilutions of the resulting cDNA were used as template for PCR. Relative amounts of sgRNA-derived cDNA in each sample were estimated by measuring the relative intensity of each band with respect to that of the undiluted sample. DeSCo-PCR with RRP plus RFP plus RBP showed a reduction in band intensity with SR1f cDNA dilution (Fig. 4A). Furthermore, relative band intensity, as measured by DeSCo-PCR, precisely reflected the expected SR1f cDNA dilution (Fig. 4B). We next tested whether DeSCo-PCR can be used as a quantitative assay in the presence of plant total RNA and RCNMV RNA1. Twofold dilutions of IVT SR1f were mixed with a constant amount of N. benthamiana total RNA and IVT RCNMV RNA1 (hence, gRNA and sgRNA are in different ratios). Five hundred nanograms of N. benthamiana total RNA was mixed with 0.1 pmol IVT RNA1 and twofold serial dilutions of IVT SR1f starting with an undiluted amount of 0.1 pmol (Fig. 4C). Subsequently, the RNA mixes were reverse transcribed with RRP followed by PCR. RNA1-specific PCR with RNA1-specific forward primer (RCNMV_909_FP) plus RNA1-specific reverse primer (RCNMV_1262_RP), both far upstream of the sgRNA region of the genome, showed that the band intensity across all samples was uniform, as expected (Fig. 4D, Gel 1). DeSCo-PCR with RRP plus RFP plus RBP, which amplifies only SR1f, showed a reduction in band intensity with SR1f dilution (Fig. 4D, Gel 2). Relative band intensities were used as a proxy for measuring the relative amounts of RNA1 or SR1f. The relative amount of RNA1 was mostly uniform across all samples as measured by RNA1-specific PCR, as expected (Fig. 4E). Relative amounts of SR1f (blank subtracted) from DeSCo-PCR reflected the expected SR1f dilutions (Fig. 4E). Calculation of relative band intensity from blank-subtracted values shows that either a very small amount of amplification occurs from RNA1 or this signal is simply background fluorescence. If relative intensities were calculated using values with the no-SR1f sample subtracted, the estimation of relative amounts of SR1f became even more accurate (Fig. 4E). We also tested whether the detection of ZIKV sfRNA1 by DeSCo-PCR was quantitative. As for RCNMV, DeSCo-PCR of dilutions of ZIKV sfRNA1-derived cDNA with ZRP plus ZFP plus ZBP showed a reduction in band intensity proportional to the cDNA dilution (Fig. 5A,B). These results show that DeSCo-PCR can precisely measure relative amounts of sgRNA cDNA.
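As a rough illustration of how the relative band intensities above are derived (and of the background-subtraction and normalization procedure described later in the Methods), the sketch below applies blank subtraction, normalization to the undiluted lane, and averaging over replicate PCRs; all intensity values are hypothetical, and numpy is used only as a convenient stand-in for the Fiji/ImageJ measurements.

```python
# Minimal sketch of the relative band-intensity calculation: background (either
# gel "blank" regions or the no-sgRNA lane) is subtracted, each lane is normalized
# to the undiluted sample, and replicate PCRs are averaged.
# Intensity values below are hypothetical, not measurements from this study.
import numpy as np

# rows = replicate PCR/gel measurements, columns = twofold dilution series lanes
# (undiluted, 1/2, 1/4, 1/8, 1/16), as quantified e.g. in Fiji/ImageJ.
raw = np.array([
    [1050.0, 560.0, 300.0, 170.0, 95.0],
    [1010.0, 530.0, 290.0, 160.0, 90.0],
    [1080.0, 575.0, 310.0, 175.0, 100.0],
])
blank = np.array([30.0, 28.0, 32.0])   # mean background intensity per gel

corrected = raw - blank[:, None]               # background subtraction per gel
relative = corrected / corrected[:, [0]]       # normalize to the undiluted lane
mean_rel = relative.mean(axis=0)
sd_rel = relative.std(axis=0, ddof=1)          # spread across replicate PCRs

expected = 1.0 / 2 ** np.arange(raw.shape[1])  # 1, 0.5, 0.25, ...
for e, m, s in zip(expected, mean_rel, sd_rel):
    print(f"expected {e:5.3f}  observed {m:5.3f} +/- {s:.3f}")
```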
To test if DeSCo-PCR can be used as a quantitative assay in the presence of ZIKV gRNA, twofold dilutions of IVT sfRNA1, starting at 0.1 pmol, were mixed with constant levels (0.1 pmol) of IVT gRNA-mimic1 (Fig. 5C). This RNA mix was reverse transcribed with ZRP followed by PCR. gRNA-mimic1-specific PCR with ZIKV gRNA-specific forward primer (ZIKV_ 9827_FP) plus ZIKV gRNA-specific reverse primer (ZIKV_10115_RP) showed that the band intensity across all samples was uniform, as we observed with RCNMV (Fig. 5D, Gel 1). DeSCo-PCR with ZRP plus ZFP plus ZBP that amplifies only sfRNA1 showed reduction in band intensity with sfRNA1 dilution (Fig. 5D, Gel 2). The relative amount of gRNA-mimic1 was uniform across all samples as measured by gRNA-mimic1-specific PCR, as expected (Fig. 5E). The relative amount of sfRNA1 (blank subtracted) from DeSCo-PCR reflected the expected sfRNA1 dilutions (Fig. 5E). If relative intensities were calculated using no-sfRNA1 sample subtracted values, the estimation of relative amounts of sfRNA1 became even more accurate. Collectively, these experiments show that DeSCo-PCR can quantitatively detect sgRNAs, even in the presence of gRNA, and allow calculation of relative differences in sgRNA/gRNA ratio. Specific detection of sgRNA in virus-infected tissues We next tested whether DeSCo-PCR could distinguish viral genomic from subgenomic RNA in infected tissues. We first tested RCNMV (R) in the plant host N. benthamiana, taking advantage of a viral mutant (RΔSR1f) we constructed, which contains a six-base substitution in its xrRNA structure at the 5 ′ end of the SR1f sequence, preventing it from generating the noncoding subgenomic SR1f RNA (Iwakawa et al. 2008). Northern blot hybridizations with a probe complementary to the 3 ′ end of RCNMV RNA1 revealed ample amounts of SR1f from N. benthamiana plants infected with wild-type RCNMV, and no (or vanishingly small amounts of) SR1f in plants infected with RCNMVΔSR1f, while both sets of plants accumulated substantial amounts of RCNMV genomic RNA1 and CPsgRNA (Fig. 6A). cDNA was prepared from 1 µg of total RNA from RCNMV-infected and RCNMVΔSR1f-infected N. benthamiana leaves using RRP followed by PCR. Because RCNMVΔSR1f has a six-base substitution at the 5 ′ end of the SR1f sequence, forward and BPs incorporating this substitution, RFP-m1 and RBP-m1, respectively, were used for PCR with cDNA from RCNMVΔSR1f-infected samples. PCR with RRP plus RFP, and RRP plus RFP-m1 resulted in amplification from both RCNMV-infected and RCNMVΔSR1f-infected cDNA samples, respectively, confirming successful virus infection (Fig. 6B, L1 and L2). PCR with RRP plus RBP, and RRP plus RBP-m1 primers did not result in amplification showing that the RBP and RBP-m1 are nonextendable under PCR conditions (Fig. 6B, L3 and L4). DeSCo-PCR with RRP plus RFP plus RBP amplified only from RCNMV-infected cDNA samples (Fig. 6B, L5) while DeSCo-PCR with RRP plus RFP-m1 plus RBP-m1 resulted in no amplification from RCNMVΔSR1f-infected cDNA samples (Fig. 6B, L6) demonstrating that SR1f is detected only in wild-type RCNMV-infected plants and not in RCNMVΔSR1f-infected plants. We next tested ZIKV RNA accumulation in HeLa cells, taking advantage of a mutant, 10ΔZIKV (deletion of nts 10,650 to 10,659 in the 3 ′ UTR) that produces a lower ratio of sfRNA1/ gRNA than wild-type ZIKV (Shan et al. 2017b). Northern blot hybridization with a 3 ′ probe complementary to ZIKV RNA revealed much greater levels of sfRNA1 in cells infected with wild-type virus than with the mutant. 
In this case, the genomic RNA levels were also reduced in 10ΔZIKV infection, but the sfRNA1 was virtually undetectable by northern blot hybridization in 10ΔZIKV-infected cells (Fig. 6C). cDNA was prepared from 1 µg total RNA from mock-infected, wild-type ZIKV-infected and 10ΔZIKV-infected HeLa cells using ZRP. PCR of the resulting cDNA template with ZRP plus ZFP primers amplified both ZIKV-infected and 10ΔZIKV-infected cDNA samples, but not from mock-infected cDNA samples, as expected (Fig. 6D, lanes 1-3). There was no amplification using ZRP plus ZBP primer pairs, confirming that the ZBP is nonextendable under the PCR conditions (Fig. 6D, lanes 4,5). DeSCo-PCR with ZRP plus ZFP plus ZBP primers yielded a product from cDNA from cells infected with wild-type ZIKV (Fig. 6D, lane 6), but only a very faint band from 10ΔZIKV-infected cells (Fig. 6D, lane 7), reflecting the ratios of sfRNA1/ gRNA observed by northern blot hybridization and published previously (Shan et al. 2017b). Collectively, these experiments demonstrate that DeSCo-PCR can be used for specific, quantitative detection of sgRNAs from hosts in different kingdoms infected by unrelated viruses. DISCUSSION DeSCo-PCR is a simple, quick, inexpensive and sensitive assay that can selectively amplify a viral sgRNA from a pool of RNA containing host total RNA, viral gRNA and other sgRNAs. Even though northern blot hybridization has certain advantages (e.g., the entire sequence of sgRNA need not be known and it can detect gRNA and multiple sgRNAs simultaneously), DeSCo-PCR can easily detect sgRNAs in a variety of experimental settings to rapidly screen for sgRNA production. Similar to northern blot hybridization, DeSCo-PCR can be used for measuring relative abundance of sgRNAs in different experimental conditions such as those from mutant viral genomes or in transgenic hosts. While it does not measure absolute amounts of RNA, DeSCo-PCR quantitatively measures relative amounts of sgRNA and can detect differences in ratios of sgRNA:gRNA between different virus isolates. The advantages of DeSCo-PCR are particularly beneficial for experiments where several viral mutants or isolates need to be screened rapidly to identify the relative amount of a particular sgRNA each viral isolate produces. For examples, 10ΔZIKV produces sfRNA1 at a very low sfRNA1/genomic RNA ratio and has reduced accumulation and attenuated pathogenicity compared to wildtype ZIKV. This makes 10ΔZIKV a vaccine candidate against ZIKV infection (Shan et al. 2017a,b). In addition, DeSCo-PCR can be used by clinics or laboratories that do not have access to radioisotopes, expensive nonradioactive chemiluminescent northern blot reagents or an imager required for detection of fluorescent probes used in northern blots. For viruses that require replication to generate sgRNAs, DeSCo-PCR could be used as a quick or confirmatory assay to determine whether a virus is replicating, without need for measuring increases in total RNA or infectious units over time. DeSCo-PCR is not limited to virology. It can be used for detecting smaller coterminal RNAs of any origin. Coterminal RNAs are present in eukaryotes as truncated RNA isoforms transcribed by alternative transcription start sites (TSS) or may be produced as alternatively spliced RNA isoforms. 
These truncated mRNA isoforms may differ in their 5 ′ UTR, affecting their stability and translation efficiency or differ in their encoded protein domains, affecting their localization, function and protein stability (Rojas-Duran and Gilbert 2012; Wang et al. 2016;Galipon et al. 2017). For example in humans, adenosine deaminases acting on RNA (ADARs) are involved in RNA editing, and the ADAR1 gene produces two coterminal mRNA isoforms, ADAR1-p150 and ADAR1-p110 from an interferon-inducible promoter and a constitutive promoter, respectively (Galipon et al. 2017). Additionally, next-generation sequencing and computational analysis are often used to identify, predict functions and determine differential expression of these transcript isoforms with a certain degree of confidence (Kandoi andDickerson 2017, 2019;Qin et al. 2018). However, these analyses are often followed by molecular assays for validation and DeSCo-PCR provides a simple alternative to northern blot hybridization for confirming the production of a truncated RNA isoform with coterminal ends and measuring their relative abundance. Because DeSCo-PCR involves competition between blocking and forward primers for selective annealing to gRNA or sgRNA cDNA, it may be possible to design primers that tolerate a few mismatched bases at the 5 ′ end of sgRNA in cases where the exact 5 ′ end nucleotide of the sgRNA has not been determined precisely. Also, it may be possible to use a BP terminating in a dideoxynucleotide (Sanger et al. 1977) to make it universally nonextendable, instead of the mismatched 3 ′ terminal sequence on our BPs. This may eliminate the production of BP-derived primer-dimer and therefore, make DeSCo-PCR adaptable to qRT-PCR. In addition, a BP with a few locked nucleic acid (LNA) (Koshkin et al. 1998;Ballantyne et al. 2008;Veedu et al. 2008) nucleotides in the BP-CR region would increase its binding affinity to gRNA cDNA, helping BP to out-compete FP for annealing to gRNA cDNA at lower annealing temperatures, which would be more optimal for amplification. However, use of dideoxynucleotides or LNAs would increase primer costs many-fold. Although presently DeSCo-PCR cannot be used for absolute quantification of the number of copies of a sgRNA by qRT-PCR because of amplification of primer-dimer (Supplemental Fig. S1), it can reliably be used to quantitatively compare the relative abundance of sgRNAs of different virus strains or mutants in a highly sensitive manner. Similar to northern blot hybridization, DeSCo-PCR requires some optimization with every virus, but this can be done in a short time (2 or 3 d) (Table 1). In summary, DeSCo-PCR provides a simple, readily optimized, cost-effective method for rapid, sensitive quantification of viral subgenomic RNAs in only a limited amount of total RNA and without the use of expensive and hazardous chemicals. Oligonucleotide synthesis All primers were synthesized by Integrated DNA Technologies and purified by standard desalting. Sequences and genomic positions of primers that were used for construction of pRC169c, pRSR1f, pR1m1, ZIKV gRNA-mimic1 PCR product, and ZIKV sfRNA1 PCR product are listed in Supplemental Table S1. Sequences and genomic positions of primers that were used for all RT-PCR experiments, including DeSCo-PCR, are listed in Supplemental Table S2. Plasmid construction Full-length infectious cDNA clones of RCNMV Australian strain RNA1 (pRC169) and RNA2 (pRC2|G) (Xiong and Lommel 1991;Sit et al. 1998) were kindly provided by Dr. Tim L. Sit and Dr. S. A. Lommel. 
pRC169 and pRC2|G are cDNA clones with a T7 promoter for in vitro transcription of infectious RNA1 and RNA2, respectively. pRC169 was sequenced by Sanger sequencing and was found to contain several base changes compared to the sequence from NCBI (GenBank: J04357). Two of the base changes, at positions 3462 and 3494, were present near the 5′ end of SR1f and therefore were changed from C to T and G to A, respectively, using the Q5 Site-Directed Mutagenesis kit (NEB #E0554) according to the manufacturer's protocol with primers 3UTR_R1_corrected_for and 3UTR_R1_corrected_rev. The corrected plasmid, pRC169c, was used as template for construction of pRSR1f and pR1m1, and as template for in vitro transcription of infectious RCNMV RNA1.
pRSR1f
pRSR1f is a cDNA clone with a T7 promoter followed by SR1f sequence for in vitro transcription of SR1f. A Q5 Site-Directed Mutagenesis kit (NEB #E0554) was used according to the manufacturer's protocol. A DNA fragment with the T7 promoter sequence, vector sequence and SR1f sequence was amplified from pRC169c with the following PCR reaction composition and conditions: Q5 hot start high fidelity 2× master mix (1×), T7-rev primer (0.5 µM), SR1f_for primer (0.5 µM), pRC169c as template (10 ng); initial denaturation at 98°C for 30 sec; 25 cycles of denaturation at 98°C for 10 sec, annealing at 60°C for 30 sec, extension at 72°C for 2.5 min; final extension at 72°C for 2 min. This was followed by ligation, according to the manufacturer's protocol, to circularize the PCR product. Subsequently, the plasmid was transformed in E. coli sigma 10 cells.
pR1m1
pR1m1 is an infectious cDNA clone of RCNMV RNA1 (RNA1-m1) that does not generate SR1f during infection. pR1m1 has a six-base substitution ("TGTAGC" to "ACGTTG") in pRC169c (nts 3462 to 3467) that disrupts the xrRNA structure required for SR1f production (Iwakawa et al. 2008). A Q5 Site-Directed Mutagenesis kit (NEB #E0554) was used according to the manufacturer's protocol. The DNA fragment was amplified by PCR with the following reaction composition and conditions: Q5 hot start high fidelity 2× master mix (1×), SR1f.m1_for primer (0.5 µM), SR1f.m1_rev primer (0.5 µM), pRC169c as template (10 ng); initial denaturation at 98°C for 30 sec; 25 cycles of denaturation at 98°C for 10 sec, annealing at 59°C for 30 sec, extension at 72°C for 4 min; final extension at 72°C for 2 min. This was followed by ligation, according to the manufacturer's protocol, to circularize the PCR product. Subsequently, the plasmid was transformed in E. coli sigma 10 cells and colonies were screened in LB-agar plates with ampicillin. Plasmids were extracted from selected colonies and the sequence was verified by Sanger sequencing. All RCNMV cDNA clones were linearized at a unique SmaI restriction site at the precise 3′ end of the RCNMV 3′ UTR prior to in vitro transcription.
ZIKV gRNA-mimic1 PCR product
ZIKV gRNA-mimic1 PCR product is a DNA fragment with a T7 promoter followed by a partial sequence of ZIKV gRNA (nts 9799 to 10807). It was amplified from pFLZIKV (Shan et al. 2016) using the primers NS5 (+) forward primer 1 and sfRNA (−) reverse primer. ZIKV gRNA-mimic1 PCR product was used for in vitro transcription to make noninfectious ZIKV gRNA-mimic1 that was used for DeSCo-PCR experiments.
ZIKV sfRNA1 PCR product
ZIKV sfRNA1 PCR product is a DNA fragment with a T7 promoter followed by the sequence of ZIKV sfRNA1 (nts 10392 to 10807). It was amplified from pFLZIKV (Shan et al. 2016) using the primers sfRNA (+) forward primer and sfRNA (−) reverse primer.
ZIKV sfRNA1 PCR product was used for in vitro transcription to make ZIKV sfRNA1 that was used for DeSCo-PCR experiments. In vitro transcription One µg linearized plasmid for all RCNMV constructs, 200 ng ZIKV sfRNA1 PCR product, and 500 ng ZIKV gRNA mimic1 PCR product were used as templates for in vitro transcription using MEGAscript T7 Transcription kit (Invitrogen #AM1334) followed by DNase treatment according to manufacturer's protocol. The transcription reaction was carried out at 37°C for 4 h and DNase treatment at 37°C for 30 min. Subsequently, RNA was purified using Zymo RNA Clean & Concentrator -5 kit (Zymo Research #R1015) and eluted in nuclease-free water. RCNMV Nicotiana benthamiana plants at the four-leaf stage were used for inoculations. Two leaves per plant were inoculated. Per leaf, 1 µg in vitro-transcribed (IVT) RCNMV RNA1 plus 1 µg IVT RCNMV RNA2 were mixed in 10 mM sodium phosphate buffer (pH 6.8) and rubbed on the leaves. These are referred to as RCNMV-infected plants that make SR1f. Similarly, 1 µg IVT RCNMV RNA1-m1 plus 1 µg IVT RCNMV RNA2 were mixed in 10 mM sodium phosphate buffer (pH 6.8) and rubbed on the leaves. These are referred to as RCNMVΔSR1f-infected plants that do not make SR1f. For Figure 6, leaves from RCNMV-and RCNMVΔSR1f-infected N. benthamiana were collected at 5 d post inoculation (dpi) for PCR and at 14 dpi for northern blot hybridization, pulverized and total RNA was extracted using Zymo Direct-zol RNA Miniprep (Zymo Research #R2051). ZIKV Hela cells were seeded at a density of 3 × 10 5 cells per well in a sixwell plate. One day later, cells were infected with the wild-type (ZIKV-Cambodia) or mutant (10ΔZIKV) virus at an MOI of 3. After 48 h post-infection, cells were washed with PBS and total RNA was extracted from cells using the Direct-zol RNA MiniPrep kit (Zymo Research). cDNA synthesis Amount of in vitro transcribed RNA or plant/cell total RNA that were reverse transcribed is indicated in the results section. RNA (IVT or total RNA) and virus-specific reverse primer (15 pmol; same reverse primer as used for DeSCo-PCR) were mixed in nuclease-free water to 12 µL and incubated at 65°C for 5 min, transferred to ice followed by addition of 4 µL reaction buffer, 1 µL RiboLock, 2 µL 10 mM dNTPs and 1 µL RT enzyme from RevertAid First Strand cDNA Synthesis kit (Thermo Scientific #K1621). The reaction mix was incubated at 42°C for 60 min followed by enzyme deactivation at 70°C for 5 min. The cDNA reaction products from IVT RNAs were diluted fivefold and considered as "undiluted samples" for experiments with serially diluted templates while cDNA reaction products from total RNA from infected samples were not diluted but used as is for PCR. PCR GoTaq G2 green master mix (Promega #M7823) was used for all PCR reactions. Simple PCR with RP plus FP as positive control, RP plus BP as negative control, DeSCo-PCR with RP plus FP plus BP, and gRNA-specific PCR were carried out in a thermocycler with the capability of controlling the ramp rate. Ramp rate of 0.5°C per second was used for the PCR reactions specified below. A 20 µL PCR reaction mix was prepared with 2-µL template and final concentration of each of the primers, if used, were as follows: 0.2 µM RP, 0.2 µM FP, 4 µM BP. BP: FP = 20: 1 was determined, empirically, as optimum for RCNMV and ZIKV, for successful DeSCo-PCR to selectively amplify sgRNA cDNA and completely block amplification from gRNA cDNA (data not shown). 
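As a simple worked example of assembling a reaction at the final primer concentrations given above, the sketch below applies C1·V1 = C2·V2 to a 20 µL reaction; the 100 µM stock concentration is an assumed value for illustration, not taken from the study.

```python
# Worked volume check for assembling a 20 uL DeSCo-PCR at the final primer
# concentrations stated above (0.2 uM RP, 0.2 uM FP, 4 uM BP, i.e., BP:FP = 20:1),
# using C1*V1 = C2*V2. The 100 uM stock concentration is a hypothetical example.
REACTION_UL = 20.0
STOCK_UM = 100.0

final_um = {"RP": 0.2, "FP": 0.2, "BP": 4.0}

for primer, conc in final_um.items():
    vol_ul = conc * REACTION_UL / STOCK_UM
    print(f"{primer}: add {vol_ul:.2f} uL of {STOCK_UM:.0f} uM stock for {conc} uM final")
# RP/FP work out to 0.04 uL each and BP to 0.80 uL; in practice an intermediate
# dilution or a combined primer mix keeps pipetted volumes above ~1 uL.
```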
The primers used are mentioned below and the primer sequences can be found in Supplemental Table S2. PCR conditions were as follows: RCNMV 98°C (2 min); 18 cycles of 98°C (30 sec), 65°C (20 sec, ramp rate = 0.5°C/sec), 72°C (30 sec); 72°C (2 min); 4°C hold. Primers used were RFP, RRP, RBP, RFP-m1, and RBP-m1. PCR reaction products were run on a 1% agarose gel, with SYBR Safe DNA gel stain (Invitrogen #S33102), in 1× TBE buffer and visualized on a Bio-Rad Gel doc. The gel images shown were cropped to show the band of interest. One thing to note is that all DeSCo-PCR performed with RCNMV and ZIKV resulted in amplification of BP-derived primer-dimer but it did not affect the relative band intensity measurement. Measurement of relative expression of sgRNA When the agarose gels were imaged for quantitative analysis, the exposure time was set for maximum duration at which no saturating intensity was observed in the amplified bands. Fiji software (ImageJ) was used to measure band intensity. Intensity of the background was measured from three separate regions of the gel where no band/DNA is expected, and the values were averaged. The averaged background intensity, referred to as "blank," was subtracted from the intensity of each band from gRNA-specific PCR. For DeSCo-PCR, either the "blank" values or the band intensity of "No sgRNA" samples were considered as background intensity and were subtracted from the band intensity of each sample. This was done three times for each gel and background-subtracted values from the three measurements were averaged. The final values were normalized with respect to the band intensity of undiluted cDNA. The values obtained from DeSCo-PCR are the relative band intensity representing the relative amount of sgRNA in each sample. For all results shown with the relative measurement of sgRNA, PCR was carried out three times and the values for the relative band intensities were averaged and plotted on a graph using Microsoft Excel. Error bars represent the standard deviation of the relative band intensities obtained from three PCR reactions. Northern blot hybridization RCNMV An amount of 9.5 µg total RNA from noninoculated leaves of RCNMV-and RCNMVΔSR1f-infected N. benthamiana were mixed with an equal volume of 2× RNA loading dye (NEB #B0363S), denatured by incubating at 70°C for 10 min and 5 min on ice and loaded on a 1.2% agarose-formaldehyde gel (1.2% [w/v] agarose, 20 mM sodium phosphate buffer [pH 6.8], 8 mL of 37% formaldehyde per 100 mL of gel). Electrophoresis was carried out at 100 V for 2 h in running buffer (74 mL of 37% formaldehyde per 1 L of running buffer, 20 mM sodium phosphate buffer [pH 6.8]). Integrity and equal loading of RNA were verified by visualizing the gel on a Bio-Rad Gel doc. The gel was washed in sterile water for 5 min at room temperature (RT) and blotted to a nitrocellulose membrane by the capillary transfer method overnight using 10x saline-sodium citrate (SSC) buffer (Invitrogen #AM9763). Post-transfer, the membrane was washed in 5× SSC for 5 min at RT, dried on a paper towel, and UV-crosslinked in StrataGene UV Stratalinker 1800 using the "Auto Cross Link" option. The membrane was placed in a glass cylindrical bottle and incubated in 5 mL hybridization buffer (50% [v/v] formamide, 5× SSC buffer, 0.2 mg/mL polyanetholsulphonic acid, 0.1% [w/v] SDS, 20 mM sodium phosphate buffer [pH 6.8]) at 65°C for 1 h in a hybridization oven (VWR). 
The buffer was discarded, and fresh 5 mL hybridization buffer was added to the bottle with 5 µL radiolabeled RNA probe. Probe hybridization was carried out overnight in a hybridization oven at 65°C. Post hybridization washes were carried out in a hybridization oven as follows: two washes with 50 mL high salt concentration buffer (1× SSC, 0.1% [w/v] SDS) at RT for 20 min, two washes with 50 mL low salt concentration buffer (0.2× SSC, 0.1% [w/v] SDS) at 68°C for 20 min, and one wash with 50 mL 0.1× SSC buffer at RT for 20 min. The membrane was dried on a paper towel, covered in a saran wrap and placed inside the phosphor cassette with phosphor screen, imaged by autoradiography using Bio-Rad PharosFX Plus Molecular Imager. ZIKV An amount of 5 μg of total RNA from mock and ZIKV-infected cells was mixed with 2× formaldehyde loading buffer (Thermo Fisher Scientific), and denatured by incubating at 65°C for 15 min and 2 min on ice. Electrophoresis was performed in 1% denaturing agarose gel and stained with ethidium bromide. After electrophoresis, the gel was incubated in the alkaline buffer (0.01 N NaOH, 3 M NaCl) for 20 min and subsequently transferred to a Biodyne B nylon membrane (Thermo Fisher Scientific) by upward transfer. The membrane was crosslinked using a UV Stratalinker and blocked at 42°C using ULTRAhyb Oligo hybridization for 1 h while rotating. Blots were probed overnight rotating at 42°C with a Biotin-labeled DNA probe prepared as described in Soto-Acosta et al. (2018). After hybridization, the membrane was washed in wash buffer for 15 min at 42°C four times. The blot was incubated for 1 h at room temperature with IRDYE 800CW streptavidin (LI-COR Biosciences) in Odyssey Blocking Buffer (LI-COR Biosciences) with 1% of SDS. Later the membrane was washed three times with TBS buffer containing 0.1% tween, and the membrane was scanned using an LI-COR Odyssey. SUPPLEMENTAL MATERIAL Supplemental material is available for this article.
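A minimal sketch of the band-intensity normalization described in the "Measurement of relative expression of sgRNA" section above is given below: the background ("blank") is averaged over three empty gel regions, subtracted from each band, the corrected values are normalized to the undiluted-cDNA band, and three independent PCR replicates are averaged. All numbers here are placeholders rather than measured intensities.

    import numpy as np

    # Minimal sketch of the relative band-intensity measurement described above:
    # average three background regions, subtract that blank from each band,
    # normalize to the undiluted-cDNA band, then average three PCR replicates.
    # The values below are placeholders, not measured data.

    def relative_intensity(band: np.ndarray, background_regions: np.ndarray) -> np.ndarray:
        """band: raw intensities per dilution; background_regions: blank measurements."""
        blank = background_regions.mean()
        corrected = np.clip(band - blank, 0.0, None)
        return corrected / corrected[0]            # index 0 = undiluted cDNA

    # three PCR replicates x four serial dilutions (hypothetical values)
    replicates = np.array([
        [1520.0, 801.0, 410.0, 220.0],
        [1490.0, 790.0, 395.0, 205.0],
        [1550.0, 815.0, 420.0, 231.0],
    ])
    blanks = np.array([101.0, 98.0, 104.0])

    rel = np.array([relative_intensity(rep, blanks) for rep in replicates])
    print("mean relative sgRNA signal:", rel.mean(axis=0))
    print("standard deviation:", rel.std(axis=0, ddof=1))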
2020-04-03T19:14:31.001Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "080f1a60416cf19a15781c8123d9fb5372ecfbbf", "oa_license": "CCBY", "oa_url": "http://rnajournal.cshlp.org/content/26/7/888.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "9e91b460bbdff259f267dbf5c31eda73b8ebd9f0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
21126851
pes2o/s2orc
v3-fos-license
Neural Correlates of Auditory Verbal Hallucinations in Schizophrenia and the Therapeutic Response to Theta-Burst Transcranial Magnetic Stimulation Abstract Auditory verbal hallucinations (AVHs) are a core symptom of schizophrenia, and resistant to antipsychotic medication in a substantial proportion of patients. This study aimed to investigate the neural correlates of AVHs in schizophrenia patients and its response to a modified continuous theta-burst stimulation (cTBS) by transcranial magnetic stimulation. In a cross-sectional experiment, resting-state functional magnetic resonance images were collected from 31 AVH schizophrenia patients, 26 non-AVH schizophrenia patients, and 33 sex-/age-matched healthy controls (HCs). Functional connectivity strength (FCS) maps were compared among groups by 1-way analysis of variance (ANOVA). In a longitudinal experiment, 16 and 11 AVH patients received real and sham cTBS treatment for 15 days, respectively. Notably, this was not a randomized control trail. Changes in AVH and FCS were analyzed by 2-way ANOVA and 2-sample t-test, respectively. In the cross-sectional experiment, comparison of FCS maps identified 8 clusters among groups, but only one cluster (in left cerebellum) differed significantly in AVH patients compared to both HCs and non-AVH patients. In the longitudinal experiment, the real cTBS group showed a greater improvement in symptoms and a larger FCS decrease in left cerebellum than the sham group. Pearson’s correlation analysis indicated that baseline FCS of the overlapping cerebellum cluster (between the cross-sectional and longitudinal findings) was negatively correlated with symptom improvement in the real treatment group. These findings emphasize the role of the left cerebellum in both the pathophysiology and clinical treatment of AVHs in schizophrenia patients. Introduction Auditory verbal hallucinations (AVHs) are a characteristic symptom of schizophrenia, affecting approximately 60%−80% of patients. 1 Although most patients are responsive to antipsychotic pharmacotherapy, a substantial number (~25%) are treatment-resistant and continue to experience AVHs. Thus, alternative therapies are urgently needed to reduce the severity and frequency of AVHs experienced by these patients. AVH refers to the experience of perceiving speech in the absence of corresponding external stimuli. 2 Despite numerous studies using a variety of approaches, 3-6 the exact mechanisms by which AVHs arise spontaneously from intrinsic brain activity remains unclear. To address this issue, recent studies have examined the spontaneous functional connectivity in schizophrenia patients with AVH using resting-state functional magnetic resonance imaging (rs-fMRI). 5,7 Such studies have given rise to the "resting state hypothesis of AVH," which posits that anomalies in intrinsic brain connectivity and ensuing activity generate AVHs. 8 Indeed, compared to schizophrenia patients without AVH (non-AVH), AVH patients exhibited distinct intrinsic cortico-subcortical connectivity patterns [9][10][11] and interhemispheric circuits. [12][13][14] Particular attention has been paid to functional connectivity with the left temporo-parietal junction (TPJ), 13,15,16 because the TPJ may be an effective repetitive transcranial magnetic stimulation (rTMS) target for AVH patients. [17][18][19] Local activity and global network properties have also been investigated by amplitude of low-frequency fluctuations 20 and graph theory. 
21 To exclude potential confounding variables (ie, variations in medication history and severity of symptoms), the neural correlates of AVH have also been examined by comparing nonpsychotic individuals with and without AVH. [22][23][24] Such rs-fMRI studies yielded useful information to enhance an understanding of the neurobiological mechanisms underlying AVH. However, most of them were cross-sectional 6,25,26 and rarely associated with novel therapies, such as rTMS. rTMS is a well-established, noninvasive technique that induces long-lasting changes in excitatory and inhibitory activity (aftereffects) of the target network depending on the frequency and temporal pattern of stimulation. It has been applied for the treatment of many neurological and psychiatric disorders. 27 Hoffman and colleagues 28,29 reported the possible efficacy of inhibitory rTMS on AVH in schizophrenia patients, and its frequency protocol (1-Hz rTMS over the left TPJ) was adopted in most subsequent studies. However, the clinical efficacy of this protocol was not consistently supported. [30][31][32][33] In most of these studies, the TPJ was defined according to the international 10/20 system of electroencephalography electrode placement. Given the anatomical variability of the human brain, this coarse localization method may prove less accurate and efficient than image-based navigation approaches. 18,34,35 Several studies have also tested continuous theta-burst stimulation (cTBS) for AVH treatment, [36][37][38] since this paradigm exhibited more powerful inhibitory aftereffects than 1-Hz rTMS in the motor system. 39 However, a randomized trial found that the efficacy of cTBS was not significantly higher than placebo treatment. 33 This negative finding may be attributable to the stimulation parameters, which would be improved in the current study by using a longer stimulation regimen, precise target localization, and an optimized inter-session interval (ISI = 30 min). 40 This study compared the resting-state brain function of AVH schizophrenia patients, non-AVH schizophrenia patients, and healthy controls (HCs) to provide a context for a longitudinal rTMS experiment. The researchers hypothesized that a modified cTBS protocol, compared to sham rTMS, may significantly alleviate AVH symptoms by remodeling the abnormal brain function observed in the cross-sectional experiment. Materials and Methods This study is composed of cross-sectional and longitudinal experiments. All participants provided written informed consent before experiments. The cross-sectional experiment investigated the neural correlates of AVH by comparing FCS maps among AVH, non-AVH, and HC participants. The longitudinal experiment tested the clinical efficacy of a modified rTMS protocol and its underlying neural mechanism by rs-fMRI. Finally, a spatial overlap map was produced, utilizing the fMRI findings of both experiments, to evaluate the rTMS mechanism in terms of baseline abnormality (figure 1). Notably, the longitudinal rTMS experiment was not a randomized control trail, although both real and sham treatment were performed in this part. Participants A total of 57 patients diagnosed with refractory schizophrenia at the Anhui Mental Health Center (Hefei, China) were consecutively enrolled in this study (supplementary tables E1-E3). The study protocol was reviewed and approved by the Medical Ethics Committee of Anhui Medical University, Hefei, China. 
All participants satisfied the following inclusion criteria: (1) diagnosis of schizophrenia using the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (SCID-IV), (2) were taking stable doses of psychotropic medication for at least 8 weeks before the study, and (3) verbal intelligence quotient >85 as measured After rTMS treatment, 11 AVH patients were recruited to represent a control group in the longitudinal study. Notably, their imaging data were acquired using different scanners (albeit the same brand) than the other groups. The "overlap" represents a spatial overlap map of significant clusters from cross-sectional and longitudinal studies. utilizing a Chinese version of the National Adult Reading Test. Exclusion criteria were as follows: (1) history of significant head trauma or neurological disorders, (2) alcohol or drug abuse, (3) focal brain lesions on T1-or T2-weighted fluid-attenuated inversion-recovery magnetic resonance images, (4) recent aggression or other forms of behavioral dysfunction, (5) head motion exceeding 3 mm in translation or 3° in rotation during rs-fMRI scanning, or (6) Hamilton Anxiety Rating Scale or Hamilton Depression Rating Scale score >7. Patients were further classified into 2 groups: those reporting AVHs of spoken speech at least 5 times per day during the preceding 8 weeks (AVH group, n = 31) and patients who had not experienced AVHs (or olfactory, gustatory, tactile hallucination) since the diagnosis of schizophrenia or within 5 years before scans (non-AVH group, n = 26). The most commonly positive syndromes of the non-AVH group were delusional disorder (n = 15), impulsion (n = 10), and restlessness (n = 8). All AVH patients were refractory, which may be defined as the experience of persistent daily hallucinations without remission despite antipsychotic medication administered at an adequate dosage for at least 12 weeks. With regard to AVH patients participating in the longitudinal rTMS study, additional exclusion criteria included (1) age <18 years, (2) non-removable metal objects in or around head, or (3) prior history of seizure or history in first degree relatives. Finally, 16 AVH patients provided their consent for real rTMS treatment. After the end of the rTMS treatment experiment, the researchers realized that a sham-control would be necessary to exclude the placebo effect of rTMS. According to the primary results of real stimulation (see Results), a sample of 7 participants proved large enough to identify the treatment effect (alpha = 0.05, beta = 0.8). Fortunately, an interim inspection of another ongoing study (Randomized Clinical Trail [RCT] number: NCT02863094) allowed the researchers to collect data from 11 refractory AVH patients (meeting the inclusion and exclusion criteria of this study) who had received sham rTMS. Thus, their data were utilized in the sham-control group in the current study. Thirty-three HCs with no history of neurological or psychiatric illnesses were randomly recruited from the local community. This group exhibited no gross abnormalities on brain MR images. MRI Data Acquisition Magnetic resonance images were acquired from 2 scanners of the same type (3.0T, Discovery GE750w, General Electric). One scanner was used for the sham treatment group, and the other for the real treatment group and cross-sectional experiment. Details were outlined in supplementary materials. Neuro-navigated rTMS The AVH patients made 17 study-related visits to the hospital. 
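Referring back to the sample-size remark above (alpha = 0.05 and beta = 0.8, presumably meaning a target power of 0.8, for the sham-control group), a calculation of this kind can be reproduced with statsmodels as sketched below. The effect size is an assumed illustrative value, since the exact estimate derived from the primary real-stimulation results is not given in the text.

    # Minimal sketch of the sample-size check mentioned above (alpha = 0.05,
    # power = 0.8). The effect size (Cohen's d) is an assumed illustrative value,
    # not a number reported in the text.
    from statsmodels.stats.power import TTestIndPower

    assumed_effect_size = 1.6     # hypothetical large effect, in Cohen's d units
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=assumed_effect_size,
                                       alpha=0.05,
                                       power=0.80,
                                       alternative="two-sided")
    print(f"required participants per group: {n_per_group:.1f}")

With an effect size of this magnitude the calculation returns roughly seven participants per group, which is consistent with the figure quoted above.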
On the first and last visits, the researchers acquired MRI data and assessed neuropsychological conditions and clinical symptoms. From the 2nd to 16th day, participants received 3 daily sessions of cTBS treatment delivered using a MagStim Rapid 2 stimulator (Magstim Company Ltd.) with a 70-mm air-cooled figure-of-eight coil. One session of cTBS was 40 seconds in duration and consisted of 3-pulse bursts at 50 Hz repeated every 200 milliseconds (5 Hz) until a total of 600 pulses was reached. 39 To achieve cumulative aftereffects, this protocol was repeated 3 times and (1800 pulses in total) separated by two 15-minute breaks (controlled by a stopwatch) in line with previous methodological studies. 40,41 The researchers delivered cTBS at 80% of the resting motor threshold (RMT) 38 or the highest intensity the stimulator could deliver for this protocol (50% of maximum output). The RMT was determined at each visit according to a 5-step procedure. 42 The stimulation target, the left TPJ, was defined as a sphere of 6-mm radius centered at Montreal Neurological Institute (MNI) coordinates [−51, −31, 23]. 13,16 This target was transformed into each participant's T1 space by applying an inverse matrix produced during T1 segmentation in SPM (www.fil.ion.ucl.ac.uk/spm) and TMStarget software. 43 Then, each individual's target was imported into a frameless neuronavigation system (Visor 2.0, Advanced Neuro Technologies). The coil was held tangentially to the skull pointing forward, with the center over the target sphere. Patients in the sham control group received the same rTMS protocol and treatment duration as the real rTMS group. The only difference was the usage of a sham coil (Magstim Company Ltd.) that produced a similar feeling on the participant's scalp as the real coil but did not induce a current in the cortex. Clinical Symptom and Neuropsychological Assessments Clinical symptoms of all patients were graded according to the Positive and Negative Syndrome Scale (PANSS). In addition, all patients and HCs were evaluated using standardized neuropsychological tests (supplementary tables E1 and E2). For AVH patients participating in the rTMS study, the primary outcome was the Auditory Hallucination Rating Scale (AHRS), which would be administered with other measures on the 1st and 17th visits. MRI Data Processing Functional image processing was carried out using the DPARSF (http://rfmri.org) 44 and SPM (www.fil.ion.ucl. ac.uk/spm) toolkits. The preprocessing included (1) deleting the first 5 volumes; (2) slice timing and realignment; (3) co-registering T1 to functional images; (4) normalizing T1 to the MNI space and segmenting it into gray matter, white matter, and cerebrospinal fluid (spatial resolution: 3 × 3 × 3); (5) smoothing images with a 4-mm isotropic Gaussian kernel; (6) filtering temporal bandpass (0.01-0.1 Hz), and regressing out 15 nuisance signals (global mean, white matter, cerebrospinal fluid signals, and 24 head-motion parameters. 45 Subsequently, Pearson's correlations were conducted between the time series of all pairs of voxels to construct a whole-brain matrix for each participant. Finally, functional connectivity strength (FCS) defined as the sum of the coefficients between a given voxel and all other voxels, was then standardized by dividing by the average whole-brain FCS value. To eliminate voxels with weak correlations attributable to signal noise, the analysis was restricted to positive correlations (r > .25, P < .001). 
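The voxel-wise FCS measure defined above (sum of suprathreshold positive Pearson correlations per voxel, standardized by the whole-brain mean) can be expressed compactly in numpy; the sketch below uses random data with an injected shared signal in place of preprocessed BOLD time series, and toy dimensions rather than the real image size.

    import numpy as np

    # Minimal sketch of the functional connectivity strength (FCS) measure:
    # Pearson correlations between all voxel time series, thresholded at r > 0.25,
    # summed per voxel, and standardized by the whole-brain mean. The synthetic
    # data below stand in for preprocessed BOLD signals.

    rng = np.random.default_rng(0)
    n_timepoints, n_voxels = 235, 500                 # toy dimensions, not the real image size
    bold = rng.standard_normal((n_timepoints, n_voxels))
    shared = rng.standard_normal((n_timepoints, 1))
    bold[:, :100] += 0.8 * shared                     # give the first 100 "voxels" a common signal

    corr = np.corrcoef(bold, rowvar=False)            # n_voxels x n_voxels Pearson matrix
    np.fill_diagonal(corr, 0.0)                       # exclude self-correlation
    corr[corr <= 0.25] = 0.0                          # keep positive correlations above threshold

    fcs = corr.sum(axis=1)                            # sum of retained coefficients per voxel
    fcs_standardized = fcs / fcs.mean()               # divide by the average whole-brain FCS

    print(fcs_standardized[:5])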
46,47 Statistical Analysis For the cross-sectional study, demographic characteristics, neuropsychological scores, and imaging features were compared between the 3 groups by one-way analysis of variance (ANOVA). Notably, outliers in neuropsychological scores were identified before ANOVA by nonlinear regression analyses in GraphPad Prism. 48 The Bonferroni or Tamhane's test was used for pair-wise post hoc comparisons. Clinical characteristics were compared between patient groups using 2-sample t-tests. In the longitudinal study, the change in clinical symptom was analyzed by 2-way (group by time) ANOVA. Since the MRI data of real and sham groups were acquired using different scanners, comparisons of pre-or post-treatment data between the groups largely reflect the systematic error of scanners rather than brain functional changes. However, the pre-to post-treatment alteration is comparable, since the scanner effect could be well controlled by the within center subtraction (post-minus pre-state). Thus, the FCS changes following treatment were compared between groups by 2-sample t-test. All voxel-based imaging analyses were corrected by Gaussian Random Field (GRF) theory (clusterdefined threshold, P < .05, cluster-level corrected P < .05). Complementary Experiments First, target-to-whole brain functional connectivity was used as a secondary measure. Second, longitudinal data of 2 HC groups from 2 scanning sites were utilized to clarify whether the MRI findings for the TMS effect were influenced by scanner. Third, we re-computed the FCS for correlations >0 to test the robustness of the findings. Fourth, all image-based statistical analyses were corrected for multiple comparisons using Threshold-Free Cluster Enhancement in FSL software (corrected P < .05). 49 See details of these experiments in supplementary materials. Demographic, Clinical, and Neuropsychological Assessments at Baseline There were no differences in age, education, and sex ratio among the 3 groups (supplementary table E1 rTMS Improved Clinical Symptoms But Not Cognitive Function According to a recent double-blind randomized trial, 33 this study defined rTMS responders as participants who showed ≥25% decrease in AHRS. Six of 16 participants in the real group, and 3 of 11 patients in the sham group were deemed responders. Fisher's exact test did not reveal a significant difference in the responder/nonresponder ratio between groups (P = .69). However, the ratio for the real group was higher (P = .05; figures 3A and 3B) than that found in a previous rTMS study (4 responders in 37 subjects). 33 Two-way repeated ANOVA indicated a significant group × time interaction effect for clinical symptoms but not for cognitive functions (supplementary table E2; figures 3C and 3D). Specifically, the AHRS (t = −5.66, P <.0001), negative PANSS (t = −3.75, P = .002), positive PANSS (t = −5.25, P < .0001), and total PANSS (t = −6.17, P < .0001) scores (figures 3C and 3D) significantly decreased following real treatment. Negative PANSS consisted of 3 dimensions (emotion, behavior, and thought), and behavior factor was the most responsive one to the treatment (paired t = 3.93, P = .001, supplementary table E5). Compared to the sham group, the real group showed lower scores in AHRS (t = −2.13, P = .04), negative PANSS (t = −2.48, P = .02), and total PANSS (t = −3.30, P = .003), but similar positive PANSS (t = 0, P > .99, figures 3C and 3D) at the end of treatment. X. 
Chen et al rTMS Modified Functional Connectivity The frame-wise head motion before and after treatment 50 did not differ in either the real rTMS (t = 0.90, P = .39) or sham (t = 0.51, P = .62) group. Although the image datasets from the real and sham cTBS groups were obtained from 2 scanners, the alterations after rTMS were comparable between groups (see Complementary Experiments for the demonstration). Bar graph (B) indicates a higher responder/nonresponder ratio in the current study than in Koops et al. 33 The symptom improvements after real and sham treatment are illustrated at both the individual (C) and group (D) level. Notably, there is no outlier in the symptom measures. Error bars indicate SEM. *P < .05, **P < .01, ****P < .0001. Spatial Overlap and Correlation With Symptom Improvement The baseline binary result map ( figure 2D) overlapped with that of the longitudinal experiment (figure 4C) at the left cerebellum ( figure 5A). The FCS value of the overlapping cluster was extracted from HC, non-AVH patients, and 16 AVH patients before and after real cTBS treatment. One-way ANOVA (F 2,72 = 0.74, P = .0003) and post hoc analyses indicated that the left cerebellum cluster showed higher FCS in AVH patients before treatment than either non-AVH patients (t = 2.89, P = .006) or HCs (t = 4.66, P < .0001; figure 5B). Following rTMS treatment, FCS in the left cerebellum significantly decreased (t = 3.44, P = .004; figure 5B). Complementary Experiments All the 4 experiments confirmed the important role of left cerebellum in schizophrenia as the main text. See details in the supplementary table E7 and supplementary figures E1-E7. Discussion From a functional connectivity perspective, this study revealed the neuronal correlates of AVH in schizophrenia by a cross-sectional experiment involving patients exhibiting AVHs, patients without AVHs, and matched HCs. Among the clusters identified in the baseline ANOVA, only the cluster in the left cerebellum differed significantly in AVH patients vs the other groups. Importantly, FCS within this cluster was modulated by real rTMS treatment but not the sham condition, and the baseline value was negatively correlated with clinical symptom improvement. The cerebellum is connected to the cerebral cortex via a cortico-cerebellar-thalamic-cortical circuit. Its involvement in AVH has been demonstrated by metaanalyses of functional activation experiments, 51,52 though inconsistencies were also found. 53,54 Dynamic fMRI analysis further indicated the activation of the left cerebellum prior to the occurrence of AVHs, suggesting a "trigger" role of the cerebellum. 55,56 At a cellular level, the crucial role of the cerebellum in schizophrenia has been systemically reviewed. 57 Cerebellar Purkinje cells discriminate specific input conditions, such as variation in patterns of auditory input, through synaptoplastic processes including long-term potentiation and depression. In schizophrenia patients, the cerebellum fails to perform these error detection functions. As a result, input information from the auditory cortex without an external stimulus may be misinterpreted as "external" rather than internal, leading to the experience of AVHs. 9,57 Decreased FCS within the cerebellum following treatment may reflect an attempt to rebalance inhibitory and excitatory transmission, which may result in an appropriate perception of inner speech. 
4 Both AVH and non-AVH patients showed similar alterations in the right cerebellum, right MPFC, and left ITG compared to HCs, which may underlie common symptoms of schizophrenia. On the other hand, non-AVH patients showed abnormalities in the bilateral cuneus, bilateral thalamus, left central sulcus, and left SMG compared to both AVH patients and HCs, which may be related to the unique symptoms of non-AVH patients. However, none of these clusters survived the fourth validation analysis. As such, their biological meaning should be interpreted with care. The researchers found significant symptom improvement after cTBS treatment, and the responder/nonresponder ratio was higher than in Koops et al. 33 This improved outcome may be due to optimized stimulation parameters, such as the ISI, a longer treatment course, and more sessions per day. 40,41 Psychological factors may also have contributed to this difference, since patients in our real group knew they would receive real treatment, whereas patients in Koops et al 33 did not know their group allocation. Our findings suggest that the left cerebellum constitutes an important neural correlate of this clinical improvement. The left cerebellum has direct structural connectivity with the right rather than the left cortex, which contains the stimulation target. Thus, future rTMS studies stimulating the right TPJ may produce stronger aftereffects in both neuroimaging measures and clinical efficacy. Moreover, the negative correlation between symptom improvement and the pretreatment FCS value in the left cerebellum suggests that baseline FCS may be used to screen patients likely to respond to rTMS treatment. Analyses of neuropsychological tests indicated that both patient groups showed decreased performance on the Stroop word and trail-making B tests compared to HCs; a deficit was also found in the digit span [backward] of non-AVH patients. These neuropsychological findings were in line with previous studies that reported cognitive deficits in schizophrenia patients. [58][59][60] Importantly, no cognitive difference was found between the non-AVH and AVH groups, suggesting that extraneous factors were well balanced between groups. Although rTMS can modulate cognitive function and neural circuitry, 43,61 no cognitive improvement or deterioration was found in this study after treatment. This may be attributed to the fact that the current cTBS parameters, especially the target, were specifically designed to alleviate AVH rather than to improve any particular cognitive function. Although the findings are encouraging, 3 limitations should be mentioned. First, the rTMS experiment was not a randomized controlled trial. The sham group was recruited after the real group, and their instructions were slightly different. The real rTMS group was told that they would receive a novel treatment through magnetic stimulation, but that its clinical efficacy was still controversial. The sham group was part of a real RCT experiment, and those patients were instructed that they would be assigned to the real or sham group randomly. Thus, neither the real nor the sham group in this study is likely to have had extremely high, or no, expectations of the treatment. Additionally, the sham stimulation was delivered with a placebo coil, which produced a scalp sensation very similar to that of the real coil but induced no current within the brain. Thus, patients could hardly tell whether they were receiving real or sham stimulation.
For both groups, clinical symptoms were rated by an experimenter blinded to the group allocation of patients. In all, the real and sham groups had similar expectations before treatment and experienced similar stimulation procedures. To further exclude any effect of these design limitations on our findings, we strongly recommend a formal RCT investigation. Second, the real and sham groups were not scanned on the same scanner, which was not an ideal design. For cross-sectional comparisons, the results may be affected by systematic error between scanners; for comparisons of longitudinal alterations, however, our complementary experiment indicated no significant scanner effect in the left cerebellum. Third, this study did not control for the possible confounding effects of AVHs occurring during rs-fMRI data acquisition. A similar problem exists for rs-fMRI studies of epilepsy, though a previous study demonstrated that such internal events do not exert substantial effects on brain function. 62 This rs-fMRI study revealed abnormally high FCS in the left cerebellum of schizophrenia patients with AVH. This abnormality, as well as the AVH symptoms, was significantly decreased by the administration of a modified cTBS treatment. Furthermore, the improvement in AVHs after rTMS treatment may be predicted by baseline FCS in the left cerebellum. In summary, these findings emphasize the role of the left cerebellum in both the pathophysiology and the clinical treatment of AVH in schizophrenia patients. Supplementary Material Supplementary data are available at Schizophrenia Bulletin online.
2018-05-09T00:43:47.525Z
2018-05-03T00:00:00.000
{ "year": 2018, "sha1": "5d50980cb273055b004a87b5821c780f4f11cf59", "oa_license": "CCBYNCND", "oa_url": "https://academic.oup.com/schizophreniabulletin/article-pdf/45/2/474/28018418/sby054.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1a6e7615e7f88b3cbfa473d70372f70b06475f70", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
1467049
pes2o/s2orc
v3-fos-license
Phylogenetic relationships of the glycoprotein gene of bovine ephemeral fever virus isolated from mainland China, Taiwan, Japan, Turkey, Israel and Australia Background The glycoprotein (G) gene sequences of bovine ephemeral fever virus (BEFV) strains derived from mainland China have not been compared with those of the isolates from other countries or areas. Therefore, the G genes of four BEFV isolates obtained from mainland China were amplified and sequenced. A phylogenetic tree was constructed in order to compare and analyze the genetic relationships of the BEFV isolates derived from mainland China and different countries and areas. Results The complete BEFV G gene was successfully amplified and sequenced from four isolates that originated from mainland China. A total of fifty-one BEFV strains were analyzed based on the G gene sequence and were found to be highly conserved. A phylogenetic tree showed that the isolates were grouped into three distinct lineages depending on their source of origin. The antigenic sites of G1, G2 and G3 are conserved among the isolates, except for several substitutions in a few strains. Conclusions The phylogenetic relationships of the BEFV isolates that originated from mainland China, Taiwan, Japan, Turkey, Israel and Australia were closely related to their source of origin, while the antigenic sites G1, G2 and G3 are conserved among the BEFV isolates used in this work. Background Bovine ephemeral fever virus (BEFV) is an arthropodborne rhabdovirus which belongs to the genus Ephemerovirus in the Rhabdoviridae [1]. Bovine ephemeral fever (BEF), caused by BEFV, is an acute febrile disease in cattle and water buffalo in tropical and subtropical regions of Africa, Asia, Australia and the Middle East. The disease has a considerable economic impact on dairy farming in China. Most infected livestock present with a decrease in the quantity and quality of milk, and lameness or paralysis [2,3]. BEFV is a negative ssRNA genome and viral particles have a bullet-like appearance or tapered shape. In addition, the virus bears spikes on the surface of envelope proteins. Five structural proteins of BEFV have been described, which comprise nucleoprotein (N), surface glycoprotein (G), large RNA-dependent RNA polymerase (L), polymerase associate protein (P) and matrix protein (M) [4][5][6][7]. Monoclonal antibody (MAb) studies of the prototype virus indicate that the G protein is the main protective antigen [8]. Four distinct antigenic sites (G 1 , G 2 , G 3 and G 4 ) on the surface of the G protein have been identified [8][9][10]. Antigenic site G 1 is linear, and G 2 and G 3 are conformational. G 1 reacts only with anti-BEFV antibodies, but the other antigenic sites show cross-reactivity with sera against other related viruses [11]. A blocking enzyme-linked immunoabsorbent assay (ELISA) and two indirect ELISAs for the detection of the antibodies against the G 1 site of BEFV have been established [12][13][14]. BEF was first documented in China in 1955 [15]. The first BEFV strain JB76H was isolated from infected dairy cattle during the 1976 epidemic in mainland China [16]. The disease was reported to be prevalent in twenty-five provinces in mainland China from 1952 to 1991 [11,15]. From 1991 to date, because BEF is not carried out, there is a lack of detailed epidemiological data on the disease in mainland China, except for Henan Province in central China. The major BEF epidemics are shown in Table 1. 
The data were mainly obtained from the previous reports [11,15], while the data relating to the BEF epidemics in Henan Province from 1983 to 2011 were obtained from our monitoring system. Currently, no information is available on the variation in antigenic properties and nucleotide sequences of the G gene of BEFV isolated in mainland China. In this study, the complete G genes of four BEFV strains (LS11, LYC11, JT02L and JB76H) obtained from mainland China were amplified and sequenced. For the first time, the phylogenetic relationships and antigenic variation of the G genes of BEFV, isolated from mainland China, Taiwan, Japan, Turkey, Israel and Australia, were analyzed. Virus isolation and identification The JB76H strain was used in mainland China as the vaccine against BEFV. The BEFV strain JT02L was obtained from an outbreak that occurred in 2002 in Zhejiang Province. The LS11 and LYC11 strains of BEFV were isolated from blood samples of the infected dairy cattle during the 2011 epidemic in Luoyang city, Henan Province. The blood samples were collected in order to monitor BEF, which is required by the foundation item supporting for this work. Isolation of BEFV was carried out in the brains of suckling mice and baby hamster kidney (BHK-21) cells as described previously [17]. Briefly, the blood collected from infected dairy cattle was mixed with Alsever's solution, and BEFV was concentrated decuple by centrifugation of the blood sample. Subsequently, the BEFV samples were inoculated into the brains of suckling mice and subjected to seven blind passages in suckling mice. Thereafter, BHK-21 cells were inoculated with BEFV extracted from the brains of the 7th passage infected suckling mice. The virus underwent five to ten passages in BHK-21 cells, until cytopathogenic effects (CPE) were observed. The presence of BEFV was confirmed by reversetranscription polymerase chain reaction (RT-PCR) as reported previously [18]. The BEFV RNAs were extracted from the infected blood and BHK-21 cells using a QIAamp viral RNA mini kit (Qiagen, Hilden, Germany). For RT-PCR, the primers were 420F (5' AGA GCT TGG TGT GAA TAC 3') and 420R (5' CCA ACC TAC AAC AGC AGA TA 3'). The forward primer 420F was used to reverse-transcribe BEFV RNA to cDNA. Subsequently, a partial fragment of the BEFV G gene was amplified using the primers 420F and 420B. After the initial denaturation at 94°C for 5min, the amplification proceeded through a total of 35 cycles consisting of denaturation at 94°C for 40s, annealing at 46°C for 1min, primer extension at 72°C for 40s and a final extension for 10min at 72°C. The expected DNA fragments were 420 base pairs (bp) in length. Amplification and sequencing of the BEFV G gene The complete BEFV G gene was amplified as described in a previous report [19]. The primers were GF (5' ATG TTC AAG GTC CTC ATA ATT ACC 3') and GR (5' The amplified fragments of the BEFV G gene were purified with an agarose gel DNA purification kit (TaKaRa, Dalian, China) and ligated with the pGEM-T Easy vector. Subsequently, the ligated mixtures were transformed into Escherichia coli DH5α. The plasmids were extracted from positive clones and then sequenced by TaKaRa (Dalian, China). The sequences obtained were deposited in the NCBI GenBank database. Phylogenetic analysis of the BEFV G gene sequences The nucleotide length of the region encoding the entire ectodomain of BEFV G protein is 1527 bp [10]. An alignment of BEFV sequences corresponding to the ectodomain region were carried out using the Clustal W program [20]. 
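Once the ectodomain sequences are aligned, pairwise nucleotide (or amino acid) identities of the kind reported in the Results can be computed directly from the alignment. The sketch below assumes pre-aligned sequences of equal length stored in a dictionary keyed by strain name; the sequences shown are made-up fragments, not real BEFV data.

    from itertools import combinations

    # Minimal sketch: percent identity between every pair of aligned sequences.
    # Positions where both sequences carry a gap are skipped; the toy sequences
    # below are placeholders for the 1527-bp ectodomain alignment described above.

    def percent_identity(seq_a: str, seq_b: str) -> float:
        compared = matches = 0
        for a, b in zip(seq_a, seq_b):
            if a == "-" and b == "-":
                continue
            compared += 1
            if a == b:
                matches += 1
        return 100.0 * matches / compared if compared else 0.0

    aligned = {
        "LS11":  "ATGGCTTCA-GTACC",
        "LYC11": "ATGGCTTCA-GTACC",
        "JB76H": "ATGACTTCAAGTACC",
    }

    for (name_a, seq_a), (name_b, seq_b) in combinations(aligned.items(), 2):
        print(f"{name_a} vs {name_b}: {percent_identity(seq_a, seq_b):.1f}% identity")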
The BEFV strains were isolated from mainland China, Taiwan, Japan, Turkey, Israel and Australia ( Table 2). The nucleotide and deduced amino acid (aa) sequence homologies among the isolates were analyzed using the MegAlign program of DNAstar. The phylogenetic tree based on the nucleotide sequence (1527 bp) of the analyzed G genes was constructed by the neighbor-jointing method [21] with the Kimura twoparameter model [22]. The reliability of the branching orders was evaluated by the bootstrap test with 1000 replicates [23]. Phylogenetic analyses were conducted using MEGA 5 software [24]. If the nucleotide sequences of several BEFV strains had 100% homology, a representative isolate was used to construct the phylogenetic tree. Amino acid sequence variation of the antigenic sites of BEFV G protein The aa sequences corresponding to the antigenic sites G 1 , G 2 and G 3 have been determined previously [8,10]. The sites G 1 and G 2 are located at the residues 487-503 and 168-189, respectively. The conformational site G 3 is located at residues 49-63, 215-231 and 262-271. The aa sequences deduced from BEFV G genes were aligned, and the variations in the aa corresponding to the sites G 1 , G 2 and G 3 were analyzed. The representative BEFV strains used were isolated at different times or from different countries and areas. Virus isolation and identification The DNA fragments of 420 bp were amplified from blood samples of the infected dairy cattle by RT-PCR. It was confirmed by sequence analysis that the gene fragments represented part of the BEFV G gene indicating that the disease shown in the cattle was in fact BEF. From the outbreak of BEF in Luoyang in 2011, infected blood was collected from dairy cattle in the Songxian and Yichuanxian areas, and two BEFV strains, designated as LS11 and LYC11, were isolated by intracerebral inoculation of suckling mice and in BHK-21 cells. The infected suckling mice showed paralysis and stiffness in their hind legs on the second to third day after inoculation and died during 12-24 hours postmorbidity. The infected BHK-21 cells showed specific CPE. The specific DNA fragments of 420 bp were also amplified from the LS11 and LYC11 strains. Amplification and sequencing of the BEFV G gene Complete BEFV G genes (1872 bp) were successfully amplified and sequenced from the JB76H, JT02L, LS11 and LYC11 strains. The G gene sequences of LS11, LYC11, JT02L and JB76H isolates have been assigned the accession numbers JX564637, JX564638, JX564639 and JX564640, respectively, in the GenBank database. Phylogenetic analysis of the BEFV G gene sequences The G gene sequences of the other forty-seven BEFV isolates were obtained from the GenBank database. A total of fifty-one BEFV isolates were used in this study (Table 2), and forty-one representative strains were used to produce Figures 1 and 2. All nucleotide and deduced aa sequences corresponding to the ectodomain region of BEFV G protein were highly conserved among the BEFV isolates obtained from mainland China, Taiwan, Japan, Turkey, Israel and Australia. The identities of the nucleotide sequences were between 89.3% and 99.9%, and those of the aa sequences were between 94.5% and 100%. Forty-one BEFV isolates were grouped into three distinct lineages (Figure 1). Cluster I contained the strains isolated from mainland China, Taiwan and Japan. The Turkish and Israeli isolates were grouped into cluster II, and Australian strains were placed in the independent cluster III. 
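The tree-building step described above (neighbor-joining on the aligned ectodomain sequences, performed in MEGA 5 with the Kimura two-parameter model) can be prototyped with Biopython as sketched below. The alignment file name is hypothetical, and a simple identity distance is used in place of the Kimura two-parameter model (which this calculator does not offer), so branch lengths will not match the published tree; bootstrap resampling is also omitted.

    from Bio import AlignIO, Phylo
    from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

    # Minimal sketch of a neighbor-joining tree from the aligned G-gene ectodomain
    # sequences, loosely analogous to the MEGA 5 analysis described above. The
    # file name is hypothetical and the identity distance is a stand-in for the
    # Kimura two-parameter model used in the paper.

    alignment = AlignIO.read("befv_G_ectodomain_alignment.fasta", "fasta")

    calculator = DistanceCalculator("identity")
    distance_matrix = calculator.get_distance(alignment)

    constructor = DistanceTreeConstructor()
    nj_tree = constructor.nj(distance_matrix)

    Phylo.draw_ascii(nj_tree)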
Table 2 Characteristics of BEFV strains used in this study (continued): ISR04 (bovine blood, 2004, Israel, cluster II, JN833632); ISR10/1 (bovine blood, 2010, Israel, cluster II, JN833633); ISR10/2 (bovine blood, 2010, Israel, cluster II, JN833634); ISR10/3 (bovine blood, 2010, Israel, cluster II, JN833635). Note: For the superscripts a, b, c, d and e, the same letter indicates that the nucleotide sequences of the isolates used in this report have 100% homology. The symbol * represents the BEFV isolates that were not used to produce Figures 1 and 2. Amino acid sequence variation of the antigenic sites of BEFV G protein As shown in Figure 2, the antigenic sites G1, G2 and G3 were highly conserved among the BEFV isolates obtained from mainland China and Taiwan. Among the eight isolates, only two amino acids, at positions 223 and 503, were substituted (from E to D and from K to T, respectively). In the antigenic site G1 of the Australian isolates, the residue at position 499 was N, differing from the majority of sequences, which contained S. Substitutions were found at positions 223-224 (ET to DK) in the G3 site of the six Australian isolates. An additional substitution was observed at position 218 (R to K) in the CS1647, CS1619 and CS1180 isolates. Two further aa changes were found at positions 215 (K to T) and 263 (E to K) in the CS1818 and CS42 isolates. Discussion The clinical signs, morbidity and mortality associated with current cases of BEF are different from those of BEF cases reported before 2000. The current disease cases showed more severe symptoms, and the morbidity and mortality have increased significantly. Luoyang city, in Henan Province, central China, is an epidemic area for BEF, and there have been eight BEF epidemics in the area from 1983 to 2011. The three BEF epizootics in 2011, 2005 and 2004 caused considerable economic loss to dairy cattle farming. During the latest outbreak, which occurred in 2011, the infected dairy cattle showed a sudden onset of fever, stiffness, and nasal and ocular discharges. Moreover, difficulty in breathing and shortness of breath were the most obvious clinical symptoms shown by the infected dairy cattle. Some of the severe cases died between 6 and 12 hours after infection. The morbidity was about thirty percent, and the mortality rate was about five percent. However, before 2000 the morbidity was from ten to twenty percent and the mortality was lower than one percent. The high feeding density of dairy cattle and the suffocation caused by BEF may be the leading reasons for the high mortalities. The phylogenetic relationships of the G gene sequences of BEFV isolated in Japan, Taiwan, Turkey, Israel and Australia had been analyzed previously [25,26]. To date, the genetic relationships of BEFV strains derived from mainland China and those from other countries or areas have not been studied. In order to clarify the variation in the BEFV G gene with time and location, the G genes of four BEFV strains (LS11, LYC11, JT02L and JB76H) isolated from mainland China were amplified and sequenced. The G gene sequences of the three field isolates were repeatedly amplified and sequenced from infected blood, suckling mouse brain and BHK-21 cells. The results showed that no change occurred in the nucleotide sequences, indicating that the adaptation to suckling mice and BHK-21 cells through low passage numbers had no significant effect on the nucleotide sequence of the BEFV G gene. However, it is worth noting that only one G gene sequence of the JB76H strain was used, because the original samples could not be obtained.
It was unclear whether the extensive passages in BHK-21 cells affected variation in the G gene sequence of JB76H isolate. The nucleotide and deduced aa sequences of the region encoding the ectodomain of BEFV G protein were well conserved among the BEFV isolates. In particular, the strains that originated from mainland China, Taiwan and Japan had higher identities. The corresponding sequences of the isolates derived from Turkey and Israel were highly conserved. However, the identities of the sequences were slightly lower among Australian isolates and other strains. The phylogenetic relationships of the sequences of the ectodomain region of the BEFV G gene were analyzed in this work. The analysis revealed that the clusters of the BEFV isolates were closely related with geographical location. The strains derived from oriental areas (mainland China, Taiwan and Japan) had a close relationship. Turkish and Israeli isolates were grouped into one cluster, which had a close relationship. The Australian isolates were grouped into an independent cluster, and had a distant relationship with the Asian strains. The results revealed that the phylogenetic relationships among the BEFV isolates were closely interrelated with geographical location. Close genetic relationships among BEFV strains can be deduced if the isolates originate from adjacent areas. Similarly, the BEFV isolates derived from widely separated regions have distant genetic relationships. This may indicate that BEFV circulates in neighboring region for a long time. The clusters of the isolates were also chronologically related. In cluster I, the JT02L strain clustered with other East Asian isolates from 2001-2004, suggesting that the same BEF outbreak spread through mainland China, Taiwan and Japan across the borders. The LS11 and LYC11 strains slightly diverged from the isolates from previous epizootics in East Asia, which indicated that the new BEFV possibly invaded mainland China from a neighboring area via infected vectors carried on the seasonal wind over a long distance or the import of live cattle. In fact, some evidence has shown that both winds and animal transport have an important role in trans-boundary transmission of BEFV [25,26]. The oldest Chinese mainland vaccine strain, JB76H, and the oldest Japanese strain, YHL, sat separately. The Japanese and Taiwanese isolates from 1984-1989 clustered together. Similar results were obtained in the clusters II and III. The variation in the aa sequences of the antigenic sites G 1 , G 2 and G 3 of the BEFV isolates was analyzed. The mentioned aa sequences of the three field strains obtained from mainland China corresponded identically with those of the Japanese isolates from 2001-2004 and the Taiwanese strains except for the 1984/TW/TN1 and 2001/TW/TN10. The other Japanese isolates from 1988-1989 had the same aa sequences mentioned above except for a substitution in ON-BEF-89-3 strain. No residues were changed among the isolates derived from Turkey and Israel. Three to five substitutions were found in the antigenic sites of G 1 and G 3 of Australian isolates compared with the residues of the Chinese mainland strains. These results indicated that the antigenic sites G 1 , G 2 and G 3 of BEFV isolates that related closely in place or time were highly conserved. Conclusions The sequences of the ectodomain region of the BEFV G gene were analyzed. The BEFV strains were isolated from mainland China, Taiwan, Japan, Turkey, Israel and Australia. 
The nucleotide and deduced aa sequences were well conserved among the isolates. A phylogenetic tree based on the nucleotide sequences was constructed, and the isolates were grouped into three clusters. The variations in the aa sequences of the antigenic sites G1, G2 and G3 of BEFV G protein were analyzed. The results showed that the phylogenetic relationships of the isolates were closely related to their geographical and chronological sources.
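Referring back to the antigenic-site analysis in the Results above, a comparison of this kind can be automated along the following lines: the G1, G2 and G3 residue ranges given in the Methods are extracted from an amino-acid alignment of the G protein, and positions where a strain differs from a chosen reference are reported. This is only a sketch: the commented usage assumes a hypothetical dictionary of full-length aligned sequences, and positions are treated as 1-based alignment coordinates, as in the text.

    # Minimal sketch of the antigenic-site comparison described above. The residue
    # ranges follow the Methods; the alignment is assumed to preserve G-protein
    # residue numbering, and the commented usage refers to hypothetical data.

    ANTIGENIC_SITES = {
        "G1": [(487, 503)],
        "G2": [(168, 189)],
        "G3": [(49, 63), (215, 231), (262, 271)],
    }

    def site_substitutions(reference: str, query: str) -> dict:
        """Compare two aligned amino-acid sequences within each antigenic site."""
        changes = {}
        for site, ranges in ANTIGENIC_SITES.items():
            diffs = []
            for start, end in ranges:
                for pos in range(start, end + 1):
                    ref_aa, qry_aa = reference[pos - 1], query[pos - 1]
                    if ref_aa != qry_aa:
                        diffs.append(f"{ref_aa}{pos}{qry_aa}")
            changes[site] = diffs
        return changes

    # usage (hypothetical full-length aligned sequences of equal length):
    # print(site_substitutions(aligned["JB76H"], aligned["CS1647"]))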
2017-06-27T20:29:56.543Z
2012-11-14T00:00:00.000
{ "year": 2012, "sha1": "37ed30d05772f03403f74c4aa1b79146cff139e4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/1743-422x-9-268", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "97844549d778d444cba12cd274275d7984e840fd", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
216397811
pes2o/s2orc
v3-fos-license
Positron Spectroscopy of Hydrogen-Loaded Ti-6Al-4V Alloy with Different Defect Structure The defect structure of annealed cast, electron beam melted and ultrafine-grained titanium Ti–6Al–4V alloys before and after hydrogenation was studied. It has been established that before hydrogenation the predominant types of defects in electron beam melted and ultrafine-grained titanium alloy are dislocations and low-angle boundaries, respectively. The cast alloy after annealing is defect-free material. Hydrogenation from the gas phase to 1.00 ± 0.15 wt% leads to an increase of the concentration of the predominant type of defects. Moreover, vacancy complexes also presented in electron beam melted and ultrafine-grained Ti–6Al–4V alloys interact with hydrogen and form hydrogen–vacancy complexes. Introduction The presence of hydrogen in metals is an essential technical and scientific problem, since hydrogen, penetrating in structural materials, initiates the formation of various types of defects and significantly affects physical and mechanical properties of metals and alloys [1][2][3][4][5][6]. Hydrogen embrittlement leads to accelerated depletion of the constructional element resource, which is especially characteristic in cases of the development of damage accumulation processes localised near various defects in the metal structure. Due to the specifics of metal-hydrogen system monitoring, periodic control should be based on nondestructive methods that are sensitive either to changes in the physical and mechanical properties of the material or to the direct hydrogen accumulation. A comparative analysis of experimental data showed that positron spectroscopy, which is highly sensitive and allows determining not only the type and concentration of defects but also the chemical environment, is localised for monitoring the hydrogen interaction with defects and revealing the mechanisms of formation of defects of hydrogen origin. Nowadays, positron spectroscopy methods are increasingly used to control advanced structural materials [7,8]. Therefore, metals and alloys obtained using additive technologies may have a specific structure that is formed during the manufacturing process (lamellar structure, columnar grains, micro-and nanoscale precipitates of secondary phases, developed dislocation structure). At the same time, ultrafine-grained (UFG) materials obtained by methods of severe plastic deformation (SPD) are characterised by the presence of a large number of defects of various dimensions in the crystal structure, such as vacancies and their complexes, dislocations, and boundaries of different types [9][10][11]. * corresponding author; e-mail: laptevrs@tpu.ru The presence of specific defects and other inhomogeneities has a significant effect on the structure, mechanical properties, corrosion, and hydrogen resistance of metals and alloys obtained using additive methods and methods of severe plastic deformation [12][13][14]. Thus, the purpose of this work is to study the initial defect structure effect on the hydrogen-induced defects accumulation in a titanium alloy. Materials and methods For the study, samples were prepared in different microstructural states: cast material, samples produced by electron beam melting (EBM Ti-6Al-4V alloy), and samples obtained using methods of severe plastic deformation (UFG Ti-6Al-4V alloy). Samples in the cast state were annealed at a temperature of 750 • C. 
The EBM Ti-6Al-4V samples were built using an electron-beam melting 3D printer designed at Tomsk Polytechnic University [15]. The Ti-6Al-4V powder was supplied by "Normin" Ltd. (Russia); the average particle size of the powder was 70 µm. The samples were blocks with dimensions of 20×20×1 mm³. The samples were manufactured at a scanning rate of 16 mm/s and a substrate temperature of ≈ 750°C. The UFG state in cast Ti-6Al-4V alloy was obtained by comprehensive pressing with a change in the deformation axis and a gradual temperature decrease in the range of 600-580°C (ISPMS SB RAS, Tomsk) [16]. Hydrogen saturation was carried out on a Gas Reaction Controller LPB by Advanced Materials Corporation according to the Sieverts method. The source of hydrogen was a Proton HyGen 200 hydrogen generator (the purity of the generated hydrogen is 99.9995%). Hydrogenation occurred automatically at a temperature of 600°C (heating rate of 4°C/min) and a hydrogen pressure of 67 kPa in the chamber. Cooling took place in a vacuum at a rate of 1.5°C/min. The average concentration for all hydrogenated samples, determined by melting in an inert gas, was 1.00 ± 0.15 wt%. A semi-digital positron spectrometric complex was used to study the evolution of the defect structure of metals and alloys upon hydrogen saturation. This complex integrates positron annihilation lifetime spectroscopy (PALS) and coincidence analysis of the Doppler broadening of the annihilation line (CDBS). The time resolution and count rate of the PALS module were 170 ± 7 ps and 90 ± 30 counts/s, respectively. For the CDBS module, the energy resolution was 1.16 ± 0.03 keV at a count rate of 116 ± 15 counts/s. The source of positrons was the isotope ⁴⁴Ti with an activity of 0.91 MBq. For individual samples, one two-dimensional CDBS spectrum and two PALS spectra were obtained with good statistics (3 × 10⁶ counts for PALS and 4 × 10⁷ for CDBS). The analysis was carried out according to a three-component trapping model using the LT10 software package [17,18]. The contribution of the positron source was 5.9% for titanium alloys, associated with annihilation in the radioactive salt (τ1 = 305 ± 1 ps with a relative intensity of 71.7%) and the annihilation of orthopositronium (τ2 = 1779 ± 10 ps with a relative intensity of 28.3%). The CDBTools program package [19] was used for CDBS spectra processing through analysis of the S and W parameters obtained for the OX cross-section of the two-dimensional spectrum [7,20]. The structural and phase state of the obtained samples was examined using X-ray diffraction analysis and scanning and transmission electron microscopy. Results and discussion In the initial state, cast Ti-6Al-4V alloy has an inhomogeneous structure consisting of α single-phase and (α + β) two-phase regions. The single-phase regions, 10-40 µm in size, are, as a rule, surrounded by the two-phase regions. In the single-phase regions, a grain structure with dimensions of 3-5 and 7-10 µm is observed in transverse and longitudinal sections, respectively. According to X-ray diffraction analysis, the volume fraction of β phase in this state of the alloy is 5 vol.%. The internal grain structure of the EBM Ti-6Al-4V samples is represented by α-phase plates, along which β-phase layers are located. The α-phase plates form colonies similar to pearlite colonies in steel. The thickness of the α-phase plates is predominantly 9 µm, but plates with a thickness of 200 nm are also observed in the EBM alloy.
X-ray diffraction studies have shown that the volume fraction of β phase in samples of EBM Ti-6Al-4V alloy is equal to 4 vol.%. The microstructure of the UFG samples is represented by a two-phase (α + β) grain-subgrain structure with an average grain size of 0.29 µm (Fig. 1). The formation of the UFG structure is found to lead to a slight increase in the volume fraction of the β phase (up to 5 vol.%). The hydrogenation of cast and UFG titanium alloy samples to a concentration of 1.00 ± 0.15 wt% results in the appearance of Ti-Hx hydride precipitates. At the same time, according to XRD analysis, after hydrogenation to the same concentration in the EBM Ti-6Al-4V alloy, a redistribution of the intensities of the diffraction maxima is observed on the corresponding diffraction patterns. The diffraction patterns contain reflections of the α phase of titanium, the cubic δ and tetragonal γ phases of titanium hydride, and the intermetallic Ti3Al phase with an hcp lattice. The results of the PALS analysis for the experimental Ti-6Al-4V samples before and after hydrogenation are collected in Table I. In the cast alloy, there is only one short-lived component (τF = 147 ± 1 ps) associated with the annihilation of positrons in the titanium lattice [21][22][23][24][25][26]. Hydrogenation of the cast titanium Ti-6Al-4V alloy up to 1.00 ± 0.15 wt% leads to the appearance of the lifetime components τA = 166 ± 2 ps and τB = 276 ± 6 ps with intensities IA = 83% and IB = 2%. The component τA = 166 ± 2 ps is related to the annihilation of positrons trapped by dislocations in titanium [21,22]. The long-lived component τB = 276 ± 6 ps is responsible for the annihilation of positrons in complex hydrogen-vacancy clusters mV-nH (where m is the number of vacancies in the cluster and n is the number of hydrogen atoms associated with the cluster) [27]. In EBM Ti-6Al-4V alloy, there are two positron lifetime components responsible for the annihilation of positrons trapped by dislocations (τA) and tetravacancies (τB = 290 ± 5 ps). After hydrogenation, the intensity of the dislocation component increases 4.7 times, while the lifetime of the long-lived component decreases significantly (to 207 ps) with its intensity growing to 14%. Thus, the concentration of dislocations increases and hydrogen-vacancy complexes (V-1H) are formed upon hydrogenation of EBM Ti-6Al-4V alloy [27]. In UFG Ti-6Al-4V alloy, an intense (71%) component is observed with a lifetime of 171 ± 2 ps, which can be associated with the annihilation of positrons trapped by dislocations or low-angle boundaries [13]. In this case, CDBS data are used to determine the prevailing type of defects. The S-W plot obtained from the OX cross-section of the two-dimensional spectrum for the experimental Ti-6Al-4V samples is presented in Fig. 2. CDBS results for cast Ti-6Al-4V samples after cold rolling to various deformation degrees are added to the plot for comparison. The predominant type of defect after cold rolling is dislocations, so this type of defect is also prevalent for all collinear experimental values in the S-W plot. Thus, the predominant positron trapping center in UFG Ti-6Al-4V alloy before and after hydrogenation is low-angle boundaries. These samples are also characterized by the presence of a long-lived component with a lifetime of ∼290 ps, which is associated with the annihilation of positrons trapped by vacancy or hydrogen-vacancy complexes. Conclusion This article aimed to study the effect of the initial defect structure on defect accumulation in the titanium Ti-6Al-4V alloy.
The series of measurements was carried out for cast, electron-beam-melted, and ultrafine-grained Ti-6Al-4V alloys before and after hydrogenation. The samples are found to differ significantly in their initial defect structure. The annealed cast alloy is a defect-free material. The EBM Ti-6Al-4V alloy is characterized by an increased concentration of dislocations and vacancy complexes, while low-angle boundaries and vacancy complexes are the main types of defects in the UFG Ti-6Al-4V alloy. After hydrogenation, an increase in the concentration of dislocation defects was discovered for all samples. In addition, for the cast and EBM alloys, growth of the long-lived component intensities was observed, which indicates the formation of hydrogen-vacancy defects.
Welfare Risks of Repeated Application of On-Farm Killing Methods for Poultry

Simple Summary

During poultry production, some birds are killed humanely on farm, usually because they are ill or injured. Recent European Union (EU) legislation has restricted the number of birds that can be killed by manual neck dislocation to 70 birds per person per day. We examined whether this limit is meaningful by investigating the effects of repeated application of two methods of killing (neck dislocation and a percussive method, the Cash Poultry Killer). Twelve male stockworkers each killed 100 birds (broilers, laying hens, or turkeys) at a fixed rate with each method. Both methods were highly successful, and reflex and behaviour measures confirmed they caused rapid loss of brain function. Importantly, there was no evidence of reduced performance with time/bird number up to 100 birds with either method. The Cash Poultry Killer caused a more rapid death, but it was prone to technical difficulties with repeated use. Neck dislocation has the important advantage that it can be performed immediately with no equipment, which may make it preferable in some situations. We present the first evidence that, at the killing rates tested, there was no evidence to justify the current EU number limit for performance of neck dislocation to kill poultry on farm.

Abstract

Council Regulation (EC) no. 1099/2009 on the protection of animals at the time of killing restricts the use of manual cervical dislocation in poultry on farms in the European Union (EU) to birds weighing up to 3 kg and 70 birds per person per day. However, few studies have examined whether repeated application of manual cervical dislocation has welfare implications and whether these are dependent on individual operator skill or susceptibility to fatigue. We investigated the effects of repeated application (100 birds at a fixed killing rate of 1 bird per 2 min) and multiple operators on two methods of killing of broilers, laying hens, and turkeys in commercial settings. We compared the efficacy and welfare impact of repeated application of cervical dislocation and a percussive killer (Cash Poultry Killer, CPK), using 12 male stockworkers on three farms (one farm per bird type). Both methods achieved over 96% kill success at the first attempt. The killing methods were equally effective for each bird type and there was no evidence of reduced performance with time and/or bird number. Both methods of killing caused a rapid loss of reflexes, indicating loss of brain function. There was more variation in reflex durations and post-mortem damage in birds killed by cervical dislocation than that found using CPK. High neck dislocation was associated with improved kill success and more rapid loss of reflexes. The CPK caused damage to multiple brain areas with little variation. Overall, the CPK was associated with faster abolition of reflexes, with fewer birds exhibiting them at all, suggestive of better welfare outcomes. However, technical difficulties with the CPK highlighted the advantages of cervical dislocation, which can be performed immediately with no equipment. At the killing rates tested, we did not find evidence to justify the current EU limit on the number of birds that one operator can kill on-farm by manual cervical dislocation.
Introduction

Poultry may need to be dispatched on farm for multiple reasons such as culling of individual injured and sick birds, and for stock management. The methods used to cull small numbers of birds on farm are different than those for slaughter [1][2][3] or emergency on-farm killing for disease control [4,5], relying heavily on manual cervical dislocation (so-called 'necking' by hand) [6].
Manual cervical dislocation has been a source of welfare concern for poultry [7][8][9] and other species [10,11], and as of January 2013, the use of this method to kill poultry on-farm has been restricted to birds weighing a maximum of 3 kg and 70 birds per person per day through European Union (EU) legislation, Council Regulation (EC) no. 1099/2009 on the protection of animals at the time of killing [12]. These restrictions were imposed primarily in response to concern that animals may be conscious for a significant period post-application of cervical dislocation [7,13,14], since the method is thought to kill birds primarily by cerebral ischemia which is not instantaneous [7,9,13,14]. However, recent work suggests that manual cervical dislocation may be more humane than previously thought [9,[15][16][17], and it may be that the results of earlier studies did not correctly reflect the efficacy of manual cervical dislocation when done correctly or instead assessed mechanical cervical dislocation (i.e., use of a tool for dislocation) [18]. It has also been noted that there is variability in the application of any type of cervical dislocation method (manual or mechanically-assisted) by different operators (e.g., stockworkers, veterinarians, trained slaughtermen) for different poultry species [6,17,19], making generalizations problematic. The uptake of killing methods for small numbers of poultry depends on their practicality, cost, and availability for rapid deployment, because the majority of emergency killing needs to be done immediately and on farm sites. Manual cervical dislocation is attractive because it requires no equipment and can be performed anywhere. Although mechanically-assisted cervical dislocation shares many of these advantages, it does require a tool or aid (e.g., killing cone, pliers [6,18]). Alternative killing methods involve destruction of the brain primarily via application of captive bolt or percussion, and several of these methods have been developed over the last ten years (e.g., Cash Poultry Killer (CPK), Turkey Euthanasia Device) [7,14,18,[20][21][22][23]. Several studies have concluded that the captive bolt renders birds unconscious immediately or rapidly post application, inferring that this is a humane method of killing [7,24,25]. The welfare implications of both percussive and cervical dislocation techniques are reliant on skilful application by a human operator, with human error potentially having major welfare consequences. Variation in application of cervical dislocation (manual or mechanically-assisted) has been previously documented [17] and demonstrates a lack of standardization in training in the poultry industry. However, there is very little research investigating the welfare implications of repeated application of on-farm killing methods. One study investigated the performance of manual cervical dislocation in up to 60 birds in succession, and showed that while efficacy did not change over time, there was substantial variation between stockworkers [17]. There is no such study for captive bolt methods for poultry. Therefore, the evidence basis for legal restrictions on manual cervical dislocation is unclear, and there is a need to examine the effects of repeated performance (i.e., risk of fatigue affecting welfare outcomes) up to and exceeding this limit for both cervical dislocation and percussive methods. 
The aim of this study was to assess the effects of repeated application (up to 100 birds) and multiple operators on the killing efficacy and welfare impact of two commercially relevant killing methods: the CPK [18,23] and cervical dislocation. Two forms of cervical dislocation were assessed: manual (broilers and layers) and mechanically-assisted (turkeys).

Materials and Methods

A total of 2400 female birds were used: 800 turkey hens (25 weeks old, Kelly's Bronze, Meleagris gallopavo), 800 layer hens (76 weeks old, HyLine Brown, Gallus gallus domesticus), and 800 broilers (30-32 days old, Ross 308, Gallus gallus domesticus). The study was conducted in collaboration with three commercial poultry producers, on their farms. All birds sourced for the experiment were either destined to go to slaughter (broilers and turkeys) or were being killed at the end of their productive life (end-of-lay hens). The birds were reared and housed commercially until killing occurred. The layer hens were housed in enriched colony cages (Tecno Cages®, Tecno Poultry Equipment Spa, EU) with 80 birds per colony. The broilers and turkeys were floor housed in a clear-span shed with deep litter (wood shavings). All birds had ad libitum access to food and water. The experiment was performed under the United Kingdom's (UK's) Animal Scientific Procedures Act, and as part of this, underwent review and approval by SRUC's AWERB (AU AE 49-2012, 22 November 2012).

The experiment was a balanced factorial design: four stockworkers × two kill methods × 100 birds × three bird types, in order to assess the repeated performance of cervical dislocation and CPK for operator consistency of method application (e.g., potential fatigue). Two types of cervical dislocation were used: manual cervical dislocation for broilers and layer hens and mechanically-assisted cervical dislocation for turkeys, which mimicked commercial practice and conformed to the EU Directive (EU 1099/2009). The manual cervical dislocation used was dependent on the stockworker's previous training and standard operating procedures at each farm, and did not always follow the Humane Slaughter Association's (HSA's) guidelines [18]. Variation in the techniques related to how the bird's head was held in the operator's palm [17]. Manual cervical dislocation was performed in one swift movement with the operator pulling down on the bird's head, stretching the neck, while rotating the bird's head upwards into the back of the neck, as detailed in [16][17][18]. Mechanically-assisted cervical dislocation involved restraining and inverting the bird within a cone. The stockworker held the bird's head in one hand and placed a nylon cord loop (~60 cm) over and behind the head at the cranial/vertebral (C0/C1) junction on the neck. Holding the head firmly in place, the stockworker placed his foot through the lower part of the loop situated beneath the bird and forcefully and quickly pressed his foot to the floor, causing a rapid dislocation of the neck (Figure 1).

Figure 1. Mechanically-assisted cervical dislocation for turkeys. The bird is restrained in a cone (inverted position). A nylon cord is looped (600 mm) over and behind the head at the cranial/vertebral (C0/C1) junction. Holding the head firmly in place (by hand), the stockworker places his foot through the lower part of the loop situated beneath the bird (off the ground) and forcefully and quickly presses his foot to the floor, causing a rapid dislocation of the neck.
The second killing method was the Accles and Shelvoke Cash Poultry Killer .22 CPK 200, (using a 1 grain (65 mg) gunpowder cartridge), a non-penetrating captive bolt percussive device. To apply the CPK, the bird's body was held in sternal recumbency by an assistant stockworker, with the bird's head hanging freely. The beak tip was held by the stockworker applying the CPK, ensuring that free space was available for the head to swing away. The CPK was aimed at the top of the head, with the lower curved edge of the cowl midway between the eyes, with the device slightly angled towards the tail (following instructions in the manufacturer's [23] and Humane Slaughter Association (HSA) guidelines [18]). Twelve male stockworkers (four per farm) volunteered for the trial and were pre-screened for experience in performing cervical dislocation and were deemed competent by their respective managers. All stockworkers completed a full training day in the CPK, provided and audited by the HSA in November 2012, and all were deemed competent by the HSA representative. Biometric data for each stockworker was recorded on site and included: age (years), height (m), weight (kg), arm length (cm), and hand span (cm). The stockworkers were also asked to provide a rough estimate of how many years of experience they had in performing cervical dislocation as an emergency killing method. On each farm, 100 birds were killed with each method (cervical dislocation and CPK) per stockworker. For commercial logistical reasons, the culling sessions for all four stockworkers per farm were done over a four day period. Each stockworker performed one killing method per day. The kill method order, day, time of day, and the stockman's role (assist or kill) was allocated and balanced according to a Latin square design framework within each farm. On each farm, the four stockworkers were paired, where one acted as the assistant to the other (handling/catching birds), and the other conducted the killing (100 birds per session). After the completion of one session, the stockworkers swapped roles within their pairs, with a break between sessions. Within each kill session, the stockworkers were not permitted to have a break, however, a predetermined killing rate of one bird per 2 min was set in order to provide enough time for the recording of reflex and post-mortem measures for every 10th bird, to prevent kill rate becoming a confounding factor between individuals and to allow comparisons between stockworkers to be made accurately across farm and bird types. Fatigue could not be measured directly within stockworker, but was implied indirectly by kill sequence (1-100) per stockworker per session. All birds were weighed and identified with a numbered leg tag immediately prior to killing. Each killing session took approximately 3.5 h to complete without any breaks. Death was confirmed in every bird by an experienced poultry technician immediately post application of a method by assessment of two parameters: (1) cessation of rhythmic breathing and (2) absence of pupillary reflex [4,26]. The same technician recorded the number of kill attempts (e.g., multiple shots or cervical dislocation pulls) and assessed and recorded kill efficacy, which was defined as a single kill attempt resulting in rapid death and no emergency intervention required. If any bird did not display signs of death rapidly post-application, they were immediately emergency euthanized with a back-up CPK device operated by the poultry technician. 
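The allocation of kill method order, day, time of day, and stockworker role described above was balanced within each farm using a Latin square framework. The sketch below is only a toy illustration of that idea, not the allocation actually used in the study: it builds a cyclic 4 × 4 Latin square so that each of four (hypothetically labelled) stockworkers meets each session type exactly once, with no session type repeated within a row or column.

```python
# Illustrative only: a cyclic 4x4 Latin square of the kind used to balance
# session order across stockworkers. The worker and session labels are
# hypothetical; the paper does not reproduce its actual allocation table.

stockworkers = ["W1", "W2", "W3", "W4"]
sessions = ["CPK-kill", "CPK-assist", "CD-kill", "CD-assist"]

def latin_square(items):
    """Row i is the item list rotated by i positions, giving a Latin square."""
    n = len(items)
    return [[items[(i + j) % n] for j in range(n)] for i in range(n)]

for worker, row in zip(stockworkers, latin_square(sessions)):
    print(worker, row)
```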
A gross post-mortem examination was performed on every bird immediately after the application of the killing method and confirmation of death by a trained poultry technician. General binary yes/no post-mortem measures were recorded for all birds: skin broken, external blood loss, and subcutaneous hematoma. Kill-method-specific post-mortem measures were obtained with regard to damage to specific anatomical areas related to the target areas of the approaches. For the cervical dislocation methods, four post-mortem parameters were recorded: cervical dislocation confirmation (y/n), the level of vertebral dislocation (e.g., C0-C1), spinal cord severed (y/n), and the number of carotid arteries severed (0, 1, or 2). For the CPK method, seven gross post-mortem parameters were recorded: skull fracture (y/n), skull fracture location [16], and damage (y/n) to the forebrain (left and right), midbrain (left and right), and cerebellum [15][16][17]. To establish gross anatomical damage, the brain was excised so that each brain region could be accurately visualized. Any birds which underwent emergency euthanasia as a result of a failed kill were excluded from post-mortem data since their anatomical damage was confounded by the emergency method of killing. Based on the post-mortem measures for birds which were successfully killed (as defined above), if the optimal anatomical damage was achieved (e.g., cervical dislocation = C0-C1, both carotid arteries and spinal cord severed, and no skin broken; CPK = profuse damage to >1 brain region), the kill was classified as a 'method success'.

For the first and every tenth bird within a session (a total of 11 birds), reflex and behaviour latencies were recorded immediately post method application. Three cranial reflexes (pupillary [27], nictitating membrane [7,28], and rhythmic breathing [7,29]) and four relevant involuntary behaviours (presence of jaw tone [4,26], cloacal movement [4], and clonic wing flapping and leg paddling [4,7,26]) were assessed as present or absent at 15 s intervals post killing treatment application until an uninterrupted 30 s of absence of all behaviours and reflexes was observed. Descriptions of the reflexes and behaviours and methods of assessment have been validated in previous studies [7,16,26]. Measures were recorded in a predetermined order for each observer, using the 1-0 sampling technique [30]: if a reflex/behaviour was present at any point during a 15 s interval, it was defined as present for the entire interval, providing a conservative measure of reflex/behaviour duration post killing treatment application. This interval count was placed on the time scale by taking counts 0, 1, 2, 3, ... to correspond to 3, 10, 20, 35, ... s, the midpoint of each reflex interval. No observation was conducted prior to 3 s post method application because reflex measurement could not be undertaken before this point for logistical reasons (e.g., bird hand-over). If a reflex or behaviour could not be recorded, for example, if the pupillary reflex was concealed due to damage to the eye, the data were recorded as missing.

Statistical Analysis

Data were collected at the bird level and stockworker level, summarized in Microsoft Excel (2010) spreadsheets, and analysed using Genstat (16th Edition). Statistical significance was set at a threshold of 5% probability based on F tests. Summary graphs and statistics were produced at the stockworker level. For all models, the random effects included the stockworker.
All fixed effects were treated as factors and classed as categorical variables. Generalized Linear Mixed Models (GLMMs) using a logit link function and binomially distributed errors, due to the nature of the binary data, were used to statistically compare kill efficacy and post-mortem parameters across stockworkers. Dispersion was fixed depending on the variable. Random effects included in the models were day of kill (n = 12), kill session (n = 24), and stockworker (n = 12). In the maximal models, fixed effects included killing method, bird type, bird order, session, and all their interactions. Co-variates included bird weight and the number of years of stockworker experience.

For the subset of birds subject to reflex measurements (n = 264 birds), the presence/absence of each reflex and behaviour was summarized into interval counts (e.g., present in 0-15 s = 1 count); the data were therefore summarized into means of the maximum interval counts at the bird level for each reflex, which were then converted back into the time dimension(s). GLMMs with a logit-link function and Poisson distributed errors were fitted to the interval counts. Overall statistical comparisons across the killing treatments were conducted. Random effects included in the models were day of kill, kill session, and stockworker. In the maximal models, fixed effects included killing method, bird type, bird order, and interactions between major factors. Co-variates included bird weight and stockworker experience. For both reflex data (264 birds) and post-mortem data (all birds), GLMMs were used to compare consistency across stockworkers. Model set-up was as described above for both types of measure, but stockworker was no longer included as a random effect and was instead listed as a fixed effect.

A total of 2400 birds were killed within the trial. While every attempt was made to keep the kill rate within each session at two minutes per bird, due to unavoidable technical issues (e.g., CPK jamming or multiple kill attempts required) the rate was not always consistent (Table 1), and in some cases the kill rate within a session was slightly altered in order to compensate and maintain the total session time at 3.5 h. The longest delays were seen with the CPK on the layer farm due to technical issues with the killing device.

Killing Performance

Both the CPK and cervical dislocation were highly successful, achieving over 96% kill success on the first attempt, with rapid death confirmation post application (CPK = 99.1% (1200 birds); cervical dislocation (manual + mechanically-assisted) = 97.3% (1200 birds), including manual = 96.9% (800 birds) and mechanically-assisted = 98.3% (400 birds)). There was no difference between the CPK and the combined cervical dislocation methods in terms of kill success, or within cervical dislocation methods. There was no interaction between killing method and bird type. There was a significant interaction between kill success and kill sequence (p = 0.023), in that success improved as time went on within a session, with the CPK improving more than the cervical dislocation methods (mean kill success rates for CPK: 98.4% (birds 1-20) to 99.6% (birds 80-100); cervical dislocation: 97.6% (birds 1-20) and 97.3% (birds 80-100)). There was no interaction between kill sequence and species. The main cause of kill success failures was multiple attempts (e.g., double pulls in cervical dislocation [17] or misfires in the CPK).
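As a minimal illustration of the kill-efficacy comparison set up in the Statistical Analysis section above, the sketch below fits a fixed-effects-only binomial model with a logit link to a small, entirely hypothetical bird-level data set. The actual analysis used GLMMs in Genstat with day of kill, kill session, and stockworker as random effects, plus co-variates, none of which are reproduced here; the column names and values are invented for the example.

```python
# Simplified, fixed-effects-only sketch of a binomial (logit-link) model for
# first-attempt kill success. Hypothetical data; the study itself fitted GLMMs
# in Genstat with day, session, and stockworker as random effects.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "success":   [1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1],   # 1 = killed at first attempt
    "method":    ["CPK", "CPK", "CPK", "CD", "CD", "CD",
                  "CPK", "CD", "CPK", "CD", "CPK", "CD"],
    "bird_type": ["broiler", "layer", "turkey", "broiler", "layer", "turkey",
                  "broiler", "layer", "turkey", "broiler", "layer", "turkey"],
})

model = smf.glm("success ~ C(method) + C(bird_type)",
                data=df, family=sm.families.Binomial())
print(model.fit().summary())
```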
The maximum number of attempts recorded was three, which occurred with the cervical dislocation methods for layers and turkeys (overall cervical dislocation mean (±SE) = 1.0 ± 0.2). For the CPK method, the maximum number of attempts recorded in all bird types was two (overall CPK mean (±SE) = 1.0 ± 0.1). There was no significant difference in the number of kill attempts between methods. There was no evidence that weight or stockworker had a significant effect on kill success; however, this may have been limited by the low number of unsuccessful kills available for comparison. In all killing treatments, method success was numerically lower than kill success; however, a minimal difference was seen with the CPK, suggesting that when it was successfully applied, it produced optimal damage to the bird (Figure 2). There was an interaction between kill method and kill sequence (p = 0.040), again suggesting that method success improved over time, but there was no effect of species or weight on method success.

Figure 2. Bar chart providing a comparison between the percentage of birds successfully killed (kill success = single kill attempt resulting in rapid death) and the percentage of birds where the method application was optimal (method success = single kill attempt resulting in rapid death and optimal anatomical damage) for each kill method (CPK (n = 1200 birds), cervical dislocation (manual + mechanically-assisted) (n = 1200 birds), manual cervical dislocation alone (n = 800 birds), and mechanically-assisted cervical dislocation alone (n = 400 birds)). Note that the Y axis scale starts at 95%.

Post-Mortem Parameters for Cervical Dislocation

For birds that were successfully killed by cervical dislocation (97.3%), the majority received a C0-C1 dislocation (53.3%), did not have the neck skin torn (97%), and only 24.2% of birds had one or more of the carotid arteries severed. Dislocation level had a significant effect on kill success (p = 0.042) and method success (p < 0.0001), with both kill and method success being more likely with a higher dislocation (kill success (y): C0-C1 = 53.3%, C1-C2 = 29.6%, C2-C3 = 16.3%, >C3-C4 = 0.8%; method success (y): C0-C1 = 53.7%, C1-C2 = 29.5%, C2-C3 = 16.3%, >C3-C4 = 0.5%). There was also a significant interaction between dislocation level and species (p < 0.0001), with 83.5% of turkeys receiving a C0-C1 dislocation, while only 52-65% of broilers and layers received the same. However, it is interesting to note that no broilers received a dislocation lower than C2-C3, while the lowest recorded dislocation for hens and turkeys was C4-C5 (Figure 3). Body weight had no effect on dislocation level or the number of carotids severed. Whether one or more carotid arteries were severed was affected by bird type (p = 0.036), with artery severance more likely in turkeys (mean (±SE) = 1.2 ± 0.1 carotid arteries) compared to layer hens (0.1 ± 0.1); however, bird type was confounded with cervical dislocation method. Broilers were removed from this analysis as no carotids were ever severed, although stretch damage was noted. Stockworker had an effect on the mean level of dislocation achieved (p = 0.002), with stockworkers B and D both on average achieving a C0-C1 in >96% of their birds, while the joint worst performing stockworkers (E and G) achieved a C0-C1 in less than 20% of their birds. However, the stockworker effect was confounded with bird type: both stockworkers B and D were from the turkey farm, while stockworkers E and G were from the broiler farm.
Kill sequence had no effect on dislocation level, carotid artery severance, or skin damage.

Post-Mortem Parameters for the Captive Bolt

For birds that were successfully killed by the CPK (99.1%), over 90% received damage to all regions of the brain assessed (forebrain (left (97.7%) and right (90.2%)), midbrain (left (99.2%) and right (99.2%)), and cerebellum (97%)). Statistical analysis could not be carried out on the brain damage data because of its lack of variation and the low number of unsuccessful kills for comparison.

Reflex and Behaviour Parameters

Of the birds whose reflexes were measured, no birds showed rhythmic breathing post successful method application for any method. In general, birds killed with the CPK displayed fewer reflexes post successful method application than those killed by cervical dislocation (Table 2), and the data also suggest that mechanically-assisted cervical dislocation was better than the manual method at rapidly eliminating jaw tone, pupillary, and nictitating membrane reflexes (Figure 4). The CPK had the shortest mean times for all reflexes (significant only for nictitating membrane (p < 0.001); the pupillary reflex could not be modelled as too few birds killed with the CPK showed this reflex). Jaw tone was abolished with all methods in a mean time of <6 s after method application, with cervical dislocation taking 1.7 s longer than the CPK and showing more between-bird variation. There was no effect of kill sequence on any behaviour or reflex parameter.

Table 2. Proportion (%) of birds displaying reflexes/behaviours post successful method application at any time for both cervical dislocation methods (manual cervical dislocation and mechanically-assisted cervical dislocation) and the captive bolt (CPK).

The mean durations of reflexes and behaviours post killing method application for cervical dislocation (Table 3) and the CPK (Table 4) are reported. With cervical dislocation, the time to last observation of the pupillary reflex (p < 0.001) and nictitating membrane reflex (p < 0.001) was reduced if one or more carotid arteries were severed. Dislocation level was also associated with the time to loss of jaw tone (p = 0.009), pupillary (p = 0.003), and nictitating membrane (p < 0.001) reflexes, with higher dislocations associated with more rapid loss of each reflex. Statistical analysis could not be carried out on reflexes in relation to brain damage data from CPK use because of its sparseness, lack of variation, or because reflexes were not seen.

Table 3. Mean (±SD) duration (s) of reflexes and behaviours post cervical dislocation (manual + mechanically-assisted) killing method application in relation to achieved (Y/N) post-mortem parameters.

Discussion

Practical evaluations of on-farm killing methods in commercial environments are vital to accurately assess performance across multiple operators and in continuous use. In this study, both the CPK and cervical dislocation were highly successful, with no evidence that one approach was significantly better than the other in all three bird types. The main cause of kill success failures was multiple attempts (e.g., double pulls in cervical dislocation methods [17] or misfires in the CPK). Our standardization of kill rate and total kill session duration allowed all parameters to be measured (e.g., reflex and post-mortem measures), but also prevented a potential confound of some methods being faster to perform, and potential fatigue effects.
Thus, the experiment investigated the effects of repeated and sequential use of each method, which, while incorporating an element of fatigue, did not reflect maximal commercial killing rates with the methods. Nevertheless, there was no evidence of a reduction in kill success or method success with sequential use. In fact, the opposite was observed with some evidence of improved kill success with greater sequential use of the CPK, possibly reflecting within-session improvement with practice. This was not observed for cervical dislocation. However, the kill fail rate across all methods was very small (43/2400) and the n varied across killing methods, so this significant relationship shows only marginal evidence for this effect. The lack of improvement or reduction in performance for cervical dislocation could be attributed to the substantial experience of the stockworkers, limiting the scope for improvement, or likelihood of poor performance. There was also no evidence that sequential killing of certain species or bird weights were associated with decreased kill success or method success. The fixed killing rate employed may have masked fatigue by allowing recovery between kills, and some delays caused by technical issues may have also had an influence. However, it could be argued that the technical issues experienced would be at least as likely to occur with commercially-relevant killing rates. The CPK was the most prone to technical issues, which resulted in the longest delays. Twice, the CPK jammed due to a build-up of carbon deposits; twice, delays were caused by mis-fires or, in one case, partial explosion of the cartridge. On three occasions, the CPK had to be swapped with the spare in order to continue the experiment (after 71, 65, and 12 consecutive shots). The CPKs were cleaned at the end of every day (i.e., after 100 shots) by a trained individual, however, this did not prevent these issues from occurring repeatedly. It was noted that the recuperating sleeves which return the bolt to its initial firing position suffered higher than expected degradation after approximately 600 shots, and this may have contributed to failures. It is also essential to retrieve all fired cartridges, which can be difficult and time consuming, particularly in commercial settings on deep-litter floors. Another practical issue with the CPK is the requirement for the bird to be restrained, in this case, it was by another stockworker, however, in a commercial setting, this is unlikely and instead a killing cone or a rope would be used to hang and shackle the bird by the legs. This highlights that it is a potentially less convenient method than manual cervical dislocation, which requires no additional tools or support to apply. Although the mechanically-assisted cervical dislocation was a form of cervical dislocation, care should be taken (because of the techniques involved) when comparing the results from previous studies or grouping it with manual cervical dislocation. For this reason, we have reported the data with the two methods combined and separated. The primary issue identified with the dislocation methods is the variation in application between stockworkers, as seen in post-mortem measures of dislocation level which in turn was related to reflex responses. Variation in manual cervical dislocation performance across multiple operators has been observed previously [17]. This emphasizes the importance of training and its standardization in the poultry industry. 
In this study, kill success did not vary by bird type, although the stockworker data from the turkey farm were confounded with a different cervical dislocation method. Previous studies [16,17] comparing manual and a form of mechanically-assisted cervical dislocation showed the same higher success and reliability for manual cervical dislocation in broiler stockworkers compared to layer hen stockworkers, perhaps because broiler stockworkers more regularly kill birds with the method. It is also important to note that the exact manual technique used varied across the broiler and layer stockworkers, as documented in Martin et al. [17].

The majority (53.3%) of birds which were successfully killed by cervical dislocation methods received a C0-C1 dislocation. This focuses the anatomical damage on the top of the spinal cord and possibly the base of the brain stem. Damage to this area is associated with spinal cord concussion, neurogenic shock, and loss of consciousness [31][32][33][34][35]. A majority of C0-C1 dislocations has also been reported previously [15][16][17], suggesting that overall, cervical dislocation is consistent. Interestingly, mechanically-assisted cervical dislocation was the most successful method in severing carotid arteries, suggesting that the force produced and the resultant stretch from using the operator's leg were greater than with the use of the arm; however, the method was confounded with bird type (turkeys).

Continued use of cervical dislocation methods did not change the anatomical trauma induced in the birds (e.g., dislocation level, carotid artery severance, and skin damage), suggesting that the EU legislation, which restricts the use of manual cervical dislocation to 70 birds per day [12], is highly conservative. This result was also seen with repeated application of the CPK, where continued use in 100 birds was not an issue in terms of welfare consequences. Unlike in previous studies, where heavier birds were more difficult to dislocate at C0-C1 compared to lighter birds, in this study cervical dislocation performance (kill success and trauma) was not affected by bird weight [16,17]. Bird weight had no effect on the performance of the CPK.

As with trauma, abolition of reflexes and durations of behaviours were not affected by repeated application of the methods, suggesting that for both methods, killing up to 100 birds resulted in reliable and rapid kills (at least at the rates used). Overall, the CPK was associated with better welfare outcomes, with shorter durations of reflex and behaviour persistence, and fewer birds exhibiting them. Similar results have been reported previously, showing captive bolt methods to be humane when applied successfully [7,14,24,25]. However, despite its welfare credentials, any long delays (e.g., due to technical problems) meant birds were restrained for excessive periods or even experienced a partial shot, which compromised welfare. It is also worth noting that, while the CPK produced a faster kill, the availability of a working and loaded device may be a barrier to its use on farm. Manual cervical dislocation may be employed without delay to end the life of a suffering bird (under 3 kg) that has been discovered by a stockworker, a benefit which may negate marginal reductions in time to loss of consciousness.

Conclusions

Both of the methods tested (CPK and cervical dislocation) are highly successful and acceptable on-farm culling methods for broilers, layers, and turkeys.
Reflex and behaviour measures showed that both methods caused rapid loss of brain function; however, there was more variation in birds killed by neck dislocation than by the CPK. High neck dislocation was associated with improved kill success and more rapid loss of reflexes compared to lower-level dislocations. The CPK caused damage to multiple brain areas with little variation; post-mortem measures for neck dislocation showed greater variation as a result of the stockworker applying the technique. There was no evidence of a negative effect of sequential application of the methods on killing efficacy or welfare impact in this experiment, up to 100 birds at a killing rate of one bird per 2 min. However, it is important to note that technical difficulties with the CPK resulted in excessive delays, which could have compromised bird welfare in a realistic commercial setting.
3D Numerical Study of Metastatic Tumor Blood Perfusion and Interstitial Fluid Flow Based on Microvasculature Response to Inhibitory Effect of Angiostatin

Metastatic tumor blood perfusion and interstitial fluid transport based on 3D microvasculature response to the inhibitory effect of angiostatin are investigated. 3D blood flow, interstitial fluid transport, and transvascular flow are described by the extended Poiseuille's, Darcy's, and Starling's law, respectively. The simulation results demonstrate that angiostatin has the capacity to regulate and inhibit the formation of new blood vessels and has an obvious impact on the morphology, growth rate, and branching of the microvascular network inside and outside the metastatic tumor. Heterogeneous blood perfusion, widespread interstitial hypertension, and low convection within the metastatic tumor are obviously improved under the inhibitory effect of angiostatin, which agrees well with the experimental observations. These improvements can also result in more efficient drug delivery and penetration into the metastatic tumor. The simulation results may provide beneficial information and theoretical models for clinical research on antiangiogenic therapy strategies.

Introduction

Cancer is the second leading cause of mortality worldwide [1], right behind cardiovascular disease. Metastatic tumors, the ultimate causes of death for the majority of cancer patients, are an important biological characteristic of malignant tumors. Metastasis occurs when cancer cells spread from a primary tumor to distant and vital organs (secondary sites) in the human body. Angiogenesis is necessary for tumor growth, invasion, and metastasis [2], since it supplies the nutrients and oxygen for continued tumor growth. The neovascularization accelerates the growth of the tumor while simultaneously offering an initial route by which cancer cells can escape from a primary tumor to form a metastatic tumor. Cancer cells migrate into the blood stream and surrounding tissues via microcirculation, then continue to grow, giving rise to metastases [3]. Blood perfusion and interstitial fluid flow have been recognized as critical elements in metastatic tumor growth and vascularization [4]. However, tumor vessels are dilated, saccular, tortuous, and heterogeneous in their spatial distribution. These abnormalities result in heterogeneity of blood flow and elevated interstitial fluid pressure (IFP), which forms a physiological barrier to the delivery of therapeutic agents to tumors [5]. Abnormal microvasculature and microenvironment further lower the effectiveness of therapeutic agents.

Experimental research showed that the primary tumor in the Lewis lung model system was capable of generating a factor, later named angiostatin, that suppresses the neovascularization and expansion of tumor metastases [6]. Angiostatin is a 38-kD internal peptide of plasminogen, which is a potent inhibitor of angiogenesis in vivo, and selectively inhibits endothelial cell (EC) proliferation and migration in vitro. Tumor cells express enzymatic activity which is capable of hydrolyzing plasminogen to generate angiostatin [7]. Angiostatin is then transported and accumulated in the blood circulation in excess of the stimulators, thus inhibiting angiogenesis of a metastatic tumor.
Angiostatin, by virtue of its longer half-life in the circulation [8], reaches the vascular bed of metastatic tumor. As a result, growth of a metastasis is restricted by preventing and inhibiting angiogenesis within the vascular bed of the metastasis itself. A schematic diagram of this process is given in Figure 1. Indeed, antiangiogenic treatments directly targeting angiogenic signaling pathways as well as indirectly modulating angiogenesis show normalization of tumor microvasculature and microenvironment at least transiently in both preclinical and clinical settings. In spite of several mathematical models of metastatic tumors, there appears to be little in the literature by way of mathematical modeling of the mechanisms of antiangiogenic activity of angiostatin on blood flow and interstitial fluid pressure in a metastatic tumor. Liotta et al. [9] first developed an experimental model to quantify some of the major processes initiated by tumor transplantation and culminating in pulmonary metastases. The study suggested that "dynamics of hematogenously initiated metastases depended strongly on the entry rate of tumor cell clumps into the circulation, which in turn was intimately linked to tumor vascularization." Later in the study, Liotta et al. [9] confirmed their former observation and raised the idea that "larger clumps produce significantly more metastatic foci than do smaller clumps matched for the number of cells." Saidel et al. [10] proposed a lumped-parameter, deterministic model of the hematogenous metastatic process from a solid tumor, which provided a general theoretical framework for analysis and simulation. Numerical solutions of the model were in good agreement with their experimental results [9]. The possibilities of anti-invasion and antimetastatic strategies in cancer treatment have bestowed an added preponderance with the keen interest in the mathematical modeling in the areas of tumor invasion and metastasis. Orme and Chaplain [11] presented a simple mathematical model of the vascularization and subsequent growth of a solid spherical tumor and gave a possible explanation for tumor metastasis, whereby tumor cells entered the blood system and secondary tumor may rise with the transportation function of blood. Sleeman and Nimmo [12] modified the model of fluid transport in vascularized tumors by Baxter and Jain [13] to take tumor invasion and metastasis into consideration. Although these models did provide some features of tumor metastasis and interstitial fluid transportation such as perturbation analysis, they lacked in providing more detail information of metastatic tumor and as such were of limited predicted value. More realistic models of metastasis and interstitial fluid transportation were developed to better understand its mechanism. Anderson et al. [14] presented a discrete model from the partial differential equations of the continuum models which implied that haptotaxis was important for tumor metastasis. Iwata et al. [15] proposed a partial differential equation (PDE) that described the metastatic evolution of an untreated tumor, and its predicted results agreed well with successive data of a clinically observed metastatic tumor. Benzekry et al. [16] proposed an organism-scale model for the development of a population of secondary tumors that takes into account systemic inhibiting interactions among tumors due to the release of a circulating angiogenesis inhibitor. Baratchart et al. 
[17] derived a mathematical model of spatial tumor growth compared with experimental data and suggested that the dynamics of metastasis relied on spatial interactions between metastatic lesions. Stéphanou et al. [18] investigated chemotherapy treatment efficiency by performing a Newtonian fluid flow simulation based on a study of vascular networks generated from a mathematical model of tumor angiogenesis. Wu et al. [19] extended the mathematical model into a 3D case to investigate tumor blood perfusion and interstitial fluid movements originating from tumor-induced angiogenesis. Soltani and Chen [20] first studied the fluid flow in a tumor-induced capillary network and the interstitial fluid flow in normal and tumor tissues. The model provided a more realistic prediction of the interstitial fluid flow pattern in solid tumors than the previous models. Some related work has been done on tumor-induced angiogenesis, blood perfusion, and interstitial fluid flow in the tumor microenvironment by using 2D mathematical methods [5,21-23].

In spite of the valuable body of work performed in simulation of blood perfusion, interstitial fluid flow, and metastasis, previous studies have not examined blood perfusion and interstitial fluid pressure in the metastatic tumor microcirculation based on the 3D microvascular network response to the inhibitory effect of angiostatin, which plays a significant role in suppressing tumor growth and metastasis. Metastatic tumor blood perfusion and interstitial fluid transport based on 3D microvasculature response to the inhibitory effect of angiostatin are investigated here for exploring the suppression of metastatic tumor growth by the primary tumor. The abnormal geometric and morphological features of the 3D microvasculature network inside and outside the metastatic tumor, and the relatively complex and heterogeneous hemodynamic characteristics in the presence and absence of angiostatin, can be studied in the 3D case. The simulation results may provide beneficial information and a theoretical basis for clinical research on antiangiogenic therapy.

Metastatic tumor angiogenesis

The 3D mathematical model we present in this section originates from the previous 2D tumor antiangiogenesis mathematical model [5,21] describing how capillary networks form in a metastatic tumor in response to angiostatin released by a primary tumor. The conservation equation of endothelial cells (EC) describes the migration of EC as influenced mainly by four factors: random motility, the inhibitory effect of angiostatin, chemotaxis, and haptotaxis. Subsequently, from a discretized form of the partial differential equations governing endothelial-cell motion, a discrete biased random-walk model is derived, enabling the paths of individual endothelial cells located at the sprout tips, and hence the individual capillary sprouts, to be followed. Realistic capillary network structures were then generated by incorporating rules for sprout branching and anastomosis. The generated microvascular network inside and outside the metastatic tumor in the presence of angiostatin and in the absence of angiostatin is shown in Figure 2. General morphological features of the network such as growth speed, capillary number, vessel branching order, and anastomosis density in/outside the metastatic tumor are consistent with the physiologically observed results, which indicate that angiostatin secreted by the primary tumor does have an inhibitory effect on the metastatic tumor [5,11].
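The biased random-walk description of sprout-tip migration can be made concrete with a toy lattice model. The sketch below is not the authors' discretization: the attractant gradient, the uniform inhibitor field standing in for angiostatin, and the weighting of the six lattice moves are all invented here purely to illustrate how chemotactic bias and inhibition can be combined in a discrete walk.

```python
# Toy biased random walk for a capillary sprout tip on a 3D lattice. The
# attractant and inhibitor fields and the move weighting are hypothetical
# stand-ins for the chemotaxis/haptotaxis/inhibition terms in the model text.
import random

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step_probabilities(pos, attractant, inhibitor):
    """Weight each lattice move by attractant gain, damped by the local inhibitor."""
    weights = []
    for dx, dy, dz in MOVES:
        nxt = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
        w = max(attractant(nxt) - attractant(pos), 0.0) + 0.05   # 0.05 = random motility
        weights.append(w / (1.0 + inhibitor(nxt)))               # inhibitor damping
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical fields: attractant increases toward the plane x = 50; inhibitor uniform.
attractant = lambda p: -abs(50 - p[0])
inhibitor = lambda p: 0.5

pos = (0, 25, 25)
for _ in range(100):
    probs = step_probabilities(pos, attractant, inhibitor)
    dx, dy, dz = random.choices(MOVES, weights=probs)[0]
    pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)

print("final sprout-tip position:", pos)
```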
Blood perfusion

To calculate blood flow through a given 3D microvascular network of interconnected capillary elements in the metastatic tumor, flux conservation and incompressible flow are assumed at each junction where the capillary elements meet:

∑_n B_(l,m,j),n Q_(l,m,j),n = 0,

where B_(l,m,j),n takes the integer 1 or 0, representing the connectivity between node (l, m, j) and its adjacent node n. Q_(l,m,j),n is the flow rate from node (l, m, j) to node n and is composed of two contributions: the vascular flow rate without fluid leakage, Q_v,(l,m,j),n, described locally by Poiseuille's law,

Q_v,(l,m,j),n = π R_n⁴ (p_v,(l,m,j) − p_v,(n)) / (8 μ_n ΔL_n),

and the transvascular flow rate Q_t,(l,m,j),n, following Starling's law,

Q_t,(l,m,j),n = L_pv S_n [ p̄_v,(l,m,j),n − p̄_i,(l,m,j),n − σ_t (π_v − π_i) ],

with S_n the lateral wall surface area of vessel element n. Here p_v,(l,m,j) and p_v,(n) are the intravascular pressures of node (l, m, j) and node n; p̄_v,(l,m,j),n is the mean pressure in vascular element (l, m, j)–n; p̄_i,(l,m,j),n is the mean interstitial pressure outside of vascular element (l, m, j)–n; μ_n, R_n, and ΔL_n are the blood viscosity, radius, and length of the vessel element n, respectively; L_pv is the hydraulic permeability of the vascular wall; σ_t is the average osmotic reflection coefficient for plasma proteins; and π_v and π_i are the colloid osmotic pressures of plasma and interstitial fluid.

Interstitial flow in metastatic tumor

Considering the metastatic tumor tissue as an isotropic porous medium, its interstitial flow is modeled by Darcy's law [24]:

u_i = −κ ∇p_i,

where u_i is the interstitial fluid velocity, κ is the hydraulic conductivity coefficient of the interstitium, and p_i is the interstitial pressure. The continuity equation is given by

∇·u_i = φ_b − φ,

where φ_b is the fluid source term leaking from blood vessels and φ is the lymphatic drainage term, which is proportional to the pressure difference between the interstitium and the lymphatics. Combining Darcy's law with the continuity equation and imposing mass conservation yields the equation satisfied by the interstitial fluid pressure; the dimensionless parameter appearing in it is the ratio of interstitial to vascular resistances to fluid flow. L_pL is the hydraulic permeability of the lymphatic vessel wall; S_v/V and S_L/V are the surface areas of blood vessel wall and lymphatic vessel wall per unit volume of tissue. In the model, L_pL S_L/V is assumed to be zero for tumor tissue and is given a uniform value for normal tissue, following Baxter and Jain [13]. Continuity of pressure and flux is imposed on the interconnected boundary Γ between the tumor and normal tissue:

p_i|_T = p_i|_N,  κ_T ∂p_i/∂n|_T = κ_N ∂p_i/∂n|_N on Γ,

where κ_N and κ_T are the hydraulic conductivity coefficients of normal tissue and tumor tissue, respectively. Table 1 shows the values of the parameters used in the microcirculation simulations (subscripts "N" and "T" denote values in normal and tumor tissues, respectively; parameter values are taken from Stephanou et al. [18] and Zhao et al. [25]).

3D blood perfusion of metastatic tumor

We simulated the evolution of blood flow pressure in the presence/absence of angiostatin for 14 days, representing the typical timescale for tumor vasculature to grow. Figure 3 shows snapshots of the pressure profiles of blood flow through each vessel segment in the three-dimensional microvascular networks. We keep the inlet pressure and outlet pressure across the parent vessel fixed at 25 and 16 mmHg [25] in the simulation, in accordance with physiological values at the capillary scale. Figure 3 highlights a direct comparison of blood pressure distributions (Figure 3a-c shows the blood pressure distribution in the presence of angiostatin; Figure 3d-f shows the blood pressure distribution in the absence of angiostatin). We observe that the overall blood pressure is higher in the presence of angiostatin than in the absence of angiostatin over the same growth duration.
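To make the flux-conservation formulation above concrete, the sketch below assembles and solves the linear system for nodal blood pressures on a small, hypothetical chain of vessel segments using only the Poiseuille conductances; transvascular (Starling) leakage would add pressure-dependent exchange terms to the same system and is omitted for brevity. The 25 and 16 mmHg boundary pressures follow the values quoted in the text, while the segment geometry and blood viscosity are illustrative.

```python
# Minimal sketch: nodal blood pressures from flux conservation with Poiseuille
# conductances on a hypothetical 5-node vessel chain. Starling (transvascular)
# leakage is omitted; geometry and viscosity values are illustrative only.
import numpy as np

# Segments as (node_i, node_j, radius_um, length_um).
segments = [(0, 1, 5.0, 60.0), (1, 2, 4.0, 60.0), (2, 3, 4.0, 60.0), (3, 4, 5.0, 60.0)]
mu = 4.0e-3  # blood viscosity, Pa*s (illustrative)
n_nodes = 5

G = np.zeros((n_nodes, n_nodes))            # network Laplacian of conductances
for i, j, r_um, length_um in segments:
    r, length = r_um * 1e-6, length_um * 1e-6
    g = np.pi * r**4 / (8.0 * mu * length)  # Poiseuille: Q = g * (p_i - p_j)
    G[i, i] += g
    G[j, j] += g
    G[i, j] -= g
    G[j, i] -= g

mmHg = 133.322
p_fixed = {0: 25 * mmHg, 4: 16 * mmHg}      # inlet/outlet pressures quoted in the text

A, b = G.copy(), np.zeros(n_nodes)
for node, p_val in p_fixed.items():         # impose Dirichlet boundary conditions
    A[node, :] = 0.0
    A[node, node] = 1.0
    b[node] = p_val

pressures = np.linalg.solve(A, b)
print("nodal pressures (mmHg):", np.round(pressures / mmHg, 2))
```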
The blood flow distribution is complex and chaotic, which makes the variation of blood pressure small in the interior of the metastatic tumor compared to its exterior, contributing to the difficulty of efficient drug delivery in the metastatic tumor. In the presence of angiostatin, the pressure-flows within some of the daughter vessels are elevated from the branching points to the metastatic tumor surface, which provides effective blood perfusion and thus efficient delivery of therapeutic agents to the tumor. The simulation results indicate that blood perfusion varies significantly with the complex and chaotic three-dimensional microvascular networks inside and outside the metastatic tumor. The poor blood perfusion can be improved through the increased intravascular pressure in the presence of angiostatin. These results suggest that the inhibitory effect of angiostatin can affect the distribution of blood flow pressure and improve drug delivery to the tumor.

Figure 4 shows the distribution of interstitial fluid pressure (IFP) within the metastatic tumor under the two mentioned situations. From the simulation results, we find that the maximum IFP near the tumor center dropped significantly, from 3.3, 11.48, and 11.53 mmHg to 0, 4.7, and 10.3 mmHg in the presence of angiostatin at t = 3, 7, and 14 days, respectively, which indicates that the IFP plateau is well relieved. As the growth days increase, IFP gradually elevates throughout the 3D metastatic tumor; the high-pressure zone is at the center of the tumor, diminishes toward the periphery, and later becomes flatter. Comparing Figure 4a-c to Figure 4d-f, we conclude that angiostatin decreases the high IFP in the tumor and, with the lower transvascular pressure in the 3D heterogeneous capillary networks, leads to a significantly improved situation for interstitial convection, which plays a significant role in the nonuniform distribution of drug delivery to the metastatic tumor. These results provide important references for cancer prevention and treatment. Furthermore, antiangiogenic therapies can normalize tumor vasculature and microenvironment, at least transiently, in both preclinical and clinical settings [5].

Conclusion

The inhibitory effect of angiostatin on the growth of metastatic tumors has been observed in some clinical and experimental malignancies. In this chapter, we develop three-dimensional mathematical models describing the metastatic tumor microvasculature and microenvironment to investigate the inhibitory effect of the antiangiogenic factor angiostatin, secreted by the primary tumor, on metastatic tumor angiogenesis, blood perfusion, and interstitial fluid flow. Simulation results demonstrate that angiostatin has an obvious impact on the morphology, expansion speed, capillary number, and vessel branching order inside and outside the metastatic tumor.
We find that the 2D antiangiogenesis model may be well suited to studying the morphological behavior of vessel networks in the metastatic tumor, but the 3D antiangiogenesis model can better analyze blood perfusion, interstitial fluid flow, and oxygen and nutrient transport within the metastatic tumor microenvironment, owing to its more realistic 3D microvascular networks. Although the 3D simulation results are consistent with experimentally observed facts and provide more detailed spatial information, the angiogenesis and hemodynamics of a metastatic tumor under antiangiogenic therapy are very complex. To further investigate tumor angiogenic mechanisms and help improve antiangiogenic cancer therapy, more realistic features and complex biological factors need to be incorporated into the 3D model, such as the anatomy and physiology of the metastatic tumor, drug delivery in antiangiogenic therapy, cell adhesion and interaction behaviors, and coupling with other factors or other therapeutic strategies.
Aristotle’s Explanationist Epistemology of Essence

Essentialists claim that at least some individuals or kinds have essences. This raises an important but little-discussed question: how do we come to know what the essence of something is? This paper examines Aristotle’s answer to this question. One influential interpretation (viz., the Explanationist Interpretation) is carefully expounded, criticized, and then refined. Particular attention is given to what Aristotle says about this issue in DA I.1, APo II.2, and APo II.8. It is argued that the epistemological claim put forward in DA I.1 differs from that put forward in APo II.2 and II.8, contrary to what has been claimed by Explanationists, and that each of these distinct epistemological claims rests on a distinct non-epistemological thesis about essence. Consequently, an ‘Enriched Explanationist Interpretation’ is developed which takes into account both of the aforementioned elements in Aristotle’s epistemology of essence. The paper concludes by highlighting an insight the preceding exegetical discussion offers to contemporary essentialists seeking to explain how we come to know what something’s essence is.

§1. Introduction

Essentialists claim that at least some individuals or kinds have essences. This raises an important question: how do we come to know what the essence of something is? Unlike the related topic of modal epistemology, the epistemology of essence has received little attention in contemporary philosophy. 1 The question of how we come to know essences is particularly pressing for essentialists who favor a non-modal account of essence, according to which the essence of something does not just consist of all the properties which it necessarily has if it exists. 2 Even if an answer is found to the modal epistemological question of how we come to know what the necessary properties of something are, the non-modal essentialist faces a further question as to how we distinguish a thing's essential properties from its non-essential but necessary properties. The primary aim of this paper is historical and exegetical: the goal is to explicate Aristotle's epistemology of essence, i.e., Aristotle's account of how it is that we can come to know what the essence of something is. It is well-known that Aristotle has a non-modal conception of essence. It is less well-known that Aristotle explicitly took up and tried to answer the question of how we come to know what a thing's essence is. Though my primary aim is exegetical and historical, my hope is that, just as contemporary essentialists have found it fruitful to consult Aristotle's work in their efforts to explicate and motivate a non-modal conception of essence, likewise contemporary essentialists who favor a non-modal conception of essence will find the following discussion of Aristotle's epistemology of essence to be fruitful in their efforts to explicate and motivate their own account of how it is that we come to know what a thing's essence is. This paper does not offer a comprehensive discussion of the various interpretations of Aristotle's epistemology of essence which can be found in the literature, nor a discussion of all of the many texts which bear on the issue. Instead, my aim is to refine one of three main interpretations of Aristotle's views on this issue, viz., what I call the 'Explanationist Interpretation.' 3 In what follows, I begin by reviewing the core claims of the Explanationist Interpretation.
This interpretation's core epistemological claim is that we can come to know what a kind's essence is by identifying what feature(s) of the kind explain why it has certain other necessary but non-essential features (viz., what Aristotle calls the kind's 'in itself accidents ' (kath' hauto sumbebēkota)). This epistemological claim rests on a non-epistemological thesis at the heart of Aristotle's non-modal account of essence, viz., the idea that the essential feature(s) of a kind differ from its necessary but non-essential features in that the former are those which ultimately explain why the kind has the in itself accidents that it does ( §2). After discussing these core claims of the Explanationist View in detail, I consider two important pieces of evidence which Explanationists often cite in support of their interpretation: (1) a certain methodological passage in DA I.1 and (2) the discussions in APo II.2 and II.8 of how we come to know essences. I argue that while the DA I.1 passage provides strong support for the aforementioned core epistemological claim of the Explanationist Interpretation, the discussions in APo II.2 and APo II.8 do not. In the latter texts, Aristotle offers an epistemological thesis which differs from the idea proposed in DA I.1. Moreover, this distinct epistemological claim rests on a second, non-epistemological claim about essence, a claim which differs from the non-epistemological thesis about essence which underlies the epistemological claim in DA I.1 ( §3). Having argued that there are these two distinct elements in Aristotle's theory and epistemology of essence, I go on to offer an account of how these two elements fit together, proposing what I call an 'Enriched Explanationist Interpretation' of Aristotle's epistemology of essence ( §4). Finally, I conclude by highlighting one way in which my discussion is relevant to the contemporary essentialist's task of explaining how it is that we come to know what a thing's essence is ( §5). §2. The Explanationist Interpretation of Aristotle's Epistemology of Essence Before proceeding, it is necessary to note that the essentialism discussed in this paper concerns the essences of kinds rather than those of individuals. To use an Aristotelian phrase, the concern is with what it is for something to be an instance of a kind K (to ti ēn einai tō(i) K). The claim that kinds have essences can be distinguished both from the claim that individuals have essential properties (or are essentially members of certain kinds) and the claim that individuals have individual essences (i.e., what are sometimes called 'haecceities'). 4 Whatever his views concerning the essences of individuals, it is agreed that Aristotle thinks that kinds (e.g., human being, triangle, eclipse, thunder, etc.) have essences. 5 Hence, this paper focuses on Aristotle's claims about the essences of kinds and how it is that we can come to know what the essence of a kind is. This leads to a second point of clarification concerning my talk of the 'features of a kind.' By the 'features of a kind,' I mean to refer to the features which belong to all instances of the kind. Thus, for example, the property of having interior angles equal to two right angles is a feature of the kind triangle: all triangles have interior angles equal to two right angles. Likewise, when I speak of the 'necessary features of a kind,' I mean the features which are such that, necessarily, something has that feature if it is an instance of the kind. 
Finally, I note that I use 'feature' in a broad sense according to which the form, matter, parts, and properties of something can all be called 'features' of it; relatedly, I use the terms 'belong to' and 'have' in a broad sense according to which any feature of a thing can be said to 'belong to' it or be 'had' by it. Having made these clarifications, let us turn now to the question of how it is that we come to know what a kind's essence is. Though discussion of Aristotle's epistemology of essence has not been without controversy, one important point of agreement is that Aristotle holds that, in typical cases, some prior knowledge (gnōsis) of a kind is required for one to come to know what its essence is. On Aristotle's view, our knowledge begins with knowledge (gnōsis) acquired through perception. Perceptual episodes are retained in memory, and over time the accumulation of such memories eventually gives rise to a kind of knowledge which Aristotle calls 'experience' (empeiria). 6 Aristotle characterizes one who has experience (empeiria) as knowing that something is the case (to hoti) (e.g., that the moon undergoes a certain kind of loss of light, which we call 'an eclipse') or whether a certain kind exists (ei estin) (e.g., whether there are human beings). 7 A person who merely has experience does not yet know 'the why' (to dioti, to dia ti) of the facts known by experience and does not yet know what are the essences of the kinds of whose existence she is aware. Thus, for example, the person who has mere experience may know that the moon undergoes a certain kind of loss of light known as an 'eclipse' but not know why it does; that some individuals are human beings but not what makes them human beings; that there are eclipses but not what the essence of an eclipse is; or that there are human beings but not what the essence of a human being is. 8 There are many interesting issues here, but, for our purposes, the key point is that Aristotle holds that some prior knowledge of a kind (knowledge included in or at least derived from one's accumulated experience (empeiria) involving that kind) is typically needed for one to acquire knowledge of its essence. 9 Hence, we can reformulate our question in the following way: once we acquire through experience (empeiria) sufficient knowledge concerning a kind (e.g., knowledge that there is such a kind and that its instances have such and such features), how do we then come to know what its essence is? According to the Explanationist Interpretation, Aristotle's answer is that we can come to know what the essence of a kind is by identifying the feature(s) of the kind which explain why all instances of the kind have the other, non-essential but in itself (kath' hauto) features which they are known by experience (empeiria) to have. The view is usefully broken down into two claims.

3 The name 'Explanationist Interpretation' derives from Bronstein (2016: ch.8). Versions of the Explanationist Interpretation are defended by Kosman (1973), Bolton (1987, 1991, 2017), Lennox (1987, 2001), McKirahan (1992), Charles (2000, 2010, 2014), and Bolton and Code (2012). The other two interpretations I have in mind are the Intuitionist Interpretation (defended by Ross (1949), Irwin (1988), and Frede (1996)) and the Socratic Interpretation (defended by Bronstein (2016)).
The first claim is that our initial, experience-based knowledge of a kind includes knowledge that the instances of the kind have certain features, at least some of which belong to the kind in itself (kath' hauto) and are explicable by reference to more basic features of the kind. The second claim is that we come to know the essence of a kind by identifying why the kind, i.e., all of the instances of the kind, have the explicable features which we initially know them to have. For, of the features which belong by necessity to instances of the kind, the essential one(s) are those such that (a) they do not belong to the instances of the kind in virtue of other features of the kind belonging to those instances and (b) at least some of the other necessary features of the kind belong to its instances (at least in part) because the essential feature(s) belong to those instances. 10

6 See APo II.19 100a3-9; Metaph. A.1 980a27-982a2; and APo I.18. See also DA III.8 432a7-8.

7 Throughout the paper, in keeping with Aristotle's usage, when I speak of 'an eclipse' or 'eclipses,' I mean to refer just to lunar eclipses.

8 See APr I.30 46a17-27, APo I.13 78b34-79a6, APo II.2, APo II.8, and Metaph. A.1 981b9-13. See also EN I.4 1095b6-8.

9 There is a debate about how to understand the content of empeiria and in particular whether 'the whole universal' mentioned at APo II.19 100a3-9 is part of the content of empeiria or is a reference to a stage of … -12; and EN 1098a33-b4, 1139b28-31, 1151a16-18).

In addition to looking for explanatory connections among the attributes one initially knows to characterize a kind, one can also hypothesize that the kind has certain additional feature(s), beyond those already known to characterize it, which would explain why it has the attributes it is already known to have. Thus, for example, in the eclipse example discussed in APo II.8, one initially knows by experience that there are eclipses and that an eclipse is a certain kind of loss of light from the moon. At this stage, one does not know that an eclipse is due to the interposition of the Earth between the moon and the sun (the moon's light source). But in the course of seeking to explain why eclipses occur, one hypothesizes that an eclipse is due to the interposition of the Earth between the moon and the sun. The hypothesis that eclipses are caused in this way could be justified not by perceptual experience (as in the fanciful case imagined in APo II.2 90a26-30, where a person standing on the moon sees the interposition occur) but by the fact that eclipses being caused in this way would explain why eclipses have certain other features, e.g., why they always involve a circular black spot which gradually overtakes the whole surface of the moon.

This interpretation of Aristotle's epistemology of essence operates against a background interpretation of Aristotle's non-modal account of essence. Aristotle explicitly denies that all of the necessary features of a kind are included in its essence. In particular, Aristotle is quite clear that a kind's in itself accidents (kath' hauta sumbebēkota) are not part of its essence, even though such accidents are necessary features of it. For example, Aristotle says that the property of having interior angles equal to two right angles is an in itself accident of triangles; necessarily, anything which is a triangle has interior angles equal to two right angles, but this property is not part of the essence of a triangle. 11 In other words, Aristotle has a non-modal account of essence: not all of the necessary features of a kind are part of its essence. But if not all of a kind's necessary features are included in its essence, then what distinguishes a kind's essential feature(s) from its necessary but non-essential features (other than the fact that the former are essential and the latter are not)? According to one influential line of interpretation, Aristotle holds that a kind's essential features can be distinguished from its merely necessary features by virtue of their explanatory role. 12 Put loosely, the idea is that the essential features of a kind are its explanatorily basic necessary features. More precisely, the claim is that the essence E of a kind is not only a necessary feature of the kind but also such that (a) E does not belong to any instance of the kind in virtue of other features of the kind belonging to that instance and (b) at least some of the other necessary features of the kind (viz., its in itself accidents) belong to its instances (at least in part) because E belongs to those instances. Thus, for example, if it is essential to being a triangle to be a three-sided closed plane figure, it follows that (a) there are no other features had by all triangles such that something is a three-sided closed plane figure because it has those features and (b) at least some of the other necessary features of triangles (e.g., the property of having interior angles equal to two right angles) belong to things which are triangles because these things are three-sided closed plane figures. This picture of essence falls out of Aristotle's theory of science (epistēmē) and in particular his idea that definitions (horismoi) are among the principles of a science. A science, on Aristotle's view, encompasses two kinds of facts: indemonstrable principles (archai) and the facts which are demonstrable from (i.e., explained by) the principles. The principles of a science are not explained by reference to other facts but instead are the fundamental, unexplained starting-points of the science by reference to which the other facts in the domain of that science can be explained. Aristotle identifies two kinds of principles: theses and axioms. The former group of principles are proper (oikeia) to the science in question, i.e., they are not used as principles in the demonstrations of other sciences, whereas the latter principles (e.g., the principle of non-contradiction) are common (koina) in the sense of being used in several sciences (if only by analogy; see 76a38-40). 13 According to Aristotle, the proper principles of a science include definitions (horismoi), where a definition is defined as an account of the definiendum's essence (ti esti). 14 Since a definition predicates a kind's essence of that kind and is indemonstrable, it follows that the fact that a kind's essence belongs to it is indemonstrable; in other words, there is no further fact which explains why it is the case that all instances of the kind have the features which make up the kind's essence.
For example, if a triangle is essentially a three-sided closed plane figure, then there is no demonstration of the fact that anything which is a triangle is a three-sided closed plane figure; this fact does not obtain in virtue of any other facts but rather represents a basic or fundamental truth about triangles. The classification of definitions as principles implies not only that essential truths do not hold in virtue of other truths but also that such truths are explanatory of other, demonstrable facts. In particular, Aristotle's discussion of in itself accidents (kath' hauta sumbebēkota) implies that at least some of a kind's necessary but non-essential features, viz., those which Aristotle calls 'in itself accidents,' are explained by reference to the kind's essence. 15 More precisely, the claim is that certain features (viz., the in itself accidents of the kind) necessarily belong to any instance of that kind because such features follow upon its essence, belonging to something because the kind's essence belongs to it. Thus, for example, necessarily, human beings are capable of finding things funny (an in itself accident of human beings) because, necessarily, anything which is a human being has a rational soul (the essence, or part of the essence, of a human being) and the capacity to find things funny is a capacity which follows from and is explained by something's having a rational soul. This non-epistemological thesis about the explanatory role of essences opens the way for an explanationbased epistemology of essence according to which we can identify what the essence of a kind is by identifying what its explanatorily basic feature(s) are, i.e., which feature(s) of the kind are such that they (a) do not 12 See Barnes 2002Barnes /1993Bolton 1987: 145;Bronstein 2015;Bronstein 2016: 49, 57, and 106;Charles 2000: 202-203 belong to any instance of the kind in virtue of other features of the kind belonging to that instance and (b) can be used to explain why, necessarily, any instance of the kind has the other, in itself but non-essential features of that kind. Indeed, this is the view which the Explanationist Interpretation attributes to Aristotle. §3. Two Distinct Pairs of Epistemological and Non-Epistemological Theses about Essence A variety of methodological passages found throughout Aristotle's corpus have been invoked to support the Explanationist Interpretation. I do not offer an exhaustive discussion of these passages here. 16 Instead, I focus on two pieces of textual evidence which have been central to the Explanationists' case for their interpretation, viz., (1) a well-known methodological passage in DA I.1 and (2) the discussions of how essences come to be known in APo II.2 and II.8. In what follows, I argue that while the DA I.1 passage provides strong support for the Explanationist's claim that Aristotle thinks we can come to know a kind's essence by discovering what feature(s) of it explain why it has the in itself accidents that it does, the discussions of APo II.2 and II.8 do not. In APo II.2 and II.8, Aristotle does not make the same epistemological claim that he does in the DA I.1 passage but instead introduces a distinct epistemological claim, a thesis which rests not on the idea that the essence of a kind is explanatory of its in itself accidents but rather on the distinct idea that the essence of a kind includes 'its cause.' Let's start with the passage from DA I.1. 
In this opening, methodological chapter of Aristotle's treatise on the soul, Aristotle observes that a central aim of the science of the soul is to make clear the essence of the soul (402a7). He then raises a general question about how one is to ascertain what the essence of something is (see 402a10ff) and, later, gives at least a partial answer to this question when he remarks, (a) It seems that not only is knowing the essence useful for discerning the causes of the [in itself] accidents of substances… (b) but also knowing the [in itself] accidents [of something] contributes in great part (sumballetai mega meros) to knowing [its] essence (pros to eidenai to ti estin). (c) For whenever we are able to give an account in conformity to what is apparent concerning all or most of its [in itself] accidents, at that time we will be able to speak best about the essence (ousia In (a), Aristotle makes the point that knowing something's essence can be useful for explaining why it has the in itself accidents that it does, a point which recalls the idea that the essence of a kind is explanatory of why it has the in itself accidents that it does. He then goes on in (b) to make the point which is more crucial for his purposes and ours, viz., that knowing (by experience, as 'in conformity to what is apparent' suggests) the in itself accidents of a kind can play a crucial role in our discovering what its essence is. In (c), he explains how this prior knowledge helps us discover what the kind's essence is: we know a kind's essence 'best' when we know what accounts for or explains why it has the in itself accidents that it does. This suggests that Aristotle thinks we ought to use our knowledge of a kind's in itself accidents to guide us in our search for its essence, for the essence should explain why it has these accidents. Indeed, in (d) Aristotle condemns as ' dialectical and empty' (rather than genuinely scientific) definitions which cannot account for their definienda's in itself accidents. This underscores the point that our theorizing about the essence of a kind must be guided by the need for that essence to explain why the kind has the in itself accidents it is known to have. In fact, later on in DA I.4, Aristotle criticizes his predecessors' definitions of soul on precisely this basis (see 409b12-18). Overall, this passage provides strong evidence for the Explanationist Interpretation. Here Aristotle clearly has in view his idea, explicated at length in §2 above, that the essence of a kind is explanatory of why it has the in itself accidents that it does. Moreover, here Aristotle explicitly connects this claim about the explanatory role of essences with a claim about how it is that we can come to know what a kind's essence is. In particular, the claim in (c), viz., that we are in the best position to identify a kind's essence when we can explain why it has the in itself accidents that it does, fits well with the Explanationist's claim that we can identify a kind's essence by identifying the feature(s) of it which ultimately explain why it has the in itself accidents it is known (by experience) to have. 
Moreover, the claim in (d), viz., that a definition is ' dialectical and empty' if it fails to identify as the essence of a kind something which can explain why the instances of the kind have the in itself accidents that they do, not only reinforces this point but draws attention to a way of testing a definition for adequacy: the definition is adequate only if the essence specified by it can do the job of explaining why the definiendum has the in itself accidents that it does. Indeed, this is a natural epistemological test for Aristotle to recommend, given Aristotle's non-epistemological idea that it is part of the explanatory role of a kind's essence that it explain why the kind has certain further, non-essential features by necessity. Hence, I conclude, in line with other Explanationists, that this passage provides strong support for the Explanationist Interpretation. 18 But what of Aristotle's discussion in APo II.2 and II.8? In APo II.8, Aristotle, building on ideas put forward in APo II.2, attempts to explain, in the case of kinds which have ' causes other than themselves' (see 93a5-6, 93b18-19, and 93b21-28; see also 88a5-8), '…how the essence (to ti esti) is grasped (lambanetai) and comes to be known (gignetai gnōrimon)' (93b15-16). Later, I'll return to the issue of just what distinction Aristotle has in mind in restricting his attention to kinds which have ' causes other than themselves.' For now, I focus on the key claim of the chapter, which is that, for such kinds, though 'it [i.e., the kind's essence] is not deduced or demonstrated, nonetheless it is made clear through deduction and demonstration (dēlon mentoi dia sullogismou kai di' apodeikseōs)' (93b17-18). Through a demonstration of what? Given the discussion of DA I.1, one might expect the answer to be 'through a demonstration of the kind's in itself accidents,' i.e., by explaining why the kind has the in itself accidents that it does. Indeed, several proponents of the Explanationist Interpretation have suggested just this. These authors argue that in APo II.8 Aristotle outlines a procedure in which one starts with an initial account of a kind which defines it by reference to one or more of its explicable, in itself features. Using this initial account, one proceeds to make clear the essence of the kind by identifying other features of the kind which explain why it has the explicable features specified in the initial account. The process can then be repeated until one reaches what one takes to be the explanatorily basic feature(s) of the kind, feature(s) which are such that their belonging to the kind is not explained by any prior, more basic features of the kind. 19 But, as I will now argue, this reading mischaracterizes what Aristotle is up to in these chapters. The key epistemological claim of these chapters is not the idea, put forward in DA I.1, that we can come to know what a kind's essence is by explaining why it has the other, non-essential features that it does. Instead, the key epistemological claim of these chapters is that, in the case of kinds which have ' causes other than themselves,' we can make clear what a kind's essence is by making clear the kind's cause. Moreover, the key non-epistemological claim about essence in these chapters is not that the essence of a kind explains why it has the in itself accidents that it does; rather, it is the claim that, in the case of a kind which has ' a cause other than itself,' the kind's cause is included in its essence. 
To see this, consider more closely what Aristotle says in APo II.2 and II.8. In APo II.2, Aristotle claims that to seek something's essence is to seek its cause (aition) (see 90a1 with 90a5-6) and that 'to know what something's essence is is the same as to know why it exists (to ti estin eidenai tauto esti kai dia ti estin)' 18 Explanationists who appeal to this passage to support their interpretations include Bolton (1987: 133 n.27) and Charles (2010Charles ( : 302, 2014. Hicks (1907: 191), Johansen (2012: 10), and Shields (2016: 94) offer similar, Explanationist-friendly accounts of this passage, though their focus is on the role this passage plays in DA and not on its implications for Aristotle's epistemology of essence in general. I note that Bronstein (2016: 120-123) argues that this passage need not be taken to support the Explanationist Interpretation over his own 'Socratic Interpretation.' For a discussion of why Bronstein's argument fails, see my 'Aristotle's Epistemology of Essence' (manuscript). 19 Consider, for example, Bolton's summary of what he takes to be the main claim of APo II.8-10: '(1) We normally begin with a definition or account of the kind which is our object of inquiry which exhibits the features or manifestations of it which are perceptually most accessible. Typically, such features are not fundamental features of the kind in terms of which others can be explained, but rather explicable by reference to the more fundamental ones and, thus, features which figure in ' conclusions of demonstrations'. (2) Inquiry proceeds by moving from an understanding of something based on a definition of this sort to an understanding where we have an account or definition which exhibits why the thing has the characteristics which figure in the former type of definition. (3) We continue our inquiry to determine whether there is yet a further account or definition which explains the features already used to explain the features initially grasped, and so on, until we have a definition based on the features or features most basic from the point of view of explanation' (Bolton 1987: 145-146). For similar claims, see Charles 2000: 202-203 with 195 and 216;Lennox 2001: 161-2;and McKirahan 1992: 268. (90a31-32). In APo II.8, Aristotle fills out this picture by making clear what kind of knowledge is required for one to be in a position to seek the essence and cause of a kind. In particular, Aristotle claims that we are in a position to inquire (zētein) about a kind's essence only when we know in a non-accidental way that the kind exists (see 93a21-29). Knowing in a non-accidental way that the kind exists requires that we grasp something of the thing itself, e.g., of thunder, that there exists a certain kind of noise in the clouds; of an eclipse, that there exists a certain kind of loss of light; of a human being, that there exists a certain kind of animal; and of a soul, that there exists something which moves itself. In other words, to be in a position to seek the essence of a kind, one must have encountered some instances of the kind and have a grasp of some of its in itself (kath' hauto) features ('something of the thing itself'), on the basis of which one can offer a preliminary account of the kind, e.g., thunder is a certain kind of noise in the clouds, an eclipse is a certain kind of loss of light from the moon, etc. 
Crucially, Aristotle does not go on to claim, as one might expect given the discussion in DA I.1, that one can make clear the essence of the kind by identifying the reason why the kind has the features initially known to characterize it. Instead, he claims that we can make clear the essence of the kind by identifying the cause of the occurrence of the kind so characterized. Thus, for example, Aristotle's claim is not that we can come to know what the essence of thunder is by explaining why thunder has the feature of being a (certain kind of) noise (pace Lennox (2001: 162) and Charles (2000: 214)) or that we can come to know what the essence of an eclipse is by explaining why an eclipse has the feature of occurring to the moon (pace Charles (2000: 246)). Instead, Aristotle's claim is that we can make clear what kind of noise thunder is, i.e., the essence of thunder, by making clear why the clouds produce that kind of noise (i.e., the kind of noise which is thunder) (see 93b7-14, 93b39-94a5). Similarly, in the eclipse example, Aristotle's claim is that we can make clear what kind of loss of light an eclipse is, i.e., the essence of an eclipse, by identifying why the moon undergoes that kind of loss of light (i.e., the kind of loss of light which is an eclipse) (see 93a29-32, 90a14-18; cf. 87b39-88a2). 20 But what about the example in which Aristotle suggests that one can make clear the essence of an eclipse by identifying the cause of the moon's failure to cast shadows when there is nothing between the moon and the Earth (see APo II.8 93a37-b7)? I concede that it is an in itself accident of an eclipse that an eclipse involves the moon's not casting shadows even when there is nothing between it and the Earth. Still, Aristotle does not say in APo II.8 that we can make clear the essence of an eclipse by explaining why an eclipse has this feature (as one might expect given what he says in DA I.1). Instead, Aristotle says that we can make clear the essence of an eclipse by explaining why the moon fails to cast shadows even when there is nothing between the moon and the Earth. In other words, though the in itself accident helps characterize the kind of loss of light which an eclipse is, Aristotle never suggests in APo II.8 that the target explanandum is why an eclipse (the kind whose essence we are trying to make clear) has the in itself accidents that it does. Instead, Aristotle's claim is that we can make clear the essence of a kind by identifying the kind's cause. The point of this 'failure to cast shadows' case is that, in looking for the kind's cause, we may start with an initial account of the kind which characterizes it in terms of one of its in itself accidents and seek to identify the cause of the kind by identifying the cause of something characterized in terms of those accidents. In short, though knowledge of a kind's in itself accidents has a role to play in the epistemological story described in these chapters, the role that this knowledge plays differs from the role that it plays in the epistemological story of DA I.1. The epistemological thesis of APo II.2 and II.8, viz., that we can make clear or come to know what a kind's essence is by identifying the kind's cause, is backed by a non-epistemological thesis about essence introduced in APo II.2 and clarified in APo II.8. The non-epistemological thesis is that, in the case of a kind which has ' a cause other than itself,' the kind's cause is included in its essence. 
21 Thus, for example, the cause of 20 Bronstein makes a similar point (though without noting that other authors have mistakenly suggested otherwise): 'Thunder just is a noise in the clouds; there is no reason why it is. The question of scientific interest is why is there thunder (i.e., a certain type of noise) in the clouds?' (2016: 140). Charles seems to recognize this when he notes that what is explained in APo II.8 is not why Noise in the clouds belongs to Thunder but why Such and Such Kind of Noise (i.e., Thunder) belongs to the Clouds (see Charles 2000: 198-199), but he then goes on to (mistakenly) suggest that part of what is explained is 'why it [i.e., thunder] is noisy' (202) and 'why thunder has the other genuine (or per se ) features it has' among which he includes 'being a noise' (214). Lennox (2001: 162) makes a similar error. 21 Sometimes Aristotle says that the kind's cause is its essence rather than that the kind's essence includes its cause (see 90a15). For example, Aristotle says at 93b7 that the essence of an eclipse is ' an obstruction by the Earth' and at 93b8 that the essence of an eclipse is included in the essence of an eclipse: an eclipse is essentially a loss of light from the moon due to the obstruction of the Earth; it is part of the essence of an eclipse that it is (efficiently) caused by the obstruction of the Earth (see 90a14-18). Likewise, the cause of thunder is included in the essence of thunder: thunder is essentially a noise due to the quenching of fire in clouds; it is part of the essence of thunder that it is (efficiently) caused by the quenching of fire in clouds (see 93b39-94a5). This non-epistemological thesis about the essences of kinds which have ' causes other than themselves' is what paves the way for the aforementioned epistemological thesis. If we know that the essence of a kind includes its cause, it follows that if we do not already know the kind's cause, we can advance our knowledge of the kind's essence by identifying the kind's cause and then including the identified cause in our account of the kind's essence. With these points in place, let us now return to an issue put off earlier, namely, the question of just what kinds Aristotle has in mind when he refers to kinds which have ' causes other than themselves.' David Bronstein argues that Aristotle has in mind ' attribute-kinds', i.e., kinds whose instances inhere in or belong to other things: 'Eclipse, thunder, leaf-shedding, and 2R are all states or conditions or affections -in general, attributes -that inhere in or belong to their respective subjects because of some cause' (2016: 99-100). Bronstein contrasts these attribute-kinds with what he calls 'subject-kinds': Aristotle distinguishes two main types of definable entity and two types of essence by which they are respectively defined. The first type of definable entity is what I call a 'subject-kind' (e.g., line, triangle, animal, human being). These are natural kinds (species and genera) whose individual members are primary substances (e.g., Socrates) or substance-like entities (e.g., this particular triangle)…The second type of definable entity is a demonstrable attribute of a subject-kind. (2016: 45-46 23 Here Aristotle alludes to a distinction drawn in APo I between (a) kinds which are such that one must 'hypothesize' (hupothesthai) or ' assume' (lambanein) that they are and (b) kinds which are such that their existence is demonstrable. 
24 But this distinction is not equivalent to a distinction between attribute-kinds and subject-kinds, for there are subject-kinds among the kinds whose existence is demonstrable. Thus, for example, in APo I.10 Aristotle writes, I call 'principles' in relation to each kind those [things] of which it is not possible to prove that it is. On the one hand, what the primaries and what the things composed of them (ta prōta kai ta ek toutōn) signify [i.e., the definitions, or accounts of the essences, of the primaries and the things thunder is ' a quenching of fire in a cloud.' But though Aristotle sometimes speaks this way, his more careful way of putting his view is that the essence of a kind includes its cause. Thus, for example, elsewhere he says not that thunder is essentially a quenching of fire in clouds but rather that thunder is essentially a noise due to the quenching of fire in clouds (see 93b39-94a5). Likewise, elsewhere he says not that an eclipse is essentially an obstruction by the Earth but rather that an eclipse is essentially a loss of light from the moon due to the obstruction of the Earth (see 90a14-18). For further discussion of this issue, see Charles 2010: 288 n.4 and Charles 2014: 17 n.29. 22 Goldin (1996 and Ross (1949: 633) defend similar views. 23 The Greek here is difficult to construe and has been understood in different ways by different commentators. I follow Bronstein (2016: 137, n.14) and Charles (2000: 274-5, n.2) Notice that here the subject-kind triangle is included among those which are such that their existence is demonstrable. Indeed, Bronstein himself concedes this: 'The distinction between unit and triangle is clear. Unit is a primary whose existence is indemonstrable and assumed as a principle (a hypothesis) in the relevant science (arithmetic). Triangle, on the other hand, is a non-primary whose existence is demonstrated in the relevant science ' (2016: 172 Hence, when Aristotle speaks of the kinds which have ' causes other than themselves,' he has in mind not just attributes but indeed any kind which is such that its existence is demonstrable. This of course fits with what we have seen is Aristotle's claim in APo II.2 and II.8, namely, that a kind which has a ' cause other than itself' is such that there is some cause of its being, i.e., some middle term through which one can demonstrate that it exists (see especially APo II.2 90a1-11). For example, as we have seen, in APo II.2 and II.8 Aristotle maintains that one can demonstrate that there is an eclipse, i.e., explain why an eclipse (a certain kind of loss of light from the moon) occurs, by specifying its cause, viz., the obstruction of the Earth. Moreover, though all of Aristotle's examples in APo II.2 and II.8 are examples of attribute-kinds, in Metaph. VII.17 he applies the same idea to subject-kinds, using as his examples the kind house and the kind human being: One is particularly liable not to recognize what is being sought in things not predicated one of another, as when it is asked what a man is [i.e., what is the essence of a human being], because the question is simply put and does not distinguish these things as being that. But we must articulate our question before we ask it…And since the existence of the thing must already be given, it is clear that the question must be why the matter is so-and-so. For instance, the question may be 'Why are these things here a house?' 
(and the answer is 'Because what being is for a house [i.e., the essence of a house] belongs to them'), or it may be 'Why is this thing here a man?', or 'Why is this body in this state a man?' So what is sought is the cause by which the matter is so-and-so, i.e., the form. (1041a32-b3, b4-b8, Bostock translation). 25 This brings me to a final point about kinds which have ' causes other than themselves.' In APo II.8, Aristotle gives two examples in which the identified cause is an efficient cause: an eclipse is essentially a loss of light from the moon due to, i.e., efficiently caused by, the obstruction of the Earth; thunder is essentially a noise due to, i.e., efficiently caused by, the quenching of fire in clouds. But elsewhere, when Aristotle brings to bear his four-cause explanatory framework, he indicates that he thinks that other types of causes can serve as 'the cause' of a kind. For example, again in Metaph. VII.17, Aristotle writes, So what one asks is why it is that one thing belongs to another. (It must be evident that it does belong, otherwise nothing is being asked at all). Thus one may ask why it thunders, for this is to ask why a noise is produced in the clouds, and in this way what is sought is one thing predicated of another. And one may ask why these things here (e.g., bricks and stones) are a house. It is clear, then, that what is sought is the cause -and this is the what-being-is [i.e., the essence], to speak logically -which in some cases is that for the sake of which the thing exists (as presumably in the case of a house or a bed), while in some cases it is that which first began the change; for this latter is also a cause. (1041a23-a30, Bostock translation). Here Aristotle implies that in the case of some kinds (e.g., in the case of the kind house or the kind bed) the cause sought is a final cause, whereas in other cases (e.g., the case of the kind eclipse or the kind thunder) the cause sought is an efficient cause. For example, while thunder is essentially a noise due to, i.e., efficiently caused by, the quenching of fire in clouds, a house is essentially bricks and stones (or some durable stuff) arranged for the sake of sheltering people and possessions (see Metaph. H.2 1043a14-19 for a definition of house along these lines). Much more could be said about how the account in APo II.2 and APo II.8 connects with Aristotle's fourcause explanatory framework or with what Aristotle says in Metaph. VII.17 and related passages, but I set aside such complications here. 26 Instead, what I wish to emphasize is that, whatever the types of causes invoked, the procedure described in APo II.2 and II.8 is one in which one 'makes clear' or comes to know the essence of a kind by identifying the kind's cause rather than the cause of the kind's having the in itself accidents that it does. In the case of attribute-kinds like eclipse or thunder, this means identifying why the attribute-kind, or rather instances of the attribute-kind, inhere in or characterize some subject, e.g., why the moon undergoes an eclipse (i.e., a certain kind of loss of light), why the clouds produce thunder (i.e., a certain kind of noise), etc. In the case of subject-kinds like human being or house, this means identifying what makes a subject (perhaps characterized as such and such matter) an instance of the kind, e.g., why these things (e.g., bricks and stones) are a house, why this thing is a human being, etc. 
(Here it may be helpful to note that for a subject-kind to exist just is for there to be instances of that kind, and hence to explain or demonstrate the existence of a subject-kind is just to explain why there are instances of that kind, i.e., what makes such and such things instances of that kind). 27 Crucially, in either case, this is a different claim than the one found in DA I. (2) about essence, the fact that houses are essentially for the sake of protecting people and their possessions is explanatory of why houses have certain other features, e.g., why houses are made of bricks and stones (or, more generally, durable stuff), just as the fact that eclipses are essentially caused by the interposition of the Earth between the sun and the moon explains why eclipses have certain other features, e.g., why eclipses recur periodically. But just as Aristotle does not say in APo II.2 and II.8 that we can make clear the essence of an eclipse by making clear why an eclipse has the other, in itself features that it does but rather that we can make clear the essence of an eclipse by identifying the cause of an eclipse, i.e., the cause of the moon's undergoing a certain kind of loss light (the kind of loss of light which just is an eclipse), which in this case is an efficient cause, likewise in Metaph. VII.17 Aristotle does not say that we can make clear the essence of a house by making clear why it has the other, in itself features that it does but rather that we can make clear the essence of a house by identifying the cause of a house, i.e., 'why these things here (e.g., bricks and stones) are a house,' which in this case is a final cause (see 1041a26-27, b5-6). This is not to say Aristotle does not think the essence of a house is explanatory of why houses have certain other, in itself features but only that, as in APo II.2 and II.8, in Metaph. VII.17 this is not the thesis about essence with which Aristotle is primarily concerned. 27 Against this, one might raise the following worry: doesn't Aristotle suggest in Metaph. VII.17 that inquiring into why a member of a kind is of that kind is 'like inquiring into nothing at all?' (see 1041a14-22). In response, I note that the chapter suggests that asking 'why is a K a K?' is like inquiring into nothing at all, not that asking 'why is this sort of thing a K?' is like inquiring into nothing at all. Indeed, the chapter seems to raise questions of just this form, e.g., 'Why are these things here a house?' or 'Why is this thing here a human being?' (see 1041b5-7). Now one can press whether this really is a case of asking why something is a member of a kind on the grounds that Aristotle seems to think the question really is 'why is such and such matter a K?' (see 1041b5, b7-8). Some authors (e.g., Bostock 1994: 244) claim that the matter in question is not itself a member of the kind but rather only the matter of a member of the kind. But I'm inclined to disagree with this idea. The question 'Why is S a K?' presupposes that S is a K. For example, the question 'Why are these things here a house?' presupposes that these things are a house, and the question 'Why is this thing here a human being?' presupposes that this thing here is a human being. In fact, Aristotle himself makes this point when he notes that 'the existence of the thing must already be given' (1041b4-5): if we ask 'Why is S a K?', it must already be given that S is a K; indeed, earlier Aristotle says if we ask why K belongs to S, 'it must be evident that it does belong' (see 1041a23). 
Hence, contrary to what some authors (e.g., Bostock 1994: 244) suggest, whatever 'S' refers to, it is something which can be aptly characterized as ' a K', i.e., an instance of the kind K. (I note that this claim is compatible with it being the case that S is not actually a K except insofar as what-being-is-for-a-K belongs to S). I thank one of the anonymous referees for encouraging me to address this issue. of a kind not by identifying the cause of the kind but rather by identifying the cause of the kind's in itself accidents, i.e., why the kind has such and such necessary but non-essential features. §4. Combining the Two Ideas: An Enriched Explanationist Interpretation In the preceding pages, I have argued that Aristotle makes two distinct non-epistemological claims about essence. On the one hand, Aristotle holds that the essential features of a kind can be distinguished from its non-essential but necessary features by virtue of the former's explanatory role: the essence E of a kind is not only a necessary feature of the kind but also such that (a) E does not belong to any instance of the kind in virtue of other features of the kind belonging to that instance and (b) at least some of the other necessary features of the kind (viz., its in itself accidents) belong to its instances (at least in part) because E belongs to those instances. On the other hand, Aristotle claims that in the case of kinds which have ' causes other than themselves,' the essence of a kind includes its cause. Moreover, I have argued that each of these non-epistemological claims about essence opens the way for a distinct epistemological claim about essence, each of which can be found in evidence in Aristotle's texts. On the one hand, the idea that the essence of a kind is what explains why it has the in itself accidents that it does opens the way for the epistemological claim, discussed in DA I.1, that one can make clear what a kind's essence is by identifying what feature(s) of the kind explain why it has the in itself accidents that it does. On the other hand, the idea that, for some kinds, the essence of a kind includes its cause opens the way for the distinct epistemological claim, discussed in APo II.2 and II.8, that one can make clear what a kind's essence is by identifying the kind's cause. At this point, one might well wonder how, if at all, these two distinct non-epistemological claims and corresponding epistemological claims fit together. In what follows, I suggest that the two strands can be unified in what I call an 'Enriched Explanationist Interpretation' of Aristotle's epistemology of essence. Consider first the two non-epistemological claims about essence. Two important results follow from the combination of these claims. First, if a kind is such that it has a cause other than itself, then not only is the cause part of the kind's essence but also the fact that the kind's instances are caused in this way is an indemonstrable fact. Thus, for example, the fact that an eclipse is due to the obstruction of the Earth is an indemonstrable fact about eclipses; it is not the case that there are some other feature(s) of an eclipse such that the fact that an eclipse is due to the obstruction of the Earth can be explained by reference to these other features. 
Second, it also follows from the combination of the two theses that if a kind has a cause other than itself, then the fact that it has this cause plays a role in explaining why it has the other characteristic but non-essential features (i.e., the in itself accidents) that it does. Thus, for example, the fact that an eclipse is due to the obstruction of the Earth plays a role in explaining why an eclipse involves a circular black spot which gradually overtakes the whole surface of the moon. Likewise, the fact that thunder is due to the quenching of fire plays a role in explaining why thunder involves the booming sound that it does and why thunder is preceded by lightning. The combination of the two non-epistemological claims about essence also has consequences for each of the aforementioned epistemological claims. On the one hand, the non-epistemological claim that the essence of a kind includes the kind's cause, if there is one, implies that the epistemological claim in DA I.1 can be expanded with the suggestion that one investigate whether the kind is such that its instances are all caused in a certain way and, if so, whether the fact that they are caused in that way can be used to explain why they have the in itself accidents that they do. Thus, for example, in the case of an eclipse, the suggestion is that, in looking to explain why eclipses have the in itself accidents that they do (e.g., why eclipses involve a circular black spot which gradually overtakes the whole surface of the moon), one consider what the cause of an eclipse is and whether the fact that an eclipse is caused in that way can be used to explain why eclipses have such features. On the other hand, the non-epistemological claim that the essence of a kind is explanatory of why the kind has the in itself accidents that it does implies that the epistemological claim in APo II.2 and II.8 can be supplemented by the idea that, in looking for the cause of a kind, one should look for something which can play a role in explaining why it has the in itself accidents that it does. Thus, for example, in looking to identify the cause of an eclipse, one must look for something which can explain why eclipses have the characteristic but non-essential features that they do. Indeed, the hypothesis that an eclipse is caused by the obstruction of the Earth rather than by, say, the rotation of the moon or the destruction of the moon (see APo II.8 93b5-6) is confirmed by the fact that, unlike the latter hypotheses, the former can explain why eclipses have the in itself accidents that they do, e.g., why an eclipse involves a circular black spot which gradually overtakes the whole surface of the moon and why an eclipse is something which recurs periodically. In summary, the combination of Aristotle's two non-epistemological claims about essence provides the basis for what I call an 'Enriched Explanationist Interpretation' of Aristotle's epistemology of essence. The core insight of the Explanationist Interpretation is retained: one can identify what a kind's essence is by identifying what feature(s) of the kind explain why it has the in itself accidents that it does. However, Aristotle's additional non-epistemological thesis provides a way to develop the original Explanationist proposal. 
In looking to identify what feature(s) of a kind explain why it has the in itself accidents that it does, one should consider whether the kind has a cause other than itself, for if it does, then the cause is to be included in the essence which explains why it has the in itself accidents that it does. On the other hand, in looking to identify such a cause, one must attend to whether the proposed cause can play a role in explaining why the kind has the in itself accidents that it does. If it cannot, one has some evidence that one has misidentified the kind's cause, for the kind's cause must be something which would explain the occurrence of something which has those in itself accidents. §5. Concluding Remarks: an Insight for Contemporary Essentialists Before concluding, I wish to highlight an insight of the foregoing historical discussion which is relevant to the contemporary non-modal essentialist's task of explaining how it is that we can come to know what something's essence is. The core idea of the non-modal essentialist is that not all of the necessary properties of something need be included in its essence; to be part of something's essence involves more than just being a property which it necessarily has if it exists. But if one leaves one's non-modal essentialism at that, it seems that little can be said to explain how, in any given case, we could know which of a thing's necessary features are part of its essence and which are not. However, if one adds to one's non-modal essentialism some further non-epistemological claims about the explanatory role of essences or about which sorts of features in general are suitable for inclusion in the essence of something, then a more substantive answer to the epistemological question becomes possible. For such non-epistemological theses about what sorts of features are suitable to be part of the essences of things provide further marks which can be used to determine which of the necessary features of a thing are part of its essence and which are not. Hence, I suggest to the contemporary non-modal essentialist that if she wishes to take up the little-discussed question of how it is that we come to know what something's essence is, she should begin with a non-epistemological question: what exactly distinguishes a thing's essential features from its merely necessary features? The more that can be said here, the more that can said in answer to the question of how we can come to know what the essences of things are. 28
Temporal subtraction CT with nonrigid image registration improves detection of bone metastases by radiologists: results of a large-scale observer study
To determine whether temporal subtraction (TS) CT obtained with non-rigid image registration improves detection of various bone metastases during serial clinical follow-up examinations by numerous radiologists. Six board-certified radiologists retrospectively and sequentially scrutinized CT images of patients with a history of malignancy. These radiologists selected 50 positive and 50 negative subjects with and without bone metastases, respectively. Furthermore, for each subject, they selected a pair of previous and current CT images satisfying predefined criteria by consensus. Previous images were non-rigidly transformed to match current images and subtracted from current images to automatically generate TS images. Subsequently, 18 radiologists independently interpreted the 100 CT image pairs to identify bone metastases, both without and with TS images, with each interpretation separated from the other by an interval of at least 30 days. Jackknife free-response receiver operating characteristics (JAFROC) analysis was conducted to assess observer performance. Compared with interpretation without TS images, interpretation with TS images was associated with a significantly higher mean figure of merit (0.710 vs. 0.658; JAFROC analysis, P = 0.0027). Mean sensitivity at the lesion level was significantly higher for interpretation with TS compared with that without TS (46.1% vs. 33.9%; P = 0.003). Mean false positive count per subject was also significantly higher for interpretation with TS than for that without TS (0.28 vs. 0.15; P < 0.001). At the subject level, mean sensitivity was significantly higher for interpretation with TS images than that without TS images (73.2% vs. 65.4%; P = 0.003). There was no significant difference in mean specificity (0.93 vs. 0.95; P = 0.083). TS significantly improved overall performance in the detection of various bone metastases.
1. A large-scale observer study was performed for detection of bone metastases. The numbers of radiologists and patients were 18 and 100, respectively.
2. To validate the robustness of detection with TS images, radiologists with a variety of backgrounds and patients with various primary tumors and various bone metastases were included. To include various bone metastases, osteoblastic, osteolytic, intertrabecular, and mixed types of newly-developed and preexisting bone metastases at various locations were included.
3. Although the studies of Onoue et al. 18 and Sakamoto et al. 14 did not show significant improvement between with and without TS images, the current study shows significant improvement.
Materials and methods This retrospective study was approved by the institutional review board (Kyoto University Graduate School and Faculty of Medicine, Ethics Committee), and the requirement for informed consent was waived. This study conformed to the Declaration of Helsinki and Ethical Guidelines for Medical and Health Research Involving Human Subjects in Japan (https://www.mhlw.go.jp/file/06-Seisakujouhou-10600000-Daijinkanboukouseikagakuka/0000080278.pdf). Subject selection. Six board-certified radiologists (M.Y., M.N., T.K., Y.E., K.O., T.A.; all authors of this paper) with 9-22 years of experience in interpreting CT images selected subjects meeting pre-defined criteria (Supplementary Information A) from a clinical database, sequentially scrutinizing CT images.
Briefly, the criteria are as follows. (i) The six board-certified radiologists included subjects with a history of malignancy who were examined with at least three CT studies (previous, current, and future CT). (ii) The subjects had a history of examinations of 18F-fluoro-2-deoxy-d-glucose positron emission tomography and/or bone scintigraphy which was performed for evaluation of bone metastases. (iii) Positive subjects (subjects with bone metastases) had at least one bone metastasis measuring 5 mm or more in diameter. (iv) Negative subjects (subjects without bone metastases) had no bone metastasis. Supplementary Information B shows the procedure of subject selection. With reference to images from CT and other imaging modalities, they selected 50 positive subjects and 50 negative subjects. Furthermore, they selected a pair of CT images (previous and current CT) for each subject that satisfied predefined criteria (see Supplementary Information A). Negative subjects were selected to match the background characteristics (e.g., age and sex) of positive subjects. The 6 radiologists detected and reviewed all suspicious lesions, and identified lesions measuring 5 mm or more to create the reference standard. Finally, lesions were determined to be bone metastases with sufficient confidence by consensus. In this procedure, future CT was used for confirming the reference standard. The three-dimensional region of each bone metastasis was manually segmented on current CT images by consensus. Subject- and lesion-based attributes were investigated as shown in Tables 1 and 2, respectively. Table 1 shows that the CT scan conditions, such as slice thickness and use of contrast media, were different between previous and current CT in some subjects. The following CT scanners (Canon Medical Systems, Otawara, Japan) were used: Aquilion 16 (16-detector row CT), Aquilion 64 (64-detector row CT), Aquilion Prime (80-detector row CT), and Aquilion One (320-detector row CT). TS image generation. The process for generating TS images is almost identical to that of Onoue et al. 18 . Previous CT images for each subject were non-rigidly transformed to match current CT images. The non-rigid image transformation was performed fully automatically. Subsequently, transformed previous CT images were subtracted from current CT images to generate TS images using the Intel Xeon E5-1650v4 processor (clock, 3.50 GHz; number of cores, 6; memory, 32 GB). Processing time for TS generation was recorded. Projection images, which were the average of the maximum and minimum intensity projections of TS images, were also generated to enable observers to immediately grasp osseous temporal changes across the whole area. Observer enrollment. This experiment was a fully crossed multi-observer multi-subject study, designed based on Sakamoto's study 14 . Observer study. To reduce memory bias, observers were randomly assigned to two groups of equal size (n = 9). One group independently interpreted the image pairs for each subject first without and then with TS images. The other group interpreted the image pairs first with and then without TS images. The interval between the two sessions without and with TS for each observer was more than 30 days. Moreover, the order of subjects was randomized for each observer. Observers used a medical monitor (Radiforce RX440, EIZO) and a dedicated image viewer (Fig.
2) with multi-planar reconstruction and window level/width modification functions to view CT and TS images. To control practice effects, observers were trained to use the viewer with training data of ten subjects prior to the actual study. Observers were blinded to all clinical data except the age and sex of each subject and the interval between previous and current studies. Observers were asked to mark the location of any suspicious lesions measuring 5 mm or more on current images and to rate the percentage likelihood of bone metastasis. The interpretation time for each subject was automatically recorded by the viewer, excluding the time for rating. After interpretation of each subject, observers were asked to subjectively rate on a five-point scale the confidence level for their interpretation (1, very low; 2, low; 3, moderate; 4, high; 5, very high) and the usefulness of TS images (1, useless; 2, not very useful; 3, somewhat useful; 4, very useful; 5, extremely useful). After completion of all assessments, the marked locations of lesions were compared against the reference standard for lesion identification. A lesion with a likelihood rating of 51% or higher was considered positive in lesion-based analyses. A subject with at least one positive lesion was considered positive in subject-based analyses. TS images were considered beneficial for identifying lesions where at least one observer could correctly identify and positively rate only with TS images. Meanwhile, they were deemed detrimental to identifying lesions where at least one observer could correctly identify and positively rate only without TS images. All false positives were further reviewed by the six radiologists. Statistical analyses. JAFROC analysis 22,23 was conducted with JAFROC software with random-observers-and-random-subjects models, and the figure of merit (FOM) was calculated to evaluate overall observer performance. Lesion-based sensitivity, false positive count (FPC) per subject, subject-based sensitivity and specificity, interpretation time, and confidence levels were compared between sessions (with TS images vs. without TS images) with the Wilcoxon signed rank test. SAS (Version 9.4, SAS Institute, Cary, North Carolina) was used for statistical analyses, and P < 0.05 was considered to indicate a significant difference. Results Subject characteristics are shown in Table 1 and Supplementary Information D. In total, the reference standard consisted of 160 bone metastases. Their detailed characteristics are shown in Table 2 and Supplementary Information E. TS images were generated for the image pairs of the 100 subjects. The mean processing time per image pair was 973 s (range 322-2310, standard deviation 405). TS images were not generated for one metastasis because it was out of the scan area of the previous CT image. Observer characteristics. All 18 enrolled observers were board-certified radiologists, with the following specialties in radiology: general radiology (n = 4), nuclear medicine (n = 2), neuroradiology (n = 2), cardiovascular radiology (n = 1), respiratory radiology (n = 2), upper abdominal radiology (n = 6), gastrointestinal radiology (n = 2), and urological radiology (n = 2). They had 10-36 years of experience in the interpretation of CT images. In clinical practice, they interpreted 3000 to 10,000 CT examinations each year. Two radiologists had previously used a computer-aided diagnosis system. None had previously used TS-CT. Image interpretation.
The 18 observers evaluated the 100 image pairs with and without TS. In total, 3600 reading sessions were performed. Figure 3 and Table 3 show the main results for image interpretation. Representative cases are shown in Figs. 4 and 5. Compared with interpretation without TS, TS images were associated with a significant increase in mean FOM from 0.658 to 0.710 (JAFROC analysis, P = 0.0027). Mean sensitivity at the lesion level was significantly higher for interpretation with TS compared with that without TS (46.1% vs. 33.9%; P = 0.003). Median confidence levels ranged from 2 (low) to 5 (very high) for interpretations without TS and from 3 (moderate) to 5 (very high) for those with TS. The median ratings for usefulness of TS images ranged from 3 (somewhat useful) to 5 (very useful), indicating that all observers evaluated TS as useful. Subjects were divided into subgroups according to the type, location, and preexistence of bone metastases (Table 4). Sensitivity with TS was higher than or equal to that without TS for all subgroups. The gain in sensitivity for interpretation with TS compared with that without TS was small for metastases in the scapulae. Moreover, the gain for metastases in extremities was zero because sensitivity for both interpretations without and with TS was also zero. Effects of TS images on metastases detection. Of the 160 metastases, a beneficial effect of TS images was observed for 118 and a detrimental effect was observed for 82. In particular, there were eight notable metastases for which detection was improved by TS images for 10-15 of the 18 observers, while a detrimental effect was observed for 0-1 observer. These metastases comprised not only three small metastases but also five larger ones, measuring 21.8-32.9 mm. These larger lesions were "lost" on current CT images, disguised by commonly-observed degenerative changes and sterically complex structures of the sternum, ribs, or pelvic bones.
(Figure caption: When an observer clicks on a suspicious lesion, a dialog box appears to rate its likelihood (low to high) of being a bone metastasis. The representative images are obtained from a 55-year-old male patient with renal cell carcinoma who developed two osteolytic metastases, in a thoracic vertebra (red circle) and the left iliac bone (blue circle). Both metastases are clearly visualized.)
In contrast, there were seven notable metastases for which detection was detrimentally affected by TS images for 5-8 observers, while a beneficial effect was observed for 0-2 observers. These metastases resembled commonly-observed benign findings on TS images, especially the projection images, such as degenerative changes of the vertebrae and joints, healing fractures of the ribs and pelvic bones, and subtraction artifacts around the scapulae. The review of false positive marks without and with TS images identified 161 and 212 bone lesions, respectively. In most of the false positives (n = 130 without TS and 148 with TS), the number of observers who marked them was one, while the number was 5 to 15 for some lesions (n = 7 without TS and 23 with TS). It is speculated that these lesions represent degenerative changes (n = 4 without TS and 10 with TS), healing fractures (n = 1 without TS and 6 with TS), post-operative changes (n = 1 without TS and 1 with TS), and other benign bone lesions (n = 1 without TS and 6 with TS) such as bone islands.
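To make the lesion- and subject-based scoring rules described in the Methods concrete (a mark rated 51% or higher counts as positive, and a subject with at least one positive mark is treated as positive), the following Python sketch shows how such per-session metrics could be tallied. The data layout, variable names, and the matching of marks to reference lesions are assumptions made only for illustration; the study's actual analysis used JAFROC software and SAS.

```python
def score_session(marks, lesion_truth, subject_truth, n_ref_lesions, threshold=51.0):
    """Toy tally of one reading session (illustrative only).
    marks:          dict subject_id -> list of (mark_id, likelihood in %)
    lesion_truth:   dict subject_id -> dict mark_id -> True if the mark hits a reference lesion
    subject_truth:  dict subject_id -> True if the subject actually has bone metastases
    n_ref_lesions:  total number of reference-standard lesions (160 in this study)
    """
    tp_lesions = fp_marks = 0
    called_positive = set()
    for subj, subj_marks in marks.items():
        for mark_id, likelihood in subj_marks:
            if likelihood < threshold:          # only ratings of 51% or higher count as positive
                continue
            called_positive.add(subj)           # subject-level call: at least one positive mark
            if lesion_truth[subj].get(mark_id, False):
                tp_lesions += 1
            else:
                fp_marks += 1
    lesion_sensitivity = tp_lesions / n_ref_lesions
    fp_per_subject = fp_marks / len(marks)
    n_pos = sum(1 for s in marks if subject_truth[s])
    n_neg = len(marks) - n_pos
    subject_sensitivity = sum(1 for s in called_positive if subject_truth[s]) / n_pos
    subject_specificity = sum(1 for s in marks
                              if not subject_truth[s] and s not in called_positive) / n_neg
    return lesion_sensitivity, fp_per_subject, subject_sensitivity, subject_specificity
```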
Discussion This study investigating the effects of TS on bone metastases detection in CT images indicated that TS images could be made available at follow-up CT without any extra physical burden on patients. Moreover, TS images significantly improved overall performance in detection of various types of bone metastases at various locations by radiologists without additional interpretation time. This study recruited a relatively large number of radiologists to assess CT images from a large number of subjects. Furthermore, considering the frequency of CT scans in oncology patients, we believe that our TS method could bring considerable benefit to clinical diagnostic imaging. This is the first study to report a significant improvement in overall radiologist performance at detecting various types of newly-developed and preexisting bone metastases at various locations by using TS images. Table 4 suggests that TS was beneficial for all types of bone metastases, unlike 18 F-fluoro-2-deoxy-d-glucose positron emission tomography or bone scintigraphy, which are reported to only have benefits for specific metastases [25][26][27] . Moreover, TS retains the advantages of CT, which has finer resolution and is more frequently performed in oncology patients than other imaging modalities. All these advantages are essential for earlier detection of bone metastases. The TS method is clinically applicable because our study evaluated TS images without excluding subjects for inconsistencies between previous and current CT images in posture, breathing depth, and other study attributes (Table 1), which are inevitable with real-world application. Furthermore, these results were obtained with the 18 radiologists who had various backgrounds and no previous experience of TS for bone metastasis detection. Moreover, TS is likely to be accepted by radiologists based on their usefulness ratings. As such, clinical application of TS could enable early detection of bone metastases, reducing SRE and cancer-related mortality and improving quality of life of cancer patients. There were some detrimental effects of TS on detection, which can presumably be attributed to conspicuous visualization of commonly-observed degenerative and traumatic changes, premature judgment of such changes or bone metastases on TS images, and abbreviated observation of CT images based on this judgment. To minimize these effects, radiologists should be educated about TS. Some visualization aids might also be helpful to minimize such effects, including image fusion and synchronized scrolling to assist radiologists in exploiting both CT and TS information. It was observed that sensitivity for intertrabecular metastases was lower than that for other types even with TS. Although TS improved sensitivity, the improvement was smaller, presumably due to a smaller density change. To increase the advantage gained with TS images for such metastases, computer-aided detection might be developed. By its nature, the TS method used here exploits follow-up CT images and requires prior images, which are unavailable at initial imaging assessments. In such situations, another modality should also be considered because some cancer patients already have bone metastases at initial diagnosis [28][29][30] . However, follow-up evaluations, as well as detection of bone metastases, are important for their management, including the prevention of SRE.
Based on Table 4, TS appears to assist radiologists in identifying both preexisting and newly-developed bone metastases. Follow-up evaluation of bone metastases with CT is generally considered difficult in some cases 31 . Further research is therefore required to investigate the use of TS for follow-up evaluations. Although the processing time in this study was much shorter than that of Sakamoto's study 14 , it would be preferable to further shorten it for clinical application of TS, especially in emergency CT assessments for SRE. According to preliminary results using in-house software, processing time can be reasonably expected to be reduced to less than 10 min with the use of a graphics processing unit. There were several previous studies investigating the usefulness of TS for detection of bone metastases [14][15][16]18,21 . To the best of our knowledge, the current study was the first to show that TS was useful for detecting bone metastases even when inconsistent CT sets (such as slice thickness) were included for generating TS. There were several limitations to this study. First, despite repeated scrutinization of CT images by the 6 board-certified radiologists, reference to all available images including those obtained after current images, and determination with sufficient confidence by consensus, the definition of the reference standard might be incomplete because any use of clinical information other than images was not accepted by the Japanese regulatory body (Pharmaceuticals and Medical Device Agency). This study was conducted as a clinical performance test for which the results were to be submitted to the body for approval of TS for clinical use 32 . Although TS images would have considerably assisted the definition of the reference standard, they were not referred to for the definition. Second, TS effects were not sufficiently evaluated for metastases in the skull, scapulae, and extremities due to the small number of subjects with these metastases. All three metastases in extremities happened to be too difficult to detect and differentiate with CT without reference to other modality images. Therefore, further studies focusing on specific types of metastases are also required. Third, the effect of bone metastasis therapy on detectability with TS was not examined in the current study. The therapy can change the CT density of bone metastases 33,34 . Therefore, detectability with TS may be changed by bone metastasis therapy. Because access to medical records was severely restricted in performing the current study 32 , we could not examine the effect of bone metastasis therapy on detectability with TS. In conclusion, TS images obtained from serial CT scans using nonrigid image registration significantly improved radiologist performance in the detection of bone metastases.
Low Temperature Curable Polyimide for Advanced Package Advanced packaging technology requires a low temperature curable and low residual stress material as the dielectric layer. We developed an appropriate product based on our previous studies. As a result, we used a polyimide structure with acidity control of the amine unit for the polymer and additive A for copper adhesion. We also evaluated the product regarding various items. Introduction For many years, the progress of semiconductor devices has been achieved in accordance with Moore's law, namely miniaturization. Recently, however, this requires very complicated techniques, so performance and productivity are becoming hard to reconcile. Therefore, packaging technology is attracting attention, and many types of advanced packaging are actively being developed. Particularly, Fan-Out Wafer Level Package and Through Silicon Via are considered to be the future mainstream, because those packaging techniques can achieve high I/O, superior electrical performance, low energy consumption and small form factor. Both packaging techniques use low thermal stability materials within the structure or during the production process, so they need low temperature curable polymer materials as their dielectric layer. In addition, almost every semiconductor device is now required to be thin, especially for mobile applications. Many chips are therefore becoming thinner and thinner, and bending of chips and wafers is a concern. Accordingly, low internal stress is also required for dielectric layer materials. As dielectric layer materials, polyimide and polybenzoxazole are widely used because of their excellent heat resistance, chemical resistance, and high mechanical properties [1]. In our previous study, we showed by experiment and finite element analysis (FEA) that polyimide is more suitable than polybenzoxazole from the viewpoint of thermal cycle test reliability because of its good elongation at −55 °C and its fatigue properties [2,3]. On the other hand, we also reported by FEA that it was necessary to choose an appropriate thickness and modulus for the film to work as a stress buffer in the drop test [4]. In consideration of this information, we present our dielectric layer material solution for advanced packages in this article. Synthesis of polyimide precursor Tetracarboxylic acid dianhydride, pyridine and the alcohol introduced to the side chain of the polyimide precursor were dissolved into γ-butyrolactone (GBL), and the solution was stirred at room temperature for 16 hr. Then, the solution was cooled with an ice bath, and a GBL solution of N,N'-dicyclohexylcarbodiimide was added to the solution. Subsequently, diamine in GBL was added to the solution. The solution was stirred at room temperature for 2 hours. After that, the solution was filtered to remove N,N-dicyclohexylurea as a by-product and poured into a large amount of water. The polymer precipitate was filtered and dried at 40 °C for 72 hours under vacuum. The molecular weight of the polyimide precursor was measured by gel permeation chromatography (GPC). GPC measurement 0.02 g of polymer was added to 5 mL of N-methyl pyrrolidone (NMP). GPC measurement of each sample was performed with an analytical curve of standard polystyrene. Preparation of photosensitive polyimide precursor composition Photosensitive polyimide precursor varnishes were prepared by adding the polyimide precursors, photo-initiators, crosslinking agents, and adhesion promoters to NMP. The solution was filtered by a 1.0 µm pore PTFE filter.
Lithography and observation Photosensitive polyimide precursor varnishes were coated on 6 inch silicon wafers and pre-baked at 100 °C for 4 minutes (Mark-8, Tokyo Electron). The coated film, about 14 µm thick on the wafer, was exposed through a patterning mask by an i-line stepper (NSR2005i8A, Nikon) from 100 mJ/cm² to 800 mJ/cm². The exposed film was developed with cyclopentanone and then rinsed with 2-propylene glycol-1-methyl ether acetate by spray development (SC-W60A-AV9, SOKUDO). The patterned film was observed by optical microscope. Curing Pre-baked or post-developed film on the wafer was cured under a nitrogen atmosphere (VF-2000B, Koyo Thermo System). FT-IR measurement Cured film on the wafer was measured by the ATR method (Nicolet 380, Thermo Scientific). Measurement of elongation, strength and Young's modulus Cured film of 7 µm thickness was measured by a tensile testing machine (Tensilon UTM-II-20, Orintec). Measurement of thermal properties The 5% weight loss temperature of the cured film was measured by thermogravimetric analysis (TGA50, Shimadzu). The glass transition temperature (Tg) and coefficient of thermal expansion (CTE) of the cured film were measured by thermomechanical analysis (TMA50, Shimadzu). Measurement of residual stress The bending amount of a bare 6 inch wafer was measured by a film stress measurement system (FLX-2320, KLA-Tencor). Then, photosensitive polyimide precursor varnish was coated on the wafer and cured. The bending amount of the wafer with the cured film was measured in the same way, and the residual stress was calculated with the bending amount data of the bare wafer. Chemical resistance Patterned and cured film was treated with photoresist stripper, metal etchants and flux. In the cases of photoresist stripper and metal etchant, the cured film on the wafer was immersed in these chemicals under each condition. The tested film on the wafer was rinsed with deionized water for 5 minutes and then dried at room temperature. As for flux resistance, the flux was put on the cured film on the wafer and passed through a reflow oven under a nitrogen atmosphere. The maximum temperature in the reflow oven was 260 °C. Then the sample was rinsed with deionized water for WS-600, with PineAlfa ST-100SX and deionized water for R5003, and dried. Appearance after the test was observed by optical microscope. Thickness change after the test was measured by a contact film thickness measuring apparatus. Adhesion Cured film on various substrate wafers was prepared, and a stud was adhered with epoxy resin to the surface of the cured film by heating at 150 °C for 90 min. This sample was measured by a universal materials tester (ROMULUS IV, Quad group). Design concept of PIMEL TM BL-301 One of the requirements for a 'low temperature curable polyimide' is completing the imidization at a low cure temperature, because the excellent mechanical and thermal properties of polyimide come from the polyimide structure itself, not from the precursor. Imidization usually occurs at high temperature, i.e. over 300 °C, and many approaches to imidization at low temperature are known, for example, adding a basic catalyst or plasticizer, or using a flexible unit or a difficult-to-rotate unit as the polymer backbone. But these methods are apt to bring undesirable side effects. Therefore, we selected another means, that is, to control the acidity of the amine unit of the polyimide precursor. On the other hand, good adhesion to copper is very important for re-distribution layer materials, because almost every re-distribution metal is copper.
We studied this and found additive A to improve the adhesion to copper. As a result, we have developed the low temperature curable polyimide product PIMEL TM BL-301 with reference to all the above information. In the following sections, various types of evaluation of PIMEL TM BL-301 under low temperature cure and high temperature cure conditions are described. Evaluation of imidization We measured the degree of imidization of PIMEL TM BL-301 by FT-IR. We defined the 'imide index' as the following equation. As shown in Figure 1, the imide index of PIMEL TM BL-301 is the same and saturated over the range between 190 and 300 °C cure. This indicates that imidization of this product is already complete under the 190 °C cure condition, and that our strategy to achieve imidization at a low cure temperature is correct. Table 1 shows the thermal and mechanical properties of PIMEL TM BL-301. Even under the low temperature cure condition, this product shows excellent mechanical properties, almost the same as under high temperature cure, except for residual stress. This also indicates that PIMEL TM BL-301 can be used sufficiently at cure conditions above 200 °C, and its modulus value is appropriate for the target we set. Thermal properties changed in accordance with cure temperature, and this leads to a change in the residual stress value. Evaluation of chemical resistance We tested the chemical resistance of PIMEL TM BL-301. The chemicals we tested are ST-44 (ATMI Inc.), acetone, PGME and NMP as photoresist strippers, 30% nitric acid aq., 30% sulfuric acid aq. and 1% HF aq. as metal etchants, and WS-600 (Alpha Metals Japan) and R5003 (Cookson Electronics Co. Ltd) as fluxes. Flux resistance was tested under the 260 °C reflow condition. We made judgments by pattern inspection, thickness measurement and so on. PIMEL TM BL-301 shows excellent chemical resistance even with low temperature cure, as shown in Table 2. These properties are sufficient for application to the re-distribution layer. Table 2. Chemical resistances of PIMEL TM BL-301. Evaluation of adhesion We evaluated the adhesion of cured PIMEL TM BL-301 to various substrates by the stud-pull test method. We show only the results after PCT treatment in Table 3. In the table, '>70 MPa' means the measurement limit was exceeded due to breakage of the epoxy adhesive between the cured film and the stud. Under all cure conditions, there is no problem at all, and it goes without saying that adhesion without PCT is also excellent. Table 3. Adhesions of PIMEL TM BL-301 to various substrates. Table 4. The effect of additive A on adhesion to copper after HTS treatment. As we described above, adhesion to copper is especially important in the re-distribution application, and we found additive A to improve it. We show the effect of additive A by the stud pull test after HTS (high temperature storage) treatment at a harsher condition (175 °C, 270 hr) than usual in Table 4. Without additive A, the measured adhesion value is low and delamination between the cured film and copper is observed. On the other hand, with additive A (i.e. PIMEL TM BL-301), the film shows excellent adhesion to copper. Conclusion We have successfully developed a low temperature curable photosensitive polyimide precursor product, PIMEL TM BL-301. This product showed low temperature curability (i.e. the ability to imidize at low cure temperature), excellent mechanical and thermal properties, chemical resistance and adhesion to various substrates including copper. Furthermore, it can also be used under high temperature cure conditions. Namely, PIMEL TM BL-301 is a product with a wide cure temperature margin.
We believe this product will satisfy the demand for advanced package dielectric layers.
Learnable Leaky ReLU (LeLeLU): An Alternative Accuracy-Optimized Activation Function
In neural networks, a vital component in the learning and inference process is the activation function. There are many different approaches, but only nonlinear activation functions allow such networks to compute non-trivial problems by using only a small number of nodes, and such activation functions are called nonlinearities. With the emergence of deep learning, the need for competent activation functions that can enable or expedite learning in deeper layers has emerged. In this paper, we propose a novel activation function, combining many features of successful activation functions, achieving 2.53% higher accuracy than the industry standard ReLU in a variety of test cases. Introduction Activation functions originated from the attempt to generalize a linear discriminant function in order to address nonlinear classification problems in pattern recognition. Thus, an activation function is a nonlinear, monotonic function that transforms a linear boundary function to a non-linear one. The same principle was used in perceptrons in order to allow the perceptron to classify the inputs. The most straightforward activation function is the identity function (y = x), along with the binary activation function in Equation (1) that resembles an activation/classification switch. This is the first nonlinearity used in perceptrons and multilayer perceptrons and made its way to more complex neural networks later on. Despite its simplicity, the discontinuity at x = 0, which rendered the calculation of the corresponding derivative rather difficult, encouraged the search for new monotonic and continuous activation functions. The first continuous, nonlinear activation function that was used was the sigmoid, also called the logistic or the soft-step activation function, described by Equation (2). This allowed the computation of nonlinear problems by using a low number of neurons. The sigmoid was used in the hidden layers of common neural networks and enabled the training and inference of these systems for years. A similar function can arise from the sigmoid function through a linear transformation of the input and the output: the hyperbolic tangent (Tanh) presented in Equation (3). Again, this was widely used in neural networks for years, and it was generally accepted that the Tanh function favored faster training convergence, compared to the sigmoid function. However, the computation of these activation functions is rather expensive, since it entails look-up table solutions; thus, they are non-optimal choices for neural networks. The emergence of deeper architectures and deep learning, in general, has also highlighted another deficit of the two traditional activation functions. Their bounded output restricted the dissipation of derivatives in back-propagation when the network was deep. In other words, deeper layers received almost zero updates to their weights; that is, they were barely able to learn during the training process. This phenomenon is also known as the vanishing gradient problem. The difficulty in computational calculation and deep learning is partially solved with the introduction of the rectified linear unit (ReLU) [1] in Equation (4). The ReLU achieves great performance, while being computationally efficient. Since it poses no restriction on positive inputs, gradients have more chances to reach deeper layers in back-propagation, thus enabling learning in deeper layers.
In addition, the computation of the gradient in backpropagation learning is reduced to a multiplication with a constant, which is far more computationally efficient. Thus, a whole new era in learning and inference with neural networks has emerged, dominating the last decade. One drawback of the ReLU is that it does not activate for non-positive inputs, causing the deactivation of several neurons during training, which can be viewed again as a vanishing gradient problem for negative values. The non-activation for non-positive numbers is solved with the introduction of the Leaky rectified linear unit (Leaky ReLU) [2], which activates slightly for negative values, as expressed in Equation (5). One can encounter a number of other variations of ReLU in the literature. One basic variation of the ReLU is the Parametric Rectified Linear Unit (PReLU) [3], which has a learnable parameter, α, controlling the leakage of the negative values, presented in Equation (6). In other words, PReLU is a Leaky ReLU; however, the slope of the curve for negative values of x is learnt through adaptation instead of being set at a predetermined value. Moving away from the family of ReLU, we see that there is the Gaussian Error Linear Unit (GELU) [4] in Equation (7). This activation function is non-convex and non-monotonic and features curvature everywhere in the input space. The authors in Reference [4] claim that GELU can offer a regularization effect on the trained network, since the output is determined by both the input and the stochastic properties of the input. Thus, neurons can be masked off the network, based on the statistical properties of x, which resembles the batch normalization [5] and the Drop-out [6] mechanisms. Another nonlinear activation function is the Softplus [7,8], as described by Equation (8). The Softplus function features smooth derivatives and less computational complexity, as compared to the GELU; however, it is still more complex compared to the ReLU family. The exponential linear unit (ELU) [9] in Equation (9) is another smooth, continuous and differentiable function that tackles the vanishing gradient problem for negative values through an exponential function. This function saturates for large negative values; however, the degree of saturation is controlled by the learnable parameter, α. The scaled exponential linear unit (SELU) [10] is another version of the ELU with controllable parameters that induce self-normalizing properties (Equation (10)), where α = 1.6733 and λ = 1.0507. These last activation functions act similar to the ReLU family, providing slightly higher accuracy in complex problems, while having a higher computational cost due to the exponential/logarithmic part in the computation and the more complicated implied derivatives at back-propagation. In Reference [11], Courbariaux et al. introduced a stricter version of the original sigmoid function, coined "hard sigmoid", which is given by the formula in Equation (11). The proposed function was less computationally expensive as compared to the original sigmoid and yielded better results in their experiments [11]. Another derivative of the original sigmoid function is the Swish activation function, which was introduced in Reference [12] and is described by the formula in Equation (12), where β can be a fixed or a trainable parameter. Swish can be regarded as a smooth function that serves as an intermediate between a linear function and a ReLU.
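As a quick reference for the functions reviewed so far, the following Python sketch collects their standard textbook forms. The default slopes and the tanh-based GELU approximation are assumptions made for illustration, since the exact constants of the paper's Equations (2)-(12) are not reproduced in the text above.

```python
import numpy as np

def sigmoid(x):                         # Eq. (2): logistic / soft-step
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):                            # Eq. (3)
    return np.tanh(x)

def relu(x):                            # Eq. (4)
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):          # Eq. (5); 0.01 is a common default slope
    return np.where(x >= 0, x, slope * x)

def prelu(x, a):                        # Eq. (6); a is learned during training
    return np.where(x >= 0, x, a * x)

def gelu(x):                            # Eq. (7); common tanh approximation of x * Phi(x)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def softplus(x):                        # Eq. (8)
    return np.log1p(np.exp(x))

def elu(x, a=1.0):                      # Eq. (9)
    return np.where(x >= 0, x, a * (np.exp(x) - 1.0))

def selu(x, a=1.6733, lam=1.0507):      # Eq. (10), constants as given in the text
    return lam * np.where(x >= 0, x, a * (np.exp(x) - 1.0))

def hard_sigmoid(x):                    # Eq. (11): clipped linear approximation of the sigmoid
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def swish(x, beta=1.0):                 # Eq. (12): x * sigmoid(beta * x)
    return x * sigmoid(beta * x)
```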
Finally, the Mish activation function [13] is a self-regularized non-monotonic activation function that was inspired by the Softplus function and Swish and is described by Mish(x) = x·tanh(ln(1 + e^x)) (13). There is no trainable/adjustable parameter here; nonetheless, it seems to outperform Swish and other functions in a study [13]. The computational complexity of estimating the function is noteworthy in this case. More complicated activation functions have also recently been proposed. In Reference [14], Maguolo et al. propose the Mexican ReLU, which is described in terms of ϕ_a,λ(x) = max(λ − |x − a|, 0), a Mexican-hat-type function, where a and λ are learnable parameters. In Reference [15], the concept of reproducing activation functions is introduced, where a different activation function is applied to each neuron. The applied activation function is a weighted combination of a set of known activation functions with learnable parameters and weights for each neuron. In Reference [16], Zhou et al. evolve new activation functions from known activation functions. The evolution begins with a parent activation function that evolves through four evolutionary operations (insert, remove, change and regenerate). Each function that is generated is parameterized, and a fitness score is estimated. The functions that yield the best fitness scores are added to the population of activation functions to be used in the NN. The objective of the paper is to propose a novel activation function that (a) expands the ReLU family by adding support to the negative values; (b) the degree of saturation for the negative values is controlled by a learnable parameter, α; (c) this parameter α simultaneously controls a learning boost for positive values; (d) in the case of α → 0, the learning at these nodes ceases, leading to a regularization of the network, similar to Drop-out, which eliminates the need for such techniques; (e) the accuracy performance gain of the proposed activation function over ReLU increases with the information complexity of the dataset (i.e., the difficulty of the problem); and (f) it remains a simple function with a single learnable/adaptive parameter and a simple update rule, in contrast to far more complicated adaptive activation functions. The Proposed Activation Function In this paper, we propose a novel activation function combining the best qualities of the ReLU family, while having low computational complexity and more adaptivity to the actual data. The equation that describes the Leaky Learnable ReLU (LeLeLU) is as follows: f(x) = αx for x ≥ 0 and f(x) = 0.1αx for x < 0 (15), where α is a learnable parameter that controls the slope of the activation function for negative inputs, but what is different here is that it simultaneously controls the slope of the activation function for all positive inputs. There is a constant multiplier, 0.1, that reduces the slope for negative input values in a similar manner to the Leaky ReLU, which seems to work well in our experiments. LeLeLU is depicted in Figure 1 for various values of α.
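To make Equation (15) concrete, here is a minimal NumPy sketch of the forward pass; it is illustrative only, as the authors' reference implementation is in MATLAB.

```python
import numpy as np

def lelelu(x: np.ndarray, alpha: float) -> np.ndarray:
    """LeLeLU forward pass, Eq. (15): alpha scales the positive slope directly,
    and the negative slope through the fixed 0.1 leak factor."""
    return np.where(x >= 0, alpha * x, 0.1 * alpha * x)

# For alpha = 1 this reduces to a Leaky ReLU with slope 0.1 for negative inputs;
# alpha close to 0 shrinks the whole output towards zero (Drop-out-like behaviour).
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(lelelu(x, alpha=1.0))   # [-0.2  -0.05  0.    0.5   2.  ]
print(lelelu(x, alpha=0.0))   # all zeros
```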
The derivative of LeLeLU can simply be calculated as f′(x) = α for x ≥ 0 and f′(x) = 0.1α for x < 0 (16). The update formulations of the parameter α can be derived by using the chain rule. The gradient of α for one layer for each neuron, i, can be given by ∂L/∂α_i = (∂L/∂y_i)·(∂y_i/∂α_i) (17), where L(•) denotes the neural network's loss function, and y_i denotes the output of the i-th neuron. In order to reduce the computational cost in demanding situations, one can choose to keep the parameter α the same for a number of neurons, i.e., for a layer. For the layer-shared variant, the gradient of α is ∂L/∂α = Σ_i (∂L/∂y_i)·(∂y_i/∂α) (18), where the summation Σ_i sums over all neurons of the layer. The complexity overhead of α, the learnable parameter, is negligible for both forward and backward propagation, while gradient descent with the momentum method was used during training, where η is the learning rate and µ denotes momentum. The parameter α is learnable per filter during training, and during testing, we observed a correlation between dataset complexity, the depth-wise position of the respective filter in the neural network topology, and the training phase. It is obvious in Figure 1 that, for α = 1, our proposed activation function turns into the leaky ReLU activation function. The strong point of the proposed activation function is that the learnable parameter influences both the negative and the positive values. This implies that the adaptation of α can accelerate training in certain parts of the network during certain epochs of the training procedure, when α gets values that are larger than 1. In contrast, when α takes values lower than 1, learning slows down for certain parts of the network. In the special case that α gets values close to zero, not only is learning halted for these neurons, but their output is close to zero, which implies that these neurons are severed from the network.
Hence, by de-activating several neurons, the network is automatically regularized during training in a similar manner to the popular Drop-out technique [6]. The difference is that, by using the proposed activation function, network regularization is performed by the adaptation of the activation function and network training, whereas a Drop-out is a mechanism that works as an extra step during network training. The adaptation of the parameter α is investigated in more detail in the next section. Parameter Adaptation and Network Regularization In this section, we investigate the role and behavior of parameter α during training. As a testbed, we used the Fashion MNIST dataset and the corresponding network architecture in Figure 2. The programming environment was MATLAB 2020a on a Haswell i7 4770 s, 16 GB DDR3 RAM, NVidia GTX 970 4 GB PC, running Windows 10. The code for implementing LeLeLU can be found here (https://github.com/ManiatopoulosAA/LeLeLU, accessed on 10 October 2021). In the proposed network architecture, we included the use of Batch Normalization [5], which is a form of network regularization that keeps the mean and variance of neurons' output normalized. The use of Drop-out is often complementary to Batch Normalization; therefore, we can see in the literature that they can be used in parallel. Since the proposed activation function is similar to PReLU, we would like to compare the performance of the proposed activation function with PReLU on the previously described testbed. In addition, since the proposed activation function is performing regularization in the same manner as Drop-out, we would like to compare its performance with a combination of PReLU, using Drop-out on each layer. The results are very conclusive. The architecture using PReLU only yields classification accuracy of 0.82, with notably slower convergence. The architecture using PReLU and Drop-out yields a classification accuracy of 0.829, whereas the proposed activation function with batch normalization but without Drop-out achieves an accuracy of 0.912. Thus, at first, LeLeLU performed better than PReLU itself. At the same time, LeLeLU is performing better than PReLU with a regularizer (Drop-out). This implies that the adaptation of LeLeLU is regularizing the network itself and even works better than Drop-out by 8.8%. There is an extra computational cost for the adaptation of parameter α. Based on the previous testbed, the runtime of the proposed scheme is marginally longer by 2.56%, compared to the PReLU+Drop-out combination. We reckon that this might be due to the fact that Drop-out completely removes and does not process some neurons from the network, whereas, in our case, the network continues to process these neurons, even in the case that α → 0 in their LeLeLU. In Figure 3, we visualized the adaptation of parameter α for a random neuron/filter as it changes for every epoch. It is obvious that there is an active Drop-out-like behavior at least twice for every neuron during the training process, while there are instances where the parameter α is near 1, accelerating the learning of the neuron in question. Results In this section, we perform a more thorough comparison between the various activation functions for various different datasets.
Datasets and Network Topologies The topology of all networks used to compare the nine activation functions is displayed in Figure 3. More specifically, MNIST and Fashion MNIST run on a three-hidden-layer convolutional neural network with 16, 32 and 48 5 × 5 filters, while the last layer was a 10-neuron classification layer. The Sign MNIST runs on a five-hidden-layer convolutional neural network with 16, 32 and 48 5 × 5 filters, while the last two hidden layers have 64 and 96 3 × 3 filters, respectively. The last layer is a 24-neuron fully connected classification layer. Lastly, the CIFAR-10 classification dataset runs on a five-hidden-layer convolutional neural network with 32, 36 and 48 5 × 5 filters, while the last two hidden layers have 64 and 96 3 × 3 filters respectively, with the last layer being a 10-neuron classification layer. The MNIST topology was trained for 15 epochs, the Fashion MNIST for 20 epochs, the Sign Language dataset for 20 and the CIFAR-10 dataset for 60 epochs. Since the scope of this paper is the comparison of different activation functions, and since the ReLU activation function is the most widely known and used, all results presented were normalized to the accuracy obtained by the ReLU activation. All testing was conducted with five-fold validation, and the results presented in the next section are the mean of the three median values. In other words, from the five accuracy results of five-fold validation, the largest and lowest values were dropped, and the three median values were averaged to give a more balanced score that is less prone to outliers. In our experiments, we benchmarked the following activation functions: Tanh, ReLU, PReLU, ELU, SELU, HardSigmoid, Mish, Swish and the proposed LeLeLU. These activation functions were chosen as representative examples of each category of baseline activation functions, as described earlier in the introduction. We preferred to compare with simple activation functions with minimal computational cost or adaptation, such as the proposed one, avoiding those mentioned earlier with great adaptation complexity and many trainable parameters. Numerical Results Here, we evaluate all experiments using accuracy, i.e., the number of correctly classified examples over the total number of examples in the testing dataset. As stated previously, the overall accuracy is estimated via five-fold validation. Then, we consider the accuracy achieved by ReLU as the baseline result, and we calculate normalized accuracy as the ratio (in percentage) of the new activation function accuracy over the accuracy achieved by ReLU. In Table 1, we can see the accuracy and normalized accuracy on the MNIST dataset, using the nine activation functions. All activation functions perform well, with the LeLeLU giving a small boost of 0.23% over the baseline ReLU. The proposed LeLeLU outperforms current state-of-the-art activation functions, including Swish and Mish. The MNIST dataset is well-studied and easy to classify, and therefore the improvement is minimal but present. It should be noted that PReLU slightly underperforms in this experiment, but this is minimal. In Table 2, we can see the accuracy and normalized accuracy on the Fashion MNIST dataset, using the nine activation functions. All activation functions perform relatively well. The LeLeLU gives a significant boost of 1.8% over the baseline ReLU, whereas PReLU improves slightly by 0.06%, with the ELU giving the second best improvement of 1.2%.
Mish and Swish outperform the traditional ReLU, but they are well below the proposed LeLeLU. In Table 3, we can see the accuracy and normalized accuracy on the Sign Language dataset, using the nine activation functions. Here, the results are more impressive. All other activation functions clearly underperform, as compared to the baseline ReLU, with the LeLeLU giving the only improved performance with a significant boost of 3.2% over the baseline. Here, again, we witness the superiority of the proposed LeLeLU, compared to Mish and Swish, which are the only ones that offer an improvement to ReLU, but their improvement is less impressive than that of the LeLeLU. This experiment clearly demonstrated the significant ability of LeLeLU to adapt over the dataset and improve learning for both positive and negative values, as compared to the stationary ReLU. In Table 4, we can see the accuracy and normalized accuracy on the CIFAR-10 dataset, using the nine activation functions. Here, the LeLeLU is again scoring the best improvement over the baseline, with a significant boost of 4.9%. PReLU and ELU have demonstrated improvement in this example of 3.5% and 3.4%, respectively, with the Tanh underperforming, as expected. Mish and Swish offer less significant improvement, whereas SELU is the second runner-up, offering an improvement of 4%. Overall, LeLeLU shows a consistent tendency to improve classification accuracy over the baseline ReLU, which is not the case for the other tested activation functions. PReLU, which is very close to LeLeLU, shows very unstable performance with cases of serious underperformance. It is evident that the performance of all competing tested activation functions depends on the dataset used. Some might underperform or overperform the original ReLU function. Only the proposed LeLeLU seems to consistently offer an improvement in all tested cases. This clearly demonstrates that the addition of a controllable slope (parameter α) in the positive values area of the activation function has improved classification performance. This parameter also controls the speed of adaptation of positive values and seems to improve performance by either accelerating or slowing down learning, in contrast to the fixed slope for positive values of ReLU and PReLU. LeLeLU Performance in Larger Deep Neural Networks In this section, we evaluate the performance of the proposed LeLeLU in more real-life deep network architectures, such as the VGG-16 and the ResNet-v1-56. VGG-16 with LeLeLU The first large neural network in our experimentation is the VGG-16, used to classify the Cifar-10 and Cifar-100 datasets. The topology of the network and the results for different activation functions are also presented in Reference [17]. The CIFAR-100 dataset is an expansion of the Cifar-10. It has 100 classes, containing 600 images per class. From those 600 images per class, 500 are considered training images and 100 test images per class. The resolution of the images is also 32 by 32 pixels, the same as with Cifar-10. The VGG-16 topology used in our work is the same as in Reference [17], with two convolutional layers with 64 filters, followed by max pooling; two convolutional layers with 128 filters, followed by max pooling; three convolutional layers with 256 filters, followed by max pooling; and two similar blocks of three convolutional layers with 512 filters each, followed by max pooling, one after the other. The final layer is a classification layer. Figure 4 depicts the VGG-16 topology.
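A rough PyTorch re-implementation of this layout is sketched below; the paper's own code is MATLAB, so the per-channel LeLeLU module, the 3 × 3 kernels, batch normalization and the single linear classifier head are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class LeLeLU(nn.Module):
    """Illustrative per-channel LeLeLU (Eq. (15)); autograd supplies the gradient of
    alpha, which plays the role of Equations (17)-(18) when trained with SGD + momentum."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(num_channels))  # one learnable alpha per filter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.alpha.view(1, -1, 1, 1)                      # broadcast over (N, C, H, W)
        return torch.where(x >= 0, a * x, 0.1 * a * x)

def vgg16_lelelu(num_classes: int = 100) -> nn.Sequential:
    """VGG-16-style stack matching the filter counts and pooling positions in the text."""
    cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
           512, 512, 512, 'M', 512, 512, 512, 'M']
    layers, in_ch = [], 3
    for v in cfg:
        if v == 'M':
            layers.append(nn.MaxPool2d(2))
        else:
            layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1),
                       nn.BatchNorm2d(v),
                       LeLeLU(v)]
            in_ch = v
    layers += [nn.Flatten(), nn.Linear(512, num_classes)]     # 32x32 input -> 1x1x512 here
    return nn.Sequential(*layers)
```

For a 32 × 32 CIFAR image, the five pooling stages reduce the feature map to 1 × 1 × 512, so a single linear layer suffices as the classifier head in this sketch.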
In Tables 5 and 6, we depict the performance of VGG-16 for Cifar10 and Cifar100 for various activation functions. We can easily see that the proposed function LeLeLU offers the best or the second best classification accuracy among the competing activation functions and a clear improvement over the widely used ReLU. More specifically, it offers the best performance for Cifar-10 and the second best for Cifar-100, behind ELU. However, it outperforms the more state-of-the-art Swish function, which is more prominent in the modern deep-learning literature. We preferred again to compare against simple activation functions with minimal computational complexity and adaptation. The ProbAct function that is proposed in Reference [17] yields a maximum of 0.8892 for Cifar10 and 0.5583 for Cifar100 for an element-wise bound trainable parameter σ (comparable to ours). Their score is better than LeLeLU in Cifar10, but far worse in Cifar100; however, it should also be noted that the parameter σ should be bound by another sigmoid function during adaptation (i.e., additional computational complexity) in order to stabilize the performance, which is far more complicated than our simple unbound adaptation rule. ResNet-v1-56 with LeLeLU In Reference [18], there is an extensive comparison of various activation functions, using the ResNet-v1-56 architecture for the classification of the Cifar-100 dataset. Here, we use the same topology and training methods as in Reference [18], along with the published results, to compare our proposed activation function. Again, in our comparison, we prefer baseline activation functions with minimal complexity, such as the one proposed in this paper. Table 7 contains the classification accuracy for CIFAR-100, along with the proposed function. The proposed LeLeLU activation function enables the network to better adapt to the complex dataset, having the highest classification accuracy in this test. Again, LeLeLU seems to perform better, as compared to modern counterparts, including Mish and Swish. It is also noteworthy that the complicated activation function produced by the genetic algorithm in Reference [18] for the ResNet-v1-56 architecture does not exceed an accuracy of 0.7101. Table 5. Test accuracy in the CIFAR-10 dataset of the activation functions in question, using the VGG-16 neural network, and accuracy normalized to that attained by the ReLU activation function. LeLeLU Performance vs. Dataset Complexity In this section, we attempt to identify a possible correlation between the gain in accuracy offered by the proposed activation function LeLeLU and the dataset used in the experiment. We witnessed that in the previous experiments LeLeLU featured an increasing improvement in accuracy. Thus, we attempt to quantify the difference between the four datasets. One feature of a dataset that we can identify is its complexity. We propose to estimate the complexity of the dataset by using an approximation of the Kolmogorov complexity theorem. Kolmogorov complexity can be defined for any information source.
It can be shown [19][20][21] that, for the output of Markov information sources, Kolmogorov complexity is related to the entropy of the information source [22]. More precisely, the Kolmogorov complexity of the output of a Markov information source, normalized by the length of the output, converges with high probability to the entropy of the source (since the output's length can be assumed to go to infinity) [23]. Based on this conclusion, we deduce that it is possible to evaluate the complexity of the dataset by using the product of the mean entropy of each sample and the number of bits required to represent every category (e.g., 7 bits for 80 classes). This method is very efficient, even in the case of large datasets. One could also employ only a representative amount of samples from each class and not the full dataset, without generally losing accuracy in the estimation of complexity. The following steps (Algorithm 1) outline the proposed procedure: compute the entropy of each of the N samples and accumulate their total T; calculate the mean entropy ME ← T/N; calculate the number of bits Q required to represent the number of classes; finally, set Dataset_complexity ← ME × Q. We use the algorithm to estimate the complexity of each dataset used in our experiments. The findings are outlined in Table 8. It is clear that the complexity of each dataset correlates highly with the improvement offered by LeLeLU. Figure 5 depicts this finding in a logarithmic plot. We can clearly see that the more complex the dataset is, the bigger the improvement we can attain by using the proposed activation function. It also appears that the improvement is almost analogous to the logarithmic complexity of the dataset (see Figure 5). This implies that the adaptation of the parameter α for positive values helps the overall neural network to adapt faster to the complexity of the dataset, thus giving more improvement compared to the fixed non-adaptive baseline ReLU in more challenging problems.
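As a concrete illustration of Algorithm 1, the following short Python sketch computes the proposed complexity measure. The per-sample entropy estimator (a histogram over raw intensity values) is an assumption made for this sketch; the text only fixes the mean-entropy-times-class-bits product, not a particular entropy estimator.

import numpy as np

def sample_entropy(sample, bins=256):
    # Shannon entropy (in bits) of one sample, estimated from a histogram of its raw values;
    # the choice of a 256-bin intensity histogram is an assumption of this sketch
    values = np.asarray(sample, dtype=float).ravel()
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def dataset_complexity(samples, num_classes):
    # Algorithm 1: mean per-sample entropy (ME = T/N) times the bits Q needed for the class labels
    total = sum(sample_entropy(s) for s in samples)        # T
    mean_entropy = total / len(samples)                    # ME = T / N
    q_bits = int(np.ceil(np.log2(num_classes)))            # e.g., Q = 7 for 80 classes
    return mean_entropy * q_bits

# usage sketch on toy data: fifty random 32x32 "images" and 100 classes
rng = np.random.default_rng(0)
toy_samples = rng.integers(0, 256, size=(50, 32, 32))
print(dataset_complexity(toy_samples, num_classes=100))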
In this section, we attempt to derive an empirical equation that provides an estimate of the accuracy improvement, offered by the LeLeLU over ReLU, given the complexity of the dataset. The equation that correlates the improvement over ReLU of the proposed function, based on our testing in small arbitrary topologies, is computed by finding the fit functions of the two lines of Figure 5 and combining the two resulting equations. Let C denote the dataset complexity, and x the increasing integer identity of the dataset. The dataset complexity fit function can be estimated by an exhaustive parameter search of an exponential function, as follows:

C = 3.159 e^(0.789x), so that ln(C/3.159) = 0.789x and x = 1.267 ln(C/3.159). (21)

Let AccImpr denote the accuracy improvement percentage. The accuracy improvement fit function can be estimated by linear fitting, as follows:

AccImpr = 1.54x − 2.018, so that x = (AccImpr + 2.018)/1.54. (22)

By combining Equations (21) and (22), we end up with Equation (23), which yields the accuracy improvement offered by the proposed LeLeLU in terms of the dataset complexity. It is clear that Equation (23) is a monotonically rising function; that is, the more complex the dataset, the more accuracy improvement is yielded by the proposed LeLeLU.

AccImpr = 1.951 ln(C) − 3.521. (23)

To verify the validity of Equation (23), we use the experiment of Cifar-100 with VGG-16, which was not used in the derivation of Equation (23). The Cifar-100 dataset has a complexity of 146.988, and the proposed function achieved an improvement of 6.38% over ReLU, as presented in Table 7. By substituting these figures in Equation (23), we can see that they verify Equation (23) very closely.

Discussion The activation function is a core component in the neural network topology that affects both the behavior and the computational complexity. By combining the best features of the ReLU family, we proposed the Learnable Leaky ReLU (LeLeLU), which is linear and, thus, easily computable, while providing the parametric freedom to model the problem effectively. In our experiments, the proposed activation function consistently provided the best accuracy among the tested functions and datasets. It is very interesting that it features an almost analogous increase in accuracy gain to the complexity of the dataset. Thus, LeLeLU assists the network to adapt to the demands of challenging datasets, achieving an almost analogous performance gain. In the future, we will investigate methods to overcome the limitation of having to use batch normalization as a core component when implementing LeLeLU in a network. We will also investigate the effect of using higher-order polynomial versions of the original LeLeLU activation function and/or adding noisy perturbations in a similar manner to ProbAct [17].
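As a closing illustration of the empirical fit of Equation (23) above, here is a two-line numerical check; the only assumption is that ln denotes the natural logarithm, as used throughout the fits.

import math

def accuracy_improvement(complexity):
    # Equation (23): empirical accuracy gain (%) of LeLeLU over ReLU as a function of dataset complexity
    return 1.951 * math.log(complexity) - 3.521

print(round(accuracy_improvement(146.988), 2))  # about 6.22, close to the 6.38% improvement observed for Cifar-100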
2021-12-12T16:26:52.273Z
2021-12-09T00:00:00.000
{ "year": 2021, "sha1": "8eae42131afbcbd6a68aaf6e6fd377828d868c45", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2078-2489/12/12/513/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "aa2eb5aa95bce1a417f377ae0036777c62c8704e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
20349721
pes2o/s2orc
v3-fos-license
A Canadian clinical practice algorithm for the management of patients with nonvariceal upper gastrointestinal bleeding 1Department of Medicine, Division of Gastroenterology, McGill University, Montreal, Quebec; 2Division of Gastroenterology, McMaster University, Hamilton, Ontario; 3Division of Gastroenterology, University of British Columbia, Vancouver, British Columbia; 4Department of Family Medicine, University of Alberta, Edmonton, Alberta; 5Department of Physiology & Pharmacology, University of Western Ontario, London, Ontario; 6Division of Gastroenterology, University of Ottawa, Ottawa, Ontario; 7Department of Emergency Medicine, Calgary Health Region, University of Calgary, Calgary, Alberta; *See Appendix 1 Correspondence: Dr Alan Barkun, Division of Gastroenterology, Montreal General Hospital Site, The McGill University Health Centre, 1650 Cedar Avenue, Room D7.148, Montreal, Quebec H3G 1A4. Telephone 514-934-8233, fax 514-934-8375, e-mail alan.barkun@muhc.mcgill.ca Received for publication March 15, 2004. Accepted July 13, 2004 A Barkun, CA Fallone, N Chiba, et al. A Canadian clinical practice algorithm for the management of patients with nonvariceal upper gastrointestinal bleeding. Can J Gastroenterol 2004;18(10):605-609. Upper gastrointestinal (GI) bleeding represents a substantial clinical and economic burden, with a prevalence of approximately 170 cases per 100,000 adults per year (1). Approximately 50% to 70% of cases are due to peptic ulcer disease (2,3), and despite recent advances in therapy, an estimated 6% to 8% of these patients die (1,4,5). Causes can broadly be divided into variceal and nonvariceal with limited accurate prediction based on clinical criteria alone. With the exception of the recent British Society of Gastroenterology guidelines (2002) (6), the last widely disseminated consensus conference and publication of practice guidelines occurred more than 10 years ago (7,8). For this reason, a multidisciplinary consensus conference was held in Canada in June 2002. The group included Canadian and international gastroenterologists, endoscopists, surgeons, family physicians, emergency room physicians, pharmacologists, epidemiologists (with methodological and health economic expertise) and a hospital pharmacist, representing 11 national societies. Using stringent, accepted criteria for guideline development, new data and a series of evidence-based systematic reviews and meta-analyses (9,10), recommendations for the management of nonvariceal upper GI bleeding were developed (11). The complete review and consensus processes details are published in full elsewhere (11). These consensus recommendations have now been used to develop an algorithm for the management of patients with nonvariceal upper GI bleeding specifically tailored to the Canadian environment. RECOMMENDATIONS The present article will present highlights of the recommendations (published in full in the Annals of Internal Medicine 2003 [11]) as they relate, in a Canadian setting, to decision points in the algorithm shown in Figure 1.
Stabilization When a patient presents with a nonvariceal upper GI bleed, appropriate initial resuscitation, including stabilization of blood pressure and restoration of intravascular volume, is paramount and should precede any further diagnostic and therapeutic measures (11). Placement of a nasogastric tube should be considered in selected patients because the findings may have prognostic value (11,12). Empiric therapy with a high-dose, oral or intravenous, proton-pump inhibitor (PPI) should be considered for patients awaiting endoscopy (11), but is not a replacement for urgent endoscopy and hemostasis, where appropriate (3,(13)(14)(15)(16). Although there are no controlled data directly assessing this approach, its possible usefulness is suggested by the conclusions of randomized trials of postendoscopy high-dose oral (13)(14)(15)(16) and intravenous PPI (17)(18)(19)(20), coupled with results from preliminary observational and cost-effectiveness studies (3,21,22). Clinical risk stratification Approximately 80% of patients will stop bleeding spontaneously without recurrence, but the main goal of management is to identify the remaining 20% of patients who are at greatest risk of morbidity and mortality from continued or recurrent bleeding (23). Once patients are clinically stabilized, they should be stratified into low- and high-risk categories for rebleeding and mortality, based on clinical criteria initially, with endoscopic criteria also considered when available (11). The most important clinical predictors of increased risk of rebleeding or mortality are age over 65 years, shock, comorbid illnesses and fresh red blood on rectal examination, in the emesis or in the nasogastric aspirate (3,11,(24)(25)(26)(27)(28)(29)(30). Endoscopic stigmata defined as low- and high-risk are discussed below. The need for urgent endoscopy, or conversely, suitability for early discharge, can be determined using risk stratification tools, such as those reported by Blatchford et al (31) or Cameron et al (32), which include older age, significant comorbid illnesses, presence of hematemesis, shock or syncope. Endoscopic risk stratification and therapy Endoscopy should be performed within the first 24 h, the patient stratified according to the endoscopic stigmata and endoscopic therapy performed if needed.

Figure 1) A Canadian clinical practice algorithm for the management of patients with nonvariceal upper gastrointestinal (GI) bleeding. Refer to the text for relevant references. *Combination therapy with injection plus thermocoagulation is preferred; † High-risk patients can be moved to a general ward after 24 h if appropriate. The duration of admission should take into consideration the rebleeding period (72 h), local practice and availability of resources; ‡ There are no data favouring proton-pump inhibitors (PPIs) over histamine-2 receptor antagonists as oral follow-up therapy, but this is a reasonable approach, because this was the strategy in the high-dose intravenous (IV) PPI studies; § Acute testing for Helicobacter pylori (Hp) should be followed, if negative, by a confirmatory test once bleeding has resolved. There is no rationale for urgent IV eradication therapy; oral therapy can be initiated either immediately or during follow-up of patients who are H pylori-positive. Early discharge is appropriate in the absence of risk factors such as age over 65 years, shock, comorbid illnesses and fresh red blood on rectal examination, in the emesis or in the nasogastric (NG) aspirate.
Clinical risk stratification according to the criteria mentioned above can assist in differentiating between those who require urgent endoscopy (based on clinical criteria) and those who can safely wait for a finite period of time, depending on available resources. Evidence indicates that the risk of further bleeding is strongly associated with the hemorrhagic stigmata seen at endoscopy. The risk is reportedly less than 5% in patients with a clean ulcer base, and increases progressively with a flat spot (10%), adherent clot (22%), nonbleeding visible vessel (43%) or active bleeding (oozing and spurting, 55%) (11,23). Recently it has been demonstrated in randomized controlled trials that a single dose of intravenous erythromycin (3 mg/kg infusion over 30 min or 250 mg bolus) administered 20 min to 90 min before endoscopy improves the visibility and quality of the examination, and decreases the need for repeat examinations (33,34). Erythromycin acts as a potent gastrokinetic to empty the stomach of blood; a clear stomach was found significantly more often with erythromycin than with placebo (82% versus 33%) (34). Erythromycin may be useful in patients undergoing emergency endoscopy for upper GI bleeding when blood obscures visibility. A finding of active bleeding or a visible vessel in an ulcer bed (high-risk lesion) requires immediate endoscopic hemostatic therapy, while a finding of a clean-based ulcer or a nonprotuberant pigmented dot (low-risk lesion) does not (11,23). The optimal management of adherent clots remains more controversial (11). Adherent clots obscure underlying stigmata that may be at high or low risk of rebleeding. Recent evidence supports that a clot in an ulcer bed should undergo targeted irrigation in an attempt to dislodge it, with the underlying lesion then treated appropriately (35,36). High-risk lesions should be treated with endoscopic therapy. Monotherapy, with injection or thermal coagulation (9,10), is an effective endoscopic hemostatic technique for high-risk stigmata, but the combination is superior to either treatment alone (11,(35)(36)(37). Clinical and endoscopic classification of risk allows for safe and prompt discharge of patients classified as low-risk; improves patient outcomes for patients classified as high-risk; and reduces resource utilization for patients in all classifications (11,(38)(39)(40)(41)(42)(43)(44). Clinical criteria for early discharge generally include age less than 60 years, stable vital signs, no endoscopic stigmata or flat spot, and no concomitant serious medical illness (39,45).
Acid suppressive therapy Recent meta-analyses have found PPIs to be more effective than histamine 2 -receptor antagonists (H 2 -RAs) in preventing persistent or recurrent bleeding (9,10,46,47).H 2 -RAs have demonstrated inconsistent and only marginal benefits, and, as such, are not recommended for the management of acute upper GI bleeding (11).High-dose PPI therapy administered by intravenous bolus followed by continuous infusion is effective in decreasing rebleeding in patients who have undergone successful endoscopic therapy and should be used to treat patients with high-risk endoscopic stigmata, including adherent clots (9)(10)(11).Evidence suggests a class effect for PPI treatment and that improvement in rebleeding rates can be achieved using either intravenous omeprazole or pantoprazole at a dose of 80 mg bolus followed by 8 mg/h for the 72 h following endoscopic therapy (11).Patients can be safely switched to oral PPI therapy following the 72 h, or when oral intake has been reestablished in those at lower risk.As mentioned above in the section on stabilization, empiric therapy with an oral PPI can be considered for patients awaiting endoscopy, particularly in institutions where intravenous PPI or endoscopy is not available (11). Admission and follow-up Patients identified as being at high risk of rebleeding, such as those with active bleeding or visible vessels, and those with adherent clots, should be admitted to a monitored setting for at least the first 24 h and receive high-dose PPI therapy (6,11).If intensive care beds are unavailable, wards with more intensive monitoring than standard units can be considered.The greatest risk of rebleeding is in the first 72 h after endoscopy.Routine second-look endoscopy is not recommended (11,48); a second look is indicated in cases of rebleeding, and perhaps in selected patients at high risk of rebleeding (11,49).Patients who have failed endoscopic therapy or who are at high risk of failing endoscopic therapy should receive a surgical consultation (11), or alternatively, angiography with possible embolization could be considered (50). Patients with a low-risk lesion who are not yet stable or those with pigmented lesions should be admitted for at least the first 24 h and treated with an oral PPI (11).Those with endoscopic findings of a Mallory-Weiss tear or an ulcer with a clean base or flat spot, who are otherwise stable, may be discharged home on an oral PPI (11).Studies show that patients with these endoscopic findings are at low risk and no major complications have been reported in those triaged to outpatient care (38)(39)(40)(41)(51)(52)(53)(54). All hospitalized patients, high or low risk, should be monitored and assessed daily, and when stable, discharged with appropriate follow-up arranged (11).If not performed during the hospitalization, Helicobacter pylori testing should be done as part of follow-up in patients with peptic ulcers (11).Eradication of H pylori can reduce the rate of ulcer recurrence and rebleeding (55)(56)(57)(58).Negative tests in the setting of acute bleeding or after initiation of PPI therapy should be interpreted with caution (11,59). This treatment approach also applies to patients with nonsteroidal anti-inflammatory drug-associated ulcers; however, the roles of cyclooxygenase-2 selective inhibitors, and coprescription with a PPI or misoprostol, were beyond the scope of the published recommendations (11). 
SUMMARY It is hoped that this algorithm will be used to direct clinical and endoscopic risk stratification, the application of endoscopic therapy and the appropriate use of PPIs, and thus help optimize the care of patients with upper GI bleeding.The algorithm should be customized to the resources of individual medical centres.The impact of the recommendations should be studied with appropriate outcomes recorded and validation performed.The efficacy of newer endoscopic therapeutic technologies, the optimal regimen of PPIs and the roles of other pharmacological agents all require further research and as such, it is anticipated that the guidelines and this algorithm will need to be updated as new data become available. APPENDIX 1: LIST OF ATTENDEES
2018-04-03T05:25:49.160Z
2004-10-01T00:00:00.000
{ "year": 2004, "sha1": "609a36a09c5ef97c3d2f35d1a20d422fd1434030", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/cjgh/2004/595470.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9b87ff2c245bbc633f18782c05997bce3435ad53", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249998082
pes2o/s2orc
v3-fos-license
Creative Self-Efficacy Scale for Children and Adolescents (CASES): A Development and Validation Study Abstract. Creative self-efficacy has emerged as one of the most striking constructs in education. Yet, instruments to assess it in children and adolescents are scant. This article introduces the CASES, a new creative self-efficacy scale designed to address this concern. The process of development and initial validation of the scale are presented herein. Following the items’ conception, exploratory, and confirmatory factorial analysis was performed. The final structure comprises nine items, evenly distributed by three factors: fluency, elaboration, and personality. Preliminary reliability and validity analysis display good psychometric properties, highlighting CASES as a potentially relevant addition to the creative self-efficacy assessment instruments array. Designed for children and adolescents (ages 3 to 16), it can uphold a developmental approach of creative self-efficacy, with potential implications within educational settings. Thus, it might be of interest for parents, educators, educational psychologists, researchers, and policymakers involved in designing curricula and interventions to nurture and enhance creative potential. Introduction In the last decade, creativity research has flourished as never before. Labelled as one of the essential skills for the 21st century Batelle for Kids, 2019), it has permeated educational settings, programs, and policies. Nevertheless, due to its multidimensional, dynamic and complex nature (Corazza, 2017;Glȃveanu, 2015), defining and measuring creativity remains challenging (Puente-Díaz, 2016). From a componential standpoint, creativity emerges from the interaction between creative potential and creative production , requiring motivation to be enacted. Indeed, belief systems can play a decisive role in the development of creativity, since transforming creative potential into creative behavior depends upon a person's intentional action which, in turn, is sculpted by creative self-beliefs. These encompass creative self-awareness, creative self-image, and creative confidence beliefs . The latter are vital for creative action because they reflect a person's belief in one's ability to think or act creatively in a specific domain , comprising the creative self-concept and creative self-efficacy (CSE). As components of a multi-layered and continuously evolving belief system, these dimensions are deeply interconnected (Beghetto & Karwowski, 2017;Karwowski & Barbot, 2016). However, even deriving from a common core, they involve different facets of psychological functioning. While the creative selfconcept refers to a person's cognitive and affective judgment of one's ability to be creative, CSE can be defined as the belief one has in his/her/their ability to do something creative in a specific time and context. CSE is active during and after a person's engagement with a task, as observed with general self-efficacy beliefs (Bandura, 1997). It helps to assess if our creative investment should be sustained, providing vital feedback to the ongoing process of constructing our belief system, which will be mobilized when facing future creative performance demands. Thus, it possesses a performative, dynamic, and prospective character , crucial when considering creativity relies upon the ability to confidently overcome difficulties and challenge oneself by embracing the perpetual process of (re)constructing the worlds we live in (Goodman, 1978). 
Moreover, CSE is situational or task dependent, revealing the profound influence of contextual determinants on its development. This singular combination of developmental and contextual influences supports a thoughtful analysis of CSE when aiming to comprehend the development of creativity in childhood and adolescence. The ascendancy of self-efficacy in creativity stems from the fact that the ability to self-motivate and pursue hard-to-achieve goals is almost a sine qua non condition for success in the creativity domain. A person with high self-efficacy levels tends to anticipate actions by creating potential cognitive scenarios, revealing greater cognitive resourcefulness and strategic flexibility, potentially translating into a contextually situated, more effective, and productive management capacity (Wood & Bandura, 1989). Simultaneously, the challenges underlying the social valuing of creativity demand a resilient and positive sense of self-efficacy, highlighting how creative abilities are not enough by themselves. Creative expression (deeply nuanced by self-efficacy beliefs) is also needed. Thus, having creative self-efficacy entails being able to mobilize cognitive resources and motivation to actively pursue an action path that grants better odds of success when facing a creative task or problem (Shaw et al., 2021). Recent studies underline CSE as a powerful predictor of different types of creative performance (Beghetto et al., 2011;Jaiswal & Dhar, 2016), as well as a mediator between motivation and social influence (Malik et al., 2015), critical thinking, and employee creativity (Jiang & Yang, 2014), or even between creative mindsets and creative problem-solving (Royston & Reiter-Palmon, 2017). Recently, it has been positively and significantly associated with mental well-being (Fino & Sun, 2022). Furthermore, higher levels of creativity appear to be associated with higher CSE levels (Valquaresma, 2020), both contributing to an improvement of self-competence perceptions and increased engagement in creative activities (Beghetto & Karwowski, 2017;Puente-Díaz & Cavazos-Arroyo, 2018). On the other hand, CSE is significantly influenced by knowledge gained through experience and observation, emotional activation, and verbal encouragement (Dampérat et al., 2016;Farmer & Tierney, 2017). The latter is especially important in educational settings, where peer, parent, and teacher-sup-ported behavior and classroom atmosphere emerge as significant factors in the process of development of CSE (Beghetto, 2006;Karwowski et al., 2015). CSE develops through balancing the expression of a person's psychological functioning (grounded in a dynamic cognitive structure) with an externalized manifestation of creativity, in a particular time and space. Therefore, attempts to dissect and comprehend CSE should equate its multidimensionality, avoiding the oversimplification of its developmental process. Creative Self-Efficacy and Creativity as Complexity In a comprehensive approach to creative behavior, Karwowski and Beghetto (2019) proposed a conceptual model that asserts how important confidence beliefs are in creative action, acting as mediators (i.e., predictors) of creative performance. When creative behavior is envisaged as an agentic action, creative confidence beliefs become critical elements of the development of creativity. To do something creative, one must first decide to be creative. 
From our perspective, this underpins an understanding of creativity as a construct that resonates with one's psychological complexity . CSE is activated when a person is faced with a task to perform, triggering a set of cognitive processes, whose direction will be determined by the self-judgment about one's self-confidence in carrying out the task in a creative way. A multitude of dialogical and contextual variables informs the decision to perform (or not) a given creative task, nuancing CSE with myriad shades. To decode them, one must resort to one's psychological structures. Thus, the decision to engage, avoid, or sustain interest in a specific creative task heavily relies upon CSE (Beghetto & Karwowski, 2017). This dynamically evolving process brings to light the plasticity of CSE, which is particularly important for psychological and educational interventions. Additionally, it points us in the direction of a developmental approach, because a comprehensive analysis of the developmental processes of CSE in children and adolescents holds the promise of gaining a broader and more impactful perspective on creativity. At the same time, it emphasizes the importance of early interventions in education for encouraging transformative developmental trajectories. Creative Self-Efficacy: broadening research contexts Departing from this holistic developmental stance, we considered observing CSE not only in middle-school, highschool, or university (Atwood-Blaine et al., 2019;Joët et al., 2011;Karwowski, 2012;Ohly et al., 2017), but also in preschool and primary school. Despite previous research focusing mainly on the development of creative self-beliefs from the age of ten (Karwowski & Barbot, 2016), Piaget's framework (1952Piaget's framework ( , 1978 suggests that children's cognitive development occurs not as a result to their awareness of the skills they are acquiring through development, but due to the internalization of those skills in a psychological structure. In fact, Bandura (1999) contends that self-beliefs are formed at a young age and serve as the foundation for subsequent self-efficacy beliefs. In other words, even though the youngest children have not yet developed the mechanisms that structure their awareness of their selfefficacy beliefs, those beliefs are integrated into a psychological structure that makes them implicitly present. Therefore, it appears critical to expand empirical research on the development of CSE to preschool and primary school. After all, in most Western education systems, preschool is a significant milestone of a child's educational journey: it is the first experience within a curriculum-based educational program. Given CSE's conceptual background, formal educational contexts (such as preschool education) can have a significant influence on the development of creativity (Craft, 2002;. Hence, designing psychological instruments for younger children may provide relevant data for understanding how CSE evolves. Nonetheless, as Joët et al. (2011) reported, CSE measures for children under the age of ten are scarce. To our best knowledge, a CSE scale for preschool and/or primary children has yet been developed, disclosing a potential vulnerability in the CSE developmental studies. 
Bearing these premises in mind, we set out to design, analyze, and validate a new CSE scale (the Creative Self-Efficacy Scale for children and adolescents [CASES]) that could contribute to a more inclusive and comprehensive understanding of CSE's impact on child and adolescent creative development. Designing ESPAC: Preliminary Steps To increase content and criterion validity, we draw inspiration from previous instruments such as Abbott's CSE Inventory (Abbott, 2010), Beghetto's CSE Inventory (Beghetto, 2006), and Karwowski's Short Scale for the Creative Self-Concept . Following the scholarship on the CSE's multidimensionality, we considered two CSE dimensions: thought and action. The first, linked to creative idea generation, can be defined as a person's belief in his/her/their ability to produce creative thoughts (and can be manifested, for example, by the confidence in the production of multiple creative ideas [fluency] or in the ability to elaborate an idea or thought creatively [elaboration]). In contrast, the second refers to believing one can perform a certain creative activity, in a given situation and context. Hence, it is more attuned with creative action and implementation, and highly associated with motivational and personality variables. Bandura's (2006) guidelines for assessing self-efficacy beliefs in children and adolescents were considered when designing the CASES. As a result, it was designed to be a short, multidimensional scale, appropriate for children aged 3 to 16. It aimed to overcome the age and school level boundaries discussed above, thereby opening future research possibilities for understanding how school influences the development of CSE. Moreover, this age range encompasses the preschool and basic education levels of the Portuguese education system 1 , which were the focal points of a larger project (aimed at exploring the approach to creativity within the preschool and basic education curriculum), from which the current research stems. Because it is aimed at an underage population, ethical implications were considered before collecting data. In harmony with the principles of The Declaration of Helsinki, we gathered, a priori, protocols of school board consent, oral or written child assent, as well as their legal guardian's parental written consent. The study was also reviewed and approved by the Ethics Committee of the Faculty of Psychology and Education Sciences of the University of Porto, in Portugal (Ref.2019/07-3). Sample Selection and Collection Process Participants had to be enrolled in preschool and basic education levels, and they had to be aged between 3 to 16 years old. Parental consent and child/adolescent assent were required. Several meetings with the school director and teachers were held prior to data collection to ensure that all data collection requirements were met, as well as the timely gathering of informed consent and participant's assent protocols from parents and guardians. Institutional permission was granted to collect data in five schools. The study's 18 classes were chosen at random using an online selection software (www.miniwebtool. com). The first author contacted each class's director and made the Informed Consent Protocol and the Participant Assent Protocol available to Parents and Guardians. As soon as they were granted, several data collection dates were set between the months of January and March 2019. Individual written responses were provided by the majority of participants. 
However, for 1 In a nutshell, the Portuguese education system complies four main levels: preschool (from 3 to 6 years old), basic education (from 6 to 15 years old, which encompasses three cycles: 1st cycle -grades 1 to 4; 2nd cycle-grades 5 and 6; 3rd cycle grades 7 to 9), upper secondary education (15 to 18 years old, which includes grades 10-12) and, lastly, higher education (polytechnic and university). those who attended preschool and the first year of basic education, CASES was answered orally with the first author's assistance because the participants' basic reading abilities had not yet been acquired or cemented. The school provided a private room to ensure complete confidentiality and to increase the participant's comfort level. Oral instructions were limited to those written at the beginning of the scale in order to standardize the application conditions. Whenever there were questions or concerns, they were addressed and directed back to the original guidelines. Participants could opt out of the study at any point during the process if they did not want to continue. Sample Characterization Through a convenience sampling process, a total of 393 children and adolescents (50.9% female), aged 3 to 16 (M = 9.06, SD = 3.60), and enrolled in preschool and basic education levels in a school cluster from the metropolitan area of Porto, participated in this study. The socioeconomic and cultural level (SECL) distribution of the participants had 26.2% in the lower level, 59.0% in the middle level, and 14.8% within the upper level. This distribution resembles the Portuguese socioeconomic reality. To perform an exploratory factorial analysis (EFA) and a subsequent confirmatory factorial analysis (CFA), the total sample was randomly divided into two sub-samples using a stratified random sampling procedure. Table 1 presents each sample's demographic characteristics. Sociodemographic Questionnaire Prior to data collection, participants were asked a few questions about their age, gender, and school level. The responses were provided by the teacher in the case of children aged 3 to 7. SECL is the average result of the participants' legal guardians' education level and current occupation, plus the number of experiences the child/adolescent has had in art settings. We averaged the results of each item using a Likert-type scale ranging from 0 (rare) to 3 (frequently) to obtain the SECL final score. Given the breadth of the art experiences item, we decided to compute it using three different elements: museum visits, frequency of extracurricular arts activities, and concert attendances. This decision is based on a previous observational analysis, which revealed that those art experiences were the most frequent and accessible in the participants' daily life contexts. Item Development The process of item development involved several stages. As mentioned above, it began by performing a literature review and assessment of existing scales; therefore, following a deductive method (Boateng et al., 2018). Aiming to construct a multidimensional scale, we sought to produce items that could express the thought and action dimensions of CSE. To avoid potential confusion with other creative self-related constructs (i.e., creative self-concept), the items referred to a perception of the participant's confidence instead of only focusing on their perception of competence. We developed an initial set of twenty-one items, stated in the first person, clearly and without negative phrasing. 
The items were designed to capture the participants real-life experiences. Participants could express their level of confidence in completing the task at hand using a five-point Likert-type response scale (1=not at all confident; 2=not very confident; 3=confident; 4=quite confident; 5=totally confident). Response scales with five points are recommended by Boateng et al. (2018) for items reflecting relative degrees of a single item response quality. The CASES was designed as a paper-and-pencil instrument, with a maximum completion time of twenty minutes for the initial version. Before distributing the CASES to the participants, we assembled a panel of five Psychology experts (3 female) with advanced knowledge in the key-concepts of the scale, who asserted the items' suitability to the overall goals. This procedure also contributed to enhance the scale's content validity (Boateng et al., 2018). The facial validity was initially tested with a focus group of 24 children/adolescents (12 female), aged 3 to 14, evenly distributed per educational level (preschool to the 3 rd cycle of basic education). The participants were asked to think aloud while responding to the scale (Tsang et al., 2017) and identify words or items they did not understand. In the case of children between 3 and 7 years old, the responses were registered by the first author. This procedure allowed verifying the items' adequacy regarding language comprehension and developmental level appropriateness, which led to minor grammatical changes to CASE's first version. Exploratory Factor Analysis (EFA) To determine the scale's underlying factor structure and to support decisions regarding item retention, we performed an EFA. Using IBM Statistics SPSS 24, we asserted the assumptions fulfilment to perform it, addressing outliers and excluding missing values cases' listwise. We also assessed item sensitivity by examining the descriptive statistics for each item (i.e., range, means, medians, skewness, and kurtosis; see Appendices: Table A). Furthermore, we looked at inter-item correlations and the anti-image diagonal to confirmed if the values were higher than .50. Before factor extraction, we tested for homogeneity of variances across data by performing Bartlett's Sphericity Test (Snedecor & Cochran, 1980). We examined the common variance in data through the Kaiser-Meyer Olkin (KMO) measure of sampling adequacy (Kaiser, 1970). Following Hair et al. (2018) recommen-dations, dimensionality was measured using Principal Axis Factoring (PAF) (considering an eigenvalue criterion greater than 1) through a reflective model with Oblique Rotation, because CSE dimensions represent latent variables and were expected to be correlated (Reise et al., 2000). Through an iterative, repeated EFA process, we retained items with no cross-loadings and factor loadings greater than .32, until we reached the final factor solution (Tabachnick & Fidell, 2014). Confirmatory Factor Analysis (CFA) To determine if the model's covariance structure was similar to the covariance structure of data, we performed a CFA (Cheung & Rensvold, 2002), using SPSS Statistics AMOS 24. Firstly, we tested for multivariate normality by confirming asymmetry (sk) and kurtosis (ku) coefficients absolute values were within 3 and 10, respectively (Weston & Gore, 2006). 
Following Brown (2015) recommendations, several indices were considered to assess the global quality of adjustment of the factorial model, namely: chi-square test and the chi-square/degrees of freedom between 1 and 2; Comparative Fit Index (CFI) above .90 (Bentler, 1990); Goodness of Fit Index (GFI) above .90 (Jöreskog & Sörbom, 1981); and, the root mean square error of approximation (RMSEA), P [RMSEA ≤ .05] below .80 (Steiger, 1990). The quality of local adjustment was assessed by observing each item's standardized regression weights. When theoretically grounded, the model was also adjusted based on the modification indices suggested by AMOS (greater than 11; p < .001). Common Method Variance Analysis. Common method variance can introduce a significant bias to research results. To assess if it existed, we conducted some diagnostic procedures, namely, a Harman single-factor test followed by a CFA analysis where all items loaded on a single factor. Additionally, we analyzed the correlations between the scale's dimensions to check if there could be a high commonality between the factors. Reliability To calculate the reliability of the EFA, we resorted to JASP (version 0.10.2) to determine McDonald's ωt coefficients because recent research suggests it is a more robust and reliable measure than Cronbach's alpha, especially when using multidimensional data (Trizano-Hermosilla & Alvarado, 2016). In the CFA case, composite reliability (CR) was computed (McNeish, 2018). Convergent validity was analyzed, using factor loadings (standardized regression weights) to calculate the average variance extracted (AVE). In contrast, each factor's discriminant validity was assessed by computing the heterotrait-monotrait ratio of correlations (HTMT; Henseler et al., 2015). HTMT is a novel approach to determine discriminant validity that has demonstrated higher performance compared to the Fornell-Larcker criterion (1981) and the assessment of (partial) cross-loadings (Franke & Sarstedt, 2019). According to Henseler et al. (2015), discriminant validity can be established when the HTMT value is inferior to .85. EFA The Bartlett's sphericity test was significant, thus confirming homogeneity of variance [χ 2 (210) = 1212, p < .05)]. The KMO measure of sampling adequacy was .84, demonstrating the existence of a highly adequate sample for analysis. When the PAF with Oblique Rotation was performed, the anti-image diagonal revealed values above .50, as expected. After analyzing the scree plot and observing initial eigenvalues (above one), we obtained an initial six-factor solution that explained 46.6% of the total variance. The pattern matrix showed items 3, 10, and 17 cross-loaded in more than one factor. Following Boateng et al. (2018) suggestions, we decided to eliminate those items and run a new EFA, to verify if this procedure improved and refined the scale's factorial structure. This was confirmed, leading to a new five-factor structure, with an acceptable 42.6% total variance explained (Hair et al., 2018). Nevertheless, items 13 and 18 did not load in any factor, whereas item 20 cross-loaded in two factors. Therefore, they were removed from the analysis, after which we re-ran the EFA. This time, a fourfactor solution emerged (with a 42.4% total variance explained). Still, item 4 had no factor loadings. We proceeded by dropping it and performed another EFA. Although explaining 43.8% of the total variance, the fourfactor solution obtained showed factor four had only two items. 
We chose to remove items 1 and 6 because factors should have a minimum of three items to ensure theoretical significance and validity (Froman, 2001; Hair et al., 2018). The subsequent EFA presented a three-factor factorial matrix, with all items loading on only one factor. However, the communalities analysis of this solution displayed a communality value (after extraction) of .15 for item 2. Because such a low communality indicates that the factors explain an insufficient share of the item's variance, with consequences for the scale's overall validity and reliability, we decided to discard it. Our final EFA provided a factorial matrix of 11 items, distributed by three factors (Table 2), with 42.4% of the total variance explained (see Appendices, Table B). Factor one refers to fluency and comprises 4 items (e.g., item 2: 'Quando estamos a brincar sou o primeiro a dizer um jogo para jogarmos', which translates to "When we are playing, I am the first to say which game to play"). Factor two, also with 4 items, relates to elaboration, displaying the highest-loading item of the whole scale (item 6: 'Consigo criar histórias a partir de sonhos que tive', which translates to "I can tell a new story from dreams I've had"). The third and final factor has 3 items linked to personality characteristics associated with creativity (e.g., 'Adoro inventar jogos', translating into "I love creating games"). Factor means, standard deviations, and correlations can be observed in Table 3. Reliability analysis using McDonald's ωt indicated overall good reliability (ωt = .78). Internal consistency for each factor was either good (for factor one, fluency, and factor three, personality) or acceptable (for factor two, elaboration) (Nunnally & Bernstein, 1994), as shown in Table 4. Sample items and their factor loadings include: "I make up new stories faster than my friends" (.58); "When we are playing, I am the first to say which game to play" (.75); "When I have to invent the end of a story, I think of many possible endings"; "When I want to tell a new story, I think of the ones I've heard"; "I can tell a new story from dreams I've had" (.66); and, under the personality factor, "I can do a puzzle, even when it's hard" (.53), "I can learn how to build something (e.g., a toy, a LEGO) on my own" (.67), and "I still enjoy playing with something (e.g., a toy, a LEGO) even after spending an entire afternoon playing with it" (.38). Examining the factor loadings, we observed that item 8 had a loading of .40, which can be considered too low. Since it was included in the elaboration factor (with 4 items), we decided to withdraw item 8 and re-ran the CFA; for the re-specified model, P[RMSEA ≤ .05] = .11. Table 5 displays the final nine-item solution for the scale, item distribution per factor, and each item's standardized regression weights. The final CFA model can be found in the Appendices, Figure 1. Common Method Variance The Harman single-factor test of all the scale's items identified various factors, the largest of which accounted for only 25.2% of the total variance extracted, not showing clear evidence of common method bias. Regarding the CFA model test where all items loaded on one single factor, it critically failed the overall fit test (χ²/df = 3.29; CFI = .78; GFI = .91; RMSEA = .11; furthermore, P[RMSEA ≤ .05] = .00), giving grounds to consider that common method bias was not a significant problem of our model. When we observed the correlations between the CASES constructs, results showed fairly small values (the highest was r = .41), implying low commonality among them.
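For readers who wish to run a similar diagnostic, the following is a minimal Python sketch of the Harman single-factor check. The item-response matrix name (participants × items) is hypothetical, and using the first unrotated principal component as the extraction method is an assumption of this sketch; the text reports 25.2% for the CASES items.

import numpy as np
from sklearn.decomposition import PCA

def harman_single_factor_share(items):
    # proportion of total variance captured by the first unrotated component;
    # a large share (e.g., above 50%) would suggest common method bias
    standardized = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    pca = PCA(n_components=1)
    pca.fit(standardized)
    return float(pca.explained_variance_ratio_[0])

# usage sketch with random Likert-type responses (200 participants, 9 items)
rng = np.random.default_rng(1)
items = rng.integers(1, 6, size=(200, 9)).astype(float)
print(harman_single_factor_share(items))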
Reliability Composite reliability for the overall scale was found to be good (CR = .82) (Hair et al., 2017). Considering the scale's exploratory nature, CR for the three sub-scales was acceptable, as can be observed in Table 6. CASES final CFA Model -item distribution and standardized coefficients Convergent validity was assessed by calculating AVE, which can be considered satisfactory regarding the scale's initial phase of development (Fornell & Larcker, 1981). Factorial validity was also found, since most items displayed loadings above .50 (Hair et al., 2018), underlining an adequate item's specification and distribution in the scale's structure. Although items 7 and 21 had loadings under .50, we kept them in the model. This decision is anchored in their relevance for scale's overall factorial structure and its internal consistency and validity. Removing them could theoretically compromise the scale's significance, resulting in a majority of scale dimensions constituted only by two-items (Froman, 2001). Discussion The CASES provides a CSE scale specifically designed to grasp a broad and diverse developmental span, namely preschool and primary school children, and adolescents up to the age of 16. As far as we know, this is novel in the field and has the potential to broaden the understanding of the development of CSE. The final CFA analysis of the scale confirmed a multidimensional, three-factor structure, with overall good scale reliability (CR = .82). Common method variance was tested. While we cannot completely remove the possibility of such bias, our results suggest that, if present, it is fairly limited and unlikely to confound the interpretation of our results. The CASES consists of nine items, evenly distributed by each of the factors: fluency, elaboration, and personality. This factorial structure confirms the scale's multidimensionality and is in line with previous CSE studies (Abbott, 2010;Karwowski et al., 2012). Furthermore, the scale's dimensions also seem to manifest the balance between cognitive and personality spheres, which has consistently been stated in creativity research (Benedek et al., 2018;Frith et al., 2020;Puryear et al., 2019). Fluency and elaboration compose the cognitive facet of the CASES and represent two well-known dimensions of divergent thinking associated with creativity (Vally et al., 2019). Even though we aimed to develop a CSE scale that could structurally reflect divergent and convergent thinking dimensions of creativity, our findings seem to strengthen the relevance of the link between CSE and divergent thinking features (Puente-Díaz & Cavazos-Arroyo, 2018). However, there is also the possibility that, in spite of our effort to specifically elicit CSE beliefs, this result may be displaying the predominance of a divergent thinking definition of creativity among our participants. In the future, research should control this aspect by scanning the participant's implicit theories of creativity. The personality dimension, on the other hand, refers to individual characteristics associated with creativity (e.g., autonomy, resilience) and seeks to assess the relevance of certain personality characteristics to the development of CSE, rather than gauging the significance of creativity for the person's identity . Thus, the CASES can be thought of as a more holistic and developmentally oriented psychometric instrument that enables a perspective of CSE as an element of a dynamic, multidimensional, and complex matrix of creativity. 
Overall, the scale's final factorial structure showed good psychometric properties with no overlapping factors, notwithstanding somewhat low values of CR and AVE for its dimensions. If fluency and elaboration fit the threshold for acceptable results (Hair et al., 2018), personality indices are below those guidelines. Despite this, we believe that lower results are understandable in an exploratory and early stage of a psychometric instrument development, such as it happens with the CASES. In this sense, keeping the three-factor structure was a more coherent option in this regard, because removing a theoretically relevant dimension such as personality could jeopardize the scale's nomological validity (Hagger et al., 2017). Another point worth discussing is that, given our developmental and ecological approach to CSE, we expected to find a dimension referring to the sociocultural influences permeating and shaping CSE beliefs. As we mentioned above, school and educational settings have a significant influence in the development of creativity in child and adolescents, with potentially significant interferences in CSE. Even though the items we developed to address that dimension were not robust enough to withstand the EFA and CFA analysis, their absence cannot be overlooked and should be explored in the future. Yet, the reduction in item number had a positive side effect: a final nine-item structure reduced completion time to a maximum of ten minutes, increasing the scale's adequacy to our sample's average attentional levels. Aside from the research directions outlined above, it could also be fruitful to explore further the role of elaboration in the development of CSE. Elaboration (i.e., the ability to detail ideas) is a dimension where creative complexity can emerge. However, it remains understudied in the CSE research. From a developmental viewpoint, understanding this intersection can enlighten the implications of creativity and CSE in enriching psychological development trajectories. Future efforts should also consider testing the scale's reliability results over time and its relationship with other CSE and divergent thinking abilities measures. As a whole, CASES can be envisioned as a potentially relevant addition to the array of instruments assessing CSE, with implications amid educational settings. By gaining insight over child and adolescents CSE, parents, educators, educational psychologists, researchers, and policymakers can improve curriculum and interventions designed to nurture and enhance creative potential, opening new developmental possibilities of greater psychological complexity.
2022-06-25T15:23:00.468Z
2022-05-20T00:00:00.000
{ "year": 2022, "sha1": "190600dc82ee85799ac7be9fea15516c370d65a9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "0779b013807c54a5e18e915df18b0fb906713a22", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
388999
pes2o/s2orc
v3-fos-license
Punctuation effects in English and Esperanto texts A statistical physics study of punctuation effects on sentence lengths is presented for written texts: Alice in wonderland and Through a looking glass. The translation of the first text into esperanto is also considered as a test for the role of punctuation in defining a style, and for contrasting natural and artificial, but written, languages. Several log-log plots of the sentence length-rank relationship are presented for the major punctuation marks. Different power laws are observed, with characteristic exponents. The exponent can take a value much less than unity (ca. 0.50 or 0.30) depending on how a sentence is defined. The texts are also mapped into time series based on the word frequencies. The quantitative differences between the original and translated texts are very minute, at the exponent level. It is argued that sentences seem to be more reliable than word distributions in discussing an author's style. Debates exist whether a few texts are sufficiently representative of a language and how big a lexicon must be before it becomes significant. This caveat presented, it is fair to say that it seems that several specific features of written texts have not been studied in detail. The role of punctuation in the structure of texts is one of these. According to wikipedia, the first inscription with a punctuation mark is the Mesha Stele (9th century BC); see http://en.wikipedia.org/wiki/Mesha_Stele. A long time ago, Greeks and Romans adopted a few punctuation marks (the dot and combinations, essentially) in order to mark pauses in texts to be played. Other historical details on the creation, dissemination, use and types of punctuation in various languages can be found in http://en.wikipedia.org/wiki/Punctuation and http://grammar.ccc.commnet.edu/grammar/marks/marks.htm. Through these e-references, it can be learned that punctuation marks are symbols that indicate the structure and organization of a written text in a specific language, for readability, as much as for suggesting intonation and pauses when reading aloud. In written English, punctuation is vital to disambiguate the meaning of sentences, though this does not go without problems [25,26]. Notice that some modern writers have attempted to go, in some sense, backward. As far back as 1895, Crane published The Black Riders and Other Lines [27] in capital letters, the poems appearing without punctuation, an unusual typographical presentation for the time, a style considered as garbage by the critics. In another language, e.g. french, Apollinaire [28] published one of his major pieces, Alcools, without punctuation. Thereafter, the french surrealists and dadaists similarly scorned punctuation, like Aragon [29], who avoided any in most of his poems and prose for/about Elsa Triolet. That followed from the para-psychological theory put forward by Breton [30] in The Manifesto, containing new/practical recipes for enhancing the Magic Surrealist Art, such as: "...Punctuation of course necessarily hinders the stream of absolute continuity which preoccupies us ...". This was recently "poetically" reformulated by Hahn [31] in The Pity of Punctuation poem. Some "maximum" was likely reached by Joyce [32]. In Ulysses, which symbolically conserves the structure of Homer's Odyssey, where there is no punctuation, Joyce omits punctuation entirely in the last chapter of the novel, consisting of eight long paragraphs, in order to mimic the uninterrupted flow of naked thoughts.
Thus punctuation could be avoided. Indeed there is some redundancy, since a capital letter can indicate a new sentence to the reader. One major difficulty nevertheless occurs in text analysis: it is easier to detect a punctuation sign in a text than a capital letter. However, fundamentally, in literature the marks depend strongly on the writer's choice [29,30,32], but also on the editor's [33,34]. A question can thus be raised about the relationship, if any, between an author and the use of punctuation marks, for defining his/her style. A few text studies seem to exist along these lines in the recent literature, i.e. studies having considered structures at the sentence level in English [1,7,8], in German [2], in Chinese [8], and in Japanese [11], sometimes strangely neglecting the role of punctuation, as in [3,6]. In order to propose further studies on the matter, it is attempted here to discuss well-known written texts. Moreover, as in [3,21], such considerations are extended to a translation of one of the texts. The texts chosen below are freely available from the web [35]: Alice in Wonderland (AWL) [36] and Through the Looking Glass (TLG) [37]. They are representative of the well-known mathematician Lewis Carroll. Such a choice allows one to discuss whether two single-author English texts, having appeared at different times (1865 and 1871), contain different structures. The first text is also available in Esperanto [38,39]. It is thus possible to observe whether some stylistic or structural change occurs between a text and its translation, i.e. whether the translation obeys statistical rules similar to those of the original text from the punctuation point of view. Previous work on the English AWL version should be mentioned here [24]. In Sect. 2, the methodology is briefly exposed. It is recalled that one can map texts into a (word) length time series (LTS) or into a (word) frequency time series (FTS). In the present case, the length time series approach is adopted in order to count the number of characters (and blanks) in a sentence, i.e. defining a time interval ending with some punctuation mark. Some tests with an FTS will also be made. In Sect. 3, the results are presented through log-log plots of the sentence length-rank relationship and along a Zipf analysis of the word distribution. A conclusion with statistical and linguistic comments is found in Sect. 4.

Data and Methodology

For the present considerations, the two texts mentioned above and one translation have been selected and downloaded from a freely available site [35], resulting in three files. The chapter heads are not considered. All analyses are carried out over this reduced file for each text. As indicated in the introduction, one can look at the length of sentences, or bits of sentences, taking into account the relevant separators: "." (dot), "," (comma), colon, semicolon, exclamation point, and question mark, i.e. ":", ";", "!", "?".

Fig. 1. Log-log plot of the rank-sentence lengths, as separated by (a) dots and (b) commas, in the three texts of interest, AWL_eng, AWL_esp, and TLG_eng. The η = 0.33 exponent of the corresponding rank law is indicated as a guide to the eye.

By analogy with the original Zipf analysis method or technique, which assigns a rank R to the words according to their frequency f and makes a log-log frequency-rank plot, one ranks here the sentences according to their length l (to be defined) and searches for l(R).
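As a minimal illustration of this length-time-series construction (not the scripts used in the study), the following Python sketch splits a plain-text file at a chosen set of punctuation marks, measures each segment in characters (blanks included), and sorts the lengths by decreasing rank; the file name and the separator set are assumptions for the example.

```python
import re

def length_rank(path, separators=".", encoding="utf-8"):
    """Split a text at the given punctuation marks and return the segment
    lengths (in characters, blanks included) sorted by decreasing length,
    i.e. the length-rank relation l(R)."""
    text = open(path, encoding=encoding).read()
    # Build a character class such as [.?!] from the chosen separators.
    pattern = "[" + re.escape(separators) + "]"
    segments = [s.strip() for s in re.split(pattern, text)]
    lengths = sorted((len(s) for s in segments if s), reverse=True)
    return lengths

# Example: segments ending with a dot in a (hypothetical) file alice_en.txt
lengths = length_rank("alice_en.txt", separators=".")
for rank, l in enumerate(lengths[:5], start=1):
    print(rank, l)
```

The same call with separators=",;:!?" or any subset reproduces the other groupings used in Figs. 1-3.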
Usually, for written texts in many languages, one has $f \sim R^{-\zeta}$, such that one roughly sees a straight line going through the data on a log-log plot, interestingly with a slope close to −1 [40]. A large set of references on Zipf's law(s) in natural languages can be found in [41]. Thus, one considers that there is a one-to-one relationship between rank and frequency. This is strictly true if there is no ambiguity in the ranking; sometimes two (or more) words have the same frequency. Their rank has then been attributed according to their chronological appearance in the text, apparently without much loss of information content. As previously mentioned, this Zipf law and others have mainly been considered at the level of word distributions; it is fair to re-emphasize related work at the sentence level, defined through separating dots, e.g. on German [1,2], Irish [6,7], and Japanese [11] texts.

Results

In the present case, one considers the length l of sentences defined between various punctuation marks, counting characters rather than words. The punctuation marks define time intervals, or bits of sentences. The result of the LTS rank-like analysis, a length-rank relationship, for the three main texts is shown in Figs. 1-3 for the different punctuation marks, as mentioned in the figure captions. To set the scale, the longest "sentence" contains 1669, 825, and 864 characters in AWL_eng, AWL_esp, and TLG_eng respectively, when the sentence ends with a dot, Fig. 1(a). Similarly, the longest sentence contains 6323, 5581, and 5212 characters when ending with a question mark, in the respective texts. Several orders of magnitude in the maximum rank and in the lengths immediately distinguish the cases. There are about 2000, 200, and 500 ranks, i.e. different lengths, depending on the punctuation marks, grouped as in Figs. 1-3. On the other hand, the length can vary considerably: from about 300 to 12000, depending on the case. At once it is observed that TLG_eng slightly differs from the others when the sentences end with a semicolon, Fig. 2(a). Interestingly, the rank-1 length for the Esperanto text is often higher than for the English texts. This might be argued to originate from the number of available words with which to make any sentence. Each log-log plot roughly indicates a simple power-law relationship, $l \sim R^{-\eta}$, for ca. R ≤ 500, R ≤ 50, or R ≤ 100, corresponding to break length values of ca. 100, 1000, and 1000. Some curvature is found for all texts below R ∼ 5, where a so-called discontinuity exists. It can be understood along the lines of the comments in [24] on word frequency plots. In that case, the feature is due to a transition between colloquial ("common") small words and "distinctive" words; one can easily be convinced of the analogy when forming and studying sentences. This weak change in curvature at low rank is also explained by Mandelbrot [44-46] using arguments based on fractal ideas applied to the structure of lexical trees. Some marked break, or change in slope, looking like a distribution truncation, is also found for large R. Some discussion of the latter case, in the context of word distributions, can be read in [24]. By analogy, this behavior is thought to arise from the scarcity of long sentences, i.e. there is much difference in the number of characters among the long sentences, not so much among the small ones. In some physics-like sense, one would attribute the result to the polycrystalline nature of the sample, made of a few big crystals and many tiny ones.
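The exponent η discussed above can be estimated from the ranked lengths by a straight-line fit on the log-log data below the break. A minimal sketch, assuming the `lengths` list from the previous snippet and a hand-chosen fitting range, is:

```python
import numpy as np

def rank_exponent(lengths, r_max=None):
    """Least-squares estimate of eta in l ~ R^(-eta) from ranked lengths.

    `lengths` must be sorted in decreasing order (rank 1 first); the fit
    is restricted to ranks <= r_max to stay below the break discussed in
    the text."""
    l = np.asarray(sorted(lengths, reverse=True), dtype=float)
    r = np.arange(1, len(l) + 1)
    if r_max is not None:
        l, r = l[:r_max], r[:r_max]
    slope, intercept = np.polyfit(np.log(r), np.log(l), 1)
    return -slope  # eta

# Example, keeping only the region below the break (here assumed at R = 500):
# eta = rank_exponent(lengths, r_max=500)
```

The same routine applied to word frequencies instead of segment lengths gives the usual Zipf exponent ζ.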
The most drastic difference occurs between the first group of punctuation marks, Fig. 1, where the slope indicates that the exponent η is rather close to 1/3, and the latter four cases, Figs. 2-3, where the slope is closer to 1/2. Notice that the number of punctuation marks is roughly equivalent in all texts, as estimated from the integral of the distributions [47], but there are many more dots and commas than other punctuation marks, as is indeed expected, by a factor of ten. Therefore one might expect some stronger finite-size effect influencing the exponent value in the latter cases. In conclusion of this section, let it be accepted that the quality of the power-law fits is less impressive in the case of the lengths of sentences (Figs. 1-3) than in the case of the frequencies of words (Fig. 4). This observation possibly indicates that a cut-off is more likely for the lengths of sentences, because of grammatical and readability constraints. One could alternatively present the data in a log-normal plot and observe whether a stretched exponential can be considered. However, the shortness of the data range is a handicap in this case as well. Further "theoretical" discussion of the values of the exponents, as found below in Sec. 4, would not be much more conclusive. The possible stretched exponential is thus only briefly discussed below.

Conclusion

The occurrence of such a power law for word distributions has already been suggested [48] to originate in the "hierarchical structure" of texts as well as in the presence of long-range correlations (sentences, and the logical structures therein). Some ad hoc arguments have been presented based on constrained correlations [49,50]. A value of η smaller than unity indicates a wide, flat distribution, thus a more homogeneous repartition of the variables (lengths of sentences, here). Gabaix [51], looking at city growth, claims that two causes can lead to a value less than 1.0: either (i) the growth process deviates from Gibrat's law [52], which assumes that the mean growth rate is independent of the size, or/and (ii) the variance of the growth process is size-dependent. Recall that one does not examine the "growth" of the text at this stage, nor is there presently any model for doing so. However, one can imagine the way L. Carroll (and other authors) function. After writing a first draft of some chapter, the author adds, removes, and modifies words and sentences, introducing different "grams" and leading to a modified story development and text structure, then modifies the text again after a second reading, etc. The process is indeed kinetic and basically a growth process, somewhat similar to city growth; thus it is a priori hard to say whether cause (i) or (ii) or both influence the exponent values. One can nevertheless debate whether the sample size is sufficient for estimating a (small) η value over so few rank decades [47]. The same can be argued if the data were fitted by a stretched exponential. If so, it might be argued that an external constraint must be envisaged, as if the writer were influenced by, e.g., the size of the paper sheet on which he/she is writing. The present author does not wish to enter into such considerations, though further studies might be of interest nowadays, as when studying blogs and RSS feeds [53] and other (electronic or not) reports which are strictly limited in size or in the number of allowed characters.
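For completeness, a sketch of how the power-law and stretched-exponential alternatives mentioned above could be compared on the same ranked data is given below; the functional forms and the residual-sum-of-squares comparison are the usual generic choices, not taken from the paper, and the data array is assumed to come from the earlier snippets.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(r, a, eta):
    return a * r ** (-eta)

def stretched_exp(r, a, b, beta):
    return a * np.exp(-b * r ** beta)

def compare_fits(lengths):
    """Fit both forms to the length-rank data and return the residual
    sum of squares of each, as a crude model comparison."""
    l = np.asarray(sorted(lengths, reverse=True), dtype=float)
    r = np.arange(1, len(l) + 1, dtype=float)
    p_pow, _ = curve_fit(power_law, r, l, p0=(l[0], 0.5), maxfev=10000)
    p_str, _ = curve_fit(stretched_exp, r, l, p0=(l[0], 0.1, 0.5), maxfev=10000)
    rss = lambda model, p: float(np.sum((l - model(r, *p)) ** 2))
    return {"power_law": rss(power_law, p_pow),
            "stretched_exp": rss(stretched_exp, p_str)}
```

With only a few rank decades available, such a comparison should of course be read with the caveats on sample size expressed above.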
According to a widespread conception, quantitative linguistics will eventually be able to explain such empirical quantitative findings (such as Zipf's law) by deriving them from highly general stochastic linguistic laws that are assumed to be part of a general theory of human language (see [54,55] for a summary of possible theoretical positions). In [56], Meyer argues that on close inspection such claims turn out to be highly problematic, both on linguistic and on science-theoretical grounds. It has also been argued that it is possible to discriminate between human writings [57] and stochastic versions of texts precisely by looking at statistical properties of words. Here it is argued, in contrast, that this statement can be extended to sentence statistics. The meaning of the results is admittedly still somewhat elusive, even though the length distribution of text segments between certain types of punctuation marks constitutes new empirical data. It is fair to mention here a reviewer suggestion [58] encouraging investigations of the length distribution of symbol sequences commonly regarded as a "unit of thought" or a proper sequence, that is, not distinguishing between periods, semicolons, question marks, and exclamation marks. However, to test whether such statistical measures can indeed be used to classify a text, e.g. to distinguish authorship, a much larger set of texts should be used. Similarly, from this single example the similarity between the Esperanto translation and the original text may not point to the quality of the translation, since perhaps any natural text exhibits a similar frequency distribution [58], and it might be due to other external constraints, as hinted at the end of a previous paragraph. Last but not least, on comparing AWL_eng, AWL_esp, and TLG_eng, it seems that the texts are qualitatively similar, which indicates... the quality of the translator. In this spirit, it would be interesting to compare with results obtained from a text produced by machine translation, as recently studied in [59]. It would be of great interest to see whether a machine is more flexible with vocabulary and grammar than a human translator; see also [60]. Finally, in summary, it is sufficient here to stress that punctuation marks are an essential part and a long-lasting feature of Indo-European languages, with a great variety of signs and uses. At first sight, a time series of a single variable appears to provide a limited amount of information on texts and authorship. FTS and LTS result from a dynamical process, which is usually first characterized by its fractal dimension. The first approach should contain a mere statistical analysis of the output, as done here through a rank-like analysis. It has been found that analytical forms, like power laws with different characteristic exponents, exist for the ranking properties. The exponent can take values of ca. 1.0, 0.50, or 0.30, depending on how a sentence is defined. This non-universality, or even another law, could be further examined in order to find whether a measure of the author's style is hidden in such statistics/fits. Moreover, one ongoing challenge is to sort out the laws of sentence statistics in texts written or produced by many authors, like scientific papers, thereby discriminating the percentage of truly personal contribution in the writing.
Another apparently simpler investigation, in direct line with the previously mentioned studies [53], is the characterization of sentence statistics in online dynamic media, such as blogs or RSS feeds, which are usually single-author texts.
Visible-Light Activation of Persulfate or H2O2 by Fe2O3/TiO2 Immobilized on Glass Support for Photocatalytic Removal of Amoxicillin: Mechanism, Transformation Products, and Toxicity Assessment

Fe2O3/TiO2 nanocomposites were fabricated via a facile impregnation/calcination technique employing different amounts of iron(III) nitrate onto commercial TiO2 (P25 Aeroxide). The as-prepared Fe2O3/TiO2 nanocomposites were characterized by X-ray diffraction (XRD), Raman spectroscopy (RS), scanning electron microscopy/energy-dispersive X-ray spectroscopy (SEM/EDXS), X-ray photoelectron spectroscopy (XPS), Brunauer-Emmett-Teller analysis (BET), electrochemical impedance spectroscopy (EIS), photoluminescence spectroscopy (PL), and diffuse reflectance spectroscopy (DRS). As a result, 5% (w/w) Fe2O3/TiO2 achieved the highest photocatalytic activity in the slurry system and was successfully immobilized on a glass support. Photocatalytic activity under visible-light irradiation was assessed by treating the pharmaceutical amoxicillin (AMX) in the presence and absence of additional oxidants: hydrogen peroxide (H2O2) and persulfate salts (PS). The influence of pH and PS concentration on the AMX conversion rate was established by means of statistical planning and response surface modeling. Results revealed optimum conditions of [S2O8^2−] = 1.873 mM and pH = 4.808; these were also utilized in the presence of H2O2 instead of PS in long-term tests. The fastest AMX conversion, possessing a zero-order rate constant of 1.51 × 10^−7 M·min^−1, was achieved with the photocatalysis + PS system. The AMX conversion pathway was established, and the evolution/conversion of the formed intermediates was correlated with the changes in toxicity toward Vibrio fischeri. Reactive oxygen species (ROS) scavenging was also utilized to investigate the AMX conversion mechanism, revealing the major contribution of photogenerated h+ in all processes.

Introduction

Semiconductor-based photocatalysis has emerged as a promising technology for water purification. Among the photocatalysts studied, titanium dioxide (TiO2) has been regarded as the "benchmark photocatalyst" due to its chemical and thermal stability, biological inertness, suitable mechanical properties, low cost, and nontoxicity [1-3]. However, TiO2 is mainly active under UV irradiation owing to its wide bandgap, which limits its use under visible light.

The appropriate amount of TiO2-P25 (0.300 g) was dispersed in 80 mL of EtOH under sonication (Bandelin Sonorex RK 510 H, Berlin, Germany) for 5 min. Then, the appropriate amount of Fe(NO3)3·9H2O dissolved in 20 mL of EtOH was slowly added dropwise to the TiO2-P25 suspension whilst under sonication. After the sonication process was performed for 30 min, a brownish-white suspension was observed. The suspension was then continuously stirred for 6 h at room temperature, before drying at 60 °C for 12 h. The collected powder was calcined at 350 °C for 2 h in air using a muffle furnace (LP-08, Instrumentaria, Zagreb, Croatia) to obtain the final product. Different contents of Fe(NO3)3·9H2O were added to form final Fe2O3/TiO2 nanocomposites with a theoretical content (w/w) of 1%, 3%, 5%, 10%, and 20% (Fe2O3 to TiO2-P25). Pure α-Fe2O3 was obtained by performing the same procedure without the presence of TiO2-P25. Images of the prepared nanocomposites are shown in Figure S1. The selected photocatalyst nanocomposites were immobilized using a low-temperature method [16]. The procedure involved the preparation of a silica sol and a titania sol.
The silica sol was prepared via the hydrolysis of TEOS in water catalyzed by HCl, performed under vigorous stirring until a clear sol was obtained. The titania sol was prepared via the hydrolysis of TTIP in EtOH catalyzed by HClO4, conducted under reflux conditions at 100 °C for 48 h. Subsequently, the obtained silica sol, titania sol, EtOH, and Levasil 200/30 were mixed to form a binder sol, to which 1.0 g of the obtained photocatalyst was added. The mixture was homogenized in an ultrasonic bath for 10 min prior to the coating of round glass substrates (r = 37.5 mm) by spin coating at 1500 rpm for 30 s using a KW-4A spin coater (Chemat Technology, Los Angeles, CA, USA). The plates were thereafter heat-treated in an oven UN-55 (Memmert, Schwabach, Germany) at 200 °C for 2 h. The same procedure was repeated to prepare three catalyst layers, with heating cycles (200 °C for 2 h) applied between the coating of successive layers.

Characterization of Fe2O3/TiO2 Nanocomposites

X-ray diffractograms (XRD) of the prepared nanocomposites were recorded using an X-ray diffractometer MiniFlex 600 (Rigaku, Tokyo, Japan), using Cu Kα1 (λ = 1.54059 Å) radiation from 3° to 70° with a step width of 0.02° and a scan speed of 2.00°/min. Raman spectra were measured using an Alpha300 instrument (Oxford Instruments-Witec, Ulm, Germany) equipped with a microscope and an attached atomic force microscope (AFM). The excitation source wavelength was set to 532 nm, while the integration time was set to 5 s with an average of 20 scans taken. Scanning electron microscopy (SEM) images were obtained using an Ultra Plus SEM (Zeiss, Jena, Germany). Energy-dispersive spectroscopy (EDS) spectra were recorded with an X-max silicon drift detector (Oxford Instruments, Abingdon, UK). X-ray photoelectron spectroscopy (XPS) measurements were performed using a PHI VersaProbe III (Version AD) (PHI, Chanhassen, MI, USA) equipped with a hemispherical analyzer and a monochromatic Al Kα X-ray source. Survey spectra were measured using a pass energy of 224 eV and a step of 0.8 eV, while Fe 2p core-level spectra were measured at a pass energy of 27 eV and a step of 0.1 eV. The data were acquired using ESCApe 1.4 software. Fitting of the Fe and Ti 2p core-level spectra was performed using CasaXPS software. Diffuse reflectance spectra (DRS) of the prepared nanocomposites were measured using a UV-2600i UV/Vis spectrophotometer (Shimadzu, Kyoto, Japan) equipped with an integrating sphere. The obtained reflectance versus wavelength spectra of the pure components and nanocomposites were transformed into the Kubelka-Munk function (KM) versus photon energy (hν) in order to calculate bandgap (Eg) values. The bandgap (Eg) values of the studied photocatalytic materials were calculated from the onsets of the absorption edge using the formula presented in Equation (1) [20], Eg = 1240/λg, where λg is the bandgap wavelength (in nm) and Eg is expressed in eV. Photoluminescence (PL) spectra were recorded at room temperature using a Varian Cary Eclipse fluorescence spectrophotometer (Agilent, Santa Clara, CA, USA) with an excitation wavelength of 325 nm. The Brunauer-Emmett-Teller (BET) single-point and multipoint surface areas were determined from N2 adsorption/desorption isotherms using a Gemini 2380 instrument (Micromeritics, Norcross, GA, USA). The nanocomposites were characterized in powdered form in all of the above-stated characterization techniques.
Photoelectrochemical (PEC) Measurements

The prepared nanocomposites were immobilized on a 1 cm2 area of fluorine-doped tin oxide (FTO, Sigma-Aldrich, St. Louis, MO, USA) glass (2.2 mm thick; resistivity of 7 Ω/sq; overall dimensions: 2 cm × 1 cm) using the method described by Elbakkay et al. [21]. Prior to coating, the FTO glass slides were sonicated for 10 min sequentially in EtOH, acetone, and ultrapure water and then dried at room temperature. Thereafter, 2 mg of the prepared nanocomposite was dispersed in 400 µL of 2-propanol and 10 µL of Nafion (Sigma-Aldrich, 5 wt.%) under sonication for 30 min. Finally, 30 µL of the catalyst suspension was immediately drop-casted on a 1 cm2 area of clean FTO glass and then dried in an oven at 80 °C for 30 min to form the working electrode. Transient photocurrent responses and electrochemical impedance spectra (EIS) were obtained using a potentiostat/galvanostat PalmSens4 (PalmSens BV, Houten, The Netherlands) equipped with a standard three-electrode system and an LED light source (spectrum shown in Figure S2). An Ag/AgCl electrode, a Pt wire, the as-prepared nanocomposite-coated FTO glass (1 cm2), and 0.1 M Na2SO4 solution were used as the reference electrode, counter electrode, working electrode, and electrolyte solution, respectively.

Photocatalytic Activity Evaluation

Photocatalytic treatment experiments with a 0.05 mM AMX water solution were carried out in a water-jacketed (V = 0.09 L, T = 25.0 ± 0.2 °C) batch photoreactor illuminated by simulated solar irradiation produced by an Oriel arc source (Newport; 450 W Xe lamp, Osram, Irvine, CA, USA), which was equipped with a collimator and an air mass filter (AM 1.5 G), as well as an additional UV cutoff filter (λ > 400 nm) to provide only visible-light illumination [17]. In preliminary experiments a slurry system was used; 0.045 g of photocatalyst powder was dispersed in the AMX solution (natural pH = 5.5) under constant stirring (300 rpm). The solution was continuously mixed for 30 min in the dark in order to achieve adsorption/desorption equilibrium, denoted as (−30), and thereafter exposed to visible-light illumination. The onset of illumination is denoted as (0). During the experiments, 700 µL aliquots of the samples were collected at designated time intervals (15, 30, 45, 60, 75, and 90 min), filtered through a 0.45 µm Chromafil XTRA RC (Macherey-Nagel, Duren, Germany) syringe filter, and immediately quenched with 100 µL of MeOH prior to HPLC analysis, as described in Section 2.6. The photocatalyst powder which possessed the highest photocatalytic activity was selected for immobilization onto glass plates as described in Section 2.2. The glass plates with the immobilized photocatalytic material were placed at the bottom of the reactor in contact with the AMX solution under constant mixing (90 rpm) by an orbital shaker DOS-20 (NeoLab, Heidelberg, Germany) and were subjected to a treatment procedure similar to that described above for the slurry system, except for the illumination time intervals (15, 30, 45, 60, 75, 90, 120, and 150 min). A full factorial design (FFD) was utilized to study the effect of initial pH and PS concentration on AMX degradation (Tables 1 and S1). The coded parameters X1 and X2 represent pH (ranging from 4 to 8) and the concentration of PS (ranging from 500 µM to 3000 µM), respectively. The chosen minimum and maximum concentrations of PS corresponded to AMX:PS molar ratios of 1:10 and 1:60, respectively.
The optimal conditions obtained for the degradation of AMX on the basis of the FFD experiments and response surface modeling were utilized as the basis for the H2O2 conditions, which were later used and compared in the investigation of toxicity, transformation byproducts, and scavenging studies. Identification of reactive oxidizing species (ROS) was carried out using t-BuOH (5 mM), FA (5 mM), BQ (0.5 mM), and MeOH (5 mM).

Analytical Methods

pH measurements were performed using a Handylab pH/LF portable pH-meter (Schott Instruments GmbH, Mainz, Germany). The AMX concentration was monitored using an HPLC, Series 10 (Shimadzu, Kyoto, Japan), equipped with a UV-DAD detector (SPD-M10A VP, Shimadzu) and a reversed-phase (RP) C18 column (250 mm × 4.6 mm, 5 µm, Macherey-Nagel Nucleosil, Duren, Germany). Isocratic elution was carried out with a mobile phase consisting of 90% aqueous 50 mM FA and 10% acetonitrile at an overall flow of 1 mL·min−1, whereas AMX was monitored at 272 nm. AMX transformation products (TPs) were analyzed using an ultrahigh-performance liquid chromatograph (Thermo Scientific Vanquish system) in tandem with a high-resolution mass spectrometer (Orbitrap Exploris 120, Thermo Scientific, Waltham, MA, USA), in positive and negative ionization modes. The samples were diluted fivefold with HPLC-grade water prior to injection. Chromatographic separation of AMX and its transformation products was achieved on an RP C18 column (50 mm × 2.1 mm Hypersil GOLD, particle size 1.9 µm, Thermo Scientific, Vilnius, Lithuania). Gradient elution of water with 0.1% FA (phase A) and acetonitrile (phase B) was utilized at a flow rate of 0.400 mL·min−1 under the following gradient program: 0-0.200 min, 2% B; 0.200-4.750 min, 98% B; 98% B maintained for 1.250 min (4.750-6.000 min); back to the initial mobile phase composition (98% A/2% B) for 3 min post run time. Ammonium acetate was used for the negative mode instead of FA. The conditions for high-resolution mass spectrometry with an electrospray ionization source were the following: capillary, 3500 V; ion transfer tube temperature, 325 °C; vaporizer temperature, 350 °C; sheath gas pressure (Arb), 50; auxiliary gas pressure (Arb), 10; scan modes, full MS (resolution 60,000) and ddMS2 (resolution 15,000); scan range, m/z 100-1000. Raw MS data files of the control, blank matrix, and AMX samples were imported into Compound Discoverer (v.3.3 SP1, Thermo Scientific, Waltham, MA, USA) software for transformation product identification. The fragment ion search (FISh) coverage function in Compound Discoverer was utilized for structure elucidation and for the chemical transformations involved in each chromatographic peak. Expected compounds were measured within ±2 ppm of mass error, with maximum area ≥10^5 and FISh coverage score ≥43.50. The aquatic toxicity of the treated samples was evaluated using a commercial bioassay based on inhibition of the luminescence emitted by Vibrio fischeri (VF) according to ISO 11348-3:2007, measured on a BiofixLumi-10 luminometer (Macherey-Nagel, Duren, Germany). Luminescence inhibition after 15 min of exposure was taken as the endpoint. The results were expressed as effective concentrations causing a 50% reduction in bioluminescence (EC50) and converted into toxicity units (TU = 100/EC50).

Calculations

Response surface methodology (RSM) was utilized to determine the effectiveness of the visible-light-driven photocatalytic treatment of AMX as dependent on initial pH and PS concentration.
The values of the process parameters are represented by the independent variables X1 and X2 (Table 1). The experimental space was described using a 3^2 full factorial design (FFD) for the vis-(5% Fe2O3/TiO2)/PS system, selected as the best according to preliminary results obtained in the slurry system (Table S1). The AMX conversion rate constants after a 150 min treatment period were chosen as the process responses. The combined influence of the studied parameters on process performance was described by a quadratic polynomial equation representing the RSM model, which was evaluated using standard statistical tests, i.e., analysis of variance (ANOVA), considering the following statistical parameters: Fisher F-test value (F), its probability value (p), regression coefficients (pure: R2; adjusted: Radj2), and t-test value. Moreover, graphical analysis was conducted using the so-called "residual diagnostics" (RD): a normal probability test, Levene's test, and a constant variance test. The calculations were performed using the Statistica 13.5 (Tibco, Palo Alto, CA, USA) and Design-Expert 10.0 (StatEase, Minneapolis, MN, USA) software packages.

Material Characterization

The crystalline structures of the as-prepared photocatalytic materials were investigated using XRD; the diffractograms are shown in Figure 1a [24,25]. Partial magnification around the (104) plane of hematite (Figure 1b) revealed that only 20% (w/w) Fe2O3/TiO2 provided a noticeable additional peak, confirming the successful inclusion of α-Fe2O3, while no traces of hematite were detected in the remaining nanocomposites due to XRD detection limits [23]. In Figure 1c, partial magnification around 25.30° (the (101) anatase plane) revealed a peak shift to a lower angle upon increasing addition of Fe2O3, which is attributed to lattice distortion on the TiO2 surface [23]. Raman spectra of the prepared nanocomposites and pure α-Fe2O3 are shown in Figure 2. All of the prepared nanocomposites showed distinct phonon modes of TiO2 such as Eg (143, 196, and 641 cm−1), A1g (516 cm−1), and B1g (396 cm−1) [26,27]. Meanwhile, α-Fe2O3 showed two A1g phonon modes (227 and 496 cm−1) and four Eg phonon modes (245, 294, 410, and 613 cm−1) [24,28-31]. No vibrational modes of other iron-related species (i.e., maghemite or magnetite) were detected, which indicates the high purity of the obtained α-Fe2O3. It must be noted that only 10% and 20% (w/w) Fe2O3/TiO2 provided noticeable α-Fe2O3 vibrational modes (A1g (227 cm−1), Eg (294 cm−1)), confirming the successful inclusion of α-Fe2O3 in the composite, which is also in agreement with the XRD results.
Scanning electron microscopy (SEM) images and EDX spectra of the prepared nanocomposite photocatalysts are shown in Figure 3. The formation of agglomerated TiO2-P25 (Aeroxide) particles is a consequence of the impregnation/calcination method. It must be noted that the Fe2O3 loading was low and did not cause any distortion of the overall appearance of the nanocomposite. As such, it can be derived that small Fe2O3 particles were formed around TiO2-P25 to promote a heterojunction between the semiconductors (i.e., TiO2 and Fe2O3), which may improve charge transfer mobility in the overall nanocomposite [23]. EDX spectra revealed the presence of a small Fe amount in the prepared nanocomposites, which further proved the incorporation of Fe2O3. These results are in agreement with the obtained XRD and Raman results, as discussed above. X-ray photoelectron spectroscopy (XPS) was further used to determine the surface chemical composition and oxidation states of the 5% Fe2O3/TiO2 nanocomposite. The XPS full survey spectrum (Figure 4a) showed distinct signals of Fe 2p, Ti 2p, and O 1s, confirming the successful inclusion of α-Fe2O3 on the surface of TiO2 [32], while the C 1s peak was attributed to adventitious carbon contamination originating from air exposure of the samples [33].
In Figure 4b, the core-level XPS spectrum of Fe 2p showed two peaks at binding energy (BE) values of 723.50 and 709.85 eV, corresponding to Fe 2p1/2 and Fe 2p3/2, respectively, and a satellite signal at around 715 eV, which are all characteristic of Fe3+ in Fe2O3 [23,32,34]. Moreover, the difference in the Fe 2p core energy levels, Δ(BE) = (2p1/2 − 2p3/2) = 13.65 eV, also proved the presence of α-Fe2O3 [32,34]. In Figure 4c, the core-level XPS spectrum of Ti 2p showed Ti4+ characteristic peaks at BE values of 464.33 and 458.53 eV, corresponding to Ti 2p1/2 and Ti 2p3/2, respectively [23,32]. Similarly, for Ti 2p, Δ(BE) = (2p1/2 − 2p3/2) = 5.8 eV indicated the normal state of Ti4+ in TiO2-anatase, which is similar to the results reported in the literature [33,35,36]. The diffuse reflectance spectra of the pure components and prepared nanocomposites are shown in Figure 5a, whereas the Kubelka-Munk transformed spectra for the calculation of bandgap values are presented in Figure 5b. As shown in Table 2, the calculated bandgap values of the TiO2-P25 and α-Fe2O3 powders are in agreement with the values provided in the literature [37,38]. An increase in visible-light absorption (Figure 5a) and an overall decrease in bandgap values (Table 2) of the Fe2O3/TiO2 nanocomposites were observed upon increasing Fe2O3 content.
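For illustration only (not the authors' processing pipeline), the snippet below shows how reflectance data of this kind can be converted into the Kubelka-Munk function and a rough bandgap estimate using Eg = 1240/λg; the two-column input file, the 0-1 reflectance scale, and the edge-detection criterion are all assumptions.

```python
import numpy as np

def kubelka_munk(reflectance):
    """F(R) = (1 - R)^2 / (2R), with R the diffuse reflectance (0-1)."""
    r = np.clip(np.asarray(reflectance, dtype=float), 1e-6, 1.0)
    return (1.0 - r) ** 2 / (2.0 * r)

def bandgap_from_edge(wavelength_nm, reflectance, edge_fraction=0.5):
    """Crude bandgap estimate: take the longest wavelength at which the KM
    function still exceeds `edge_fraction` of its maximum (the absorption
    edge onset) and convert with E_g (eV) = 1240 / lambda_g (nm)."""
    km = kubelka_munk(reflectance)
    wl = np.asarray(wavelength_nm, dtype=float)
    threshold = edge_fraction * km.max()
    lambda_g = wl[km >= threshold].max()
    return 1240.0 / lambda_g

# Example with a hypothetical exported spectrum "drs_5pct_Fe2O3_TiO2.txt":
# wl, refl = np.loadtxt("drs_5pct_Fe2O3_TiO2.txt", unpack=True)
# print(bandgap_from_edge(wl, refl))
```

A Tauc-type extrapolation of the transformed spectra, as used for Figure 5b, would replace the simple threshold criterion with a linear fit of the absorption edge.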
Photoluminescence (PL) spectroscopy was used to study the separation of photogenerated e−/h+ pairs in the as-prepared nanocomposites. As can be seen in Figure 6, all Fe2O3/TiO2 nanocomposites showed a specific emission peak at around 444 nm, as similarly reported by Sayed et al. [39], albeit with different intensities. Materials containing 1% and 3% (w/w) Fe2O3 exhibited higher PL intensity compared to pristine TiO2. Such low Fe2O3 loadings (i.e., 1% and 3% (w/w)) may suppress the defect concentration, thus promoting an increase in the e−/h+ recombination rate [40,41]. A further increase in Fe2O3 loading (i.e., 20% (w/w)) resulted in the highest PL intensity among all the prepared nanocomposites, higher than that of pristine TiO2. The optimal level of 5% Fe2O3 loading exhibited the lowest PL intensity, suggesting a strongly suppressed e−/h+ recombination rate [42], and could therefore be expected to show the highest photocatalytic activity among all the prepared nanocomposites. To further explore the photogenerated charge carrier separation efficiency of the prepared nanocomposite, photoelectrochemical studies (i.e., transient photocurrent responses and EIS) were conducted. The photocurrent density responses of a photocatalyst are directly related to its photocatalytic activity [43,44]. Transient photocurrent responses of TiO2, α-Fe2O3, and 5% Fe2O3/TiO2 are shown in Figure 7a. Specifically, 5% Fe2O3/TiO2 exhibited the highest response (0.55 µA·cm−2) compared to the individual components of the composite (i.e., TiO2 and Fe2O3). The improved separation efficiency was attributed to successful heterojunction formation. It must be noted that the photocurrent density of 5% Fe2O3/TiO2 was reduced in the second cycle (light on/light off) to 0.45 µA·cm−2, which may be attributed to the leaching of Fe2O3 [44]. Electrochemical impedance spectroscopy (EIS) was used to study the interfacial charge transfer mechanism in the prepared samples [45]. As shown in Figure 7b, EIS Nyquist plots of pure TiO2 and 5% Fe2O3/TiO2 were measured under dark and light irradiation. In EIS, the radius of the semicircle corresponds to the overall charge transfer resistance [44-46]. Under visible-light irradiation, all samples showed less charge transfer resistance than in the dark, with 5% Fe2O3/TiO2 having a smaller radius than pure TiO2, indicating an efficient charge transfer mechanism between Fe2O3 and TiO2 due to successful heterojunction formation.
Photocatalytic Activity Tests

Preliminary experiments revealed a negligible effect of hydrolysis and photolysis on AMX concentration within the 90 min period (Figure 8a). Initial adsorption extents of AMX onto the prepared photocatalysts during the dark period (−30 to 0 min) were found to be infinitesimally small (<1.5%); thus, the observed removal extents of AMX during photocatalytic treatment were mainly approximated to the conversion extents. Such results were ascribed to the pKa values of AMX (pKa1 = 2.4, pKa2 = 7.4, and pKa3 = 9.6) [47] and the points of zero charge of TiO2-P25 (pHPZC = 6.5-6.7) [48-50], α-Fe2O3 (pHPZC = 6.2) [51], and Fe2O3/TiO2 (pHPZC = 5.8-6.8) [39,52,53]. Hence, at pH 5.5, AMX is mostly present in its neutral/zwitterionic form (pKa1 = 2.4 < pH < pKa2 = 7.4) [47], while the net surface charge of all prepared photocatalysts is positive, thus leading to less interaction between the two moieties.
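To make the speciation argument above concrete, the following sketch (an illustration, not part of the original study) computes the fractions of the successive AMX protonation states from the quoted pKa values, treating AMX as a simple triprotic acid.

```python
import numpy as np

def speciation_fractions(pH, pKas=(2.4, 7.4, 9.6)):
    """Fractions of the four protonation states of a triprotic acid
    (here AMX, with the pKa values quoted in the text) at a given pH."""
    h = 10.0 ** (-pH)
    k1, k2, k3 = (10.0 ** (-p) for p in pKas)
    denom = h**3 + h**2 * k1 + h * k1 * k2 + k1 * k2 * k3
    return {
        "cationic (fully protonated)": h**3 / denom,
        "neutral/zwitterionic": h**2 * k1 / denom,
        "monoanionic": h * k1 * k2 / denom,
        "dianionic": k1 * k2 * k3 / denom,
    }

# At the natural pH of the AMX solution (5.5) the neutral/zwitterionic
# form indeed dominates (about 99% of the total).
print(speciation_fractions(5.5))
```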
Single- and multipoint BET surface areas of the prepared photocatalysts are presented in Table 3. Incorporation of α-Fe2O3 with TiO2-P25 generally decreased the surface area of the prepared nanocomposites. However, such changes in surface area did not greatly affect the adsorption behavior of the prepared photocatalysts, since electrostatic interaction (i.e., pKa and pHPZC) played the major role in this scenario. The highest photocatalytic activity was achieved by 5% Fe2O3/TiO2, exhibiting 16.3% AMX conversion within the 90 min period, which was significantly higher compared to any of the other nanocomposites and the pure components (i.e., TiO2-P25 and α-Fe2O3) (Figure 8a). Such an improvement in photocatalytic activity was ascribed to the suppression of recombination of photogenerated e−/h+ within the composite, as also proven and supported by PL spectroscopy (Figure 6) and the photoelectrochemical experiments (Figure 7). Accordingly, 5% Fe2O3/TiO2 was selected as the photocatalyst to be immobilized onto the glass support due to its photocatalytic activity being superior to that of the other prepared nanocomposites. In Figure 8b, the presence of [PS] = 0.3 mM with 5% Fe2O3/TiO2 led to a significant increase in AMX conversion (35%). Such results are ascribed to additional SO4•− (and potentially HO•) produced from PS, which serve as electron acceptors and suppressors of e−/h+ recombination [9]. The determination of excess [PS] is shown in Figure S3. For further optimization, 5% Fe2O3/TiO2 was immobilized on the glass support (Figure S4), and RSM modeling was applied to avoid obtaining misleading information from the conventional "one-parameter-at-a-time" approach [1].
As can be seen from Figure S5, i.e., the kinetic profiles of AMX conversion for the vis-(5% Fe2O3/TiO2)/PS system operated under conditions set by the 3^2 FFD (Tables 1 and S1), the obtained results obeyed zero-order kinetics. Accordingly, AMX conversion rate constants (k_obs) for the period of treatment under visible irradiation were calculated using Equation (2), representing the functional dependence of AMX conversion versus treatment time and implying a surface reaction mechanism for the activation of PS [54-56]. Such calculated k_obs values were used as system responses in RSM. It must be noted that all photocatalytic experiments included a 30 min dark period to ensure adsorption/desorption equilibrium (Figure S5). At pH 4 and 6, the net surface charge of 5% Fe2O3/TiO2 was positive, while AMX mostly existed in its neutral form; as a result, the adsorbed amount of AMX was less than 1.5%, which is a consequence of weak attraction between the two moieties. At pH 8, it would be expected that the adsorbed amount of AMX would also be low, since the net charges of 5% Fe2O3/TiO2 and AMX would both be negative and repulsion of negative charges would dominate. However, AMX removal was observed to be 37-40% within the 30 min dark period, which can be associated with the base activation of persulfate [57,58]. In this case, the base-catalyzed hydrolysis of persulfate yields hydroperoxide anions and sulfate ions (Equation (3)). Thereafter, additional persulfate ion reacts with the hydroperoxide anion to yield sulfate radicals and superoxide radicals (Equation (4)). Lastly, sulfate radicals can react with hydroxide ions to produce hydroxyl radicals (Equation (5)) [57,58]. Hence, it must be noted that the AMX removal associated with base-catalyzed persulfate was not included in the RSM modeling, since this process was characterized as a nonphotochemical reaction. As such, only the photocatalytic treatment (i.e., 0 to 150 min) was included, expressed as the AMX conversion rate constant (k_obs). Accordingly, multiple regression analysis was applied to the FFD matrix and the AMX k_obs values calculated for the treatment period under visible-light irradiation (Table S1), yielding a polynomial equation for the RSM model, Equation (6). The obtained model was characterized by ANOVA (Table S2) and RD tools (Figure S6), and it was found to be significant (p = 0.0010) and accurate (R2 = 0.9956 and Radj2 = 0.9883). In addition, RD revealed that (i) there were no violations of the assumptions that errors were normally distributed and independent of each other, (ii) the error variances were homogeneous, and (iii) the residuals were independent. The ANOVA analysis also revealed that the model terms corresponding to both process parameters (i.e., pH and [PS]) were significant (p ≤ 0.05) (Table S2). Therefore, this model can be used as a tool to clearly discuss the influence of the studied parameters on AMX conversion.
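As an illustrative sketch only (the actual model was built in the Statistica and Design-Expert packages), the snippet below shows how a quadratic response surface of the type described above could be fitted to 3^2 FFD results and maximized. The centre PS level and the k_obs values are placeholders, not the measured data of Table S1.

```python
import numpy as np

# 3^2 full factorial design points: pH levels 4, 6, 8 and [PS] levels of
# 500, 1750, and 3000 uM (the centre level is an assumption); the k_obs
# values below are dummy placeholders, not the reported measurements.
pH = np.repeat([4.0, 6.0, 8.0], 3)
ps = np.tile([500.0, 1750.0, 3000.0], 3)
k_obs = 1e-7 * np.array([1.2, 1.5, 1.3, 1.1, 1.4, 1.2, 0.7, 0.9, 0.8])

def design_matrix(x1, x2):
    """Quadratic RSM polynomial: 1, x1, x2, x1*x2, x1^2, x2^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Least-squares fit of the quadratic response surface.
beta, *_ = np.linalg.lstsq(design_matrix(pH, ps), k_obs, rcond=None)

# Maximize the fitted surface on a grid covering the studied ranges.
g1, g2 = np.meshgrid(np.linspace(4, 8, 401), np.linspace(500, 3000, 501))
pred = design_matrix(g1.ravel(), g2.ravel()) @ beta
i = int(np.argmax(pred))
print("optimum pH:", g1.ravel()[i], "| [PS] (uM):", g2.ravel()[i],
      "| predicted k_obs (M/min):", pred[i])
```

With the real responses in place of the placeholders, this kind of maximization is what yields optimum conditions such as those quoted below.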
The 3D surface and contour representations of the influence of initial pH and [PS] on the AMX conversion rate (k_obs) are shown in Figure 9. As can be observed from Figure 9, an acidic pH (pH 4 to 6) was favorable for AMX conversion, which was associated with a high concentration of SO4•− (E° = 2.5-3.1 V vs. NHE), which has a higher oxidation potential than HO• [59]. In addition, sulfate radicals are dominant at acidic pH (pH 4 to 6), as described by Equations (7) and (8) [60,61]. An increase in pH toward the basic range would lead to a decrease in the AMX conversion rate, which can be described by Equation (5) [62]. An increase in PS concentration was directly proportional to an enhancement of the AMX conversion rate up to the point where a further increase promoted a negative effect. Such a decrease in AMX conversion rate can be attributed to excess PS concentration, which promotes scavenging and termination of the formed radical species, as described by Equations (9)-(12) [63]. On the basis of the results presented in Figure 9, the optimum conditions for AMX conversion were pH 4.808 and a PS concentration of approximately 1873 µM, as calculated by maximizing the polynomial equation in Equation (6); the corresponding predicted AMX conversion rate was 1.51 × 10−7 M·min−1. Accordingly, the obtained optimum conditions were further used as the basis for the H2O2-assisted photoconversion experiments, which were later compared in the investigation of the AMX conversion mechanism, transformation byproducts, and toxicity. As shown in Figure 10, the three photocatalytic processes (i.e., photocatalysis, photocatalysis + H2O2, and photocatalysis + PS) were compared on the basis of their AMX conversion profiles up to complete conversion. Photocatalysis + PS was shown to be the fastest, reaching full AMX conversion within 380 min. Photocatalysis + H2O2 also showed improved full AMX conversion (within 720 min) compared to photocatalysis alone (3900 min).
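A minimal sketch (with synthetic data and hypothetical variable names) of how zero-order rate constants such as those compared above can be extracted from concentration-time profiles, i.e. a linear fit of the usual zero-order form [AMX]_t = [AMX]_0 − k_obs·t:

```python
import numpy as np

def zero_order_k(time_min, conc_M):
    """Zero-order rate constant from a linear fit of concentration vs. time:
    [AMX]_t = [AMX]_0 - k_obs * t, so k_obs is minus the slope (M·min^-1)."""
    slope, intercept = np.polyfit(np.asarray(time_min, dtype=float),
                                  np.asarray(conc_M, dtype=float), 1)
    return -slope

# Hypothetical profile sampled at the illumination times used in the study.
t = [0, 15, 30, 45, 60, 75, 90, 120, 150]
c = [5.0e-5 - 1.5e-7 * ti for ti in t]  # synthetic data for illustration
print(zero_order_k(t, c))  # ~1.5e-7 M·min^-1
```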
Photocatalysis alone relies only on photogenerated h+, O2•−, and HO• as ROS for AMX conversion (Equations (13)-(16)). Accordingly, 5% Fe2O3/TiO2 can be excited using visible light to yield photogenerated e−/h+ (Equation (13)). Thereafter, photogenerated e− reacts with O2 (dissolved in water) to form O2•− (Equation (14)) [13,64,65]. Photogenerated h+ accumulated in the valence band (VB) of Fe2O3 may react with OH− to form HO• (Equation (15)) [64], and photogenerated h+ may directly react with AMX (adsorbed at the catalyst surface), thereby producing transformation byproducts (Equation (16)). The improved AMX conversion in the photocatalytic processes with added oxidants can be ascribed to the reactions of photogenerated e− with H2O2 and PS to form HO• and SO4•−, respectively (Equations (17) and (18)) [66].

Mechanism

The AMX conversion mechanisms in the photocatalysis, photocatalysis + H2O2, and photocatalysis + PS systems were studied in the presence of ROS scavengers (Figure 11). FA was used for scavenging photogenerated h+, while BQ was used to scavenge O2•− (k = (0.9-1.0) × 10^9 M−1·s−1) [67,68]. MeOH and t-BuOH were used to differentiate the contributions of SO4•− and HO•. In such a case, MeOH reacts with both SO4•− and HO• (k = 1.1 × 10^7 M−1·s−1 and k = 9.7 × 10^8 M−1·s−1, respectively) [69,70]. Conversely, t-BuOH reacts about three orders of magnitude faster with HO• (k = 9.7 × 10^8 M−1·s−1) than with SO4•− (k = (4.0-9.1) × 10^5 M−1·s−1 [66]), thus making t-BuOH an efficient scavenger for HO•. The AMX conversion and kinetic profiles achieved by photocatalysis in the presence of ROS scavengers are shown in Figure 11a,d, respectively. The highest inhibition of AMX conversion occurred in the presence of FA, resulting in only 12% AMX degradation (compared to 35% obtained in the absence of any scavenger). This indicated that photogenerated h+ plays the main role in AMX photocatalytic conversion. Similarly, Zhu et al. reported that photogenerated h+ were also the main active species of an Fe2O3-TiO2/fly ash cenosphere composite for the degradation of methylene blue [71]. Furthermore, it was observed that AMX conversion was reduced to 31% and 26% in the presence of BQ and t-BuOH, respectively. Such results indicated that HO• plays a more significant role than O2•−. Hence, the order of decreasing ROS contribution in the photocatalysis process is as follows: h+ > HO• > O2•−.
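For illustration only, the relative inhibition caused by each scavenger can be expressed as shown below; the percentages are those quoted above for the plain photocatalysis system, and this simple normalization is a common way of ranking ROS contributions rather than a calculation reported in the study.

```python
# AMX conversion reached after treatment, with and without scavengers
# (values quoted in the text for the photocatalysis-only system).
conversions = {"no scavenger": 0.35, "FA (h+)": 0.12,
               "t-BuOH (HO.)": 0.26, "BQ (O2.-)": 0.31}

baseline = conversions["no scavenger"]
for scavenger, x in conversions.items():
    if scavenger == "no scavenger":
        continue
    inhibition = (baseline - x) / baseline * 100.0
    print(f"{scavenger}: {inhibition:.0f}% inhibition of AMX conversion")
```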
The AMX conversion and kinetic profiles achieved by photocatalysis in the presence of ROS scavengers are shown in Figure 11a,d, respectively. The highest inhibition of AMX conversion occurred in the presence of FA, resulting in only 12% AMX degradation (compared to 35% obtained in the absence of any scavenger). This indicated that photogenerated h+ plays the main role in photocatalytic AMX conversion. Similarly, Zhu et al. reported that photogenerated h+ were also the main active species of an Fe2O3-TiO2/fly ash cenosphere composite for the degradation of methylene blue [71]. Furthermore, AMX conversion was reduced to 31% and 26% in the presence of BQ and t-BuOH, respectively, indicating that HO• plays a more significant role than O2•−. Hence, the order of ROS in decreasing contribution under the photocatalysis process is as follows: h+ > HO• > O2•−.

The AMX conversion and kinetic profiles achieved by photocatalysis + H2O2 in the presence of ROS scavengers are shown in Figure 11b,e, respectively. The highest inhibition of AMX conversion occurred in the presence of FA, resulting in an 8% reduction compared to the case without scavengers (40% and 48% AMX degradation, respectively). This indicates that photogenerated h+ plays a major role in AMX conversion. Similarly, Monteagudo et al. reported the dominant role of h+ in the solar-TiO2/H2O2 system for the degradation of aniline [66]. AMX conversion in the presence of t-BuOH was reduced to 44%. It is important to note that, even though h+ plays the major role, the HO• contribution is nearly the same, as shown by the comparison of their rate constants (Figure 11e). Lastly, the presence of BQ reduced AMX conversion only to 46%, showing that the superoxide radical plays a minor role in the overall process. Hence, the order of ROS in decreasing contribution in the photocatalysis + H2O2 process is as follows: h+ ≥ HO• > O2•−.

The AMX conversion and kinetic profiles achieved with photocatalysis + PS in the presence of ROS scavengers are shown in Figure 11c,f, respectively. FA promoted the greatest inhibition among all scavengers used, yielding an AMX conversion of only 13% (compared to 55% in the case with no scavenger), implying that photogenerated h+ plays a major role in AMX conversion. Similar results were obtained for persulfate activation-related processes such as solar/TiO2/S2O8²⁻ [63], solar/TiO2-Fe2O3/PS [9], and vis-TiO2/FeOCl/PS [72], which all reported that photogenerated h+ was the main oxidative species. On the other hand, AMX conversion was reduced to 20% and 45% in the presence of MeOH and t-BuOH, respectively. Accordingly, SO4•− plays a more significant role than HO•, as expected due to the acidic conditions applied. The presence of BQ resulted in rather low inhibition, with up to 47.5% of AMX degraded, suggesting that O2•− contributes only a minor role. Therefore, the overall order of ROS in decreasing contribution by photocatalysis + PS is as follows: h+ > SO4•− > HO• > O2•−.
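Since the ranking of ROS contributions in each process follows directly from how strongly each scavenger suppresses the AMX conversion reached at a fixed time, the sketch below recomputes that ranking from the percentages quoted above (conversion with each scavenger versus the scavenger-free value). It illustrates the reasoning only and does not reproduce the kinetic fits shown in Figure 11.

```python
# Rank ROS contributions from the scavenger-quenching data quoted in the text.
# Each entry: AMX conversion (%) reached with that scavenger present.
processes = {
    "photocatalysis":        {"baseline": 35.0, "FA (h+)": 12.0, "t-BuOH (HO•)": 26.0, "BQ (O2•−)": 31.0},
    "photocatalysis + H2O2": {"baseline": 48.0, "FA (h+)": 40.0, "t-BuOH (HO•)": 44.0, "BQ (O2•−)": 46.0},
    "photocatalysis + PS":   {"baseline": 55.0, "FA (h+)": 13.0, "MeOH (SO4•− + HO•)": 20.0,
                              "t-BuOH (HO•)": 45.0, "BQ (O2•−)": 47.5},
}

for name, data in processes.items():
    base = data["baseline"]
    # Relative inhibition: fraction of the baseline conversion removed by each scavenger.
    inhibition = {sc: (base - conv) / base for sc, conv in data.items() if sc != "baseline"}
    ranking = sorted(inhibition.items(), key=lambda kv: kv[1], reverse=True)
    print(name)
    for sc, frac in ranking:
        print(f"  {sc:<22s} suppresses {frac:6.1%} of the baseline conversion")
```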
The combined mechanism of the three photocatalytic systems is shown in Figure 12. The combination of TiO2 and Fe2O3 leads to the formation of a Type 1 heterojunction [5], in which the valence band (VB) and conduction band (CB) of Fe2O3 lie between the VB and CB of TiO2 (Figure 12, before contact). However, such a heterojunction is unfavorable for the effective separation of photogenerated charges (e−/h+) because both migrate to and accumulate in Fe2O3. Xia et al. [64], Liu et al. [65], and Mei et al. [44] proposed that, in order to achieve greater charge separation between Fe2O3 and TiO2, the Fermi levels of the two semiconductors must equalize. Thereafter, photogenerated electrons can flow from the CB of Fe2O3 to the CB of TiO2 under visible-light irradiation (Figure 12, after contact). Additionally, photogenerated e− can react with O2, H2O2, and S2O8²⁻, yielding O2•−, HO•, and SO4•−, respectively, while photogenerated holes react directly with AMX and with HO−, forming HO•.

AMX Transformation Byproducts and Toxicity Evaluation

The transformation products (TPs) of AMX in the photocatalysis, photocatalysis + H2O2, and photocatalysis + PS systems were investigated and identified using LC-HRMS-Orbitrap in positive and negative modes. The TPs detected and their corresponding mass spectra are presented in Table S3 and Figures S7–S14, respectively. The annotated Δmass (error) between the experimental and theoretical mass-to-charge (m/z) values of all proposed chemical formulae was less than ±2 ppm, with a FISh coverage score ≥43.50, which allows accurate assignment of elemental composition and elucidation of fragment ions, respectively. It must be noted that only results from the positive mode were elucidated, since all results from the negative mode showed FISh coverage ≤40%. As shown in Figure 13, three TPs (TP 384 (H1), TP 384 (H2), and TP 366) were detected in all processes studied. TP 384 (H1) and TP 384 (H2) correspond to penicilloic acid (C16H21N3O6S), i.e., the hydrolysis byproduct of AMX, which is formed via the reaction of an H2O molecule with the strained four-membered β-lactam ring of AMX [73,74]. TP 366 corresponds to amoxicillin 2′,5′-diketopiperazine (C16H19N3O5S), which is formed via the loss of H2O and further condensation of TP 384 (H1) or TP 384 (H2) [75].
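The TP labels quoted above track the measured m/z of each species. As a consistency check, the snippet below computes the monoisotopic masses of the two formulae named in the text and their protonated ions, assuming the TPs are numbered by the nominal [M+H]+ detected in positive mode (an assumption on our part, since the naming convention is not stated explicitly).

```python
# Monoisotopic mass check for the TP formulae quoted in the text.
# Assumption: TP numbers refer to the nominal [M+H]+ observed in positive mode.
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
        "O": 15.9949146221, "S": 31.97207069}
PROTON = 1.007276467

def monoisotopic(formula: dict) -> float:
    """Sum of monoisotopic atomic masses for a composition given as {element: count}."""
    return sum(MONO[el] * n for el, n in formula.items())

species = {
    "TP 384 (penicilloic acid, C16H21N3O6S)":             {"C": 16, "H": 21, "N": 3, "O": 6, "S": 1},
    "TP 366 (amoxicillin diketopiperazine, C16H19N3O5S)": {"C": 16, "H": 19, "N": 3, "O": 5, "S": 1},
}

for name, comp in species.items():
    m = monoisotopic(comp)
    print(f"{name}: M = {m:.5f} Da, [M+H]+ = {m + PROTON:.5f}")
# [M+H]+ comes out near 384.12 and 366.11, matching the TP 384 / TP 366 labels.
```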
TP 367 was detected in both the photocatalysis and photocatalysis + H2O2 treatments and can be attributed to a two-step successive transformation of AMX, i.e., (1) oxidative deamination and (2) reduction to an alcohol (Figure S15). The formation of oxidative deamination byproducts of β-lactam derivatives is ascribed to the abstraction of α-hydrogen atoms, leading to the formation of a carbonyl derivative [76]. In such a case, the >CH-NH2 moiety of AMX can be transformed into an imine moiety (>CH=NH); further cleavage of the carbon-nitrogen double bond then occurs, yielding a C=O moiety, TP (m/z) = 365. However, it must be noted that the intermediate TP (m/z) = 365 was not detected in any of the photocatalytic processes studied, since its carbonyl moiety is further reduced to an alcohol, forming the detected derivative TP 367. The involved reduction may be attributed to photocatalytic hydrogenation of TP 365 with the assistance of AMX as a "self" hydrogen donor (H+) and sacrificial agent. Similarly, Wei et al. reported simultaneous hydrogen production and degradation of AMX using Bi spheres-g-C3N4 [77] and MoS2@ZnxCd1−xS [78], supporting the assumption that persistent organic pollutants can be used as sacrificial electron donors. Conventionally, low-C-atom alcohols (e.g., methanol, ethanol, isopropanol, triethanolamine) and low-C-atom carboxylic acids (e.g., lactic acid) are used as sacrificial electron donors for photocatalytic hydrogenation and H2 production [5,79]. In this case, it can be assumed that AMX and its byproducts (i.e., low-C-atom species) mimic the role of lower-C-atom alcohols in photocatalytic hydrogenation/hydrogen-forming reactions.
Three oxidation TPs (TP 382 (S-O), TP 382 (E1), and TP 382 (E2)) were detected in both the photocatalysis + H2O2 and photocatalysis + PS treatment processes. TP 382 (S-O) was formed via attack of SO4•− and/or HO• on the sulfur atom of the thioether moiety through an electron transfer mechanism, as supported by molecular orbital calculations [74]. TP 382 (E1) and TP 382 (E2) are ascribed to monohydroxylation of AMX. The AMX reaction centers that are susceptible to HO• attack are illustrated in Figure 13. According to the MS2 results, hydroxylation on the methyl groups (C3a and C3b) and on the aromatic ring (C11–C14) was ruled out owing to the detection of the fragments at m/z 131.01610 and 107.04916, respectively (Figure S8). Moreover, the fragment proposed by Trovo et al., C7H13N2O3S (m/z = 189.0686), and other related fragments [73], which would account for hydroxylation at the N-8 position (Figure 13), were not detected in this study. Instead, a fragment at m/z = 189.06583 was detected, which was ascribed to C10H9N2O2, as proposed by Compound Discoverer™ (Figure S8). Both SO4•− and HO• are expected to attack the sulfur atom of AMX to generate a sulfur-centered radical cation via an electron transfer mechanism [74]. Thereafter, this radical cation can be deprotonated to generate the α-thioether radical, which is susceptible to hydroxylation (Figure 14). As such, TP 382 (E1) and TP 382 (E2) are proposed, since hydroxylation can occur on either the positive or the negative lobe of the α-thioether radical's vacant p-orbital [80].
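The distinction between the two candidate fragments rests on the ±2 ppm mass-accuracy criterion stated earlier; the short calculation below shows how far the previously proposed fragment mass lies from the measured value, which is why the C10H9N2O2 assignment is preferred.

```python
# Mass-accuracy check for the m/z 189 fragment assignment.
def ppm_error(measured: float, theoretical: float) -> float:
    """Signed mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

measured = 189.06583          # fragment observed in this study
proposed_earlier = 189.0686   # m/z of the fragment proposed by Trovo et al.

print(f"deviation from the earlier proposal: {ppm_error(measured, proposed_earlier):.1f} ppm")
# ≈ -14.6 ppm, far outside the ±2 ppm tolerance used for assignments here,
# so the observed fragment cannot be the previously proposed species.
```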
The evolution and conversion profiles of the TPs obtained from the three photocatalytic processes are presented in Figure 15a–c and correlated with the toxicity profiles in Figure 15d–f, respectively. As can be seen in Figure 15a (photocatalysis), four byproducts were detected: TP 366, TP 367, and the hydrolysis byproducts TP 384 (H1) and (H2). Compared with the process toxicity profile (Figure 15d), it can be observed that the sample reached the maximum of 4.15 toxicity units (more toxic than the initial level) at 25% AMX conversion. This result can be ascribed to the evolution of TP 366, which also reached its maximum area at the same point (i.e., 25% AMX conversion). Specifically, TP 366 is amoxicillin 2′,5′-diketopiperazine, a known rearranged hydrolysis product of AMX, which has already been detected in water effluents in Israel [75] and in river water samples in Spain [81]. Nevertheless, it must be noted that the toxicity dropped to 1.12 units after reaching 50% AMX conversion, which also coincides with the decrease in TP 366 concentration. Although TP 384 (H2) is the dominant byproduct in the photocatalysis process, it made only a minor contribution to the overall toxicity. TP 367 also made a minor contribution to the overall toxicity, despite its increased formation (at 50–99% AMX conversion extents). Clearly, the spike in toxicity units is directly linked to TP 366 formation.
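Because the argument here is essentially a correlation between one TP's abundance profile and the toxicity profile, a simple way to make it quantitative is to correlate the TP peak areas with the toxicity units across the sampled conversion extents. The arrays below are hypothetical placeholders (only the 4.15 and 1.12 toxicity values at 25% and 50% conversion are taken from the text; Figure 15 is not tabulated here), so the snippet only illustrates the kind of check one could run on the measured profiles.

```python
# Illustrative correlation of TP 366 abundance with sample toxicity across
# AMX conversion extents. Values other than the two toxicity points quoted
# in the text (4.15 TU at 25%, 1.12 TU at 50%) are hypothetical placeholders.
import numpy as np

conversion = np.array([0, 10, 25, 50, 75, 99])            # % AMX converted (sampling points)
toxicity   = np.array([1.0, 2.0, 4.15, 1.12, 0.9, 0.8])   # toxicity units
tp366_area = np.array([0.0, 0.4, 1.0, 0.35, 0.15, 0.05])  # normalized peak area (hypothetical)

r = np.corrcoef(toxicity, tp366_area)[0, 1]
print(f"Pearson r between TP 366 area and toxicity: {r:.2f}")
# A strongly positive r would support attributing the toxicity spike to TP 366.
```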
As shown in Figure 15b (photocatalysis + H2O2), seven byproducts were detected: TP 366, TP 367, TP 382 (E1 and E2), TP 382 (S-O), and TP 384 (H1 and H2). Compared with the process toxicity profile (Figure 15e), it can be observed that the sample reached the maximum of 3.01 toxicity units (more toxic than the initial level) at 10% AMX conversion. This result can be ascribed to the combined toxicity of TP 382 (S-O) with TP 366 and TP 384 (H1). It must be noted that TP 382 (S-O) also reached its maximum area at the same point (i.e., 10% AMX conversion). As reported in the literature, TP 382 (S-O) was found to contribute to the overall toxicity of persulfate-treated AMX aqueous solution [9]. Accordingly, the toxicity dropped to 1.52 units upon reaching 25% AMX conversion, which coincides with the decrease in TP 382 (S-O) concentration. The maximum of TP 366 was reached at 50% AMX conversion, exhibiting no abrupt effect on the toxicity of the sample. Such results may be ascribed to an "antagonistic" effect of other TPs, such as the presence of TP 384 (H1), which may have eventually led to the reduced toxicity of TP 366.

In Figure 15c (photocatalysis + PS), six byproducts were detected: TP 366, TP 382 (E1) and (E2), TP 382 (S-O), and TP 384 (H1 and H2). Compared with the process toxicity profile (Figure 15f), it can be observed that the sample reached the maximum of 2.53 toxicity units (more toxic than the initial level) at >99% AMX conversion. This result can be ascribed to the increased formation of TP 382 (E1) and (E2), as well as TP 382 (S-O), which also reached their maximum concentrations at the same point (i.e., >99% AMX conversion). All remaining TPs (i.e., TP 366, TP 367, and TP 384 (H1 and H2)) showed no synergistic and/or antagonistic effect on the overall toxicity.

Stability Test

Stability tests were performed for three consecutive cycles using the immobilized 5% Fe2O3/TiO2 photocatalyst under the optimum conditions obtained in Section 3.2. As shown in Figure 16, AMX conversion of >99% was achieved in all three consecutive cycles of the photocatalytic experiments containing PS and H2O2. However, only 95% and 85% AMX conversions were achieved in the second and third cycles, respectively, of the sole photocatalysis process. The loss of activity of the immobilized photocatalyst during photocatalysis (without oxidant) in consecutive cycles was mainly due to overexposure (3900 min/cycle) compared to the processes containing PS and H2O2 (380 and 720 min/cycle, respectively).
Conclusions

Fe2O3/TiO2 nanocomposites were successfully prepared via an impregnation/calcination technique using TiO2-P25 and Fe(NO3)3·9H2O. XRD and RS analyses revealed that the obtained iron oxide was hematite, α-Fe2O3. Moreover, XRD, RS, XPS, and SEM/EDXS showed successful incorporation of α-Fe2O3 with TiO2. DRS results showed improved visible-light absorption and a decrease in the overall bandgap values of the Fe2O3/TiO2 nanocomposites upon increasing the α-Fe2O3 content. Electrochemical experiments (EIS and photocurrent responses) revealed improved charge separation (e−/h+) of the obtained nanocomposite compared to its individual components (i.e., TiO2 and α-Fe2O3). Specifically, 5% (w/w) Fe2O3/TiO2 showed the highest photocatalytic activity on the basis of the preliminary photocatalytic experiments as well as the PL spectroscopy results. The results obtained from RSM modeling showed optimum conditions of [PS] = 1.873 mM and pH 4.808. Photocatalysis + PS achieved the fastest AMX conversion, possessing a higher zero-order rate constant (k = 1.51 × 10⁻⁷ M·min⁻¹) compared to photocatalysis + H2O2 (k = 1.11 × 10⁻⁷ M·min⁻¹) and photocatalysis only (k = 0.35 × 10⁻⁷ M·min⁻¹).
ROS scavenging showed that photogenerated h+ played the major role in AMX conversion in all processes. Toxicity changes of the AMX solution were associated with TP 366 during photocatalysis, TP 382 (S-O) during photocatalysis + H2O2, and hydroxylated TPs (i.e., TP 382 (S-O) and TP 382 (E1 and E2)) during photocatalysis + PS. It is important to note that these AMX TPs greatly affected the toxicity of the AMX solution during treatment.

Data Availability Statement: The data presented in this study are available upon reasonable request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
Transcription Factor STE12α Has Distinct Roles in Morphogenesis, Virulence, and Ecological Fitness of the Primary Pathogenic Yeast Cryptococcus gattii ABSTRACT Cryptococcus gattii is a primary pathogenic yeast, increasingly important in public health, but factors responsible for its host predilection and geographical distribution remain largely unknown. We have characterized C. gattii STE12α to probe its role in biology and pathogenesis because this transcription factor has been linked to virulence in many human and plant pathogenic fungi. A full-length STE12α gene was cloned by colony hybridization and sequenced using primer walk and 3′ rapid amplification of cDNA ends strategies, and a ste12αΔ gene knockout mutant was created by URA5 insertion at the homologous site. A semiquantitative analysis revealed delayed and poor mating in ste12αΔ mutant; this defect was not reversed by exogenous cyclic AMP. C. gattii parent and mutant strains showed robust haploid fruiting. Among putative virulence factors tested, the laccase transcript and enzymatic activity were down regulated in the ste12αΔ mutant, with diminished production of melanin. However, capsule, superoxide dismutase, phospholipase, and urease were unaffected. Similarly, Ste12 deficiency did not cause any auxotrophy, assimilation defects, or sensitivity to a large panel of chemicals and antifungals. The ste12αΔ mutant was markedly attenuated in virulence in both BALB/c and A/Jcr mice models of meningoencephalitis, and it also exhibited significant in vivo growth reduction and was highly susceptible to in vitro killing by human neutrophils (polymorphonuclear leukocytes). In tests designed to simulate the C. gattii natural habitat, the ste12αΔ mutant was poorly pigmented on wood agar prepared from two tree species and showed poor survival and multiplication in wood blocks. Thus, STE12α plays distinct roles in C. gattii morphogenesis, virulence, and ecological fitness. immunodeficiency virus-AIDS patients from Thailand, parts of Africa, and South and Central America (4,37,42,71). Similarly, our analysis of a collection of Cryptococcus isolates from AIDS patients in Southern California revealed about 12% C. gattii strains (12). More recently, an ongoing C. gattii outbreak in healthy humans and animals on Vancouver Island (British Columbia, Canada) has caused multiple fatalities; this is the first documented outbreak in North America (43,83). Thus, C. gattii continues to pose serious public health problems for immunocompromised and healthy individuals worldwide. Concomitant reports of severe and often fatal disease in animals also raise important concerns for the health of pets and wildlife. Along similar lines, a number of recent reports described the natural isolations of C. gattii from multiple tree species other than the Eucalyptus in Canada, Brazil, and India, thereby expanding the known geographical range and ecological niche of this pathogen around the globe (50,51,73). Still, the mechanisms behind host predilection and geographical distribution of C. gattii remain largely unknown, in part due to a paucity of systematic investigations. STE12 was first identified among a group of Saccharomyces cerevisiae sterile mutants defective in sexual conjugation and related processes (30). Subsequent studies have shown that STE12 is a transcription factor downstream of the mitogenactivated protein kinase (MAPK) cascade that controls mating, filamentation, and cell wall integrity (25,29). 
The STE12 is activated by two MAPKS, KSS1 and FUS1; has a transcription partner, TEC1; and has two negative regulators, DIG1 and DIG2 (59). More recently, STE12 was reported to directly regulate expression of over 29 yeast genes that mediate a range of cellular processes, cell cycle, mating projections, cell fusion, polarized growth and budding, stress and/or starvation, and signal transduction (31,75). The roles of STE12 homologues in various biological processes have thus so far been characterized in few fungi other than S. cerevisiae. In the filamentous model fungus Aspergillus nidulans, the steA mutant is sterile without ascogenous tissue and fruiting body, but there are no effects on either the sexual cycle-specific Hülle cells or a number of asexual developmental programs (87). Among the few pathogenic fungi studied, STE12 homologues have been implicated in the pathogenic process itself. In the rice blast pathogen Magnaporthe grisea, MST12 disruption caused a serious loss of virulence, as the mutant failed to infect rice leaves or onion epidermal cells, even through wound sites (68,69). The mutation of the STE12 homologue CST1 in Colletotrichum lagenarium, the causal agent of anthracnose disease of cucumber, led to failure of infectious hyphal production from the penetration structure appressoria. This caused a nonpathogenic phenotype on intact leaves but produced disease on wounded leaves (86). The STE12 homologue CPH1 from the common human yeast pathogen Candida albicans was found to be only partially involved in hyphal formation, when a mutant strain was tested on solid medium. However, a double mutation in CPH1 and another gene, PHD1, caused complete loss of filamentation as well as attenuation of virulence in mice (56,58). In contrast, STE12 from the haploid yeast Candida glabrata, which is increasingly implicated in drug-resistant candidiasis, was found necessary for nitrogen starvation-induced filamentation and for a wild-type level of virulence in mice (6). In the opportunistic human pathogen Penicillium marneffei, stlA gene mutation caused no defects in growth, asexual division, or dimorphic switching, while CLS12, the STE12 homologue in Candida (Clavispora) lusitaniae, was required for mating but dispensable for filamentation (5,94). In attempts to understand hyphal formation, a C. neoformans var. neoformans STE12␣ homolog of the S. cerevisiae STE12 gene was identified. Its overexpression caused hyphal projections and induction of the MF␣ pheromone gene, important for mating reaction, and CnLAC1, the gene that encodes the important virulence factor laccase; the possibility is therefore raised that STE12␣ provides a bridge between C. neoformans mating type and virulence via melanin production (90). Subsequent study of C. neoformans var. grubii by gene knockout revealed that STE12␣ was essential for haploid fruiting but was not essential for mating and virulence (95). The results of an STE12␣ overexpression study by Chang and coworkers were subsequently verified by gene knockout in C. neoformans var. neoformans to show that STE12␣ was essential for haploid fruiting, melanin production, and virulence but was dispensable for mating (11). A second homolog of C. neoformans var. neoformans STE12 was subsequently identified as MATa specific; it was essential for mating and virulence (10). Interestingly, C. neoformans var. grubii STE12␣ was later shown to be involved in reversal of hypervirulence of a crg1 mutant, which encodes the regulator of G protein signaling (89). 
Taken together, these findings suggested that STE12 is co-opted to perform distinct regulatory control functions be-tween closely related C. neoformans varieties. Therefore, we surmised that C. gattii STE12␣ is potentially a valuable target in studies aimed at unraveling the genetic basis for functional divergence among pathogenic Cryptococcus species. C. gattii STE12␣ gene characterization and mutation. Primers V564 and V565 were used to PCR amplify a 420-bp fragment of the STE12␣ gene from the genomic DNA of C. gattii NIH444 (see Table S1 in the supplemental material). The resulting 420-bp fragment was used as a probe for colony hybridization analysis of the C. gattii cosmid library (75). After selection of positive clones, a primer walk strategy was used to obtain the complete sequence of STE12␣. This sequence was deposited in GenBank (see "Nucleotide sequence accession number" below). To obtain the amino acid sequence of C. gattii Ste12␣p, we confirmed the C terminus of Ste12␣p by establishing the polyadenylation site of the STE12␣ transcript by 3Ј rapid amplification of cDNA ends. The resulting 1.8-kb product of PCR rapid amplification of cDNA ends was ligated into the TOPO-2.1 TA vector (Invitrogen) and was sequenced. Phylogenetic analysis of deduced Ste12␣p amino acid sequences was done with the PAUP v4.0b4a program (84), using a bootstrap method with a neighborjoining or maximum parsimony search. The ste12␣⌬ mutant was generated by homologous recombination of the ste12␣::URA5 disruption cassette (see Fig. S1A in the supplemental material). The transformants were selected on complete synthetic medium without uracil medium and confirmed for gene deletion at the homologous site by PCR and Southern blotting. The reconstituted strain was generated by homologous integration of the 5.3-kb fragment containing the full-length STE12␣ gene. Transformants were selected on 5-FOA medium to produce uracil auxotrophic strains resulting from replacement of URA5 at the ste12␣ locus with the wild-type copy of STE12␣. The homologous reconstitution of STE12␣ in the ste12␣⌬ mutant was again confirmed by PCR and Southern blotting (see Fig. S1B and C in the supplemental material). The original ura5 mutated by selection on 5-FOA of the reconstituted STE12␣ strain was restored as described previously (62). The reverse transcription (RT)-PCR was performed to confirm the absence and presence of the STE12␣ message in mutant and reconstituted strains (see Fig. S1D in the supplemental material). Phenotypic characterization. Capsule production was assayed by incubating cultures in YPD broth overnight at both 30°C and 37°C or in LIM medium for 7 days at 30°C with shaking (180 rpm). The size of the capsule was assessed qualitatively under the light microscope using India ink mounts. The urease, phospholipase B and Cu,Zn superoxide dismutase (SOD) enzyme activities of the test strains were determined as described previously (22,23,61). In vitro growth assessment of all the strains were determined by growing yeast cells in the YPD broth at 30°and 37°C with shaking (180 rpm). An aliquot of cell suspension was withdrawn at 3-h intervals, and the A 600 was recorded. A commercial yeast identification system was used to compare assimilation patterns (API 20C AUX; BioMerieux). Additionally, the YT MicroPlate (BiOLOG) was used for the comparison of 94 biochemical tests. The API ZYM test (BioMerieux) was performed per the manufacturer's instructions to compare various enzymatic activities of the test strains. 
Melanin production and laccase expression and regulation. Melanin production by the WT, ste12␣⌬ mutant, and ste12␣⌬ plus STE12␣ strains was assayed on Niger seed agar (46). Five microliters of cell suspension (10 7 cells/ml) from each strain was spotted on the agar surface, and cultures were incubated at 30°C for 3 to 6 days. Copper or cyclic AMP (cAMP)-mediated reversal of melanin pigmentation was tested on Niger seed agar supplemented with either 10 M to 200 M CuSO 4 (97) or 2 mM to 50 mM cAMP (3). The laccase activity in the test strains was determined by a previously published method (95). In brief, equal numbers of glucose-starved cells of each strain were used to determine the oxidation of the diphenolic substrate 2,2Ј-azinobis(3-ethylbenzthiazolinesulfonic acid) (ABTS) (IU of activity ϭ 0.01 A 420 absorbance unit in 30 min). The transcription of LAC1 was estimated with semiquantitative RT-PCR using the primer pair V1380 and V1381 (78). Mating and haploid fruiting. The WT, ste12␣⌬, and ste12␣⌬ plus STE12␣ strains were mixed with the compatible mating strain C. gattii NIH198 (MATa, serotype B) on V8 juice agar (pH 7.0) and were incubated at room temperature in the dark for up to 14 days (45,48). Plates were periodically checked for the appearance of hyphae at the edge of the fungal growth. These edges were examined under the light microscope for the presence of characteristic basidia and basidiospores. Fifty nonoverlapping areas were selected to estimate the extent of mating. V8 juice agar plates supplemented with 2 mM to 50 mM cAMP (Sigma) were used to rescue any mating defect in test strains (1). To further characterize the mating reactions, we used a scanning electron microscope (SEM). The petri plates with mating cultures were flooded with 2% glutaraldehyde in 0.1 M sodium cacodylate buffer (pH 7.4) and were allowed to stand for 3 to 4 h. The fixed cell cultures were flooded with 0.1 M sodium cacodylate buffer once, and blocks, about 4 mm 3 in size, of the agar showing hyphal growth on the surface were cut out. These blocks were dehydrated using a graded ethanol series, and they were then critical point dried using liquid carbon dioxide. The dried agar blocks were sputter coated with pure gold and viewed in the SEM. For the haploid fruiting assay, test strains were patched individually on dry, nitrogen-limited filament agar and incubated for 3 weeks at room temperature in the dark. The edges of fungal growth were photographed under the light microscope (91). Virulence assays. The pathogenic potentials of the WT, ste12␣⌬, and ste12␣⌬ plus STE12␣ strains were compared in mice models of meningoencephalitis (16,61). BALB/c and A/Jcr mice (male, 6 to 8 weeks) were procured from Charles River Laboratories, Inc. All procedures for safe and pain-free handling of animals were followed, per the recommendations of the Institutional Animal Care and Use Committee. The log-phase cultures of test strains were resuspended in sterile phosphate-buffered saline, pH 7.4, at a concentration of 1 ϫ 10 7 cells/ml. Groups of five or six mice were injected intravenously (i.v.) with 10 6 cells of each strain. The animals were given food and water ad libitum and were observed twice daily for any sign of distress. Mice that appeared moribund or in pain were sacrificed using CO 2 inhalation and cervical dislocation. Survival data from the mouse experiments were analyzed by Kaplan-Meyer survival curves using the SAS system (SAS Institute, Inc.). 
Histological lesions were examined in groups of three mice infected with 1 × 10⁵ cells (16). The mice were sacrificed on day 7 postinfection, and the whole brains were removed and fixed in Bouin's fixative. After 24 h of fixation, tissues were sectioned, and brain slices were washed in distilled water for several hours. Processing was done in a vacuum infiltration processor, the Tissue-Tek VIP 5 (Sakura Finetech), starting with 70% alcohol and proceeding through a series of dehydrating alcohols and xylenes into paraffin, for 15 min per station. Tissues were then embedded in paraffin blocks and sectioned at a 7-µm thickness. Sections were stained with hematoxylin and eosin and mucicarmine (Richard Allen Scientific). To determine the in vivo capsule size of the test strains, 2 mice per strain were injected i.v. with 10⁶ cells. At 3 days postinfection, mice were sacrificed using CO2 inhalation, and the brain tissues were smeared on a glass slide, digested with KOH, and examined under a light microscope using Nomarski optics (Olympus).

Neutrophil (PMN) fungicidal activity. Polymorphonuclear leukocytes (PMNs) were isolated from the peripheral blood of healthy human volunteers by Ficoll-Paque (Pharmacia LKB Biotechnology) centrifugation, as described previously (17). One-hundred-microliter aliquots of PMN, 100 µl of viable yeasts (5 × 10³), and 20 µl of pooled human serum were added to the wells of a 96-well tissue culture plate and incubated at 37°C in 5% CO2-95% air. After incubation for 4 h, the plates were centrifuged at 2,000 rpm for 10 min, and the supernatants were carefully aspirated through 27-gauge needles. The PMN were lysed by the addition of 100 µl of 0.05% Triton X-100, and the yeasts were serially diluted and plated on YPD agar. The YPD agar plates were incubated at 30°C for 2 to 4 days for quantitation of CFU. The results were expressed as the percentage of C. gattii killed [= (1 − CFU of experiment/CFU of inoculum) × 100]. Values of zero indicated no killing. The study involving human subjects was performed under the guidance of a protocol approved by the Institutional Review Board.

Assessment of pigmentation and growth on wood-based media. The pigmentation and growth of the test strains were assessed on both wood chip and wood extract agar media. Black cherry (Prunus serotina Ehrh) chips were received courtesy of Roger Dziengeleski, Finch, Pruyn & Co., Inc., Glens Falls, NY. Chips were also prepared from Juniperus virginiana (red cedar), Tsuga canadensis (eastern hemlock), Populus tremuloides (trembling aspen), and Acer saccharum (sugar maple) collected from a local forest. These materials were not pretreated with any physical or chemical processes. The wood agar was prepared by autoclaving 2.0 g of chips at 110°C for 30 min, followed by mixing with 100 ml of autoclaved 2% agar solution. Twenty-five-milliliter aliquots were dispensed in sterile petri plates and were designated wood agar. Wood extract agar was prepared by mixing 5.0 g of chips in 250 ml of water. This mixture was stirred with a magnetic stirrer for 3 h in the cold room, followed by centrifugation at 13,000 rpm for 30 min at 4°C. The extract was filter sterilized through a 0.22-µm membrane, mixed with 250 ml of autoclaved water containing 4% agar, and dispensed in 25-ml aliquots into sterile petri dishes. Both wood agar and wood extract agar plates were inoculated with a 5-µl aliquot of yeast cell suspension (10⁷ cells/ml), incubated for 10 days at 30°C, and observed periodically for growth and pigmentation.
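As a small worked example of the killing calculation described above for the PMN assay, the snippet below applies the percent-kill formula to hypothetical CFU counts (the numbers are illustrative, not data from this study).

```python
# Percent killing for the PMN fungicidal assay: (1 - CFU_experiment / CFU_inoculum) * 100.
# CFU values below are hypothetical and only illustrate the arithmetic.
def percent_killed(cfu_experiment: float, cfu_inoculum: float) -> float:
    """Return the percentage of yeast cells killed; 0 means no killing."""
    return (1 - cfu_experiment / cfu_inoculum) * 100

inoculum = 5_000    # 5 x 10^3 viable yeasts added per well (from the protocol)
recovered = 3_200   # hypothetical CFU recovered after 4 h with PMN

print(f"{percent_killed(recovered, inoculum):.1f}% of C. gattii killed")  # 36.0%
```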
Wood section microscopy. Black cherry wood blocks (ϳ1-cm cubes) were prepared from freshly pruned branches. The blocks were autoclaved at 110°C for 30 min. Sterile blocks were set on 2% sterile water agar in petri plates so that the vessel elements were vertically oriented. The blocks were inoculated with 5 l of cell suspension (optical density at 600 nm [OD 600 ] ϭ 1.0), incubated at 30°C, and observe for up to 8 weeks. Adequate moisture was maintained throughout the incubation period. Finally, the blocks with fungal growth were removed, fixed in formalin for 24 h, and embedded in paraffin, and tangential sections (5 to 7 m) were cut on a rotary microtome. Sections were deparaffinized, covered with glass coverslips, and examined without any staining under the light microscope. Nucleotide sequence accession number. The complete sequence of STE12␣ was deposited in GenBank under accession number AY168185. RESULTS STE12␣ gene characterization. The cloned C. gattii STE12␣ gene was a single copy with approximately 85% nucleotide identity to that of STE12␣ from C. neoformans var. grubii and C. neoformans var. neoformans (GenBank accession no. AAD 44111 and AAN 75715, respectively). The phylogenetic analysis of protein alignment of the C. gattii STE12␣ sequence with that of STE12␣ from C. neoformans var. neoformans, C. neoformans var. grubii, and 11 other fungal species revealed that this protein is segregated into distinct clades of ascomycetous yeasts, molds, and basidiomycetous yeasts, with good bootstrap support. Among basidiomycetes, C. neoformans var. grubii and C. neoformans var. neoformans formed one sister clade distinct from C. gattii. Interestingly, this clustering pattern was not evident for STE12a homologs (Fig. 1). Detailed characterization of the C. gattii STE12␣ protein revealed STE (amino acids 91 to 203) and C 2 H 2 -Zn 2ϩ motifs (amino acids 543 to 565) (see Fig. S2 in the supplemental material). The STE motif is common in STE12 homologs reported from all fungi, while two C 2 H 2 -Zn 2ϩ motifs have been reported only from STE12 homologs of ascomycetes (6,30,53,66,86). In silico analyses with deduced amino acid sequences using the PROSITE program (79) found a number of putative phosphorylation sites, including protein kinase C phosphorylation (11 sites), casein kinase II (19 sites), and cAMP-and cGMP-dependent protein kinase (3 sites). Transcriptional activation of STE12 by phosphorylation has been extensively studied in S. cerevisiae (64,80,93). The presence of phosphorylation sites has also been reported by bioinformatic analyses of STE12 of Magnaporthe grisea and Colletotrichum lagenarium (68,69,86). STE12␣ is required for efficient mating but not for haploid fruiting. Mating among compatible strains is an important attribute of C. neoformans strains, with potential relevance in epidemiology and virulence (32,33,45,47,48). Although a good mating reaction between NIH444 (MAT␣) and NIH198 (MATa) is generally seen in 4 to 5 days, we observed all mating tests for up to 14 days, to include any poor or delayed reaction on V8 juice agar. Both WT and ste12␣⌬ plus STE12␣ strains showed mycelial elements at the edge of the colonies as early as 5 days postincubation ( Fig. 2A, G). Under the SEM, these elements revealed well-formed basidia with chains of basidiospores (Fig. 2B, H). In contrast, mating reactions of ste12␣⌬ did not show any mycelial formation at the edges until 11 days postincubation under light microscopy (Fig. 2C). 
However, analysis under the SEM revealed isolated formation of hyphae with basidia and basidiospores (Fig. 2D), which were clearly visible at 11 days postincubation (Fig. 2E and F). Semiquantitative analysis using random examination of edges revealed 2 of 50 edges with positive mating reactions in the ste12αΔ strain, compared to 42 to 45 of 50 edges positive for mating reactions in the WT and ste12αΔ plus STE12α strains. C. neoformans var. grubii mating is regulated by both the G-protein alpha subunit Gpa1-cAMP and Gpb1-MAPK pathways (39). However, the addition of cAMP to V8 juice agar did not alter the mating reaction of the ste12αΔ mutant (Fig. 3). This indicated that STE12α is required for efficient mating of C. gattii through a pathway downstream of cAMP and/or independent of cAMP.

C. gattii STE12α was not involved in haploid fruiting, as all the strains were positive in the morphological assay (Fig. 4). Rough edges with aerial hyphae were visible by light microscopy for all three strains on filament agar. Further examination of these structures with the SEM showed hyphae, basidia, and basidiospores. This observation is in contrast to the reported essential role of STE12α in haploid fruiting of C. neoformans var. grubii and C. neoformans var. neoformans (10,83).

STE12α is not required for several cellular functions. Capsule formation is an important C. neoformans virulence attribute, and the ste12αΔ mutants of C. neoformans var. grubii and C. neoformans var. neoformans revealed smaller capsules than did the corresponding WT strains (9,11,95). Interestingly, the capsule size examined by microscopic mounts of India ink preparations of the C. gattii ste12αΔ mutant was identical to that of the WT and ste12αΔ plus STE12α strains when grown in YPD broth at 30°C or 37°C or in capsule induction medium (LIM) at 30°C, indicating that C. gattii STE12α is dispensable for capsule growth. Similarly, C. gattii ste12αΔ did not exhibit any defect in the expression of other virulence factors, including urease, phospholipase, and Cu,Zn SOD (data not shown). In contrast, the C. neoformans var. neoformans ste12αΔ mutant showed impairment of phospholipase and Cu,Zn SOD enzyme activities (11,22,23). To unmask any unexpected phenotype, we used the API 20C AUX, API ZYM, and YT MicroPlate systems to compare assimilation and enzymatic activities of the C. gattii WT, ste12αΔ, and ste12αΔ plus STE12α strains, but no differences were observed (data not shown). Similarly, the C. gattii ste12αΔ mutant did not show any growth defect in the presence of chemicals used as C. neoformans cell wall or cell membrane inhibitors (44). Altogether, these results indicated that C. gattii STE12α did not regulate several of the putative virulence factors assayed in this fungal pathogen.

STE12α is required for melanin pigmentation and laccase expression. Melanization via C. neoformans laccase is an important virulence attribute, and it is downregulated in C. neoformans var. neoformans ste12αΔ mutants (10,11,76,92). Interestingly, the C. gattii ste12αΔ mutant was also defective in melanization and showed weak yellowish pigmentation compared to the dark brown pigmentation formed by the WT and ste12αΔ plus STE12α strains on Niger seed agar at 3 days postincubation (Fig. 5A). Supplementation of Niger seed agar with CuSO4, a metal important in laccase induction, did not restore melanin pigmentation in the mutant strain (data not shown). However, supplementation of the medium with cAMP, which is important in the transcriptional regulation of laccase via parallel or multipath pathways (39), restored melanin pigmentation in the C. gattii ste12αΔ mutant (Fig. 5A).
FIG. 4. STE12α is not required for C. gattii haploid fruiting. The light microscopic (left panel; magnification, ×100) and SEM (right panel; magnification, ×6,000) analyses of haploid fruiting of the WT, ste12αΔ, and ste12αΔ plus STE12α strains are shown. The C. gattii strains were patched on dry, nitrogen-limited filament agar medium. The plates were incubated for 3 weeks at room temperature in the dark. The morphological assay showed characteristic filaments, basidia, and basidiospores.

The reduced melanin pigmentation of the C. gattii ste12αΔ mutant was caused by defects in laccase enzyme activity and laccase transcript levels under glucose repression conditions (Fig. 5B and C). These results indicated that STE12α regulates the expression of laccase in C. gattii.

STE12α is required for a wild-type level of virulence in mice. As C. gattii causes disease in both immunocompromised and healthy individuals, we utilized immunocompetent BALB/c and C5-deficient A/Jcr mouse strains for the comparison of pathogenicity. The Kaplan-Meier survival curves for mice infected with the three strains are shown in Fig. 6. A/Jcr mice infected with the wild-type strain survived for 4 days, while similarly infected BALB/c mice survived for 10 days. The survival after infection with the reconstituted strain was 11 days for A/Jcr and 14 days for BALB/c mice. In contrast, mice of both strains infected with the C. gattii ste12α mutant strain survived as long as 29 and 30 days, respectively (P < 0.0005). Thus, a significant reduction in the virulence of the ste12α mutant strain was observed for both complement-deficient A/Jcr mice and immunocompetent BALB/c mice. The ste12αΔ mutant produced much smaller cysts in the brains of both BALB/c and A/Jcr mice, and these cysts contained fewer yeast cells. On the contrary, the WT and ste12αΔ plus STE12α strains produced large cysts containing several to many yeasts. It was interesting to note that all strains tested divided relatively rapidly and produced more lesions in the brains of infected A/Jcr mice than in those of infected BALB/c mice (Table 1), indicating that A/Jcr mice are relatively more susceptible than BALB/c mice to C. gattii infection. Light microscopy of brain smears revealed that all test strains produced large capsules in the infected mice (Fig. 6B).

FIG. 5. STE12α is required for melanin production, laccase enzyme activity, and LAC1 transcript expression. (A) C. gattii strains grown in YPD broth overnight were washed with phosphate-buffered saline, and a 5-µl suspension of 10⁷ cells/ml was spotted on Niger seed agar with or without cAMP and incubated for 3 days at 30°C for melanin production. (B) Laccase enzyme activity from equal numbers of glucose-starved cells (5 h) was determined by measuring the oxidation of the diphenolic substrate ABTS (IU of activity = 0.01 A420 absorbance unit in 30 min). Results are the means ± standard deviations from three individual experiments. The absence of laccase activity in the mutant is consistent with the initial loss of melanization in this strain. (C) Semiquantitative RT-PCR to examine the expression of the LAC1 gene in all of the test strains. Total RNA from each strain was isolated and reverse transcribed (cDNA) with 100-ng aliquots in 1:5 serial dilutions. Actin was used as a control.
This observation was consistent with the large capsule size observed in vitro. Interestingly, the ste12αΔ mutant was significantly more susceptible to in vitro PMN killing than the WT and ste12αΔ plus STE12α strains (Fig. 7). However, in vitro H2O2 sensitivity was similar in all three strains, raising the possibility that highly reactive oxygen intermediates may be responsible for the enhanced killing of the ste12αΔ mutant strain by human PMN. Collectively, these results indicated that functional STE12α is critical for C. gattii survival and multiplication in the face of mammalian host defense mechanisms.

FIG. 6. STE12α is required for virulence. (A) C. gattii strains were grown overnight in YPD broth, washed with phosphate-buffered saline, and counted, and 100 µl of suspension containing 10⁶ cells was injected intravenously into 5 (each) BALB/c and A/Jcr mice. Mice were monitored twice daily until moribund. (B) Two mice infected i.v. with 10⁶ cells were sacrificed at 3 days postinfection, and brain tissues were smeared on microscope slides and examined under a microscope using Nomarski optics (magnification, ×380).

STE12α is involved in wood utilization. Many reports have described natural isolations of C. gattii from wood hollows and tree detritus from around the world (28, 36, 50, 51, 73, 88). Additionally, experimental inoculations of almond tree seedlings with C. gattii showed that the fungus remained viable and could be recovered from plant tissues 100 days postinfection, indicating its potential for survival in planta (38). It has previously been proposed that C. neoformans laccase plays an important role in lignin degradation in wood hollows (67,72,96). It was interesting that the ste12αΔ mutant was defective in pigmentation and appeared almost white, as opposed to the dark brown to red pigment produced by the wild-type and reconstituted strains on water agar containing wood chips from black cherry and eastern hemlock trees (Fig. 8A). The loss of pigmentation in the mutant was not due to poor growth, as serial dilutions of all the strains on water agar containing either 2% wood chips or 2% wood chip extract exhibited similar growth patterns (Fig. 8B). It is important to note that the pigmentation observed on the wood chips from black cherry and eastern hemlock was not a universal phenomenon, as wood chips from red oak, balsam fir, sugar maple, or trembling aspen did not induce pigmentation in any of the strains tested. This indicated that these tree species either lacked specific substrate(s) required for pigmentation or that the conditions for pigmentation were not appropriate in the laboratory setting. Next, we examined the role of C. gattii STE12α in tree colonization under simulated conditions. Sections of black cherry wood blocks surface inoculated with the WT and ste12αΔ plus STE12α strains showed abundant yeast cells in the vessels, with heavy pigmentation deep within the wood block. A few budding cells were also seen (Fig. 9A and B). Similar wood block sections inoculated with the ste12αΔ mutant showed very few yeast cells, with negligible brown pigment (Fig. 9C). Furthermore, the total numbers of yeast cells, counted from 10 representative sections, were approximately 40 ± 16 to 19 for the WT and ste12αΔ plus STE12α strains and 13 ± 10 for the ste12αΔ mutant (P < 0.05). These results indicated that pigmentation might be crucial for C. gattii survival, multiplication, and possibly invasion in the face of plant host defense mechanism(s).
Further experimental studies are imperative to test this possibility. DISCUSSION We have carried out a detailed characterization of the transcription factor STE12α in C. gattii biology. Currently, this pathogen is the cause of serious public health concerns due to its infectivity for AIDS patients and healthy individuals, its fatal outbreaks involving humans and animals, and its expanding geographical range, especially in North and South America (8,12,43). Experimental studies to determine what makes C. gattii such a potent pathogen are sparse. STE12α is among the very few transcription factors studied by the use of knockout mutants in both saprobic and pathogenic fungi. Therefore, STE12 is a valuable target for molecular pathogenesis studies to define C. gattii properties that are either distinctive or shared with other pathogens. Previously, it was reported that STE12α was located within the MATα locus of the C. gattii genome, along with elements of the MAPK signaling cascade, pheromones, receptors involved in mating, housekeeping genes, and putative open reading frames of unknown function (31,75). The overall organization of the mating locus in C. gattii is quite similar to that in C. neoformans var. grubii and in C. neoformans var. neoformans (31,41,75). However, the order and orientation of STE12α vis-à-vis other genes in the locus are not conserved in these fungi, raising the strong possibility that changes in STE12α expression may have occurred due to the positional effect of neighboring genes, as has been reported for other eukaryotic genomes (2,20,35,85). C. gattii STE12α was required for robust mating, as the ste12αΔ mutant showed poor mating compared to the wild-type and reconstituted strains. Thus, unlike S. cerevisiae and other ascomycetes, the C. gattii ste12αΔ mutant was not sterile but merely mating impaired; it shared this property with similar mutants from C. neoformans var. grubii and C. neoformans var. neoformans (11,95). Taken together, our study and the two studies just cited provide a unique demonstration of a gene function conserved across all three pathogenic Cryptococcus forms (Table 2). Future studies with a congenic pair of C. gattii strains (MATα and MATa) and genetic epistasis experiments are clearly warranted to define the extent of the conserved role of STE12α in C. neoformans and C. gattii. Conventionally, mating studies in C. neoformans are carried out over a week, a length of time which, in our experience, proved insufficient for the knockout strain. However, when the observation period was extended to 2 weeks and when SEM was used for closer scrutiny of cells at edges of cocultures, a few spots with sparse mating reactions were found in the mutant strain. Almost all fungal STE12 knockout mutants to date are reported to have mating or morphogenetic defects, a fact which affirms the central role of this transcriptional regulator in developmental programs. Our current data, taken together with the published literature from C. neoformans var. grubii and C. neoformans var. neoformans, do not identify the mechanism behind the observed partial defect in mating in the mutant strains. Perhaps other transcription factors play an important role in this process, either in association with or independent of STE12. Another Cryptococcus developmental program, termed haploid or monokaryotic fruiting, involves filamentation and sporulation by MATα strains; it is reported to be defective in ste12αΔ mutants of C. neoformans var. grubii and C. neoformans var.
neoformans (11,95). This observation is similar to the complete or partial inhibition of filamentation seen in S. cerevisiae and Candida albicans ste12Δ mutants (55,56). Interestingly, the C. gattii ste12αΔ mutant showed normal haploid fruiting, suggesting either that STE12α is redundant or that there is some independent regulatory control of this pathway. Thus, unlike the role of STE12α in mating, the functional role of this gene in filamentation differs between C. gattii and C. neoformans and thus represents a gene function that has diverged in these closely related pathogens. The importance of this observation is difficult to assess at present, as we do not know how widespread and relevant the phenomenon of haploid fruiting is for C. gattii biology and virulence. Recently, it has been suggested that C. neoformans var. grubii haploid fruiting represents a self-fertilizing form of sexual reproduction in the absence of the out-crossing mode of reproduction due to the rarity of the occurrence of MATα mating partners (54). The profound defects in laccase expression, enzyme activity, and melanin production in the C. gattii ste12αΔ mutant strain are consistent with earlier reports from C. neoformans var. neoformans on down regulation or up regulation of laccase levels due to knockout or overexpression of C. neoformans var. neoformans STE12α (11,90). In contrast, melanin production was reported to be unchanged in the C. neoformans var. grubii ste12αΔ mutant (95). The latter observation highlights another instance in which gene regulation has been conserved between C. gattii and C. neoformans var. neoformans but not in C. neoformans var. grubii, even though C. neoformans var. neoformans and C. neoformans var. grubii are closely related phylogenetically. The results with ste12αΔ mutants in C. gattii and C. neoformans var. neoformans raise the strong possibility that as-yet-undefined activators and/or suppressors of melanin participate in direct or indirect interactions with STE12α. Importantly, the addition of cAMP reversed the melanin defect in the C. gattii ste12αΔ mutant strain; a possible interpretation is that cAMP acts to rescue laccase repression by STE12α via parallel or multipath pathways (39). An important role for the cAMP signaling pathway in regulation of C. neoformans melanin production has already been established by means of a number of gene knockout mutants (3,72). Overall, there is a strong possibility of cross talk between elements of the MAPK and cAMP pathways, which are known to play important roles in mating. A recent discovery of C. neoformans DEAD box RNA helicase VAD1 as a regulator of multiple virulence-associated genes is also highly relevant in this context (67).

FIG. 7. STE12α is required for fungicidal activity of human neutrophils (PMNs). Human PMN at the effector-to-target cell ratio of 10:1 were inoculated with opsonized C. gattii strains (5 × 10^3) for 4 h at 37°C in 5% CO2-95% air. The percent C. gattii killed was determined by using the following equation: [1 − (CFU of experiment / CFU of control)] × 100. Results are means ± standard deviations from PMNs of three individual donors. The asterisk denotes a P value of <0.05 compared to wild-type and reconstituted strains.

Table 1 footnotes: a, Mean ± standard deviation of foci counted in 9 brain sections of 3 mice in each group; b, mean ± standard deviation of yeast cells counted in all of the foci divided by the number of foci.
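The killing-assay arithmetic quoted in the Fig. 7 legend above can be made concrete with a short helper. This is only an illustrative sketch, not code from the original study; the CFU values in the example are hypothetical, and it assumes the usual convention that percent killing is computed from the ratio of experimental to control CFU.

```python
def percent_killed(cfu_experiment: float, cfu_control: float) -> float:
    """Percent of C. gattii killed by PMN relative to a no-PMN control.

    Implements [1 - (CFU of experiment / CFU of control)] x 100, the form of
    the killing-assay equation given in the Fig. 7 legend above.
    """
    if cfu_control <= 0:
        raise ValueError("control CFU must be positive")
    return (1.0 - cfu_experiment / cfu_control) * 100.0

# Hypothetical example: 1,800 CFU recovered after PMN co-incubation
# versus 5,000 CFU in the control well -> 64.0% killing.
print(round(percent_killed(1800, 5000), 1))
```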
The vad1Δ mutant was melanin deficient due to up-regulation of NOT1 (CDC39); the latter acted as an intermediary transcriptional repressor of laccase. NOT1 is part of the Ccr4-Not complex involved in global control of gene expression (21). The involvement of NOT1 in the transcriptional regulation of laccase raised the possibility of direct or indirect interactions with STE12 because some earlier studies have described involvement of CDC36 and CDC39 as negative regulators of the pheromone response in yeast (24,63). Thus, multiple lines of evidence suggest that regulatory elements in the pheromone response pathway are involved in regulation of laccase expression. Further examinations will be imperative to define the regulatory network for C. gattii laccase. The evidence for other regulatory controls of melanization in fungi comes from the plant pathogens Colletotrichum lagenarium and Magnaporthe grisea. Their respective ste12Δ mutants have no defects in melanin, which is uniquely produced via polyketide synthesis (34). Melanin biosynthesis is controlled in these two organisms via novel transcription factors containing a Zn(II)2Cys6 binuclear cluster motif and a Cys2His2 zinc finger similar to motifs present in STE12 (86). The C. gattii ste12αΔ mutant was significantly less virulent than the wild-type and reconstituted strains in both immunocompetent (BALB/c) and C5-deficient (A/Jcr) mice. This observation was intriguing because we did not observe a down regulation of SOD, phospholipase, and urease or a smaller capsule size, all prominent characteristics of the less-virulent C. neoformans var. neoformans ste12αΔ mutant, although the C. gattii ste12αΔ mutant had a pronounced melanin defect (10,11). Even though the virulence defect was not seen in the C. neoformans var. grubii ste12α mutant, this mutant also produced a smaller capsule; capsule size is a very important virulence factor (95). The reduced virulence of C. gattii ste12αΔ was one more instance in our study when C. gattii and C. neoformans var. neoformans display an important role for STE12α in animal pathogenesis, functionally diverged from C. neoformans var. grubii. Additionally, we noticed that the C. gattii ste12αΔ strain was severely defective in replication and in cyst formation in the brains of infected mice. The mutant also showed heightened sensitivity to oxidative and nonoxidative killing by phagocytic cells, as evident from experiments with purified human PMN. A common theme in all of the virulence-related observations was the reduced laccase and melanin levels in the C. gattii ste12αΔ mutant in vitro. Laccase in C. neoformans var. neoformans and C. neoformans var. grubii plays important roles in resistance to stress, phagocytic killing, and conversion of dopamine to immunomodulatory products in the host brain (57,65). By implication, laccase and melanin are likely to play similar roles in C. gattii. The fact that albino strains are not usually seen in clinical cases of cryptococcosis is further evidence for an essential role of melanin in vivo. We strongly suspected that environmental/nutritional sensing by STE12α by itself is an important fitness attribute missing from ste12αΔ strains, perhaps sufficient to account for their poor pathogenic potential.
Support for this assumption comes from mutants of the human pathogen Candida glabrata and the plant pathogens Colletotrichum lagenarium and Magnaporthe grisea, which are all less pathogenic than their wild-type counterparts, in a melanin-independent manner (6,68,69,86).

FIG. 9. STE12α is involved in C. gattii utilization of wood. The black cherry wood blocks (~1-cm cubes) were surface inoculated with a 5-μl suspension of 10^8 cells/ml of each C. gattii strain. These blocks were placed on water agar in petri plates and incubated for 8 weeks at 30°C. Adequate moisture was maintained throughout the incubation period. Blocks with fungal growth were removed and processed for sectioning as illustrated in the accompanying schematics. The diagonal sectioning and selection of wood sections away from the inoculated surfaces allowed an estimation of fungal invasion (schematics). (a, b) WT-inoculated wood sections showing several pigmented yeast cells; (c, d) ste12αΔ mutant-inoculated wood section with few yeast cells with negligible brown pigment; (e, f) wood section inoculated with the ste12αΔ plus STE12α strain showing abundant brown-pigmented yeast cells. V, vessels; P, parenchyma; F, fiber; R, rays. Both rays and vessels showed fungal growth.

We carried out extensive phenotypic tests to find additional fitness defects in the mutant strain, but these efforts were not successful, possibly due to reliance on laboratory media that are unlikely to reveal subtle STE12α regulatory control over functional genes in vivo. More refined studies are needed to identify diverse gene functions regulated by C. gattii STE12α, as was achieved for S. cerevisiae STE12 by means of an exquisite genome-wide location and expression profiling approach (74). A remarkable observation in this study was the inability of the ste12αΔ mutant strain to produce robust pigmentation on wood chips simulating C. gattii's environmental niche. To our knowledge, this is the first instance in which a gene function has been directly related to the specialized environmental niche of C. gattii or C. neoformans. Our wood chip agar model for testing environmental fitness was validated by the fact that the wild-type and reconstituted strains exhibited robust survival and multiplication on these substrates and produced copious amounts of melanin-like pigment. The poor pigmentation of the ste12αΔ mutant strain could be due to its impaired ability to produce laccase. It may be relevant to recall that filamentous basidiomycetes are the major cause of wood damage in nature (white rot). This process involves breakdown of lignocellulose by laccase working in concert with other fungal enzymes, mediators, and toxic radicals (52). The essential role of laccase in lignin degradation has also been confirmed by the use of laccase-deficient mutants of the white rot fungus Pycnoporus cinnabarinus (27). A corollary of these destructive fungal activities is represented by the industrial applications of purified laccase in bioremediation (40,60). The results of our experiments suggest a model in which C. gattii STE12α senses appropriate nutrient/environment cues on the wood surface and responds by up regulation of laccase and melanization for cooperative utilization of substrate. This response was suboptimal in the mutant, due to the absence of Ste12p. Additionally, ste12αΔ mutant cells appeared defective in penetration and proliferation inside wood blocks compared to the WT and ste12αΔ plus STE12α strains.
We interpreted this observation as another manifestation of the ste12Δ-associated fitness defect, paralleling the observed defect in colonization and proliferation within mouse tissues (Fig. 9). Thus, STE12α appears to be an important regulatory gene for control of both the saprobic and the pathogenic aspects of the life cycle of Cryptococcus gattii.
2018-04-03T02:54:30.218Z
2006-07-01T00:00:00.000
{ "year": 2006, "sha1": "a3ea21a13e6d4272730dfede77a0ffc2121da378", "oa_license": null, "oa_url": "https://ec.asm.org/content/eukcell/5/7/1065.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "c69b1f86c0531a64fb541c789cb959cf65b0d607", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
271174250
pes2o/s2orc
v3-fos-license
Assessment and Rehabilitation Intervention of Feeding and Swallowing Skills in Children with Down Syndrome Using the Global Intensive Feeding Therapy (GIFT) Background: Children with Down syndrome (DS) experience more difficulties with oral motor skills, including chewing, drinking, and swallowing. The present study attempts to measure the preliminary effectiveness of Global Intensive Feeding Therapy (GIFT) in DS. GIFT is a new rehabilitation program addressing the specific difficulties and needs of each child, focusing on sensory and motor oral abilities. It follows an intensive schedule comprising 15 sessions over 5 consecutive days, with 3 sessions per day. The principles of GIFT are applied with specific objectives for DS. Methods: GIFT was preliminarily implemented among 20 children diagnosed with DS. To measure the efficacy of GIFT, the Karaduman Chewing Performance Scale (KCPS), the International Dysphagia Diet Standardization Initiative (IDDSI), and the Pediatric Screening–Priority Evaluation Dysphagia (PS–PED) were used. Data were analyzed using the Wilcoxon signed-rank test before (T0) and after intervention (T1) and at one-month follow-up (T2). The effect size was also measured for specific outcomes, using Kendall’s W. Results: Our findings revealed that children with DS showed no risk of dysphagia according to the PS–PED (mean score 2.80). Furthermore, statistically significant improvements in chewing performance were observed, as measured by the KCPS (p < 0.01), as well as in texture acceptance and modification, as measured by the IDDSI post-intervention (p < 0.01). For both the KCPS and IDDSI, a large effect size was found (Kendall’s W value > 0.8). Parents/caregivers continued using GIFT at home, and this allowed for a positive outcome at the one-month follow-up. Conclusions: GIFT proved to be effective in the rehabilitation of feeding and swallowing disorders in children with DS, as well as for food acceptance. Introduction Down syndrome (DS) is a genetic disorder caused by the partial or complete presence of an extra copy of chromosome 21 [1].In Europe, between 2011 and 2015, it is estimated that 8031 annual live births of children had a diagnosis of DS, with a prevalence of 10.1 per 10,000 live births [2].Children with DS are at a heightened risk of developing various clinical comorbidities that necessitate regular medical monitoring.These conditions include congenital heart defects, thyroid abnormalities, respiratory issues, obstructive sleep apnea, dysphagia, and gastroesophageal reflux disease [3][4][5][6]. Specific anatomical and functional features are commonly observed, including nasal bone depression; a flat facial profile; a high palate; hypotonic perioral muscles; a relatively large, protruding, and hypotonic tongue; a narrowed oropharynx; inclined palpebral fissures; strabismus; and dental anomalies in both number and shape [7].The presence of reduced mandibular development, which favors lingual protrusion, makes lip occlusion difficult and often causes dental problems such as agenesis or the presence of supernumerary teeth [1].Bruxism is more prevalent in DS individuals than in the general pediatric population [8]. 
Oral feeding difficulties have frequently been included in the literature among the characteristics of the syndrome [9,10].The effects of feeding and swallowing difficulties in children may lead to further medical problems (dehydration, malnutrition, growth retardation, reduction in muscle strength, weakening of the immune system), worsening the state of health and reducing potential rehabilitation [11], while increasing the hospitalization rate in the first 3 years of life of the DS population [9,12,13]. Some studies have demonstrated delayed food acceptance in DS children, with poorly coordinated movement in food management from oral to pharyngeal, difficulty in managing solid consistency, and overall reduced jaw control [14][15][16].Multiple factors contribute to impaired masticatory performance, including the extent of tooth contact; occlusal force; and the coordination of lip, cheek, tongue, and jaw movements.Children with DS often exhibit underdeveloped midfaces and malocclusion.The absence or weakness of tongue lateralization movements can hinder the transportation of food under the teeth and the formation of a cohesive food bolus within the oral cavity [17]. The characteristics described above lead to a delay in the development of oral motor skills in feeding and swallowing.In the literature, there is little evidence regarding the treatment of oral and sensory motor functions of feeding and swallowing in children with DS.The objective of this research is to measure the effectiveness of Global Intensive Feeding Therapy (GIFT) [18] in a group of children with DS. Materials and Methods A pilot study was carried out during the period from September 2022 to June 2023 at the Bambino Gesù Children's Hospital Eating and Swallowing Disorders Service. Participants This study included a convenience sample of children with a diagnoses of DS who had different functional limitations concerning oral feeding abilities.To be included in the study, participants met the following inclusion criteria: age 1-18, confirmed diagnoses with genetical examination, clients of Dysphagia Unit of the Bambino Gesù Children's Hospital IRCCS.Children with other concomitant syndromes or neurological deficits that could explain food aversion were excluded.Children with acquired or congenital deficits to the oral cavity, pharynx, or larynx were also excluded. Global Intensive Feeding Therapy (GIFT) GIFT [18] is an intensive rehabilitation program founded on the principles of neuroplasticity [19], individualized to address the unique challenges and needs of each child.Following clinical assessment, the child participated in a rigorous rehabilitation regimen consisting of 15 sessions over 5 consecutive days, with 3 sessions per day, corresponding to breakfast, morning snack, and lunch.This intensive training was conducted by a speech-language pathologist (SLP) in collaboration with the child's parents or caregiver.This allowed them to repeat the techniques at home in order to help the child generalize the skills learned with the SLP during the intensive treatment and to develop new ones.The main goals of GIFT are (a) to reduce oral, perioral, and upper limb hypersensitivity; (b) to promote the development of adequate chewing and swallowing skills; (c) to expand the food repertoire (in terms of quantity and variety); and (d) to reduce dysfunctional mealtime behavior. 
In this study, GIFT has focused on (a) stabilizing the correct postural alignment; (b) systematic desensitization and gradual exposure; (c) food texture adaptation; (d) chewing and swallowing abilities; (e) reducing and eliminating the use of infant devices (pacifier and bottle); (f) reducing inappropriate mealtime behaviors; and (g) educating and training caregivers.The GIFT protocol can be summarized as follows: Stabilize correct postural alignment.Before promoting oral sensory and motor skills, it is essential to provide the child with correct postural alignment, which includes proper alignment of the head, neck, and pelvis stabilization.Motor anomalies, which are more common in DS, are correlated with hypotonia, joint hypermobility, and ligamentous laxity.Specifically, bony anomalies of the cervical spine can produce atlanto-occipital and cervical instability.All these characteristics often lead to the development of abnormal postural control, with consequent instability and reduced performance [20,21]. Systematic desensitization and gradual exposure.This study centers on sensory challenges, which are crucial for food acceptance.The goal is to acclimate children to new foods by familiarizing them with various attributes such as smell, taste, shape, color, and texture.Through a structured series of sensory experiences, children will develop confidence with food.This process involves a hierarchical approach, starting with visual acceptance, then moving to interaction, smell, and touch-beginning with parts of the body furthest from the mouth and gradually advancing to the perioral area, tasting, and finally, eating the food [22][23][24].The protocol for introducing new foods consists of the following stages: (1) ensuring the child can visually tolerate the food when it is placed directly in front of them; (2) encouraging the child to touch, pick up, and transfer the food to another plate; (3) prompting the child to pick up, smell, and then place the food on another plate; (4) having the child pick up, kiss, and transfer the food to another plate; (5) getting the child to lick the food, making tongue contact; and (6) encouraging the child to touch the food with their teeth.The amount of food given is gradually increased with chewing guidance until the child can complete the entire meal, including both main courses. Food texture adaptation.The textures of the foods proposed to the child are coherent with starting the oral sensory and motor abilities of feeding and swallowing.The following rehabilitation process is recommended for children who eat only pureed foods: (a) substitution of industrial homogenized food with fresh mixed food; (b) division of the meal into two courses with a first and second course pureed; (c) gradual increase in texture (from pureed to minced food); (d) presentation of soft solid foods with the technique of guided chewing training; and (e) administration of a whole meal with the guided chewing training. Chewing and Swallowing Abilities.Subsequently, a chewing training regimen is implemented to consolidate or develop the child's oral motor abilities through guided chewing practice trough practice and repetition [25,26].Distinguishing tongue movements from jaw movements is crucial for developing tongue lateralization and posteriorizing skills.This practice enhances muscle tone and improves the child's ability to handle food and saliva within the oral cavity [27,28]. 
Reduce and eliminate the use of infant devices (pacifier and bottle).It is necessary for the correct development of the oro-facial structures and oral motor skills of feeding to gradually reduce, until eliminated, the use of a pacifier, bottle, straws, or bottles with rigid spouts and to introduce the use of a glass and administer pureed foods with a spoon in order to reduce dysfunctional tongue movement.The use of infant devices compromises sensory and motor oral experiences and the typical development of feeding abilities [29,30]. Behavioral problems during mealtime.After a first observation of behavioral issues, the child's behavioral problems during the meal are regulated by positive and negative reinforcement, such as using devices and/or interesting toys [30]. Caregiver's education and training.The integration of parents and caregivers into the rehabilitation intervention is considered a best practice in the literature for early intervention in pediatric rehabilitation [31].Initially, caregivers observe these methods, then gain supervised hands-on experience with their child.This allows them to learn the techniques and confidently apply them at home.Caregivers receive guidance on managing meals and performing rehabilitation techniques, which are continued after the intensive training period concludes. Assessment Tools and Outcome Measures Pediatric Screening-Priority Evaluation Dysphagia (PS-PED).The PS-PED is a screening tool used to assess the risk of dysphagia in pediatric patients.It consists of 14 items divided into 3 main domains: medical history, health status, and feeding condition. Patients who obtain a score from 0 to 6 present a low risk of dysphagia; patients who score from 7 to 14 are at medium-high risk of dysphagia [32]. Karaduman Chewing Performance Scale (KCPS).The KCPS is a functional instrument that is valid, reliable, quick, and easy to use clinically for assessing chewing function in children.It evaluates chewing function with the following scoring system: 0 indicates normal chewing function; 1 indicates the child chews but has some difficulty forming a food bolus; 2 indicates the child starts to chew but cannot keep the food in the molar area; 3 indicates the child bites but cannot chew; and 4 indicates the child cannot bite or chew.As the scale had not been translated and validated in Italian, an independent translation was conducted using the English version from the validation study by the original developers [33]. International Dysphagia Diet Standardization Initiative (IDDSI).The IDDSI is an entity which aims to determine the number of food texture and drink thickness levels for international standardized use [34].The IDDSI framework consists of a continuum of eight levels (0-7) in which drinks are measured from level 0 to 4, while foods are measured from level 3 to 7, as follows: level 3 liquidized/moderately thick; level 4 pureed; level 5 minced & moist; level 6 soft & bite-sized; and level 7 regular.The IDDSI ratings are intended to confirm the textural characteristics of food and liquid at the time of testing [35]. 
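The scoring conventions of the instruments just described can be summarized in a small helper. This is only an illustrative sketch based on the thresholds and level descriptions quoted in the text (PS-PED 0-6 low risk, 7-14 medium-high risk; KCPS levels 0-4); it is not part of the published instruments or the authors' software.

```python
def ps_ped_risk(score: int) -> str:
    """Map a PS-PED total score (0-14) to the risk band quoted in the text."""
    if not 0 <= score <= 14:
        raise ValueError("PS-PED total score must be between 0 and 14")
    return "low risk" if score <= 6 else "medium-high risk"

# KCPS levels as described above: 0 = normal chewing ... 4 = cannot bite or chew.
KCPS_LEVELS = {
    0: "normal chewing function",
    1: "chews but has difficulty forming a food bolus",
    2: "starts to chew but cannot keep food in the molar area",
    3: "bites but cannot chew",
    4: "cannot bite or chew",
}

print(ps_ped_risk(3))      # -> 'low risk'
print(KCPS_LEVELS[4])      # -> 'cannot bite or chew'
```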
Data Analysis Sociodemographic data and clinical characteristics were analyzed using frequency distributions, mean (SD), and median values. The KCPS, PS-PED, and IDDSI data were assessed at three time points: before treatment (T0), after treatment (T1), and during follow-up (T2). First, to measure variance in scoring, the Friedman test was used. The Friedman test compares the difference between more than two related groups, such as comparing the difference between three time points. The null hypothesis is that the distribution is the same across repeated measures. Kendall's W coefficient was used to measure the Friedman test effect size, using Cohen's interpretation guidelines of 0.1 to <0.3 (small effect), 0.3 to 0.5 (moderate effect), and >0.5 (large effect). However, to locate where the difference lies, a post-hoc test is needed. The Wilcoxon signed-rank test, a common nonparametric test for paired data involving pre- and post-treatment measurements from independent units of analysis, was employed to investigate the differences among median values. The significance level was set at alpha < 0.05. Results The study lasted six months and involved a total of 20 children (8 F and 12 M) with a mean (SD) age of 4.85 (2.43). We first used the PS-PED as a screening tool to evaluate the risk of dysphagia, and we found no risk of dysphagia (mean score 2.8); therefore, it was possible to approach GIFT without any risk of aspiration and penetration. The whole sample participated in each session during the training. Sample characteristics are summarized in Table 1. Despite the sample showing no risk of dysphagia, we decided to report the main results of each item of the PS-PED, highlighting that the items that mainly contributed to the PS-PED score were #4, #11, #13, and #3. Results are summarized in Table 2. Concerning chewing abilities, the Friedman test demonstrated statistically significant differences with p < 0.001 and a large effect size (Kendall's W value 0.89); the Wilcoxon signed-rank test on the KCPS showed statistically significant differences (p < 0.001) in scoring at the different times of administration. Table 3 summarizes the mean (SD) and median (IQR) scores for the KCPS. Regarding chewing performance, the KCPS revealed a significant difference in the total score both between T0 and T1 (p < 0.01) and between T0 and T2 (p < 0.01), as well as between T1 and T2 (p < 0.05).
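A sketch of the statistical pipeline described in the Data Analysis paragraph is given below. It is not the authors' code; it assumes the scores are available as one array per time point (T0, T1, T2), uses hypothetical KCPS values, and relies on SciPy's Friedman and Wilcoxon tests, with Kendall's W derived from the Friedman chi-square in the usual way (W = chi2 / (N * (k - 1))).

```python
import numpy as np
from scipy import stats

# Hypothetical KCPS scores for the same children at T0, T1, T2.
t0 = np.array([4, 4, 4, 3, 4, 4, 3, 4, 4, 4])
t1 = np.array([3, 3, 4, 2, 3, 3, 2, 3, 3, 4])
t2 = np.array([2, 3, 3, 1, 3, 2, 2, 3, 3, 3])

# Omnibus test across the three related time points.
chi2, p_friedman = stats.friedmanchisquare(t0, t1, t2)

# Effect size: Kendall's W = chi2 / (N * (k - 1)), N subjects, k conditions.
n_subjects, k_conditions = len(t0), 3
kendalls_w = chi2 / (n_subjects * (k_conditions - 1))

# Post-hoc paired comparisons (two-sided Wilcoxon signed-rank tests).
posthoc = {
    "T0 vs T1": stats.wilcoxon(t0, t1),
    "T0 vs T2": stats.wilcoxon(t0, t2),
    "T1 vs T2": stats.wilcoxon(t1, t2),
}

print(f"Friedman chi2={chi2:.2f}, p={p_friedman:.4f}, Kendall's W={kendalls_w:.2f}")
for label, res in posthoc.items():
    print(label, f"W={res.statistic:.1f}", f"p={res.pvalue:.4f}")
```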
Regarding the modification of textures accepted by the child, the Friedman test revealed statistically significant differences in score variability with a p < 0.001 with a large effect size (Kendall's W value 0.91), while the Wilcoxon signed-rank test of the IDDSI showed a significant difference was found immediately post-training, as well as between T0 and T2 with a p < 0.01.No differences were found between T1 and T2 (p = 0.10).Table 4 reports both the mean (SD) and median (IQR) scores for the IDDSI, as well as the frequency of IDDSI levels across different times of the intervention.Additionally, while behavioral issues were not specifically investigated, it is notable that food refusal, anger, and crying gradually reduced during the treatment.By addressing both the sensory and motor aspects of the oral phase, the child improved their skills and was able to complete the tasks.Consistent sensory adaptation and chewing training are crucial for achieving better results and reducing problem behaviors.In the study by Ferrari and colleagues [27], an inverse relationship is observed between acceptance and exhibited behaviors.To maintain acceptance during a chewing intervention, it is beneficial to find the appropriate texture for the child's current skill level, as foods with higher consistency tend to trigger problem behaviors.As oral sensory and motor skills improve, the occurrence of dysfunctional behaviors decreases.Maladaptive behavior, or problem behavior, generally includes oppositional behaviors such as tantrums, aggression, and disobedience, which interfere with optimal functioning and engagement with the environment.Although individuals with DS generally exhibit less maladaptive behavior compared to those with other developmental disorders, it is estimated that about one-third of individuals with DS have significant levels of maladaptive behavior [36].Children with DS often display low-level aggressive behaviors and food-avoidant eating behaviors, such as fussiness and slowness in eating.Sensory processing disorders likely impact the maladaptive behavior profile in this population [37]. Discussion This investigation provides preliminary evidence of the effectiveness of GIFT in children with Down syndrome (DS), concerning improvements in chewing performance (KCPS p < 001), texture acceptance and modification (IDDSI p < 0.001), both with a large effect size (Kendall's W value > 0.8).A large effect size means that our research findings have a practical significance. Risk of Dysphagia.One of the primary findings of our study is that children with DS demonstrated a low risk of dysphagia according to the PSPED score, with all participants scoring between 0 and 5, indicating a low risk [32].The items that most contributed to the PSPED score were #4, 11, 13, and 3. 
Specifically, nearly 25% of the sample responded positively to item #4, which relates to respiratory and/or swallowing system alterations or malformations.The study by Jackson and colleagues [9] suggests that children with DS are more likely to have dysphagia due to anatomical and pulmonary characteristics.Consequently, parents often opt for safer foods to avoid dysphagia.However, childhood is a critical period for developing oral feeding skills, and feeding and swallowing behaviors can be influenced by both anatomical conditions and lack of experience.Additionally, 65% of the sample had gastrointestinal tract disorders (item #11).As described by previous studies [12,13], feeding problems and gastrointestinal disorders are common in individuals with DS, with abnormalities being either anatomical or functional.This study reports that gastroesophageal reflux disease (GERD) affects 47% of children with sleep apnea, constipation affects 19%, and obesity affects 32%.GERD is often pathological; however, pain is rarely expressed.Frequent GERD episodes can lead to difficulties in learning taste and oral hypersensitivity, which can alter taste buds.Moreover, 75% of the subjects responded positively to item #13, "Intake of food with a consistency not appropriate for age."This finding aligns with the current literature [16], indicating that anatomical and physiological characteristics, such as disorders of neuromotor coordination and craniofacial anomalies, frequently interfere with the acquisition of effective sensorimotor skills, leading to potential feeding and swallowing problems.Families often favor blended or semi-solid foods, avoiding solid foods during the critical period of skill development [12].Lastly, 40% of the investigated population had a history of cardiopathy (item #3).According to Lagan [6], congenital heart disease is frequently diagnosed in newborns with DS and often requires prompt surgical intervention.This results in numerous hospitalizations, specific drug therapies, and surgical procedures that can delay the development of oral sensory and motor feeding skills. Sensorial issues.The IDDSI scale was used to assess the accepted consistencies of the participants, finding that 65% of the sample is at level 4, which involves the intake of very thick creamy foods administered by spoon.This result is in line with the literature, which suggests that the population with DS has a high propensity to present difficulties both in the introduction of new flavors and in the introduction of new textures to their usual diet [1].In fact, sensory difficulties can have a negative impact on family routine, leading to different dietary needs from the rest of the family and dysfunctional mealtime behaviors.The study by Stein Duker et al. 
[38] reports that children with DS have difficulties in sensory processing, distinguishing two categories of sensory response: hyposensitivity and hypersensitivity to food stimuli presented in the oral cavity.Oral hypersensitivity leads to poor tolerance even in oral hygiene.The lack of oral hygiene in turn leads to the development of dental diseases that further hinder the development of chewing skills.After treatment, working in prerequisites and on the development of oral sensory and motor issues has led to a change in the accepted food textures.In fact, the entire study population has positively changed the accepted texture: 45% is at level 5 (minced); 30% at level 6 (chopped); and 25% at level 7 (normal).Thanks to the constant work of the caregivers, the entire sample at follow-up maintained the skills achieved at the end of the intensive treatment.The correct use of feeding tools, such as a glass and spoon, has improved tongue movements and reduced tongue thrust during food manipulation and swallowing.The correct alignment of the cup with mandibular support allows for greater stabilization of the jaw, reducing liquid loss from the oral cavity.Regulating the amount of solid and liquid food inserted into the oral cavity has allowed the child to develop greater awareness of hypo/hypersensitivity and oral motor skills of chewing and swallowing. Chewing abilities.Our study revealed that children with DS exhibit chewing difficulties; in fact, a recent study by [27] suggests the importance of a more specific assessment of chewing skills that goes beyond examining only jaw movements but also includes functional lateral tongue movements, the initiation of rotary or vertical chewing, and the timing from food insertion into the oral cavity to swallowing. The results of our study suggest that, at the first assessment, 80% of the sample is at level 4 of the KCPS scale (the child is unable to bite or chew).At the end of the rehabilitation program, 60% of the sample is at level 3 of the KCPS scale (they are able to bite and hold food between their teeth independently, but the subsequent steps of chewing are absent).At follow-up, 50% have maintained the skill and are at level 3.However, as suggested by the IDDSI results, the children are still able to achieve an increase in textures, reaching level 7 of the IDDSI scale (solid food diet), as the solid food is administered with guided chewing training.Through functional guided chewing training, the child trains to develop oral motor skills.In fact, a study by [39] suggests that the components involved in chewing only develop when children are exposed to foods of a higher consistency. 
The importance of parent involvement.The GIFT treatment involves the presence of the caregiver so that they can observe, learn the rehabilitation techniques, and apply them in daily life.Following the GIFT intervention, there is a monitoring phase and subsequent follow-up after treatment completion.Optimal outcomes during follow-up are achieved through consistent home training.Once actively engaged in intensive training, parents must sustain their efforts.Caregivers should practice the learned techniques daily and apply them in various settings to reinforce and generalize newly acquired skills.Consistent with the existing literature, caregivers hold a crucial role in child feeding due to their firsthand experience with feeding behaviors, knowledge of food preferences and aversions, and understanding of communicative behaviors during meals [16].The study by Stele and colleagues [40] reports that challenging feeding behaviors or feeding difficulties, commonly present in children with DS, can amplify perceived stress in caregivers.In particular, the study found that feeding difficulty is a significant stressor for caregivers of children with DS, especially during the transition to table food; however, as caregivers develop a variety of strategies for managing mealtime, their stress related to feeding difficulties decreases, and their sense of self-efficacy improves. Difference between GIFT in the ASD and DS.As reported in the article by Cerchiari and colleagues [18], the GIFT program is effective for oral sensory motor disorders in children with ASD.This study demonstrates that GIFT is also effective in treating oral sensory and motor skills in children with DS and can be applied with similar objectives in both populations.Compared to its application in children with ASD, the GIFT program for children with DS also addresses the postural aspects due to the syndrome's anatomical and physiological characteristics, such as generalized hypotonia, ligamentous laxity, and atlanto-occipital instability.Ensuring correct head-trunk-pelvis alignment is crucial for the treatment's success.Additionally, during guided chewing training, proper mandibular support must be provided due to DS-related anatomical and physiological characteristics that negatively impact masticatory performance, including tongue thrust, mandibular instability, Class III malocclusion, hypotonia of the orofacial muscles, and oral breathing. This study found that the children maintained the skills learned but did not improve their independent chewing ability during the home-based intervention period.Therefore, it can be hypothesized that achieving significant changes requires consistency and repetitiveness of functional exercises with food, intensive rehabilitation training with the GIFT protocol repeated over time, and daily exercises performed by parents at home.In the ASD population, age-appropriate feeding skills can often be achieved with a single intensive rehabilitation training session.The GIFT program for the ASD population spans two weeks with 30 feeding sessions, as the characteristics of the syndrome necessitate longer adaptation times to the setting.In contrast, the GIFT program for the DS population involves 15 feeding sessions over one week. 
Study Limitations.Despite promising findings, it is important to acknowledge several limitations.The sample size is too small to generalize the results to the entire population with DS.Furthermore, no standardized tests or questionnaires were administered to assess mealtime behaviors, which were only described qualitatively.The follow-up results are strongly dependent on family collaboration in the home environment.Overall, these limitations suggest that the results should be interpreted with caution and that further research with a larger, more representative sample is needed to confirm the effectiveness of the proposed speech therapy treatment.Furthermore, it would be important, considering behavioral problems in this target population, to use specific assessment tools in stratifying the main characteristics to consider for compliance in rehabilitation and risks.In the end, we did not investigate the relationship between dysphagia and body mass index, neck circumference, as well as health comorbidity distribution and drugs used, while all these aspects can influence feeding and swallowing disorders and increase the risk of developing dysphagia. Conclusions GIFT improves chewing ability, food acceptance, and increased food textures.It helps modulate behavioral problems at mealtime and involves families.In conclusion, it can be stated that GIFT seems to be an effective approach for children with DS, as it presents an individualized rehabilitation treatment that focuses on improving functions and specific limitations of children with DS. Table 1 . Socio-demographic and clinical characteristics of the sample (total 20). Table 2 . Frequency and % of each item of the PS-PED. Table 3 . Differences in scoring of the KCPS. Table 4 . Differences in scoring of the IDDSI.
2024-07-15T15:23:39.538Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "d77c2d3d9839e2eae8feedf18fd3e9a916c3100f", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "becc4e1aefc22983af38a5d3e99918e23443946a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16223905
pes2o/s2orc
v3-fos-license
On the Existence of certain Quantum Algorithms We investigate the question of whether quantum algorithms exist that compute the maximum of a set of conjugated elements of a given number field in quantum polynomial time. We will relate the existence of these algorithms for a certain family of number fields to an open conjecture from elementary number theory. Introduction Let Q(Γ)/Q be a Galois extension with Galois group G and define Γ_max := max_σ { |Γ^σ| }, with σ ∈ G. We ask if there exists an algorithm that, given some description of Γ, efficiently computes a ϕ ∈ G such that |Γ^ϕ| = Γ_max. The first problem we encounter in this general setting is that two conjugated elements might not be efficiently distinguishable, i.e., for σ, ρ ∈ G, the difference ||Γ^σ| − |Γ^ρ|| may become very small. We will avoid this problem by defining, for a positive integer t ∈ N, the set: and ask for an efficient computation of an element ϕ ∈ MAXΓ_{|G|,t}, say, the number of steps being a polynomial in log |G|. In this article, we will relate the existence of these algorithms for a certain family of number fields to an open conjecture from elementary number theory, in the sense that either algorithms of this kind exist or the conjecture is true, or both. To state the conjecture, we introduce the Fermat quotient q_p(k), where p is an odd prime and k ∈ Z with (k, p) = 1, to be the smallest integer greater than or equal to 0 that satisfies the equation Here, we are interested in the number of the first consecutive zeros of this quotient, that is For a long time this integer was closely related to the first case of Fermat's Last Theorem, but we will not go into this here (see [2] and the references given there). It has been shown in [2] that and it is still an open question whether this bound is tight. The following conjecture lowers this bound for infinitely many primes: Conjecture 1 For all ε > 0 there exist infinitely many primes p with κ_p < ε·√p. Now, let p be again an odd prime and ζ_{p^2} := e^{2πi/p^2} a primitive p^2-th root of unity. Further, denote by Q(Γ_p)/Q the real subfield of Q(ζ_{p^2}) of degree p, where Γ_p is given by The aim of this article is to prove the following theorem: Theorem 1 At least one of the following is true: 1. For all positive integers t ∈ N, there exist a constant c_t and a quantum algorithm that, given an odd prime p, computes in (log p)^{c_t} steps and with a probability close to 1 an element of the set MAXΓ_{p,t}. The paper is organized as follows. In the next section, a quantum algorithm is presented that attempts to compute an element of the set MAXΓ_{p,t} in quantum polynomial time, at least if Conjecture 1 is false. Then, after recalling some basic facts from number theory, we will state the proof of the Theorem. The Algorithm In the following let p be an odd prime. To present the algorithm, we define a polynomial time computable function f: where q_p(x) denotes the Fermat quotient of the integer x, defined in the last section. (i) For the quantum part of the algorithm, we start with the state 1 p and (ii) apply the Quantum Fourier Transform (QFT) to the first register, which leads to (iii) We now measure the system and obtain the state |a⟩|s⟩ with probability (iv) If a ≡ 0 mod p and s = p, we note down the smallest nonnegative integer σ′ that satisfies the equation σ′ ≡ q_p(a) + s mod p.
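As a concrete illustration of the quantities just introduced, the sketch below computes the Fermat quotient q_p(k) and the length of the initial run of k with q_p(k) = 0 (our reading of the quantity denoted κ_p above, whose formal definition is partly missing from the extracted text). It is an illustrative classical computation only, not the quantum algorithm of the paper.

```python
def fermat_quotient(k: int, p: int) -> int:
    """q_p(k): the unique 0 <= q < p with k**(p-1) = 1 + q*p (mod p**2),
    defined for an odd prime p and k coprime to p."""
    if k % p == 0:
        raise ValueError("k must be coprime to p")
    return (pow(k, p - 1, p * p) - 1) // p

def kappa(p: int) -> int:
    """Number of initial consecutive k = 1, 2, ... with q_p(k) == 0
    (assumed interpretation of kappa_p)."""
    n = 0
    k = 1
    while k < p and fermat_quotient(k, p) == 0:
        n += 1
        k += 1
    return n

# Small sanity check: q_11(1) = 0 and q_11(2) = 5, so kappa(11) = 1.
p = 11
print([fermat_quotient(k, p) for k in range(1, p)])
print(kappa(p))
```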
For a constant c, to be specified later, we repeat the whole process (log p)^c times and output the integer σ′ that has occurred most frequently (in case of a tie we choose one of the "leaders" at random). Analysis For the analysis of the algorithm, we will begin with two Lemmata stating the probabilities of the possible outcomes of step (iv) of the quantum subroutine from the last section: The probability that step (iv) produces some integer equals 1 − 1/p^2 − 1/p. Proof. Let |a⟩|s⟩ be the state given after the measurement in step (iii) of the procedure. Finally, the sum of these probabilities leads to the statement of the Lemma. Proof. Let w be an integer with (w − 1, p) = 1 and w^{p−1} ≡ 1 mod p^2. Then any integer k ≡ 0 mod p can be written in the form for some integer d_k. Now suppose that at the end of step (iii), we obtain a state |a⟩|s⟩, with a ≡ 0 mod p and s = p. It then follows that the inner sum of equation (9) equals with σ := j_p(q_p(a) + s)|_{Q(Γ_p)}, by definition of the element Γ_p. Since there are p(p − 1) ways which lead to the same σ, the statement of the Lemma follows. Proof of the Main Theorem To state the proof of Theorem 1, we first define the Mirimanoff polynomial This polynomial is closely related to the Fermat quotient, since and therefore κ_p = min{n > 0 | γ_p(n) ≡ 0 mod p}. For an introduction to Mirimanoff polynomials and their basic properties, we refer to [2]. If we denote the zeros of γ_p modulo p by η_p, it can be shown that: Theorem 2 There exist positive constants c_1 and c_2 such that for all primes p, κ_p^2 < c_1·η_p < c_2·Γ_max,p. Proof. The first inequality is given by Theorem 1 in [2], while the second is shown in [3], Prop. 3.16. Now, in order to prove Theorem 1, we look at the following statement: Statement 1 There exist positive integers s, p_0 ∈ N such that, for all primes p > p_0,
2009-04-11T10:21:18.000Z
2009-04-11T00:00:00.000
{ "year": 2009, "sha1": "3937116f6f427a089b92d8fc8daf6231b8609bd8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3937116f6f427a089b92d8fc8daf6231b8609bd8", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
118381307
pes2o/s2orc
v3-fos-license
GeMSE: A new Low-Background Facility for Meteorite and Material Screening We are currently setting up a facility for low-background gamma-ray spectrometry based on a HPGe detector. It is dedicated to material screening for the XENON and DARWIN dark matter projects as well as to the characterization of meteorites. The detector will be installed in a medium depth ($\sim$620 m.w.e.) underground laboratory in Switzerland with several layers of shielding and an active muon-veto. The GeMSE facility will be operational by fall 2015 with an expected background rate of $\sim$250 counts/day (100-2700 keV). Introduction The new facility for low-background γ-ray spectrometry, GeMSE (Germanium Material and Meteorite Screening Experiment), will be dedicated to the characterization of meteorites as well as the selection of radiopure materials needed for rare event searches. It will be operated by two groups (geology/meteorite research and astroparticle physics) in a common interdisciplinary project. The detection and quantification of cosmogenic isotopes in meteorites by γ-ray spectrometry is a non-destructive analysis method that allows for the determination of their terrestrial age, in particular the detection of relatively recent [1,2] and very old [3] falls. Our main interest lies in the detection of rather short-lived cosmogenic isotopes to identify fresh falls, mainly studying meteorite samples from the Oman collection [4,5] hosted at the Natural History Museum Bern. This is one of the largest collections of hot desert meteorites comprising ∼880 fall events (see Fig. 1). From the activity of 22 Na (T 1/2 =2.6 y) or 60 Co (T 1/2 =5.3 y) we will be able to identify falls dating back to approximately 20 years. Using 44 Ti (T 1/2 =63 y) one can recognize meteorites fallen 100-200 years ago. As can be seen in Fig. 1 the needed sensitivity is ∼1 mBq/kg. One of our main goals is an estimation of the average fall rate during the past 100 years, and the relative proportion of young falls among the whole collection, by screening the most unweathered and thus youngest samples. We also plan a detailed research program on a series of fragments recovered from the Twannberg iron meteorite in Switzerland [6]. The production of cosmogenic isotopes in meteoroids is strongly dependent on shielding (depth), and thus the meteoroid's size [7,8]. Fragmentation of meteorites during fall events will thus yield samples with highly variable activities of cosmogenic isotopes, reflecting the production rates at given depths and pre-atmospheric radii. Statistical data on long-lived 26 Al (T 1/2 =7.2×10 5 y) in a series of samples will help to constrain the fallen mass, which likely represents one of the largest fall events in Europe. The background goals of the future dark matter experiments XENONnT [9,10] and DAR-WIN [11] demand very radiopure construction materials. Many astroparticle physics experiments, searching for rare events, have shown that it is necessary to carry out extensive screening campaigns until all components are identified [12,13]. As an example, a possible future DARWIN detector using 20 t of LXe will require the screening of ∼1000 PMTs as well as several batches of PTFE, copper and stainless steel. The typical activities of isotopes from the U/Th chains in these materials are in the mBq/kg range but can be as low as ∼20 µBq/kg [13]. The GeMSE facility will complement other HPGe detectors available to the collaborations at LNGS (Italy) [14,15] and in Heidelberg (Germany) [16,17]. 
Besides providing additional resources with a very competitive background level, our facility will have the advantage of fast accessibility from Switzerland, which can be very relevant for delicate or urgent samples. Isotope Half Detector and Shielding The detector of the GeMSE facility is a standard electrode, coaxial, p-type HPGe detector from Canberra with a relative detection efficiency of 107.7%. The Ge crystal ( =85 mm, h=65 mm) is embedded in a ultra-low background U-style cryostat made from Cu-OF. Our shielding design is schematically shown in Fig. 2. The cavity for samples has a size of 24×24×35 cm 3 . From inside to outside the detector will be surrounded by 8 cm of Cu-OFE, 5 cm of low-activity Pb (7.2±0.5 Bq/kg 210 Pb) and 15 cm of normal Pb (∼200 Bq/kg 210 Pb). The whole shielding will be enclosed in a glovebox which is continuously purged with N 2 gas. Samples can be inserted with a lock system without introducing radon. In addition, a 120×100 cm 2 plastic scintillator panel on top is used as muon veto. The setup will be installed in the Vue-des-Alpes underground lab near Neuchâtel (Switzerland) [18]. It features a rock overburden of 235 m corresponding to ∼620 m.w.e. which reduces the muon flux by a factor of ∼1900. The lab is located in a highway tunnel ∼45 min away from Bern and can therefore be easily accessed by car. Detector Characterization Before the underground installation, we carried out first measurements to characterize our HPGe detector. Fig. 3 (a) shows the energy resolution of the detector as a function of energy, measured with a shaping amplifier time constant of 6 µs. The resolution was determined for several peaks (mainly from U/Th chains) in a background spectrum measured without any shielding and an additional 60 Co source. The resolution (FWHM) at the 1.33 MeV peak of 60 Co is 1.76 keV (0.13%). To estimate the detection efficiency of a sample measurement it is important to know the thickness of the dead layer from the Li-diffused n+ contact. This thickness was determined by following the approach given in Ref. [19]. A spectrum was recorded with a 133 Ba source at a well defined distance (25 cm) from the detector. From this measurement the ratio of the 81 keV and 356 keV peak areas was determined. This result was compared to the same ratio determined by a Geant4 [20] (version 9.6p03) simulation of the setup, performed with dead layers of different thickness. By matching the measured value to the simulation we get a dead layer thickness of (0.65±0.05) mm (see Fig. 3(b)). Background Simulation The expected background for the GeMSE setup was estimated with a Geant4 (version 9.6p03) simulation. Initially, the simulation was also used to optimize the shielding design. The implemented geometry includes the detector with shielding, the muon veto panel and the laboratory cavern (including 2 m of rock). We simulated the background from radioactivity in the cryostat, Cu shielding and the inner 5 cm of Pb as well as that from cosmic ray muons. Figure 4 shows the values that we have assumed for the radioactivity of the cryostat and shielding components. For the cryostat and Cu shielding these were taken from the Gator screening facility [15] which uses the same type of HPGe detector and the same type of Cu in the shielding provided by the same supplier. The value of 7.2 Bq/kg taken for the 210 Pb contamination of the inner Pb layer was experimentally measured. 
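The dead-layer determination described in the Detector Characterization paragraph above amounts to matching a measured peak-area ratio against the same ratio from simulations at several trial thicknesses. The sketch below shows one way to do that matching by linear interpolation; all numbers are invented for illustration and are not the actual GeMSE measurement or Geant4 output.

```python
import numpy as np

# Hypothetical Geant4 results: simulated 81 keV / 356 keV peak-area ratio
# of the 133Ba source for a few trial dead-layer thicknesses (mm).
trial_thickness_mm = np.array([0.3, 0.5, 0.7, 0.9])
simulated_ratio = np.array([0.230, 0.205, 0.182, 0.161])

measured_ratio = 0.188  # hypothetical measured 81/356 keV area ratio

# The simulated ratio falls monotonically with thickness (more low-energy
# absorption), so invert it by interpolating thickness as a function of the
# ratio (np.interp needs ascending x values).
order = np.argsort(simulated_ratio)
dead_layer_mm = np.interp(measured_ratio,
                          simulated_ratio[order],
                          trial_thickness_mm[order])

print(f"estimated dead-layer thickness: {dead_layer_mm:.2f} mm")
```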
The flux, angular distribution and energy spectrum of the muons were calculated using empirical equations from [21] assuming a rock overburden of 623 hg/cm 2 . The results of the simulation, namely the energy deposited in the Ge detector from each background source, are shown in Fig. 4. The dominant background component is from muons. It can be reduced by a factor of ∼6 using the top scintillator panel to reject all events in the Ge detector within a time window of 10 µs after an energy deposition >1 MeV in the scintillator. The total integrated background rate with (without) muon veto is 66 (128) counts/day (100-2700 keV). Assuming a reduced veto efficiency and some additional background from other materials inside the cryostat and residual radon we estimate a realistic background rate of ∼250 counts/day, comparable to that of the Gator screening facility [15]. Summary and Conclusion The GeMSE facility will be a highly sensitive screening setup for low-background γ-ray spectrometry. The shielding is currently under construction and we expect the facility to be operational by fall 2015. With its underground location (∼620 m.w.e.), several layers of shielding enclosed in a N 2 purged glovebox and a plastic scintillator muon veto we expect a integrated background rate of ∼250 counts/day (100-2700 keV). This is only a factor of ∼7 higher compared to the most sensitive screening facilities in the world [22]. GeMSE will help to answer important questions in meteoritics like the average fall rate or the pre-atmospheric size of the Twannberg meteorite. Furthermore, it will be used for the selection of radiopure materials for rare-event searches in astroparticle physics, such as the next generation dark matter experiments XENONnT and DARWIN.
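As a footnote to the muon-veto scheme described in the Background Simulation section above, the sketch below shows the kind of offline coincidence cut that the rejection implies: discard any Ge event that falls within 10 μs after a scintillator hit above 1 MeV. Timestamps and energies are invented for illustration; this is not the actual GeMSE analysis code.

```python
import numpy as np

VETO_WINDOW_S = 10e-6      # 10 microsecond veto window after a muon candidate
VETO_THRESHOLD_MEV = 1.0   # scintillator energy threshold for a muon candidate

def apply_muon_veto(ge_times, scint_times, scint_energies):
    """Return a boolean mask of Ge events that survive the muon veto."""
    scint_times = np.asarray(scint_times)
    scint_energies = np.asarray(scint_energies)
    muon_times = np.sort(scint_times[scint_energies > VETO_THRESHOLD_MEV])

    ge_times = np.asarray(ge_times)
    if muon_times.size == 0:
        return np.ones(ge_times.size, dtype=bool)

    # Index of the latest muon candidate at or before each Ge event.
    idx = np.searchsorted(muon_times, ge_times, side="right") - 1
    has_prior_muon = idx >= 0
    dt = np.where(has_prior_muon,
                  ge_times - muon_times[np.clip(idx, 0, None)],
                  np.inf)
    return dt > VETO_WINDOW_S

# Invented example: three Ge events, the second of which follows a >1 MeV
# scintillator hit by 4 microseconds and is therefore vetoed.
ge = [0.010, 0.020004, 0.030]
scint_t = [0.005, 0.020000, 0.025]
scint_e = [0.4, 2.5, 0.8]
print(apply_muon_veto(ge, scint_t, scint_e))   # [ True False  True]
```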
Atopic biomarker changes after exposure to Porphyromonas gingivalis lipopolysaccharide: a small experimental study in Wistar rats [version 1; peer review: awaiting peer review]

Background: IgE and IgG4 are implicated in atopic development and are clinically utilized as major biomarkers. Atopic responses following certain pathogens, such as Porphyromonas gingivalis (Pg), are currently an area of interest for further research. The aim of this study is to measure the levels of IgE and IgG4 and the IgG4/IgE ratio periodically after exposure to the periodontal pathogen Pg lipopolysaccharide (LPS). Methods: We used 16 Wistar rats (Rattus norvegicus) randomly subdivided into four groups: Group 1, injected with placebo; Group 2, injected with LPS Pg 0.3 μg/mL; Group 3, injected with LPS Pg 1 μg/mL; and Group 4, injected with LPS Pg 3 μg/mL. Sera from all groups were taken from the retro-orbital plexus before and after exposure. Results: Levels of IgE and IgG4 increased significantly following exposure to LPS Pg at day-4 and day-11. A greater increase in IgE than in IgG4 contributed to a rapid decline of the IgG4/IgE ratio, detected in the peripheral blood at day-4 and day-11. Conclusion: Modulation of atopic responses following exposure to LPS Pg is reflected by a decrease in the IgG4/IgE ratio that accompanies an increase of IgE. Therefore, Pg, a keystone pathogen during periodontal disease, may have a tendency to disrupt atopic biomarkers.

Introduction
The oral cavity is the habitat of numerous bacteria, including Porphyromonas gingivalis (Pg). Pg is a gram-negative, facultative anaerobic pathogen, which is responsible for causing gingivitis or periodontitis. 1 In low-income countries, gingivitis and periodontitis can affect up to 90% of the adult population. 2 Beyond alveolar bone and ligament destruction, Pg is believed to be involved in the development of atopic responses in a susceptible host. 3 Following Pg infection, the host's adaptive immune response (both cell-mediated and humoral-mediated) could induce a systemic inflammatory reaction, not just local destruction of tooth-supporting tissues. [4][5] Although periodontal pathogens, such as Pg, play a major role in the initiation of local and systemic inflammatory reactions, 6 the host's aberrant immune responses require further study. Since humoral immune responses are stimulated following Pg infection, there might be a link to the occurrence of atopy. Despite long-standing research on the hygiene hypothesis over several decades, it is an unequivocally accepted fact that the prevalence of atopy increases more among children who have periodontal pathogen colonization or infection. 7 While endorsing these hygiene-hypothesis approaches, there is an alternative hypothesis in which exposure to some periodontal pathogens will exclusively trigger an "immunoglobulin-E skew" rather than reducing it. 8 Within the context of the hygiene hypothesis, the most essential microbial exposure to be studied is the biomolecular relationship of the host antibody and regulatory T-cell response with lipopolysaccharide (LPS), an endotoxin released by Pg that affects the host immune reaction. Hygiene-hypothesis principles might not be able to explain every phenomenon of the increasing incidence of atopy among children with poor oral hygiene. 9
Some studies report a positive association between the colonization/infection of Pg and the development of allergic diseases, 10-15 whereas other studies report no association. [16][17][18][19][20][21] Due to the lack of conclusive evidence about the association between Pg and allergic diseases, 22 we tried to measure the levels of atopic biomarkers following Pg infection. To the best of our knowledge, measuring IgG4 and IgE antibodies may have a closer association with atopic profiles, since IgG4 and IgE are released after the activation of mature B cells following the modulation of IL-4 and IL-5 released by Th-2 cells during type I hypersensitivity. 23 By looking at the alteration of IgG4 and IgE antibody levels after exposure to these selected components of Pg in a rat model, we hope to understand more deeply the biological mechanism of the B-cell antibody-production pattern and humoral immune responses before the clinical manifestation of atopy. We chose a rat model since the animals are inbred, and thus almost genetically identical, and their genetic, biological and behavioral characteristics closely resemble those of humans.

Ethics approval
This article was reported in line with the ARRIVE guidelines. The animal experimental study was conducted under the approval of the Institutional Animal Research Ethics Committee of Universitas Airlangga (UNAIR), Surabaya, Indonesia (animal approval no: 50/KKEPK.FKG/IV/2015), under the name of Sindy Cornelia Nelwan as the Principal Investigator. The study was carried out in strict accordance with internationally accepted standards of the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All efforts were made to ameliorate any suffering of animals by using an anaesthetic to euthanize the rats at the end of the experimental procedure.

Animals
Sample size: N = (Zα/2)² s²/d², where s is the standard deviation obtained from a previous study or a pilot study, and d is the accuracy of the estimate, i.e., how close it is to the true mean. Zα/2 is the normal deviate for a two-tailed alternative hypothesis at a given level of significance. Suppose the sample size calculated by software is 3 animals per group and the researcher expects 10% attrition; the final sample size will then be 4 animals per group, or 16 animals in total (a short numerical sketch of this calculation is given after the statistical-analysis description below).

Rats
The present study used 16 male Wistar rats (Rattus norvegicus) between eight and ten weeks of age (average body weight 120-150 grams). The rats were housed in microisolator cages and maintained at a constant room temperature ranging from 22°C to 25°C, with a 12-h light/12-h dark cycle, under artificially controlled ventilation, with a relative humidity ranging from 50% to 60%. The rats were fed a standard balanced rodent diet (NUTRILAB CR-1®) and water was provided ad libitum. Inclusion criteria were male Wistar rats, age 8-10 weeks, with body weight 120-150 grams. Female Wistar rats and diseased, sick, and lazy male Wistar rats were strictly excluded.

Experimental design and groups
The present study design was a pre-test post-test controlled unblinded group design using a quantitative method. The 16 male Wistar rats were randomized using randomized block sampling and classified into four groups. Each group consisted of 4 matched Wistar rats (matched on age, weight, and baseline IgE and IgG4 characteristics). Group 1 was given placebo (0.9% normal saline solution). Group 2 was given lipopolysaccharide (LPS) of Porphyromonas gingivalis (Pg) (American Type Culture Collection, Rockville, Md.) at a dose of 0.3 μg/mL. Group 3 was given LPS Pg at a dose of 1 μg/mL.
Group 4 was given LPS Pg at a dose of 3 μg/mL. The rats received LPS by an intra-sulcular injection. Intra-sulcular injection has the advantage of delivering LPS directly to the oral cavity; the tip of the needle is inserted slowly at the crestal bone. Longitudinal quantitative measurement was performed for the IgE level, IgG4 level, and IgG4/IgE ratio in all groups on day-0 (before treatment), day-4, and day-11. An average of 0.2 ml of peripheral blood serum was obtained by Pasteur pipette from the retro-orbital plexus, using a lateral approach, on each of these days from each rat. The potential expected adverse events were anaphylactic shock, allergic reaction, bleeding and infection. However, to the best of our knowledge, there were neither expected nor unexpected adverse events in the experimental procedures. Following the end of the experiments, all efforts were made to ameliorate any suffering of animals through injection of sodium pentobarbital anesthetic to euthanize the rats at the end of the experimental procedure.

Level of IgG4 and IgE
Samples of the sera were collected and stored at −70°C (−94°F) at the Institute of Tropical Diseases, Universitas Airlangga (UNAIR). All sera were assessed by direct-sandwich enzyme-linked immunosorbent assay (ELISA) with mouse IgE antibody (MAB9935) and IgG4 antibody (MAB9895) under the manufacturer's (R&D Systems Europe Ltd, Abingdon, UK) protocol. Briefly, the sera were examined in microtiter plates using 25 ml of 3,3',5,5'-tetramethylbenzidine added to 1 ml of phosphate-citrate buffer plus perborate in a mildly acidic buffer (adjusted to pH 5.7). Levels of IgG4 were detected using a monoclonal anti-IgG4 antibody, transferring it to the microtiter plates, adding the supplied conjugate, adding blocking solution, diluting the plasma sample (1:100,000), and washing between the steps. The level of IgE was detected using a monoclonal anti-IgE antibody, following similar steps but diluting the plasma sample 1:200. A minimum value of 0.01 pg/mL for IgE and 0.01 ng/mL for IgG4 was assigned for values below the limit of detection. We used 3,3',5,5'-tetramethylbenzidine as the chromogenic substrate, which allows direct visualization of signal development through a spectrophotometer.

Statistical analysis
All measurements were performed at least three times. Results were presented as means ± standard errors (SEM). The assumption of normality for the complete data was assessed by the Shapiro-Wilk test. Homogeneity of variances was assessed by Levene's test. Statistical significance was examined by one-way ANOVA and repeated-measures ANOVA using SPSS version 17.0 for Microsoft Windows (IBM Corp., Chicago, USA).
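The two quantitative steps just described, the sample-size formula from the Animals section and the ANOVA-based group comparison, can be sketched as follows. This is a minimal illustration, not the study's SPSS code; the values of s and d and the placeholder group data are hypothetical, chosen only to be of the same order as the figures reported in the text.

# Minimal sketch of the study's quantitative steps (hypothetical data, not the
# actual measurements): sample size via N = (Z_{alpha/2})^2 * s^2 / d^2 with a
# 10% attrition allowance, then Shapiro-Wilk, Levene and one-way ANOVA per time point.
import math
import numpy as np
from scipy import stats

# --- sample size (s and d are hypothetical pilot-study values) ---
z = stats.norm.ppf(1 - 0.05 / 2)              # ~1.96 for a two-sided 5% level
s, d = 1.0, 1.2                               # hypothetical SD and accuracy of estimate
n_per_group = math.ceil(z**2 * s**2 / d**2)   # -> 3 animals per group
n_final = math.ceil(n_per_group * 1.10)       # allow ~10% attrition -> 4 per group
print(n_per_group, n_final, n_final * 4)      # 3, 4, 16 animals in total

# --- group comparison at one time point (hypothetical day-4 IgE values, pg/ml) ---
rng = np.random.default_rng(0)
groups = {
    "placebo":       rng.normal(5.3, 0.8, 4),
    "LPS 0.3 ug/ml": rng.normal(12.0, 1.5, 4),
    "LPS 1 ug/ml":   rng.normal(17.0, 1.7, 4),
    "LPS 3 ug/ml":   rng.normal(14.0, 1.5, 4),
}
for name, values in groups.items():
    w_stat, p_norm = stats.shapiro(values)            # normality within each group
    print(f"Shapiro-Wilk {name}: p = {p_norm:.3f}")
lev_stat, lev_p = stats.levene(*groups.values())      # homogeneity of variances
f_stat, p_value = stats.f_oneway(*groups.values())    # one-way ANOVA across groups
print(f"Levene p = {lev_p:.3f}; one-way ANOVA F = {f_stat:.2f}, p = {p_value:.4f}")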
Results
Table 1 shows the baseline characteristics of the 16 Wistar rats (Rattus norvegicus). No significant differences were found for mean age (p = 0.774), body weight (p = 0.700), baseline IgE (p = 0.071), baseline IgG4 (p = 0.770), and baseline IgG4/IgE ratio (p = 0.053) among the four groups.

Comparison of serum IgE level between the four groups
Prior to the experiments (day-0), there was no difference in serum IgE level between the four groups (p > 0.05). On day-4, there was a significant difference in serum IgE level between all groups (p = 0.006). At day-4, the highest average IgE level was found in Group 3, treated with LPS Pg 1 μg/ml (17.00 ± 1.69 pg/ml), and the lowest average IgE level was found in Group 1 (control) (5.31 ± 0.76 pg/ml). On day-11, there was also a significant difference in serum IgE level between the groups (p = 0.047). At day-11, the highest average IgE level was found in Group 2, treated with LPS Pg 0.3 μg/ml (180.34 ± 10.42 pg/ml), and the lowest average IgE level was found in Group 1 (5.06 ± 1.86 pg/ml) (Table 2).

Comparison of serum IgG4 level between the four groups
Prior to the experiments (day-0), there was no difference in serum IgG4 level between the four groups (p > 0.05). On day-4, there was a significant difference in serum IgG4 level between all groups (p = 0.008). At day-4, the highest average IgG4 level was found in Group 4 (LPS Pg 3 μg/ml; 23.86 ± 1.59 ng/ml) and the lowest average IgG4 level was found in Group 1 (8.34 ± 0.58 ng/ml). On day-11, there was an even greater difference in serum IgG4 level between all groups (p = 0.005). At day-11, the highest average IgG4 level was found in Group 4 (LPS Pg 3 μg/ml; 63.74 ± 4.74 ng/ml) and the lowest average IgG4 level was found in Group 1 (13.91 ± 0.99 ng/ml) (Table 3).

Discussion
Several mechanisms have been suggested to alter atopic inflammatory responses following LPS Pg infection. One of the mechanisms demonstrated in this study is an elevation of the IgE antibody and a reduction of the IgG4/IgE ratio. 24 As far as we know, Th-1 and Th-2 cells are not two different CD4+ T-cell subsets; rather, they represent polarized forms of the highly heterogeneous CD4+ Th-cell-mediated immune response. Host genetic and microenvironmental factors could have contributed through a series of modulatory factors, including: (1) the ligation of the T-cell receptor (TCR); (2) the activation of costimulatory molecules and their particular components; (3) the predominance of an inflammatory cytokine in the local environment; and (4) the number of post-activation cell divisions following exposure to antigens. Down-regulation of the Th-1 cell is associated with depression of the cell-mediated immune response and stimulation of the humoral immune response; thus pathogens are able to evade immune clearance. 25 Porphyromonas gingivalis possesses very sophisticated defense mechanisms against host immune responses. These pathogens produce capsules containing long-chain LPS, which is designed to effectively counter the membrane attack complex. Long-chain LPS can also downgrade cell-mediated immunity by shifting Th-1 into Th-2, which is less dangerous to the pathogens. 26 LPS may have an essential role in switching cell-mediated to humoral-mediated immune responses. 27 The LPS Pg antigen is processed and presented on the cell surface with the MHC-II molecule. Recent studies suggest that activation of the alternative complement pathway, disruption of the classical complement pathway, modulation of antigen-presenting cells, and downregulation of anti-inflammatory cytokines are responsible for the Th2-skewed immune response following exposure to LPS Pg. The predominance shift from Th-1 to Th-2 occurs in several extra-lymphoid tissues; the ideal site for Porphyromonas gingivalis is the oral cavity. 28 Interleukin-4 (IL-4), which is produced by naive T cells, acts in an autocrine manner and is known to be responsible for the differentiation and activation of the Th-2 phenotype. 29 Guo et al. (2014) showed that in the occurrence and development of allergic diseases there is a complex pathobiology which results in a Th-1/Th-2 imbalance. 30 In an atopic disease such as bronchial asthma or urticaria, a naive T cell can differentiate into Th-2 under the IL-4-induced STAT6 and GATA-3 transcription factors. 30 A Th-2-predominant immune response will automatically stimulate plasma cells to release IgE and IgG4. 31
Upon re-exposure to the antigen or allergen, binding of the allergen to IgE orchestrates the adaptive immune system to initiate rapid sensitization. Frequent sensitization is a major risk factor for the development of allergic diseases such as urticaria, bronchial asthma, hay fever or atopic dermatitis/eczema. 32 Our previous studies used the whole-cell body of Porphyromonas gingivalis to study different molecular responses in Wistar rats. Our first project studied the association between the periodontal pathogen and host innate immunity. Exposure to Porphyromonas gingivalis had been shown to stimulate the level of TLR2 and depress the level of TLR4. 33 Our findings might indicate that several bacterial properties can turn off host innate immunity and the host inflammatory response. Our second project studied the association between the periodontal pathogen and host adaptive immunity. We summarized that a high CFU dose of Pg stimulates a fold increase of Th-2 cytokines (IL-4, IL-5 and IL-13) and a decrease of Th-1 cytokines (IFN-γ and IL-17). 34 These were the cornerstones for continuing our project by studying LPS as the most important component of these bacteria. At this moment, both total and specific IgE antibodies have little diagnostic value for the occurrence of allergic manifestations. Even when total or specific IgE increases, the manifestation of allergy does not usually develop, since the IgG4 level also increases as a counter-regulator. 35 This means that even when a human or a rat becomes susceptible to atopic allergy due to an increasing level of IgE, the body is able to provide protection, with increased IgG4 as a counter-response preventing the manifestation of allergic diseases and immediate hypersensitivity. Thus, exposure to LPS Pg will raise the chance of atopic and hypersensitivity markers, but the manifestation of an allergic reaction follows a complex pattern. 36 The IgG4/IgE ratio is more accurate in detecting any alteration of the atopic inflammatory pathway. An increased level of IgE that is not accompanied by IgG4 can be seen in patients with urticaria or atopic dermatitis. 37 IgE-switched B cells are much more likely to differentiate into plasma cells, whereas IgG4-switched B cells are less likely to differentiate. 38 This would explain why the IgE antibody is the most dominant antibody in the development of the atopic inflammatory pathway, whereas IgG4 antibodies become prominent later, during chronic non-atopic stimulation. 39 For this reason, the IgG4/IgE ratio may predict atopic responses more accurately than the total or specific IgE level.

Limitations and strengths
Several limitations should be highlighted. First, this study was limited by the very small number of samples, which can increase the likelihood of error and imprecision. Second, results from animal models often do not translate into replications in humans. 40 IgE antibody responses in Wistar rats are typically transient, whereas the atopic IgE response in humans persists for many years. 41 Another crucial difference is the IgG4/IgE ratio, which is usually much higher in Wistar rats than in humans. [42][43] These factors may have an impact on the interpretation of our results. Thus, the findings should be interpreted within the context of this study and its limitations. The strengths of the study were its high statistical power and the homogeneity of each group, which enabled comparison between groups and periods.
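Because IgG4 is reported in ng/ml and IgE in pg/ml, the IgG4/IgE ratio discussed above is only a dimensionless quantity once the units are reconciled. The sketch below is my own illustration of that bookkeeping with hypothetical serial values for a single exposed rat (not the study's data): an IgE rise that outpaces the IgG4 rise drives the ratio down, which is the pattern the authors interpret as a shift toward an atopic profile.

# Minimal sketch (hypothetical values, not the study's measurements): computing the
# IgG4/IgE ratio over time after converting IgG4 from ng/ml to pg/ml.
days = [0, 4, 11]
ige_pg_ml = [5.0, 17.0, 180.0]     # hypothetical IgE trajectory for one exposed rat
igg4_ng_ml = [8.0, 24.0, 64.0]     # hypothetical IgG4 trajectory for the same rat

for day, ige, igg4 in zip(days, ige_pg_ml, igg4_ng_ml):
    igg4_pg_ml = igg4 * 1000.0     # 1 ng/ml = 1000 pg/ml
    ratio = igg4_pg_ml / ige
    print(f"day {day:2d}: IgG4/IgE ratio = {ratio:,.0f}")
# The ratio falls as IgE rises faster than IgG4, mirroring the reported decline.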
Soil-Transmitted Helminths Helminths currently affect over 2 billion people worldwide with a quarter of the world’s population infected at some time in their lives. Sobering statistics from the WHO March 2008 report that 80% of the “Bottom Billion” impoverished population of the world have Ascaris , 60% have Trichuris , and 57% have hookworms. This would only be a problem of pharmacologic distribution if not for an additional report demonstrating that several new studies reported to the WHO claim a 50% failure rate clearing Trichuris and 90% failure rate clearing hookworm. These parasitic infections pose a challenge to tropical physicians who have considered mebendazole and albendazole as adequate treatments for children. This is even more of a challenge for physicians in temperate climates who may be less familiar with these medications. This article presents the recent data and the approach to treatment failure and new therapeutic approaches. Introduction Intestinal parasites cause substantial morbidity and mortality, particularly in children in whom they have detrimental effects on growth and cognitive performance. Parasitic infestation leads to deformity and long-term disabilities and often stigmatizes the child. Parasitized pregnant women are anemic, have increased fetal wastage, and have low birth weight newborns. Though tropical diseases affect a large proportion of the world's population, less than 1% of new drug development over the past 30 years focused on tropical diseases. Recent philanthropic interest has resulted in research, long tardy, for these diseases. Epidemiology There are three soil-transmitted helminth infections, Ascaris lumbricoides, hookworm (Ancylostoma duodenale and Necator americanus), and Trichuris trichiura, labeled by the WHO as the "Unholy Trinity." They are ubiquitous in tropical climates and even temperate rural areas in poverty-stricken communities with poor sanitation (see Table 1). Ascaris and Trichuris increase in prevalence from infancy to puberty and then decrease in adulthood. In contrast, hookworm, the leading cause of anemia throughout the world, continues to increase through life, not reaching a plateau until age 40. This characteristic has a profound effect on women of childbearing age and is associated with small-for-gestation newborns as well as increasing fetal loss. The WHO has classified parasitic infestation by egg intensity to clarify symptomatic and asymptomatic infestations (see Table 2). Children with light worm loads are often asymptomatic, but children form the greatest population of the heavy intensity group. Even children considered asymptomatic may have subtle differences in learning and intellectual achievement [1]. The clinical presentation relates to parasite migration in the skin, viscera, and gastrointestinal tract. Trichuris and Ascaris are a result of fecal-oral ingestion. Wheezing, dyspnea, nonproductive cough, fever, bloody sputum, chest x-ray infiltrates, and systemic eosinophilia result during pulmonary vascular migration. Once swallowed the larvae mature, and their migration in the gut causes abdominal pain, distention, and malabsorption. If adult Ascaris migrate into the biliary tree, then pancreatitis, cholangitis, and cholecystitis result. Hepatic abscesses and appendicitis may result from Ascaris migration. In younger children, heavy loads of worms can cause partial or complete bowel obstruction in the ileum. Swelling of Peyer's patches leads to an increased risk of intussusception and volvulus. 
Unrecognized obstruction may eventually cause bowel infarction and perforation with resulting peritonitis. Trichuris may infect any part of the colon, but the parasite prefers the cecum. Eggs release the larvae in the small intestine, and the worms mature in the colon where they tunnel into the mucosa, causing inflammation. Heavy infestation causes a dysentery syndrome severe enough that it may result in rectal prolapse. Impaired growth and anemia are the consequences of chronic infestation. Hookworm infects through skin penetration. Itching erythematous rash from multiple skin penetrations causes severe pruritus of the skin, usually on the feet or hands. The larvae use the pulmonary vasculature to access the bronchial secretions and then, when swallowed, mature into adults in the gastrointestinal tract. Bronchial migration presents as clinical pneumonitis but may be mistaken for asthma. Pulmonary symptoms are seldom as dramatic as with Ascaris. The significant sequelae of infection relate to intestinal blood loss. As few as 40 worms can reduce the hemoglobin below 11 g/dl. Heavy infestations lead to loss of protein with resulting loss of plasma osmotic pressure and anasarca. Helminth infestations cause anemia and malnutrition, growth stunting, and cognitive deficits, associated with poor school attendance and performance. Since this occurs in an impoverished area where the diet has limited resource to protein, the consequences of the poor child's limited diet magnifies the malnutrition. If this occurs in a malaria area, the anemia caused by helminths exaggerates the anemia of malaria. This is especially crucial for women of childbearing age, since infected women were 2.6 times more likely to have preterm deliveries and 3.5 times more likely to have small-for-gestational age infants. If the woman lives in a malaria-endemic area, the risks of malaria increase the infant mortality. Treatment There are four medications currently available to treat soil-transmitted helminth infections (see Table 3). Benzimidazoles impede the microtubular system, in particular β-tubulin, in the worm. Since this is not a host system, patients tolerate these drugs with minimal side effects. Very few patients report nausea, vomiting, and headache, but allergic reactions with fever are rare. Levamisole and pyrantel pamoate are nicotinic acetylcholine receptor agonists, which paralyze the worms and precipitate their expulsion. Gastrointestinal symptoms, headache, dizziness, fever, and rash are usually mild and self-limited. However, a bulk of paralyzed worms increases the risk of a bowel obstruction. The most important aspect of treatment is efficacy. Cure rates and egg reduction rates are high for all four drugs when treating Ascaris (see Table 4). Nevertheless, recent studies have documented ineffective and inconsistent treatment of Trichuris and hookworm, whether Ancylostoma duodenale or Necator americanus. The concern is drug resistance, despite lack of previous investigation. Researchers presumed that the drugs were effective in the past because they were effective with the other helminths. Recent studies by veterinarians tested efficacy in mass drug administration to animals in endemic areas. Such studies presumed human efficacy. Subsequent studies done in adults excluded children and pregnant women, the most at-risk populations. Currently, research established benzimidazoles as safe for children greater than 1 year of age. Teratogenic potential seen in animal studies requires careful Table 4. 
Efficacy of single-and multiple-dose anthelminthic drugs against common soil-transmitted helminth infections [4]. Soil-Transmitted Helminths DOI: http://dx.doi.org/10.5772/intechopen.87143 assessment of benefit/risk ratio. The WHO does recommend treatment of hookworm in pregnancy due to the adverse effect of anemia which is greater than the risk of the medication [2]. Limited studies show no congenital anomalies or perinatal mortality with the use of albendazole, mebendazole, or ivermectin, although use in the first trimester is still discouraged. Studies have yet to focus on levamisole and pyrantel in pregnancy [2]. Prevention Because of the large burden of disease, prevention needs to be the foremost consideration in improving community health. Sanitation, access to a clean source of water, and careful food preparation limit fecal-oral contamination. Careful disposal of feces decreases exposure to helminthic eggs, and footwear limits hookworm exposure. The other approach has been to limit morbidity through periodic treatment. The school system has been the logical institution for community treatment. Many studies have employed deworming schoolchildren on an annual basis, while others have focused on women of reproductive age. One recent study focused on community versus schoolchildren treatment justified a strategy that involves the entire community [3]. Community treatment in several studies documents the requirement to reach at least 75% of the at-risk population. Governments willing to institute such programs recognize the cost of $0.02 USD. Several pharmaceutical companies made the drugs affordable. One example, a study done in Zanzibar, examined the coadministration of ivermectin, albendazole, and praziquantel in 5055 children and adults. This mass drug administration benefitted the entire community. Future research and treatment Considering the high prevalence of soil-transmitted helminths and the established resistance, there is a need for other treatment options. This has provoked enthusiasm for vaccines and drugs with novel mechanisms of action. Unfortunately, there has been little financial incentive for developing human vaccines and novel drugs for poverty-stricken areas, but veterinary medicine has the financial incentive of herd treatment. The nicotinic acetylcholine receptor is unique to helminths and nematodes, although it appears to be a malaria parasite receptor as well. Since this receptor does not exist in humans, a medication to block this receptor should be effective and well tolerated. A vaccine with an antibody against this receptor seems a logical potential step for research. Tribendimidine is an L-type nicotinic acetylcholine receptor agonist. It is very effective in animals. Clinical trial in humans resulted in approval in China in 2004. Despite the difference in chemical structure and the hypothesized receptor agonist effect, it proved to have the same mechanism of action as benzimidazoles and showed no advantage in humans. Monepantel is a nicotinic acetylcholine receptor agonist. It is highly effective and licensed for sheep. Researchers initiated studies in humans. It does appear to have a unique mechanism of action since in animals it has been effective in multidrug resistant nematode infections it may also be effective in humans with resistant infestations. Developing a vaccine requires an antigen. Developers have struggled with which antigen to use that will allow a sufficient and effective antigenic response. 
Vaccines developed for soil-transmitted helminths are effective in newborn animals. A vaccine to the hookworm antigen, Na-ASP-2, is effective in dogs [4]. Vaccinated while still puppies, they were resistant to hookworm infection. This success led to a limited phase 1 trial in Brazil. Unfortunately, 30% of the patients developed urticaria, and one patient developed anaphylaxis. These reactions stopped the trial. Speculation as to the cause of this intense reaction led to the hypothesis that the study patients had antibodies to the antigen because of previous exposure from residing in an endemic area. Like the puppies, the requirement must to vaccinate human subjects prior to antigenic exposure [5]. Conclusions Helminth infections are a common problem. Presumed effectiveness of drugs is a deficient hypothesis. The available medications are not as effective as once thought. The trials of mass treatment of schoolchildren do not exterminate the source of infection or resolve the community exposure. New medication research is essential, especially for Trichuris. Novel treatments such as vaccines may be on the horizon, but safety concerns for humans with previous exposure is an important immunologic problem. Sanitation is still the most important community solution. The recent disaster in Port-au-Prince, Haiti, demonstrated that without sewer systems and potable water, we humans are indeed a vulnerable species. Author details Richard R. Roach Internal Medicine Department, Western Michigan University School of Medicine, Kalamazoo, MI, USA *Address all correspondence to: richard.roach@med.wmich.edu © 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Mashup Based Content Search Engine for Mobile Devices

A Mashup-based content search engine for mobile devices is proposed. An example of the proposed search engine is implemented with the Yahoo!JAPAN Web SearchAPI, Yahoo!JAPAN Image searchAPI, YouTube Data API, and Amazon Product Advertising API. The retrieved results are also merged and linked to each other. Therefore, the different types of contents can be referred to once an e-learning content is retrieved. The implemented search engine is evaluated with 20 students. The results show its usefulness and effectiveness for e-learning content searches with a variety of content types: images, documents, PDF files, and moving pictures.

INTRODUCTION
Mashup technology is defined as a search engine built with plural different APIs. Mashup has not only the plural APIs but also the following specific features: 1) it enables classification of the contents in concern by using Web 2.0; 2) it may use APIs from different sites; 3) it allows information retrieval from both the client and the server side; 4) it may search contents as arbitrarily structured hybrid contents, i.e., mixed contents formed from the individual contents of different sites; 5) it enables the use of REST, RSS, Atom, etc., which are formed from XML conversions. There are some Mashup tools, such as Yahoo!Pipes (http://pipes.yahoo.com/pipes/) and Microsoft Popfly (http://www.microsoft.com/ja-jp/dev/default.aspx), while there are some services using Mashup technology, such as ChaMap -Enjoy Geo Communication!- (http://chamap.net/), Newsgraphy (http://newsgraphy.com/), and Flowser on Amazon (http://www.flowser.com/). Although Mashup allows content search in the same way as a portal, Mashup has the aforementioned features that differ from a portal. Therefore, Mashup makes it possible to create a more flexible search engine for any purpose of content retrieval. The search system proposed here makes it possible to control the graph displayed in 3D space, with these peculiarities, on Android devices. A Mashup-technology-based search engine for e-learning content retrieval is proposed. An example of the proposed search engine is implemented with the Yahoo!JAPAN Web SearchAPI, Yahoo!JAPAN Image searchAPI, YouTube Data API, and Amazon Product Advertising API. The retrieved results are also merged and linked to each other. Therefore, the different types of contents can be referred to once an e-learning content is retrieved. The implemented search engine is evaluated with 20 students. The results show its usefulness and effectiveness for e-learning content searches with a variety of content types: images, documents, PDF files, and moving pictures. The following section describes the proposed search engine, followed by implementation results and experimental results with 20 students which show the usefulness and effectiveness of the proposed content search engine for mobile devices. Then the conclusion is described together with some discussion of future investigations.

A. Background of the Proposed Search Engine
Figure 1 shows the research background of the proposed search engine (Fig. 1: research background of the proposed mashup-based content search engine). There are seven key components: user involvement, user contribution, arranging by users, reliability, rich experiences, long tail, and distributed systems.
Facebook, Mixi, and other social network systems allow user involvement. Users may contribute to creating a huge database through amazon.com, for instance. The long tail becomes a reality with amazon.com, Rakuten, etc. Not only contents but also rearrangement of the contents can be done by users, through Hatena Bookmark, Nikoniko, etc. Reliable Q&A is available through Wikipedia, Yahoo answer, etc. Also, rich experiences are obtained through e-mail communications with Gmail, for instance. These activities can be done based on Web 2.0 technology utilizing distributed systems. In particular, Mashup technology allows efficient and effective information retrieval for distributed information-providing systems.

B. Content Types and Representation Models for the Content Search
There are several content types for search: documents, moving pictures, images, and other kinds of contents. There are also several representation models for the different search content types: helical, star, star-helical, and star slide models. Figure 2 shows an example of the star slide model representation of content types (blue stars) and search results (smile marks). Namely, the different content types are on the top and the search results are under the content types. With the touch-panel function, the content types can be selected (swipe in the horizontal direction) and the search results can be selected through a swipe in the vertical direction. Under the icons of these five content types, candidate contents are aligned below the icons as search results. In order to show the candidate contents in a 3D representation, the sizes of the icons and the candidate contents are changed depending on their locations. Namely, the icons which are located near the front are displayed with a relatively large size, while those which are located near the back are displayed with a comparatively small size, as shown in Figure 4 (icon sizes are changed depending on their locations). Also, swipe for candidate URLs in the vertical direction and flick for rotation of candidate URLs in the horizontal direction are available, as shown in Figure 5.

C. Examples of Content Search Operations
The icon of the application software, ELDOXEA (e-learning document search engine), appears on the screen as shown in Figure 9. ELDOXEA is proposed for e-learning content search [1]-[5]. In this section, an example of the search operation with the keyword "java" is demonstrated, as shown in Figure 10. First, users have to key in their keyword through the dialog box. After the search results appear on the screen, users then have to select candidates from the search results with swipe and flick operations.
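To make the preceding walkthrough concrete, the sketch below shows how a keyword such as "java" could be fanned out to several content sources and merged into one list tagged by content type, which is the basic Mashup pattern this paper describes. The fetcher functions are hypothetical placeholders; the actual system calls the Yahoo!JAPAN Web/Image search APIs, the YouTube Data API and the Amazon Product Advertising API.

# Minimal sketch of the Mashup pattern: query several sources for one keyword and
# merge the results, keeping the content type for the columns of the UI.
# The fetchers below are placeholders, not the real Yahoo/YouTube/Amazon API calls.
from typing import Callable, Dict, List

def fetch_documents(keyword: str) -> List[dict]:    # placeholder for Yahoo web search
    return [{"title": f"Document about {keyword}", "url": "http://example.com/doc"}]

def fetch_images(keyword: str) -> List[dict]:       # placeholder for Yahoo image search
    return [{"title": f"Image of {keyword}", "url": "http://example.com/img"}]

def fetch_videos(keyword: str) -> List[dict]:       # placeholder for YouTube Data API
    return [{"title": f"Video lecture on {keyword}", "url": "http://example.com/video"}]

def fetch_products(keyword: str) -> List[dict]:     # placeholder for Amazon products
    return [{"title": f"Book on {keyword}", "url": "http://example.com/item"}]

SOURCES: Dict[str, Callable[[str], List[dict]]] = {
    "document": fetch_documents,
    "image": fetch_images,
    "moving picture": fetch_videos,
    "product": fetch_products,
}

def mashup_search(keyword: str) -> List[dict]:
    """Fan the keyword out to every source and merge the results into one list."""
    merged: List[dict] = []
    for content_type, fetch in SOURCES.items():
        for item in fetch(keyword):
            merged.append({"type": content_type, **item})
    return merged

for result in mashup_search("java"):
    print(result["type"], "-", result["title"])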
When the user clicks the icon, a dialog box with a search-start radio button appears (Figure 11(a)), followed by a keyboard screen as shown in Figure 11(b). When the user keys in a keyword, "java" for example, the search-result screen images appear as shown in Figure 12. Examples of swipe and flick operations are shown in Figure 13. Thus users may take a look at the search results freely and easily. After that, when the user clicks one of the candidate icons of the search results, the content of that candidate appears as shown in Figure 14. There are some keywords in the Meta tag of the source code of URLs. By using the number of common keywords in the Meta-tag keyword lists of two URLs, the distance between URLs is defined. Thus the closest URL can be retrieved using the distance between URLs. This is also a specific feature of the Mashup-technology search engine (a minimal sketch of this distance idea is given after the Conclusion below). Figure 16 shows a small portion of the reports from the users (20 students) about ELDOXEA. Although most of those reports are positive (effective search and easy to use), a small number of negative reports say that the merged contents are not so easy to refer to; the link structure has to be clearly shown on the screen. Through the experiments with 20 students who used ELDOXEA for e-learning content search, it is found that the proposed search engine provides e-learning content retrieval comfortably and in a comprehensive manner. On the other hand, there are reports which say that merged contents which are closely related to the retrieved content are not so easy to refer to. This should be solved in the future.

Figure 3 shows five content types as an example: YouTube for moving pictures, Amazon.com for products, Yahoo for documents, Yahoo search for images, and Yahoo for Web searches. Away3D on Android is used for the 3D representation of icons, while the APIs shown in Figure 6 are used for the Mashup. These are aligned in a pentagonal shape as shown in Figure 7. Meanwhile, an example of icon movement through swipe for the case of Yahoo search is shown in Figure 8; the candidate URL icon is selected with a swipe (up and down operations).

III. EXPERIMENTS
20 students in the laboratory of the Department of Information Science, Saga University, participated in the experiments using the developed ELDOXEA. More than 30 reports were then sent, as shown in Figure 16.

Figure captions: Fig. 1. Research background of the proposed mashup-based content search engine. Fig. 2. Content type and search result selections by the star slide model of representation. Fig. 3. Five different content types are available for search. Fig. 4. Icon sizes are changed depending on their locations. Fig. 5. Swipe for candidate URLs in vertical direction and flick for rotation of candidate URLs in horizontal direction. Fig. 6. APIs used for mashup. Fig. 9. Example of display image of a smartphone on which the proposed search engine is implemented (github.com/legnoh/ledoxea). Fig. 15. Example of linked retrieved results (merged contents of Yahoo search results with the other retrieved contents). Fig. 16. Small portion of the reports from the users (20 students) about ELDOXEA.
IV. CONCLUSION
A Mashup-based content search engine for mobile devices has been proposed. An example of the proposed search engine is implemented with the Yahoo!JAPAN Web SearchAPI, Yahoo!JAPAN Image searchAPI, YouTube Data API, and Amazon Product Advertising API. The retrieved results are also merged and linked to each other. Therefore, the different types of contents can be referred to once an e-learning content is retrieved. The implemented search engine is evaluated with 20 students. The results show its usefulness and effectiveness for e-learning content searches with a variety of content types: images, documents, PDF files, and moving pictures.
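As a closing illustration of the URL-linking mechanism mentioned above, the distance between URLs defined through the number of common Meta-tag keywords, here is a minimal sketch. It is my own formulation of that idea, not the paper's code, and the keyword sets and URLs are hypothetical examples.

# Minimal sketch of the Meta-tag keyword distance: URLs that share more Meta-tag
# keywords are treated as closer, so the closest related content can be linked
# to a retrieved result. Keyword sets below are hypothetical examples.
def keyword_distance(keywords_a: set, keywords_b: set) -> float:
    """Fewer shared Meta-tag keywords -> larger distance (any decreasing function works)."""
    shared = len(keywords_a & keywords_b)
    return 1.0 / (1 + shared)

retrieved = {"java", "tutorial", "e-learning"}          # keywords of the retrieved content
candidates = {
    "http://example.com/java-tutorial": {"java", "programming", "tutorial", "oop"},
    "http://example.com/java-video":    {"java", "video", "lecture"},
    "http://example.com/cooking":       {"recipe", "kitchen"},
}

closest = min(candidates, key=lambda url: keyword_distance(retrieved, candidates[url]))
print("closest related content:", closest)   # the tutorial page, which shares the most keywords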
SOLVING THE PROBLEM OF SMOKING IN THE POLISH ENTERPRISES DURING 2003 – 2015 Objectives: Tobacco smoking is a major organizational, economical, and public relations-related (PR-related) problem for the company. Moreover, it is an important health determinant of the working population. The paper reports the results of the research which analyzed the current state and the tobacco control activities’ transformations undertaken by Polish employers between 2003 and 2015. Material and Methods: Data comes from the research performed in 2006, 2010 and 2015, involving random-selected representative samples of Polish enterprises, employing no fewer than 50 employees. The sampling pattern covered location and classification of activities (excluding public administration, national defense, social security, education, health care and social assistance sectors). Consecutive interviews were conducted with representatives of 611, 1002 and 1000 companies, respectively. Results: The companies improved their compliance with the national regulations on smoking in the workplace. The strategy for limiting smoking in public places resulted in a steady increase in the number of companies (11%, 23%, 38%, respectively) that introduced smoking ban. Approximately in every second company, smoking was allowed only in the smoking room or outdoors. Voluntary activities (e.g., education and support for employees wishing to cease smoking) were very rarely undertaken by medium and large companies (several percent) and since 2010, when the law had become more restrictive, such tendency reinforced. Employers also were seldom interested in the prevalence of tobacco smoking among their personnel, its consequences for the company’s functioning and the effectiveness of the implemented tobacco control measures. Conclusions: National anti-smoking policy caused that companies were more focused on smoking-bans at the expense of education and support for those who wanted to cease smoking. Although this contributes to reducing secondary smoking in the workplace, the companies’ potential to become a major agent for tobacco control policies is neglected while the downward trend of smoking in the Polish society has slowed down. Int J Occup Med Environ Health 2018;31(3):261 – 280 INTRODUCTION Recent data shows that, despite systematic reduction of the prevalence of smoking (although, unfortunately, in recent years the reduction has been slowing down) every 4th citizen of our country (about every 5th woman and every 3rd man) is a smoker [1].Tobacco smoking is detrimental to the individuals and the society both from health and economic perspective. 
It is assumed that, comparing to other behavioral factors, health risks of tobacco smoking are better scientifically documented (smoking contributes, e.g., to the development of ischemic heart disease, cerebrovascular disease, infections of the lower respiratory tract, chronic obstructive pulmonary disease (COPD), cancers of the trachea, bronchi, lungs, and tuberculosis), and are also considered to be a factor increasing the likelihood of disability and ber, 2006) [9].These documents instruct, inter alia, the Member States how to protect people from the exposure to tobacco smoke, how to expand the areas covered by the prohibition of smoking, including public places, how to monitor the market of tobacco products, how to control the content of the ingredients in products of this type, how to warn against the adverse effects of smoking through information provided on tobacco product packaging, how to reduce advertising and promotion of tobacco products [9].Both above mentioned approaches touch upon the problem of reducing the prevalence of tobacco smoking, in the so-called the sphere of work, that is time and place of work.Particular attention is paid to that dilemma in the health promotion approach.The settings-based approach stresses the importance of taking steps to promote health, including tobacco control in environments in which we live (e.g., school, workplace, local community), with due respect to their specific (social, organizational, financial) character, while maintaining the rights of members of a given milieu to assess the situation, propose the desired changes in the region and participate in their implementation and evaluation [10].According to such an approach, health promotion emphasizes the role of work environment as an important sphere of the adult population functioning, in which it is possible to minimize the number of new smokers, or better cope with nicotine addiction through the development and implementation of workplace-based tobacco-control programs and internal policies [11].Workplace health promotion programs reach those people who are not reachable through other community programs (non-responders, not interested, denying).As far as smoking is concerned workplace health promotion measures offer possibility to protect non-smokers from involuntary smoking, to offer support for those who would like to quit, and minimize the number of new smokers although it is well known that majority of people had begun to smoke before they were 20 [12]. 
premature death [2,3].Smoking is also expensive.A statistical smoker in Poland spends almost PLN 2000 per year on cigarettes.This means that the purchasing power of a smoker's family is weaker, e.g., in terms of spending on food, but also on culture and education [4].In the macro scale, the smoking epidemic translates into higher rates of morbidity and mortality (especially due to cancer and cardiovascular diseases) [5] and socio-economic costs, such as burden on the health care system caused by treatment of tobacco-related diseases (e.g., in Poland it is about PLN 18 billion per year) [4].It has been assessed that in Poland in 2 decades the socio-economic costs connected with second-hand smoking will account to PLN 135 billion whereas the costs of treatment of diseases caused by second-hand smoking is estimated at PLN 22 billion [6].Thus, reduction of tobacco use is considered to be a major challenge for public health policies in the European Union (EU) countries as well as in the European Union as a whole.Activities to achieve this goal represent 2 analytical approaches.It focuses on convincing people, who are free to make their own choices, about the adverse consequences of tobacco smoking, motivate them not to start, or to quit smoking and offer them a possible comprehensive support for the implementation of the style eliminating tobacco smoke from their life environment [7,8].The second approach concentrates on the idea of reducing tobacco consumption by implementing the restrictions on smoking, enacted as part of the tobacco-control regulations (in public, administrative, criminal, civil, and labor law).This approach is increasingly denying the idea of individual freedom in the use of tobacco products, and the rationale behind the denial includes adverse effects of tobacco smoking on public health, economic and social facets of human life.The essence of today's anti-tobacco strategies is reflected in the Framework Convention of the World Health Organization on Tobacco Control of 2003 and its Guidelines (in Poland, it has become effective since 14 Decem-would be most desirable.In Poland, however, the stress is rather on the second approach.As mentioned above, since the effective date of the Law on the protection of health against the use of tobacco and tobacco products [15] (hereinafter referred to as the Law on Tobacco Control), enterprises have been among the places where smoking is restricted.Originally (due to ambiguities in the wording of the Law), according to some people, companies employing more than 20 workers were obliged to provide separate rooms for smoking [15,16].Currently, as provided by the amendment of April 2010 to the Regulation quoted above, the employer can independently decide whether to introduce indoor workplace smoking ban or possibly allow for smoking in rooms designed specifically for this purpose.The measures for health promotion in the workplace are encouraged rather declaratively (e.g., the Position of the Labor Protection Council of the Polish Parliament of 14 November 2006 [17] has never been implemented; the same is true about the Rantanen report of 2012 [18]).The information and education activities addressed to employers and managers, e.g., those justifying the advisability and teaching methods of effective methodologies for comprehensive programs of health promotion implementation and, above all, supporting their implementation (e.g., fiscal measures, organizational, financial support incentives establishing some sort of a partnership between the 
company and the state to improve the health of the working population) have been neglected.What is more, systemic mechanisms to support such business activity, as part of which programs designed to eliminate tobacco smoke from companies could be implemented, have not been developed.This approach is illustrated by the document of the Minister of Health of 2013 [17] summing up the activities for the implementation of the project intended to reduce adverse health effects of smoking in Poland during 1997-2013.With regard to the workplaces, those activities were limited to the information on the prohibition of smoking on their premises and regulations concerning the supervision It needs stressing that it is much easier to access employed (than unemployed) people (simply because they are present together at the same place and time); this is true in particular when it comes to the young people who are reluctant to participate in population-targeted health-promotion actions or campaigns.Furthermore, workplace-based health promotion campaigns are more easily implemented and less expensive (e.g., owing to the use of the existing training infrastructure, a company's management, training, safety, and internal interpersonal relations social dynamics experts, including support mechanisms and social control specialists).On the other hand, well-designed tobacco-control projects are beneficial to the companies implementing them, by contributing to the reduction of losses: -economic (resulting, e.g., from the deterioration of the quality of products or services, reduced effective working time due to cigarette breaks, increased absenteeism of smokers, increase in the cost of fire protection, maintenance, heating, maintenance of premises and equipment), -social (e.g., lower ability of smokers to perform their duties, smoking-related conflicts), -reputation [13].In the other approach, a workplace is primarily a public place, where smoking ban should be obeyed and in the event of non-compliance employee should be subject to penalties (e.g., in Poland, since 1996, companies are obliged to protect the health of non-smoking workers from tobacco smoke in public enterprises, and individual employees violating that regulation are currently at a risk of a fine in the amount of PLN 500).The supporters of that regulation emphasize that such a regulation causes that individuals do not smoke or smoke less, and thus reduce the environmental tobacco smoke in the workplace, which, for example, in Poland affects 34% of employees or 4.3 million people [14].As for the practice of tobacco-control measures, complementary combination of these 2 types of interactions brochures, presentations, infographics), and consulted the implementation of health-related projects in more than 200 companies.The Centre also tries to encourage decision-makers to be more involved in the process.Unfortunately, the achieved success is disproportionate to the needs.When it comes to decision-makers, the winning option is to attempt tobacco control through restrictive regulations.Consequently, there is no funding for social marketing addressed to employers and managers, motivating them to undertake non-compulsory measures to promote health, including tobacco control, and strengthen their determination to find and make use of systems that would help them in the implementation of those measures.One of the projects implemented by the team of the National Centre for Workplace Health Promotion involves monitoring the activity of medium-sized and large 
enterprises in Poland in terms of solving the problem of smoking among their personnel.The purpose of this paper is to present the trends in that respect over the last 12 years.In line with the above mentioned external circumstances of such commitment, attention will be paid to the extent to which employers respect the regulations restricting smoking and to the non-compulsory measures undertaken by the employers, including educational support for nonsmoking among their personnel.Thus, the diagnosis based on the results of our research shall contribute to the process of assessing the impact of national legal regulations, as a method of coping with the problem of exposure to tobacco smoke among the working population, including a reduction in the prevalence of nicotine addiction.In addition, it will be crucial for improving the quality of activities including dissemination of the methodology for health promotion programs in the workplace, with focus on the problem of smoking, which is another method of achieving the purposes outlined above.Such a general overview on how workplaces cope with the problem of smoking also provides some useful knowledge to profes-of the implementation of the project.The educational activities were addressed to the general public (e.g., in connection with the celebration of the World No Tobacco Day or the World Day of Smoking Cessation), and the most effective ones were said to be those addressed to children aged 5-16 years old.As to the employees, the information projects designed for officers and employees of the Prison Service, the Police, the State Fire Service, the Border Guard and employees of the Government Protection Bureau may be quoted as praiseworthy exceptions.Moreover, such selective activities were continued during 2004-2018 only for workers of institutions subordinated to the Ministries of: Justice, National Defense, and Home Affairs.These institutions are scheduled to be supported in their attempts to entirely protect their personnel from tobacco smoke in the context of their efforts to ensure safe and healthy working conditions and provide tobacco-control education [19].Hence, there is no strategy that would involve all our employers in reducing the prevalence of smoking.The span of efforts to prompt companies in Poland to implement healthy lifestyles, including reduction of tobacco consumption, are rather limited due to narrow support from public funds, mainly by the National Centre for Workplace Health Promotion, Nofer Institute of Occupational Medicine, Łódź.Among other things, in 1996 and 1998, the Centre issued the country's first guidebook presenting a model methodology for tobacco control activities in the workplace, during the period, 1991 to 2001, carried out the project "Smoke-free Workplace" and, among others, sent information packages on-site programs about the problems of anti-smoking in 9000 companies [20].In the years 2012-2014 in the framework of the "Prophylactic programme to prevent addiction to alcohol, tobacco and other drugs," co-financed by the Swiss program of cooperation with the new EU member states, the Centre prepared and published (this time also in the Internet) another package of educational materials (the guide, The first survey covered 611 enterprises, the second one -1002 enterprises, and the third one -1000 enterprises.The first survey was carried out through the classic interview questionnaire, the next ones -through standardized computer-aided telephone interviews (CATI).A company (an enterprise, factory) was 
the survey unit, and a randomly selected single member of a company's executive personnel (e.g., from the health and safety department, human resources, or the management board) was interviewed. Because in each edition of the survey the interviews were conducted in representative samples of companies, the scale of and reasons for refusal to take part in the surveys were not analyzed. Questionnaire sheets (the basic version and its subsequent versions with minor modifications) had been developed by the National Centre for Workplace Health Promotion at the Nofer Institute of Occupational Medicine, Łódź. Field surveys were performed by professional survey companies (PBS DGA Co. Ltd. from Sopot, PBS Obserwator from Krakow, and the Biostat Group from Rybnik). Because the analysis was intended to indicate trends rather than to provide a detailed comparison between the results of the consecutive releases of the survey, it was assumed that some of the differences between the individual stages (due to, e.g., modifications of how the interview was conducted, or the fact that the data was collected by different research centers) were negligible. Table 1 characterizes the companies taking part in the 3 surveys in terms of their ownership, economic condition and number of employees. Empirical material from the 3 studies was used as the basis of the diagnosis concerning: -regulations on smoking at work adopted by companies; -the extent to which those regulations are obeyed by the personnel and supervision by the management of the observance of the regulations, including the application of penalties for non-compliance; -non-compulsory tobacco-control measures implemented by the company; sionals offering specific services to employers associated with tobacco control (including those belonging to occupational health services) and internal structures dealing with health management in companies.
MATERIAL AND METHODS The article presents the results of the research conducted in the last months of 2006, 2010 and 2015. They diagnose the extent and method of engaging companies in Poland in tobacco control activities in the last 2-3 years prior to the moment of data collection. Thus, it seems reasonable to assume that they present the situation over a period of 12 years. Such a choice of the consecutive releases of the survey allows for a comparison of what happened in the first period of the Act of 9 October 1995 on the protection of health from the effects of use of tobacco and tobacco products [15] with what occurred in connection with its "April amendment" [21] (which, among others, expanded the areas covered by the prohibition of smoking to include most public places, unified sanctions for non-compliance, and tightened the rules on advertising of tobacco products), and with the state of affairs 5 years after its introduction, before another amendment, passed in 2016 (in connection with the EU directive on tobacco) [22]. The companies were recruited at random as a representative sample of all Polish enterprises employing over 50 employees. The stratified sampling scheme took into account the location (all provinces), type of activity (Statistical Classification of Economic Activities in the European Community - NACE section, earlier according to NACE 2002 [23], then according to the ordinance of the Council of Ministers dated 24 December 2007 on the Polish Classification of Economic Activities [24], excluding enterprises belonging to public administration and defense, mandatory social security, education, health care and social assistance), and the number of employees (in increments: 50-249, 250-999, 1000 and more). pany, the attitudes to e-cigarettes (data from the last survey only), were analyzed. It was also tested whether there was a connection between the phenomena analyzed above and the number of employees in the company or its financial condition. The statistical analysis was conducted with the use of the Statistica computer program. Statistical dependence was measured with Pearson's chi-squared (χ²) test and was considered statistically significant at p < 0.05. All relationships between the variables presented in the paper are statistically significant. Restrictions on smoking at work In the first analyzed period (2006 survey), a total ban on smoking during working hours, irrespective of the work--the quality of companies' projects from the point of view of standard methodologies for the promotion of smoking cessation among employees. The focus was on issues such as: knowledge of the scale of the cigarette consumption problem among the personnel, analysis of the effects of that phenomenon on the company's functioning, the level of employee participation in the development of occupational regulations on smoking, the presence of any internal documents relating to solving the problem of smoking, and the evaluation of anti-smoking activities; -in addition, selected parameters of the overall social climate related to the problem of smoking in the company, such as groups interested in limiting/eliminating smoking at work, smoking-related conflicts in the com- had equipped with ashtrays, e.g., in the corridors, changing rooms, in 3% there had been no relevant regulations).
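The dependence testing described in the Methods (Pearson's chi-squared statistic with a p < 0.05 threshold, computed in the Statistica package) can be illustrated with a minimal sketch. The sketch below uses Python with SciPy instead of Statistica, and the contingency counts for company size versus the presence of a total smoking ban are purely hypothetical, not figures from the surveys.

```python
# Minimal sketch of the dependence test described in the Methods:
# Pearson's chi-squared test of independence at the p < 0.05 level.
# The counts below are hypothetical and serve only to illustrate the procedure.
from scipy.stats import chi2_contingency

# Rows: company size strata used in the sampling scheme (50-249, 250-999, 1000+)
# Columns: total smoking ban in place vs. no total ban (hypothetical counts)
contingency = [
    [120, 380],   # 50-249 employees
    [90, 210],    # 250-999 employees
    [60, 80],     # 1000 and more employees
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Company size and the presence of a total ban are statistically dependent.")
else:
    print("No statistically significant dependence at the 0.05 level.")
```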
When it comes to compliance with the internal rules on smoking by the personnel, according to the data for 2006 and 2010, it was at a very unsatisfactory level. The generally desirable state prevailed in only every second medium-sized and large enterprise. However, around 2015, there was a major improvement. The personnel of 3 quarters of companies behaved properly, and the regulations were disobeyed only in every 50th company. It should be noted, however, that still in every 5th company, although the situation was satisfactory, there was a need for further disciplining of employees in that respect (Table 2). A similar situation occurred in the case of supervision by superiors of compliance with the relevant company regulations on tobacco smoking. This is illustrated in Table 3. Their involvement apparently increased in that respect around 2010, when 4 out of 5 of them tried to consistently fulfil that duty, while in the previous years the respective proportion had been a half or nearly 2 out of 3. Unfortunately, still every tenth company needed further improvements. As to the application of penalties for disobeying the rules on tobacco control at work, they were more often reprimands than fines (Table 4). The use of warnings and reprimands peaked in the 2010 survey. Such a situation prevailed in over 40% of companies, while in the earlier and the later survey the same was true er's location, was introduced by approximately every 10th company; in about every 4th company such a ban was valid within the premises of the company (with the option of smoking only outside); in nearly 1/3 of companies, smoking was allowed in smoking rooms designed for that particular purpose. A similar fraction of the companies allowed smoking in designated places equipped with ashtrays (i.e., in the corridors, staircase). In addition, more than every 10th company did not apply any regulations. When it comes to 2010, just after the Law on Tobacco Control became more restrictive, a total ban on smoking at work had been introduced by 23% of companies, 54% had allowed smoking in a smoking room and outside the building(s), and 23% had regulations incompatible with the law (e.g., smoking was permissible in the corridors), or had not had any relevant regulation.
A few years after the Act [15], i.e., around 2015, a total ban on smoking at work, indoors and outside of the company buildings, had been introduced by 38% of the companies, while a total ban on smoking on the premises of the company, combined with the possibility of smoking outdoors, had been introduced in 30% of the companies. Twenty-three percent of the companies had adopted regulations whereby tobacco smoking had been permitted only in closed smoking rooms and outdoors. Eight percent of the companies reported that they had not met the requirements of the tobacco control act (5% had permitted smoking in marked places, training) were implemented in the reported period by at most every 4th company and, on average, by every 5th company, hence rather rarely. Their popularity was the greatest around 2010. Afterwards the popularity decreased, to only 3% and 1% in the years preceding and in 2015, especially for the distribution of guides to help people cease smoking and for training courses. The phenomenon of involving families in tobacco control activities was practically nonexistent, and bonuses for not smoking at work were very rare. The extent of activities intended to assist employees in ceasing their addiction to nicotine was also very small. On average, only every 16th company decided to respond to such a challenge. In addition, there was a downward trend in each of the forms of such support reported in the survey. In fact, companies did not actually undertake this kind of activity. They almost completely failed to encourage their employees to participate in nationwide smoking-cessation for 1 company in 4. The frequency of disciplining by means of fines remained at a similar level and was about every 10th company in the analyzed period. Extra-obligatory tobacco control activities in the workplace Table 5 shows the frequency of measures undertaken by small and medium enterprises in our country other than those prescribed by the tobacco control law. It includes 2 basic types: educational measures, namely those aimed at increasing the knowledge of the personnel about the phenomenon of smoking and at motivating non-smoking, and support for those employees who wish to cease smoking. In addition, it illustrates the use of the company's activities to increase the likelihood of recruiting non-smokers. Classical educational activities addressed to the general personnel (i.e., distribution of leaflets, posters, guides, implementation of 2 key areas of such initial diagnosis, namely, the prevalence of smoking among personnel and its implications for the development of the company. As far as monitoring of the prevalence of smoking in firms is concerned, it has become evident from the results of the study performed in 2006 that 17% of companies had general data on the number of employees who smoked, while only 0.5% of companies had more detailed information, e.g., on the intensity of smoking. In 2010 and during the preceding 2-3 years, every 3rd company had such gen-campaigns, previously supported by every 10th company. The preference for non-smokers applying for a job in the company was also marginal.
Quality of tobacco control projects implemented in the workplace From the companies' point of view upon the standard methodologies of banning tobacco smoke, a crucial issue is a diagnosis that allows to determine the needs and challenges in this regard.The accessible studies examined the Encouraging the employees to participate in tobacco-control events (such as, e.g., "The day without a cigarette" or "Quit smoking with us") Advertising of tobacco-control therapies accessible outside the workplace a The percentage shares do not add up to 100 because the companies showed all their activities."-" -the activity was not included in the survey. ing those decisions in companies' internal documents, in the 1st, 2nd and 3rd release of the reported research, the proportions of the companies that formalized tobaccocontrol measures were 54%, 68% and 65%, respectively (while the proportions of the companies which did not do it were 46%, 27% and 25%, respectively).Evaluation of measures taken to minimize the problem of tobacco use by employees was an important element motivating and enabling companies to improve those measures.The reported study focused on the assessment of the implemented regulations consequences, and other occupational activities of tobacco control in the company.Around 2006, as much as 93% of the companies did not attempt such evaluation, while around 2010 and 2015, the percentage shares of companies that did not attempt such activity were 71% and 61%, respectively.Thus, the resultant data confirmed a frequent tendency to neglect such assessments.Although this neglect was no longer as widespread as in 2006, still close to 2/3 of companies did not observe what was the result of the internal measures taken to control tobacco smoking among the company's personnel. Overall social attitudes towards the problem of smoking in the company The diagnosis of this phenomenon focused on finding companies that were supporters of regulations and other measures relating to smoking, whether there were smoking-related conflicts and, in the recent survey, what the attitude to the use of e-cigarettes by workers was. As regards the issue of fostering regulations and activities within the company which related to smoking, the data on this subject is provided in the Table 6. In each release of the survey, the company management was most frequently indicated as the supporter of tobacco control followed, at a considerably lower frequency, by the personnel of health, safety and personal resources departments.The 3rd supporter group consisted of non-eral information, and 2% had additional data.In 2015, the situation was the same, the corresponding numbers being 34% and 3%, respectively.As for the analysis of the effects of tobacco smoking by employees on the functioning of the company, our earliest study showed that only 5% of all surveyed companies tackled that issue, while in the next release of our survey the corresponding proportions were 13% and the last 31%, respectively.Therefore, this problem was being more and more often perceived, although 2/3 of medium-sized and large companies in Poland were still ignorant to the prevalence of smoking among their personnel and its consequences for the functioning of the company. 
Other important qualitative determinants of activities for tobacco control include, on the one hand, the necessity to establish partnership and conciliatory relations with the employees and, on the other hand, present the adopted tobacco control strategies in the form of an internal document, thus making it easier for the personnel to become familiar with and obey them. The extent of consulting with the employees the decision on smoking control in the consecutive periods was as follows.According to the data for 2006, only 15% of companies asked for the opinion of all employees and nearly 1 out of 5 consulted them with some of their representatives/organizations.As much as a half of the companies introduced tobacco control without such consultations.Around 2010, approximately every 4th company consulted those issues with all their employees, and every 3rd consulted on the issues with the representatives of the personnel.Every 3rd company also implemented tobacco control measures without prior discussion with the personnel.According to the 2015 data, 61% of companies agreed implementation of anti-tobacco regulations with all employees, 14% -with their representatives/associations, and less than in every 5th this was done without any such procedures.Regarding the formalization of the decisions on tobacco control in the workplace by includ-Contrary to the popular belief, smoking was not a problem that would give rise to any discernible company-wide conflicts, both when it comes to relationships among employees or between superiors and subordinates.The respondents representing 93%, 75% and 97% of companies (for 2006, 2010 and 2015, respectively) did not mention conflicts between smoking and non-smoking employees and 92%, 87% and 96% when it comes to conflicts between management and personnel.It seems that some intensification of conflicts occurred during the implementation of the April amendment to the Law on Tobacco Control [21], which took place in the end of 2010, both between management and personnel as well as between the smoker and non-smoker employees.Perhaps they were granted by the employer (the owner or operator) the right to decide whether to introduce a total ban on smoking or continue smoking as permissible in smoking rooms or off the premises of the company.As you can see, the situation calmed down and after 5 years, the conflicts were only of marginal significance.An interesting problem was companies' attitude to e-cigarettes.They appeared in Poland in 2006 and evidently became popular in 2008 and 2009.It seems reasonable to assume that e-cigarettes were used by a few percent smoker employees.The data proved that neither active smokers did actively search for a company's support to cease smoking nor trade unions showed interest in the problem.It appeared that more stringent stance on tobacco smoking, as reflected by the changes made to the Law on Tobacco Control in 2010 [21], unavoidably expanded the activity of management in the coming years to favor taking firm decisions in this regard.This is illustrated by almost 30% rise in the frequency of indicating that group in the period 2010-2015 as the supporters of solving the problem of smoking.It was, in a quite natural way, accompanied by a decreasing activity of professionals of other departments dealing with the health of employees, and non-smokers.It seems quite reasonable to assume that the "health-involved" professionals merely executed the decisions taken by the company's management, while the non-smokers were simply 
"reaping the rewards."Interestingly, only small percentage shares of companies reported their lack of allies for anti-tobacco activities among their personnel.The increase in such behavior to a level of approximately every 10th company occurred when the more restrictive law was implemented, which was shown in the 2010 survey, and then dropped to 3% according to the data for 2015.implemented extra-statutory tobacco control measures.In contrast, smaller companies were more ignorant about the prevalence of smoking among their personnel, less frequently consulted tobacco-control regulations with all their employees and more frequently implemented the regulations arbitrarily without any consultations, had no records on that subject in the internal documents and introduced a total ban on smoking at work, or made smoking permissible only outside of their premises.Findings from 2015 showed that better economic conditions favored consulting of introduced anti-tobacco regulations with the personnel, recording those regulations in the internal documents and implementing them 2-3 years preceding the survey.As to the company size, the companies employing over 501 people usually introduced a total ban on smoking at work and less consistently monitored the compliance with the adopted tobacco control rules.Smaller companies, just as before, better knew the prevalence of smoking among their personnel and more often consulted measures to be undertaken on tobacco control with all employees. Summary The data showed that over the years 2003-2015, there was a significant change in the companies' attitudes towards implementing relevant national smoking-control regulations.At the beginning of that period, nearly a half of the companies did not obey those regulations (acted unlawfully or were not at all interested in this issue), while at the end of that period only about 1 in 10 reported such disobedience.However, there was a consistent increase in the proportion of companies implementing a total ban on smoking at work (11%, 23%, 38%).Around 2010, it was also fairly common to isolate the smoker from the non-smoker employees, not only by providing a smoking room, but also by bringing the smokers out, to the outside of the company's buildings.According to the 2010 data, such measure was taken by a half (3-7% according to various data) of the population aged 15 years old and older [25].They have their detractors (among people opting for the complete eradication of nicotine dependence) and supporters (who argue that they are effective in reducing adverse effects of tobacco smoking).The study conducted in the end of 2015 undertook the problem of the company's attitudes towards e-cigarettes.The results showed that as many as 71% of companies were not interested in whether employees used ecigarettes or not, 13% of the companies limited their use, and 1% of the companies completely banned their use.Every 16th company promoted e-cigarettes as a healthier form of smoking.Nearly every 10th company experienced difficulty in taking a position in this regard. 
Size and condition of the company and its tobacco control activity In 2006 and during the preceding 2-3 years, the economic condition of companies exerted a statistically significant effect on the level of compliance by employees to the rules on smoking control adopted by their companies, and on the enforcement of those rules by the management.In companies with weak financial situation, the results were poorer.In addition, that type of companies experienced more frequent smoking-related conflicts between personnel and management.When it comes to the size of the companies, in those employing up to 100 employees, tobacco-control consultations with the whole personnel were most frequent, while similar consultations were least frequent in the largest companies.On the other hand, the larger the company the more likely the tobacco-control regulations were recorded in the internal documents. According to the data from 2010, there was no relationship between the economic condition of the companies and their attitude to smoking.On the other hand, it appeared that the larger the company, the more often it conferred formal status upon the internal regulations on the problem of smoking, organized smoking rooms and e.g., by promotion of tobacco-control therapy or provision of individual medical advice) in 2010, there was complete regression in this regard. As to the general quality of the approach of the studied companies towards solving the problem of smoking during working hours, it is far from optimum in the light of model methodologies.In particular, the companies lacked information about how a big challenge it was in their instance. The data indicated that companies showed little interest in the scale of smoking among their employees.In 2006 and during the 2-3 preceding years, only every 5th company had data on this subject.At the time when tobacco control rules were tighter, there was some improvement in this regard, but still about 2/3 of the companies were not interested in the extent of this phenomenon.There was, however, a clearly growing interest in the impact of tobacco smoking by the personnel on the company's efficiency.By the 2006 survey, the problem was considered only by every 20th company, while currently it is studied by every 3rd company.However, this index is still very low.The situation is better when it comes to taking into account the opinion of a company's personnel on the adopted tobacco control measures.While at the beginning of the analyzed period, such opinions were collected from all employees or their representatives only by every third company, around 2010 the opinions were collected by more than a half, and currently they are collected by 3 quarters of the companies.This is a positive phenomenon, the more so that there is a growth in the percentage share of companies that declare consultations with each employee.However, we must remember that the final interpretation of this fact is very dependent on the extent and manner in which those consultations are performed, but, unfortunately, no such data is accessible.Moreover, according to the 2010 data, it has become a little more frequent than before to formalize decisions on tobacco control by including them in the internal documents (ordinances, regulations).Since then, the situation of the enterprises, for 2015 -the corresponding number was 1 out of 4, but at the same time almost 1 company out of 3 permitted smoking only outdoors.In the event that companies could also offer their own assessment (as it was the case during 
the 2015 interview) of what changes had occurred in connection with the amendment of the 2010 regulations on smoking in public places, including workplaces, it turned out that only 7% of them considered that the situation remained the same, while 89% of respondents claimed that smoking had become limited. Fewer than 3% indicated that the smokers were more at ease. A similar fraction of respondents were not able to provide any answer to such a question. As to obeying - by the personnel - the tobacco-control regulations currently in force in their companies, the implementation of the more stringent law resulted in an improvement, but still every 4th company should strive to improve in that respect. The company management also tended to more consistently monitor the level of compliance by the employees with the tobacco-control regulations, and this trend was intensified in 2010-2015. But the problem was not satisfactorily solved in about every 10th company. Penalties, as a method of dealing with it, were mostly used by companies around 2010. This was true of more than a half of them. According to the findings of 2015, penalties were applied in every third company, while warnings and reprimands were twice as frequent as fines. Voluntary tobacco control actions were taken very rarely by medium and large companies in our country (a few percent, or exceptionally a dozen or so percent). After a slight increase in the interest in educational activities around 2010, the companies practically did not undertake them. (This is particularly true of tobacco control training and providing smoking-cessation guides to the personnel.) A bonus paid for not smoking at work was an extremely rare practice. Active support for employees wishing to cease smoking was also neglected. After a slight increase in the interest in their problem (manifested, companies are not interested in how they are used by the personnel. It turns out, however, that if this issue is recognized, the companies tend rather to restrict their use (every 7th company) than regard e-cigarette use as an advisable method for limiting traditional smoking (every 16th company). Companies' economic condition and size do not significantly affect their attitude to the problem of smoking. A better financial situation in the first and the last of the analyzed periods favored workers' compliance with the new tobacco-control legislation, and currently it also favors consulting tobacco-control measures with employees and including them in the internal documents. In small companies, it was more usual to agree the tobacco-control measures with the employees rather than to include them in formalized internal regulations.
DISCUSSION The findings cannot be related to similar diagnoses of this kind in our country because such research has not been performed.Some external view of the situation, but limited solely to obeying by companies the regulations on tobacco control, may be obtained from the Chief Sanitary Inspectorate (Główny Inspektorat Sanitarny -GIS) monitoring data.They show a better situation in that respect than that portrayed by the results of the survey reported above.According to the 2014 GIS data, only 0.1% of the 70 258 inspected companies have not implemented the law [26].However, according to the 2015 internal evaluation of representatives of companies, 8% of companies do not meet the requirements of the relevant law (in 5% of the monitored companies smoking is permissible in specifically designated places equipped with ashtrays, e.g., in the corridors, changing rooms, while in 3% of the companies, smoking is not regulated at all), in as many as every 4th company the relevant regulations are not fully obeyed by the employees, and nearly in every 5th company, the management has problems with consistency in the supervision is similar; currently, 2/3 of companies behave like that. Unfortunately, the assessment of the effects of tobaccocontrol measures continues to represent a serious problem.A significant proportion (about 2/3) of companies do not have the habit of analysis.Given the small number of those that carry out activities other than those arising from the statutory provisions, it is reasonable to suppose that it is the effect of laws restricting smoking at work that are predominantly disregarded in the analysis, which means that the tobacco-control measures resulting from current legal regulations are implemented fairly mechanically. When it comes to a general social climate about the phenomenon of smoking during work, it has become evident that the advocates of the implementation of measures to reduce smoking include primarily managements of the companies, followed by departments responsible for safety, health and human resources and the non-smoking personnel of the company.Trade unions seldom become involved in that sphere, and instances of smoker employees seeking for smoking cessation support are extremely rare.After 5 years following the moment when the tobacco control law had been made more restrictive, company managements not only have been seen most frequently in such a role, but increasingly have outrun in that respect other individuals and groups among the personnel.At the same time, the resistance to measures undertaken to control tobacco smoking in the workplace becomes weaker.This is confirmed by findings on conflicts around the problem of smoking in the workplace.According to the 2006 and 2015 data, the overwhelming majority of representatives of the companies did not identify such phenomena among its personnel.Only in the 2010 study, i.e., during the implementation of the stringent restrictions on tobacco smoking, every 7th company experienced tensions between the personnel and the management, and every 4th -between smoking and non-smoking employees. As for e-cigarettes, their emergence and spread is not as yet a problem for companies.Nearly 3 quarters of the felt that appropriate adjustments had been made to the Law on Tobacco Control [29] while in 2012 up to 84% supported, e.g., the ban on smoking in public places, while the number of opponents of these measures decreased [30]. 
As for the ban on smoking at workplaces, only the findings for the earlier period are accessible, but they also show a similar trend, because only 27% accepted it during late 1990's, while in 2009 it was accepted by as much as 69% of the members of our society [31]. It also seems interesting to refer the situation in our country in the field of tobacco control activity of companies employing at least 50 employees to the data coming from the U.S. Provisions of tobacco control law [32,33] in that country are in fact more liberal (in most of the municipalities, legal regulations that exercise control over the laws of the companies located within their premises have not implemented any restrictions on smoking).According to the data from the states of Texas and Washington [32,33] illustrating this particular situation from the perspective of a representative sample of members of the Society for Human Resource Management (suitably collected in 2013,2008,2015), despite no legal requirements, up to 77-85% of companies have their internal tobacco control policies, but smoking bans have been introduced on a smaller scale than in our country.For example, in Texas, only in a half of companies included in the survey, smoking is restricted to designated sites, and regulations completely banning tobacco from indoor spaces and indoor/outdoor areas have been adopted only in every 5th company.However, smoking cessation support activities are undertaken more frequently than in Poland because every 4th company organizes courses of this kind.In Washington State, such assistance in about every 4th company is in the form of classes or support groups, and about every 3rd company offers insurance covering the costs of medications and medical advice, and in a similar proportion of companies there are procedures that enable of the compliance to the law.Such divergence in the assessment of the extent of respecting the law on tobacco control by companies in Poland makes it reasonable to presume that the monitoring lacks deeper insight, since in their self-evaluation, companies see more trouble with respecting the law in Poland (a similar situation occurred also during the previous release of the survey).However, there is data proving that employers show interest in tobacco-control activities undertaken by employees.According to the 2010 findings of the National Centre for Workplace Health Promotion, Nofer Institute of Occupational Medicine, approximately every 10th employee in our country expects education on, and support in the struggle against addiction to nicotine [27], and according to the 2014 The Confederation of Polish Employers (Pracodawcy Rzeczypospolitej Polskiej -PRP) and Luxmed data, the smoking-cessation support is expected by nearly every fifth employee [28].Thus, the involvement of companies is therefore below the expectations of the workers.However, the relationship between the cited 2015 studies may be observed, with respect to: -the fact that nearly 90% of the surveyed companies used the opportunity to implement various bans on smoking at work, -a clear invigorating of the companies' boards in the role of banning tobacco smoke supporters in the work environment, -decrease in the percentage share of enterprises in which there are no allies for this type of measures, and those concerning the attitudes of the Polish society against restrictions on smoking in public places, collected in connection with work on more restrictive law on tobacco control.They illustrate a positive social climate towards these 
changes.Already in December 2010, according to the data from the Center for Public Opinion Research (Ośrodek Badania Opinii Publicznej -TNS OBOP), 62% of adults voluntary tobacco control activities.Support for voluntary tobacco control offered by medium-sized and large companies is far less frequent than the disciplinary activities, and the situation in that respect has become worse since 2012.Currently, such additional tobacco control activity is an exception and, to make things even worse, it is usually limited to the distribution of leaflets and posters about the dangers of smoking, which is the simplest, classic form of tobacco control through education.This may suggest that our employers treat tobacco smoking bans primarily as a convenient way to directly deal with the problems of the organization of work (such as a loss of working time) or minimization of costs (e.g., maintenance of premises and equipment, fire safety, provision of smoking rooms) and they use them quite mechanically.The proof is a rare knowledge (existing only in every 3rd company) about the prevalence of tobacco smoking among the personnel and its consequences for the functioning of the company, and also the fact that as many as nearly 2/3 of the companies did not analyze the consequences of the adopted tobacco control regulations.Our employers seem to underestimate, or disregard the impact of smoking (either at work or on the outside) on the health of their personnel (including, e.g., the effect on the efficiency in the performance of official duties, absenteeism, or premature retirement).They also seem to forget about demographic problems, such as, e.g., the aging of the population.Such approach to solving the problem of smoking also means that employers do not pay due attention to the phenomenon of personnel's commitment to the company (they still believe in the continuance of the employer market).The commitment in of nicotine-dependent workers may be considerably disturbed by their inability to satisfy their needs as a result of the implementation of smoking bans only.Probably the attitude of our employers may also be interpreted in terms of low social commitment.Data on their directly expressed opinions about healthy workplace programs shows that only 1 out of 10 intends forcing employees who violate tobacco-control regulations to make use of specialized smoking cessation services.Moreover, in the opinion of 54% of the U.S. human resource experts, in their companies education on the benefits of not smoking is conducted [32][33][34].Polish companies, however, introduce rather bans, well exemplified by the fact that in nearly a half of the companies of the analyzed size there is a total ban on smoking at work, i.e., indoors and outside the building, and the assistance for those who want to liberate themselves from the addiction to nicotine is completely marginalized.Prevention-oriented education activities are also much rarer; they are undertaken by every 6th company.Moreover, the U.S. employers less frequently applied penalties for failure to comply with anti-tobacco regulations than our companies (1% vs. 12%, respectively). 
CONCLUSIONS In the light of this data, it seems that companies have eagerly embarked upon the trend of anti-tobacco measures adopted in our country, that involves primarily limiting smoking in public places (regardless of the company's level of employment or its economic condition).They tend to implement the most restrictive form of smoking ban allowed by the modified Tobacco Control Act [15].Such attitudes of employers are probably enhanced by changing social attitudes, involving the acceptance of the measures taken to reduce the freedom to use tobacco products.This allows the employers to be less afraid of any opposition to such a regulation by the nicotine-dependent employees, and enjoy clearer support from the personnel exposed to second-hand smoke.This approach helps companies to reduce the phenomenon of second-hand smoke in the workplace, and each reduction of exposure to tobacco smoke should be regarded as a positive phenomenon (although it is worth remembering that the positive consequences are somewhat reduced by the fact that the ban is not always respected).On the other hand, it does not augment, and maybe even hinders As for the question of quality, i.e., the way in which tobacco control measures should be implemented in the companies, the diagnosis indicates the following challenges: -encourage companies to better understand the prevalence and patterns of smoking by the personnel (although the interest in this issue is growing, but it is still at an unsatisfactory level, because some sort of the understanding seemed to exist only in every 3rd company); -encourage companies to take extra-obligatory activities to improve knowledge of personnel about the problem of tobacco use and help to free themselves from nicotine addiction; -further develop the dialogue with the employees in the area of internal policies on tobacco control (it is advisable to sustain the positive trend in this area and to ensure that the opinions and needs of personnel are reliably recognized); -stimulate employers to ensure that the rank of tobacco control regulations is sufficiently high by including them in the internal documents (every 3rd medium-sized and large company still do not use this method of action and such situation has prevailed for the last 5 years). 
A document showing a company's program/strategy for dealing with the problem of smoking by personnel, not just a list of rules restricting smoking, may be quoted as a good example; -popularize and rationalize the approach to the assessment of the impact of tobacco control measures undertaken by the company (e.g., on the quality of products or services, the social climate in the company and company's repute) to counteract the mechanical implementation of the regulations, or prevent the use of fashionable and not necessarily effective measures.The presented findings should be confronted with further analyses (the more so that slight differences exist in the selection of the sample and methods of respondents in this study) and made more profound so as to make them more suitable for use by public health decision-makers, for the to reduce the cost of medical care [28].This type of diagnosis is acceptable and it seems reasonable to permit the matters to take their own course.On the other hand, considering the fact that the declining trend in the prevalence of smoking in the Polish society has become less evident as well as that data indicating that limiting the places where smoke is allowed has motivated only a few percent of respondents to cease smoking [14,35] (this is a very low rate, even if it is underestimated due to the tendency to eliminate or not to disclose the effect of pressure on decisions) as well as taking into account the low efficiency of traditional educational populationtargeted campaigns [35], it seems that it is worth to remember that, as part of public health policy, it is advisable to take stronger action stimulating employers to implement comprehensive health conservation programs including tobacco control measures.Implementing them right there creates an opportunity to address mainly young workers who are particularly difficult to be involved in local projects and implement more individualized strategies for educational interventions, including those designed for nicotine addicts.Although the research on the impact of the company (workplace) involvement in the effectiveness of specific methods used in smoking cessation campaigns is not advanced, the accessible results indicate that for a number of those methods the effects are similar, and financial support from the employer (e.g., premiums for not smoking, sponsorship of nicotine replacement therapy, etc.) seems to be particularly motivating [36].Of course, the adoption of this perspective requires a partnership approach from the state to finance health-promoting activity of companies.So far, it seems unlikely.It also means that there is a need for intensification of educational activities for employers and executives, improving their motivation and ability to support actions intended to improve personnel health.This is in line with the goals of the recent version of the National Health Programme for 2016-2020 [37]. Table 1 . Characteristics of the representative samples of Polish companies taking part in 3 surveys on tobacco smoking in 2006, 2010 and 2015 Table 2 . Compliance of employees with the rules on tobacco smoking at work valid in representative samples of Polish companies taking part in 3 surveys on tobacco smoking in 2006, 2010 and 2015 Table 3 . Enforcement by management of the rules on tobacco smoking valid in representative samples of Polish companies taking part in 3 surveys in 2006, 2010 and 2015 Table 4 . 
Application of penalties for non-compliance with tobacco-control regulations in representative samples of Polish companies taking part in 3 surveys in 2006, 2010 and 2015
Table 5. Activities undertaken to prevent tobacco smoking in representative samples of Polish companies taking part in 3 surveys in 2006, 2010 and 2015
Table 6. Supporters of actions intended to ban tobacco smoke in the company in representative samples of Polish companies taking part in 3 surveys in 2006, 2010 and 2015
Role of interleukin-22 in inflammatory bowel disease Inflammatory bowel disease (IBD) is a chronic inflammatory disease thought to be mediated by the microbiota of the intestinal lumen and inappropriate immune responses. Aberrant immune responses can cause secretion of harmful cytokines that destroy the epithelium of the gastrointestinal tract, leading to further inflammation. Interleukin (IL)-22 is a member of the IL-10 family of cytokines that was recently discovered to be mainly produced by both adaptive and innate immune cells. Several cytokines and many of the transcriptional factors and T regulatory cells are known to regulate IL-22 expression through activation of signal transducer and activator of transcription 3 signaling cascades. This cytokine induces antimicrobial molecules and proliferative and antiapoptotic pathways, which help prevent tissue damage and aid in its repair. All of these processes play a beneficial role in IBD by enhancing intestinal barrier integrity and epithelial innate immunity. In this review, we discuss recent progress in the involvement of IL-22 in the pathogenesis of IBD, as well as its therapeutic potential. INTRODUCTION Inflammatory bowel disease (IBD) is a group of inflammatory conditions of the small intestine and colon, and includes Crohn's disease (CD) and ulcerative colitis (UC). Despite extensive research efforts, however, the etiology of IBD remains unclear. The current opinion about IBD pathogenesis is that the disease results from interactions between environmental factors, mainly microbes of the intestinal lumen and their products, and dysregulation of immune responses in genetically susceptible individuals [1] . Certain harsh environments that may affect barrier integrity (to increase barrier permeability to luminal macromolecular substances, such as protein antigens and microbial products) and over-absorption of luminal microbial products (which has been ascribed to a number of mucosal pathologies) can lead to an over-activation of immune system, thus resulting in mucosal inflammation [2] . Interleukin (IL)-22, a member of the IL-10 cytokine family which is composed of IL-10, IL-19, IL-20, IL-24 and IL-26 [3] , is expressed by both the cells of the innate immune system [such as dendritic cells (DCs), lymphoid tissue inducer (LTi)-like cells and natural killer (NK) cells) as well as on the surface of adaptive lymphocytes (including CD4 + T cell subsets, CD8 + T cells and so on) [4] . Several cytokines [such as IL-23, IL-6, tumor necrosis factor (TNF) α, IL-1β, transforming growth factor (TGF) β and IL-17), many of the transcriptional factors (signal transducer and activator of transcription (STAT) 3, RARrelated orphan receptor (ROR) γt and aryl hydrocarbon receptor (AhR)] [5] and T regulatory cells (Tregs) are known for their regulation of IL-22 expression [6] . Through activation of the Jak-STAT signal transduction pathway, IL-22 induces proliferative and anti-apoptotic pathways, as well as the production of antimicrobial peptides, which help prevent tissue destruction and assist in its repair and restoration [7] . IL-22 is also associated with IBD susceptibility genes that are crucial for regulating tissue responses during inflammation [8] . All of these processes play critical roles in the pathogenesis of IBD. In recent years, it was demonstrated that treatment with recombinant cytokine or gene therapy involving IL-22 can suppress the inflammatory response and alleviate tissue injury [8,9] . 
Thus, these findings suggest that further research focused on IL-22 may elucidate the underlying mechanisms of IBD and facilitate the development of novel effective, targeted therapeutic approaches for IBD. This review focuses on IL-22 and its functional role in IBD. IL-22 signaling The IL-22 receptor is a heterodimer composed of IL-22 receptor 1 (IL-22R1) and IL-10 receptor 2 (IL-10R2) [10] . IL-10R2 is ubiquitously expressed by most cell types, while the expression of IL-22R1 is limited to nonhematopoietic cells (such as hepatic cells, pancreatic cells, kidney cells, epithelial cells, and skin keratinocytes) [10,11] . Therefore, the expression profile of IL-22R1 determines how IL-22 specifically targets innate cell populations, and not adaptive immune cells [12] . STAT3, STAT1 (in a relatively small number of cells) and STAT5 (in certain cells) were shown to be activated after IL-22 stimulation [13] . Further analysis has also demonstrated that IL-22 signaling propagates downstream phosphorylation signals, including several of the mitogen-activated protein kinase (MAPK) pathways (extracellular signal-regulated kinase (ERK)1/2, MEK1/2, C-Jun N-terminal kinase (JNK), and p38 kinase), and STAT1, STAT3 and STAT5 by utilizing Janus kinase (JAK)1 and tyrosine protein kinase (TYK)2 [14] ( Figure 1). The capacity of IL-22 to activate JNK, ERK1/2 and p38 MAPK pathways has been implicated in liver diseases [14] . Moreover, the strong activation of IL-22 to stimulate STAT3 has been confirmed in human colon cancer cell lines, human colonic biopsy, as well as the primary mouse colonic epithelial cells [15,16] . In fact, a recent study has shown that, compared with IL-6, IL-22 has a stronger ability to activate STAT3 [17] . Pickert et al [18] have demonstrated that in dextran sulfate sodium (DSS)-induced colitis, the activation of epithelial STAT3 is more dependent on IL-22 than on IL-6, a well known activator of STAT3. This is due to IL-22R1 utilizing its constituent C-terminal tail to interact with the coiled-coil domain of STAT3, which has been to conformed in a recent discovery as a novel mechanism to activate STAT3 [19] . Similar to other IL-10 family cytokines, IL-22 primarily relies on STAT3 to mediate its functions. Binding of cytokines to this receptor results in the activation of STAT3 signaling pathways, which in turn leads to the induction and production of various tissue-specific genes, including serum amyloid A (SAA), antimicrobial proteins (β-defensin, Reg3c and lipocalin-2) and mucins. Meanwhile, IL-22 also induces proliferative and antiapoptotic pathways in some responsive cells of certain tissues [10,20] . Cellular sources of IL-22 Basu et al [21] have suggested that both innate lymphoid cells (ILCs) and T cells produce IL-22. They showed that IL-22 produced by ILCs was strictly IL-23-dependent, and that the development of IL-22 induced by CD4 + T cells was via an IL-6-dependent mechanism that was augmented by IL-23 and was dependent on both transcription factors T-bet and AhR. At the same time, Wolk et al [22] confirmed that activation of murine T cells, especially T helper (Th) 1 cells, mainly express IL-22. A novel Th subset -the Th17 cells -was identified in 2005 [23,24] . IL-17 (or IL-17A), a hallmark cytokine preferentially expressed by Th17 cells, distinguishes these cells from other Th subsets, such as Tregs, Th1 and Th2 cells [25] . 
Th17 cells play an essential role in host defense, especially against extracellular bacteria and other infectious bacteria, and are involved in the pathogenesis of various autoimmune diseases [26,27] . The level of IL-22 produced by Th17 cells is much higher than that of the production from undifferentiated Th0 cells or Th1 cells. However, the expression and regulation of IL-22 and IL-17 produced in T cells are unparalleled. Researchers have discovered that IL-6 and TGFβ are both required for inducing IL-17 expression in naïve T cells, yet IL-6 alone can sufficiently promote the expression of IL-22 [28][29][30] . In fact, TGFβ has been shown to suppresses IL-22 production in a dose-dependent manner [28] . Through activation by anti-CD3 or concanavalin A (ConA), human T cells can produce IL-22 [31] . Based on studies using the lineage marker chemokine CC receptor (CCR) 6 and CCR4, human Th17 cells produced in vitro or purified ex vivo from blood were shown to preferentially express IL-22 [32,33] . Moreover, IL-17-and IL-22-expressing cells in human peripheral blood mononuclear cells (PBMCs) are defined by another surface maker, CD161 [34] . The abovementioned Th17 cells can produce IL-22 and IL-17; in addition, CD161 + human CD8 + T cells can also generate these two cytokines [35] . Recent studies have also demonstrated a unique cell subset, designated as the IL-22-producing CD4+ Th subset, in human peripheral blood, which expresses neither IL-17 nor interferon (IFN)-γ [36][37][38] . In skin, these cells mainly express CCR10. Moreover, the human IL-22-producing T cells can also be generated from naïve CD4 + T cells in the presence of IL-6, rather than TGFβ, which is consistent with what has been reported in the mouse system [36] . Human Langerhans cells are able to differentiate T cells into the only IL-22-producing Th cells in vitro [39] . The human innate immune cell types, such as NK and LTi cells, can also produce IL-22 [40,41] . In addition to CD4 + T cells, the Th17 cells, CD8 + T cells and NK T cells also express high levels of IL-22 upon activation, especially when activation occurs along with IL-23 intervention [28,42] . Recently, LTi cells and developmentally-related NK-like cells (NK22), which express the NK marker NKp46, were demonstrated to be the main innate sources of IL-22 expression, especially in the intestinal tract [43,44] . Treatment of NK cells with IL-2 and IL-12 was shown to lead to expression of IL-22 [45] . Human immature NK cells, defined as CD161 + CD117 + CD34 -CD94cells, express both IL-22 and AhR [46] . The equivalent NKp46 + NK-like cells in mice have been found to be developmentally linked to LTi cells [47,48] . Finally, subsets of myeloid cells express the IL-23 receptor (IL-23R) and combine with IL-23 to release lower levels of IL-22 [49,50] ; those cells that produce high levels of IL-22 may be the major cells of IL-22 origin in mucosal immunity. In contrast to the IL-22 produced by leukocytes, such IL-22 targets mainly tissue epithelial cells rather than immune cells [51] . Although expression of IL-10R2 is widespread, IL-22R1 expression has only been detected on epithelial cells. Upon binding to its receptors on the surface of these epithelial cells, IL-22 produces an accelerating effect on the proliferation and differentiation of these cells, and induces these cells to express genes involved in host defense and wound-healing responses [52] . 
These cellular functions of IL-22 underlie its crucial role in epithelial barrier defense, especially against invading extracellular bacteria. In fact, in a preclinical model of mucosal immune responses to Gram-negative bacteria, such as Klebsiella pneumoniae and Citrobacter rodentium, IL-22 played an indispensable role [53]. Moreover, IL-22 is associated with the development of various in innate cell populations has yet to be examined [62] (Figure 2). Recent studies have demonstrated a close relationship between CD4+ Foxp3+ Tregs and proinflammatory IL-17-producing Th17 cells expressing the lineage-specific transcription factor RORγt. It has been shown that IL-17-secreting Foxp3+ T cells that express RORγt share features of conventional RORγt+ Th17 cells. However, RORγt+ Foxp3+ Tregs mostly fail to secrete IL-22 after phorbol 12-myristate 13-acetate/ionomycin stimulation [63]. Foxp3 transcription factor binding sites (TFBSs) in the IL-22 promoter restrain RORγt+ Foxp3+ T cells from producing IL-22 at the transcriptional level [64]. Despite the decreased expression of IL-22 in Foxp3+ Tregs, it has been found that Tregs can promote naïve T cell differentiation. In a mouse model of oral Candida albicans infection, Foxp3+ Tregs were shown to powerfully promote the transition of naïve CD4+ T cells to responding CD4+ cells (Tresps) [65]. Tresps markedly produce IL-22. Therefore, there is the possibility that Tregs can regulate the expression of IL-22. IL-22 regulates intestinal barrier immunity The IL-22 signaling pathway is activated through a heterogeneous receptor complex composed of two subunits, IL-22R1 and IL-10R2 [66]. Although IL-10R2 is widely expressed on almost all cell types, the expression of IL-22R1 is restricted to the surfaces of nonhematopoietic cells such as epithelial cells, hepatocytes and keratinocytes [67]. This limited expression of IL-22R1 on nonhematopoietic cells allows IL-22 to specifically target innate cell populations within such tissues as the skin, kidney, digestive tract and respiratory system [68]. A wide variety of innate and adaptive immune cells, including CD4+ T cells, and most notably Th17 and Th22 cells, CD8+ T cells, LTi cells, NK cells and DCs, can produce IL-22 [69]. The IL-22 produced by these cells binds to the IL-22R1 and IL-10R2 receptor complex and activates receptor-associated JAK1 and TYK2, resulting in tyrosine phosphorylation of STAT3 [70]. This in turn allows IL-22 to induce different kinds of tissue-specific genes, including those encoding proteins involved in antimicrobial defense, cellular differentiation, and expression of mucins, a large, heavily glycosylated family of proteins that forms a protective layer in the gastrointestinal tract, serving to separate commensal bacteria from pathogenic bacteria at the epithelial layer and thereby minimizing the immune response [71]. Through the production of antimicrobial peptides, enhancement of epithelial regeneration, and regulation of wound healing, IL-22 plays a particularly vital role in regulating intestinal inflammatory responses [72]. Furthermore, a direct effect of IL-22 on the colonic epithelium is proliferation of epithelial cells, which maintains the integrity of the intestinal epithelium. Recent studies have focused on possible protective human autoimmune diseases [25]. The expression of IL-22 is upregulated in autoimmune diseases, such as IBD, rheumatoid arthritis and psoriasis.
IL-23 appears to be a principal inducer of IL-22 in Th17, NK or NK-like cells, suggesting that IL-22 acts a pivotal mediator in IL-23-dependent immune reactions in skin and mucosal epithelia by stimulating innate antimicrobial responses as well as promoting tissue repair [54] . Regulation of IL-22 expression IL-23 is a member of the IL-12 cytokine family, and its stimulation of activated T cells induces IL-22 expression [54] . Research has found that il23a -/and il22 -/mice are both highly susceptible to infection with extracellular Gram-negative bacteria, suggesting that a critical function of IL-23 in infection is to induce IL-22 expression [55,56] . Additionally, IL-23 has been found to be important in the terminal differentiation of Th17 cells, assisting in their proliferation and effector functions [56] . Therefore, the ability of IL-23 to enhance Th17 cell proliferation appears to be linked to IL-22 expression. In addition to IL-23, other cytokines have been found to regulate the expression of IL-22. In cultures of purified naïve murine CD4 + T cells, IL-6 and T cell receptor (TCR) stimulation, or IL-6, TNFα, IL-1β and TCR stimulation, was sufficient to induce IL-22 expression [57] . Increasing concentrations of TGFβ dose-dependently inhibited IL-22 expression while maintaining stable IL-17A expression. It has recently been demonstrated that IL-17A can partially inhibit the expression of IL-22 from Th17 cells in vitro and in vivo, indicating that Th17 cell-associated IL-17A can also negatively regulate IL-22 expression. IL-22 expression in γδT cells can also be induced independently of IL-23 and TCR stimulation by IL-1β, as well as Toll-like receptor (TLR) 1, TLR2, and dectin-1 ligands [54,58] . Similar to cytokine-mediated regulation of IL-22, many of the transcriptional factors are known for regulation of IL-22 expression. STAT3 is critically involved in the induction of IL-22 expression in T cells [59] . Similarly, RORγt, a lineage-specifying transcription factor for the differentiation of Th17 cells, is also required for optimal expression of IL-22. STAT3 and RORγt both control expression of IL-23R, and this regulation may account for their ability to promote IL-22 production in Th17 cells. Therefore, many of the same transcription factors involved in Th17 cell differentiation are also required for IL-22 expression in CD4 + T cells [60] . In addition, AhR is a ligand-dependent transcription factor that is best known for its role in mediating toxicity to the organic compound dioxin. AhR also partially contributes to the differentiation of Th17 cells and is required for expression of IL-22, thus linking IL-22 and Th17 cells to toxicity following exposure to different environmental compounds [61] . A number of IL-22producing innate cell populations have also been found to express STAT3, RORγt and AhR, yet the involvement of these transcription factors in regulating IL-22 expression effects of IL-22 in IBD, and have used several DSSinduced as well as Th1-and Th2-mediated colitis mouse models [73,74] . In the DSS-induced colitis model, feeding mice DSS causes disruption of the intestinal epithelial barrier, leading to colitis within 1 wk. In IL-22 knockout mice or wild-type (WT) mice, administration of neutralizing anti-IL-22 antibodies leads to more extensive epithelial destruction and inflammation in the colon, more severe weight loss, and more impaired recovery compared to the DSS-induced acute colitis model. 
In addition, T cells from IL-22−/− (IL-22-deficient) mice cause more severe colitis in the T cell transfer model of IBD [75]. In a Th1-cytokine-mediated model of colitis, expression of IL-22 by CD4+ T cells is crucial for relief of disease severity. Sugimoto et al [76] showed that supplemental IL-22 leads to rapid amelioration of local intestinal inflammation in the colons of mice with Th2-mediated chronic colitis. IL-22 knockout mice showed delayed recovery from DSS-induced acute colonic injury, and treatment with neutralizing anti-IL-22 antibodies also impaired the recovery of WT mice. Finally, in IL-22-deficient and RAG1-deficient double knockout mice, lacking both T and B cells, no recovery was observed [77]. IL-22 gene delivery mediates STAT3 activation specifically within colonic epithelial cells and enhances reconstitution of goblet cells and production of mucus, thereby reinforcing the mucus barrier function within the gastrointestinal tract [78]. In DSS-induced acute colonic injury, recovery is significantly impaired and delayed in IL-23R-deficient and RAG2-deficient double knockout mice lacking IL-22 expression, and treatment with recombinant IL-22 rescues the recovery in these mice [79]. Pancreatic cells produce TGFβ and IL-10 upon IL-22 stimulation, which can inhibit IFN-γ production, facilitating relief of intestinal injury. These mouse models of colitis suggest that IL-22 plays a protective role in IBD through its ability to improve the integrity of the mucosal barrier and enhance the inherent epithelial defense function.

IL-22 responses to intestinal pathogens
In addition to maintaining the mucosal barrier function in the gastrointestinal tract, IL-22 induces genes encoding antimicrobial proteins involved in bacterial defense and protection of the intestinal mucosa, suggesting a role for IL-22 against extracellular bacteria in the innate immune system. CD and UC are thought to be driven by an abnormal immune response to the intestinal flora [80]. However, since intestinal dysbacteriosis is also a characteristic of IBD pathogenesis, it is difficult to determine whether there is an inflammatory response to abnormal flora or whether an abnormal inflammatory response is altering the microbial communities [81]. Intestinal flora, as an environmental factor, may interact with genetic susceptibility to alter the interactions between ourselves and our microbiome. The first major susceptibility gene discovered for CD is NOD2 (or CARD15), which is known as a receptor for bacterial peptidoglycan (PGN) [82]; another susceptibility gene, ATG16L1, has been shown to be critical for autophagy [83]. The intestinal flora may also lead to disorders of intestinal lymphoid cell subsets, such as Th17 cells and innate lymphocytes, which are important for regulating mucosal immunity [83]. Although there have been numerous studies investigating stool samples and mucosa-associated bacteria in IBD patients, there has been a lack of consensus between the associations observed in these studies [83]. Although extensive changes have been reported, such as expansion of the Proteobacteria phylum in IBD patients [84], only a few specific associations have been reproducibly identified.
Although the causes of changes in microbiota species that can trigger IBD remain unclear, and studies on this subject are continuing, the general theme observed so far is that the diversity of microbial communities is significantly decreased in IBD [85]. There have also been repeated observations of the microbiota composition being disrupted during inflammation, resulting in dysbiosis that may induce or perpetuate the inflammatory condition. However, both host genotype and the environment have major impacts on the shape of such dysbiosis, as well as upon which members of the microbiota can stimulate pathogenic immune responses [86]. By promoting the maintenance of intestinal epithelial barrier function, IL-22 can prevent the spread of pathogenic microorganisms in the gut, such as the enteropathogens Citrobacter rodentium and Salmonella typhimurium (enteric ecotype), thereby limiting bacterial growth. Tregs promote IL-22-dependent clearance of fungi during acute Candida albicans infection [87,88]. In addition, IL-22 can help to eliminate pathogenic microorganisms by inducing various antimicrobial proteins (Figure 2). IL-22 has already been confirmed as a regulator of the expression of antimicrobial proteins such as the S100 family proteins (S100A7, S100A8 and S100A9), β-defensin family proteins (β-defensins BD2 and BD3), Reg family proteins (RegIIIα, RegIIIβ and RegIIIγ) and lipocalin-2 [83,89-91]. These proteins may be important in the control of gut pathogens. IL-22 plays a protective role in the host inflammatory response to microbial infections or promotes the release of inflammatory mediators, depending on the type of pathogenic microorganism causing the infection. Song et al [92] showed that IL-22 plays a crucial role in host defense against infection with the Gram-negative enteric bacterium Citrobacter rodentium, as an inducer of the expression of antibacterial peptides in colonic epithelial cells. The protective effect of IL-22 in systemic infections caused by Salmonella enterica has also been demonstrated: IL-23-dependent IL-22 was required for both the survival of liver cells and pathogen defense during systemic Salmonella infection in mice, especially when accompanied by decreased production of IL-12 [93]. IL-22 is not only able to protect the intestine against bacterial pathogens, but also plays a protective role in intestinal fungal infections with Candida albicans. Compared with infected WT mice, IL-22 knockout mice infected intragastrically with Candida albicans hyphae had a higher fungal burden and showed signs of more severe mucosal inflammatory hyperplasia in the stomach and colon [94,35]. These results indicate that IL-22 serves as a protective guardian in regulating inflammatory responses and maintaining mucosal barrier integrity in a variety of intestinal infections. However, IL-22 has also been shown to promote intestinal inflammation in parasite infection [95]. Toxoplasma gondii-infected IL-22 knockout mice, and mice whose IL-22 was neutralized with an anti-IL-22 monoclonal antibody, developed significantly less intestinal pathology and had less weight loss and mortality, despite having parasite burdens similar to those of infected WT mice. The strongly skewed Th1 immune response caused by Toxoplasma gondii infection may explain this difference. As mentioned above, IL-22 produced by Th17 cells can be regulated by the gut microbiota.
Unlike IL-17, which drives a neutrophil-inducing response, IL-22 serves an important role in tissue repair during mucosal immune responses [96]. Regardless, the relationship between the intestinal microbiota and IL-22-producing cells is extremely close. Most notably, it was recently shown that IL-22-producing innate lymphocytes play a crucial role in preventing systemic inflammation by inhibiting systemic dissemination of commensal bacteria [83]. Sonnenberg et al [97] administered a neutralizing anti-IL-22 monoclonal antibody to Rag1−/− mice and found that the signs of systemic inflammation increased, as did levels of lipopolysaccharide (LPS); in addition, bacteria could be cultured from the spleens and livers of these mice. The disseminated commensal bacteria were subsequently identified as Alcaligenes sp. Therefore, together with its protective role against IBD, IL-22 also serves as a mucosal protector and plays a critical role in separating our intestinal tract from our gut flora. Under gut homeostatic conditions, viable bacterial pathogens are sampled by DCs that carry them to the mesenteric lymph nodes, and these microbes do not disseminate systemically to secondary lymphoid tissues, indicating that a mesenteric guardian may act in concert with a mucosal firewall to distinguish intestinal bacteria [98].

IL-22 in tissue protection, regeneration and wound healing
In addition to its antibacterial activity, IL-22 can enhance the survival and proliferation of epithelial cells for tissue differentiation and healing [99]. IL-22 induces the expression of antiapoptotic proteins, including Bcl-xL, Bcl-2 and Mcl-1, as well as proteins directly involved in cell cycle and proliferation, such as c-Myc, cyclin D1, Rb2 and CDK4, and anti-inflammatory or protective proteins, such as IL-11 and follistatin [100-102]. Moreover, IL-22 has been shown to be capable of stimulating a colonic cancer cell line to express a molecule termed deleted in malignant brain tumor 1 (DMBT1), which may play a vital role in the differentiation of epithelial cells [103]. IL-22 has also been shown to induce RegIα, which serves as a trophic and antiapoptotic factor in the inflamed colon of UC patients. Recent research has determined that, through the activation of STAT3, IL-22 can induce the proliferation and reconstruction of mucosal epithelial cells in the intestinal tract [104]. This increased healing response can further prevent the penetration of microorganisms into the intestinal epithelial layers.

IL-22 is associated with IBD susceptibility genes
An attractive biological feature of IL-22 is its functional association with some major IBD susceptibility genes. Interaction of IL-23 with IL-23R has been implicated in the development of IL-22-producing innate cells, including ILCs, LTi cells and NK cells [4,33,34], and in the maintenance of IL-22-producing Th17 cells [105,106]. Functional polymorphisms of the IL-23R gene have been negatively correlated with development of both CD and UC [105,106]. IL-22 is located within a UC-risk locus on chromosome 12q15 [107]. IL-22 binds a receptor complex composed of IL-10R2 and IL-22R1, and polymorphisms of il10r2 are positively associated with both CD and UC [108]. Binding of IL-22 to its cognate receptor induces rapid activation of STAT3 through JAK1 and TYK2. Stat3, jak1 and tyk2 are all well-defined susceptibility genes of CD and, to a lesser extent, of UC [95,96]. STAT3 activation stimulates epithelial cells to produce Muc1.
A recent genome-wide association study proposed muc1 as a potential candidate gene associated with CD [109]. In addition, genome-wide association analysis of IBD patients has identified gene mutations involved in encoding IL-22 and the IL-10R2 subunit of the IL-22R complex [110,111].

THERAPEUTIC POTENTIAL OF IL-22 FOR IBD
Due to its crucial roles in regulating barrier immunity and antimicrobial defense, IL-22 may have therapeutic potential for IBD. Understanding the various mechanisms by which IL-22 regulates immunity, together with the development of immunosuppressive drugs, may open up a new path for the future treatment of IBD. Treatment with recombinant cytokine or gene therapy delivery of IL-22 may alleviate tissue damage during inflammatory responses. Suppressing the immune system via anti-inflammatory treatments, such as TNFα inhibition, can lead to unwanted dampening of the immune response, impairing the ability to respond to infection. However, IL-22 is an ideal therapeutic candidate because it specifically affects tissue responses and does not have direct effects on the immune response. IL-22 has produced promising results in experimental animal models of IBD. Administration of a more specific targeting agent of IL-22, via microinjection of an IL-22 DNA vaccine into already inflamed colonic tissues of mice with IBD, has been shown to reduce infiltration of inflammatory cells and to increase the number of goblet cells [112]. This enhances the production of mucin, thereby buffering the colonic epithelium from commensal bacteria that may otherwise initiate an immune response. Andoh et al [113] did not find IL-22 expression in the gut mucosa of patients with infectious colitis. It seems that IL-22 plays a protective systemic role in CD [114] and a protective local role in UC [115,116]. It should be mentioned at this point that, recently, Leppkes et al [117] demonstrated that the adoptive transfer of IL-22-deficient T cells into RAG1-deficient mice caused severe colitis that was indistinguishable from that caused by transferred WT cells. Genome-wide linkage analysis of IBD patients has identified gene mutations involved in encoding IL-22 and the subunits of its receptor complex, IL-10R2 and IL-22R1 [111]. The IL-22R complex is highly expressed within the gastrointestinal tract and in the inflamed colon; IL-22 is expressed by CD4+ T cells, likely Th17 cells, and innate lymphocytes, such as NK cells and LTi-like cells. Using different experimental models of IBD (DSS-induced colitis, which is thought to be mainly driven by innate immune cells, and CD4+CD45RBhigh T cell-mediated colitis, in which naive T cells devoid of Tregs are transferred into T cell-deficient mice where they proliferate unimpeded, leading to colitis), IL-22 has been shown to be protective [118,119]. Furthermore, Strengell et al [118] showed that IL-22 can be therapeutic in IBD; gene therapy transfer of the IL-22 gene into the colons of already inflamed mice resulted in amelioration of inflammation. In addition, the authors reported that in vivo gene delivery of IL-22 attenuates Th2-mediated colitis and regulates the expression of genes related to mucus layer formation [120]. Some existing biologic therapies are also able to mediate effects on IL-22 expression in patients. Anti-TNFα antibodies (such as infliximab) and the anti-IL-6 receptor antibody tocilizumab have been used to treat IBD.
Th22 cells depend on TNFα for differentiation, and both Th17 and Th22 cells depend on IL-6; these therapies can therefore indirectly decrease IL-22 expression in patients treated for IBD [121,122]. Lastly, ustekinumab is able to target both IL-12 and IL-23 and therefore prevent the differentiation of Th1, Th17 and Th22 cells, eliminating several sources of IL-22; this drug is currently being studied in Phase III clinical trials of CD [123]. The treatments mentioned above suppress inflammation in part through indirect inhibition of IL-22.

CONCLUSION
IL-22 plays a critical role in the regeneration of damaged epithelial monolayers and stimulates the generation of antimicrobial peptides. Importantly, the ability of IL-22 to promote intestinal wound healing and proliferation of intestinal epithelial cells in mice and humans has been reproducibly demonstrated by independent groups using different experimental methods, and recent advances in genome-wide association studies suggest that the IL-22 pathway is closely related to some major IBD susceptibility genes. These collective findings clearly highlight IL-22 as a promising target for IBD therapy. Therefore, further extensive research on IL-22 is necessary to bring about novel and practical interventions for improving the quality of life of patients with IBD in a safe and effective way. Further understanding of the regulation and function of IL-22 would certainly benefit the future treatment of IBD.
Reticular Adhesion Formation is Mediated by Flat Clathrin Lattices and Opposed by Fibrillar Adhesions

Reticular adhesions (RAs) consist of integrin αvβ5 and harbor flat clathrin lattices (FCLs), long-lasting structures with a similar molecular composition to clathrin-mediated endocytosis (CME) carriers. Why FCLs and RAs colocalize is not known. Here, we show that FCLs assemble RAs in a process controlled by fibronectin (FN) and its receptor, integrin α5β1. We observed that cells on FN-rich matrices displayed fewer FCLs and RAs. CME machinery inhibition abolished RAs, and live-cell imaging showed that RA establishment requires FCL co-assembly. The inhibitory activity of FN was mediated by the activation of integrin α5β1 at Tensin1-positive fibrillar adhesions. Conventionally, endocytosis disassembles cellular adhesions by internalization of their components. Our results present a novel paradigm in the relationship between these two processes by showing that endocytic proteins can actively function in the assembly of cell adhesions. Furthermore, we show this novel adhesion assembly mechanism is coupled to cell migration via a unique crosstalk between cell-matrix adhesions.

Introduction
Integrins are nonenzymatic dimeric transmembrane receptors which recognize extracellular matrix (ECM) components. These mechanosensory proteins govern cell adhesion to the ECM, maintaining correct tissue development and function, with elaborate connections to cellular homeostasis and disease (Kanchanawong and Calderwood, 2023). Ligand availability, and the biochemical and physical properties of the ECM, determine integrin activation status, integrin clustering, and, ultimately, the formation of cellular adhesion structures (Kechagia et al., 2019). Cells can form a variety of integrin-based adhesions. Small integrin clusters engaged with the ECM, called nascent adhesions, form on the cell periphery and establish their connection to the actin cytoskeleton via adaptor proteins such as Talin. A balancing act of traction forces and signaling molecules determines whether nascent adhesions mature into the larger and molecularly more complex focal adhesions (FAs) (Wehrle-Haller, 2012). In migrating cells and in the presence of the ECM component fibronectin (FN), FAs can serve as platforms for the formation of fibrillar adhesions (FB), where FN-bound α5β1 integrins "slide" from FAs to form FN fibrils (Georgiadou and Ivaska, 2017). Common to all these types of cell adhesions, their disassembly is mediated by the removal of integrin molecules from adhesion sites via endocytosis (Kechagia et al., 2019). Recently, a novel type of integrin-based cell adhesion was discovered (Lock et al., 2018). Called reticular adhesions (RAs), these structures contain integrin αvβ5, lack the typical markers of the other adhesion types, such as Talin1 or Paxillin, and are not connected to actin stress fibers. RAs can occupy a significant portion of the substrate-facing surface of cells in culture and can significantly outlast FAs. Their physiological function is, however, not clear. Intriguingly, RAs colocalize with large, persistent forms of clathrin structures at the cell membrane called Flat Clathrin Lattices (FCLs) (also referred to as clathrin plaques) (Grove et al., 2014). The structure containing FCLs and RAs is called the clathrin-containing adhesion complex (CCAC) (Lock et al., 2019; Zuidema et al., 2020). FCLs were previously considered as stalled endocytic events of the Clathrin-Mediated Endocytosis (CME) pathway.
However, recent studies have changed this view and support the idea that FCLs are signaling platforms (Leyton-Puig et al., 2017; Grove et al., 2014; Alfonzo-Méndez et al., 2022). In vivo, FCLs localize to adhesive structures between bone and osteoclasts (Akisaka et al., 2008) and are required for the organization of sarcomeres (Vassilopoulos et al., 2014). The functional relationship between FCLs and RAs is not clear. A confounding factor in this relationship lies in the fact that, although FCLs always localize to RAs, the opposite is not true. RAs can occur as large structures with FCLs covering only a fraction of their area. Moreover, integrin αvβ5 can localize to both RAs and FAs. Although details on the factors mediating integrin αvβ5 localization to FCLs are becoming clearer (Zuidema et al., 2018 and 2022), why these structures co-exist, what their function is and how cells control their formation remain a mystery. In this study, we show that FCLs are required for the establishment of RAs. Moreover, we found that a FN-rich ECM acts as an inhibitor of FCL-mediated RA formation. This inhibitory role of FN is mediated by the activation of integrin α5β1 localized at fibrillar adhesions. Furthermore, we show that the transition from a static to a migratory state is mirrored by the disappearance of FCLs and RAs.

Fibronectin inhibits the formation of FCLs
While studying CME dynamics, we serendipitously observed that cells on fibronectin (FN) appear to display fewer FCLs when compared to cells plated on non-coated glass dishes. To confirm if ECM proteins in general can influence CME, we assessed the effect of several major ECM components, as well as non-ECM coatings and non-coated surfaces, on the amount of FCLs. For that, dishes were coated for 16-24 h, after which cells were left to attach for 16-20 h in serum-containing medium before imaging. For quantifications, we established the metric FCL proportion, which defines the average fraction of FCLs per frame among all clathrin-coated structures detected in a 5-minute movie (see methods for details). These experiments were performed using U2Os cells with an endogenously GFP-tagged adaptor protein 2 (AP-2) sigma subunit (AP2S1, hereafter referred to simply as AP2). AP2 is a widely used CME marker which faithfully mirrors clathrin dynamics (Almeida-Souza et al., 2018; Ehrlich et al., 2004; Rappoport and Simon, 2008). We used endogenously tagged cell lines throughout this study as the expression level of the AP2 complex was shown to modulate the amount of FCLs (Dambournet et al., 2018). U2Os cells on non-coated dishes presented typical and abundant FCLs (i.e. bright, long-lived AP2-GFP-marked structures) (Figure 1A, B, S1A and Supplementary video 1), similar to what has been found in many cell lines (Zuidema et al., 2022; Moulay et al., 2020; Sochacki et al., 2021; Saffarian et al., 2009). Similarly, U2Os cells plated on dishes coated with the non-ECM proteins bovine serum albumin (BSA) and poly-L-lysine (PLL) also presented high FCL proportions (Figure 1B, S1A and Supplementary video 1). Out of the major ECM proteins tested, FN, collagen IV (Col IV) and laminin-111 (LN111) reduced FCL proportion significantly. The integrin αvβ5 ligand vitronectin (VTN) did not increase or decrease the FCL proportion when compared to non-coated dishes (Figure 1B, S1A) (see discussion). Similarly, and in line with a recent study (Baschieri et al., 2018), collagen I (Col I) did not reduce FCLs (Figure 1B, S1A and Supplementary video 1).
Different concentrations of FN used for coating (10 or 20 µg/ml) did not show significant differences (Figure 1B). Recently, it was described that SCC-9 cells produce more FN when plated on Col IV or LN111 (Lu et al., 2020). To probe if this is also the case for our cells, we stained FN from U2Os cells plated directly onto non-coated dishes, or plated on FN, VTN, Col IV, Col I or LN111. While U2Os cells produce little FN overnight, cells plated on Col IV produced a striking amount of FN, which assembled into elongated fibrils (Figure 1C and 1D). LN111 coating also induced FN production, but less strikingly than Col IV. Col I and VTN coatings were unable to stimulate FN production (Figure 1C, D). These results suggest that FN is the main ECM component inhibiting FCL formation. For many cell lines, it is common to find considerable variability in the amount of FCLs in culture. We thus decided to test if this variability is due to differential FN production within the culture. Confirming this hypothesis (and bearing in mind that U2Os secrete FN modestly, see below), we found that cells plated on non-coated dishes displaying fewer FCLs were predominantly lying on top of an FN-rich region of the culture (Figure S1B). Next, we asked if the reduction in FCL proportions observed in FN-coated samples is a cell-wide effect or specific to cellular regions in direct contact with the extracellular substrate. For that, we used patterned dishes containing FN-coated regions interspersed with uncoated regions, where single U2Os-AP2-GFP cells could adhere simultaneously to both an FN-coated and a non-coated region. In line with a contact-dependent effect, low FCL proportions were observed in cellular regions in contact with FN, whereas FCL proportion was high in cellular regions contacting the non-coated surface (Figure 1E-G and Supplementary video 2). We also tested whether the effects we observe are due to changes in clathrin splicing, but found no difference when comparing cells plated on non-coated or FN-coated dishes (Figure S1C). Thus, these results show that FN is a potent inhibitor of FCLs. Moreover, FN inhibits FCLs in a contact-dependent manner locally within a single cell (Figure 1H).

Fibronectin inhibits the formation of RAs in a similar manner as FCLs
As discussed in the introduction, FCLs localize to RAs. To check how ECM composition affects these structures, U2Os AP2-GFP cells plated on FN, VTN, Col IV, Col I or LN111 were stained with the RA component integrin αvβ5 and ̶ to be able to distinguish integrin αvβ5 on RAs from FAs ̶ were also stained with an FA marker (phosphorylated paxillin, p-PAX Y118). Cells plated overnight without coating formed abundant RAs (Figure 2A, B). On FN-coated dishes, big RAs were largely absent but small "dot-like" nascent RAs were present in a few cells. Similarly, on Col IV and LN111 coatings (which stimulated FN production) (Figure 1C), cells formed significantly fewer RAs than on non-coated dishes (Figure 2A, B). Coating with VTN, the ligand for integrin αvβ5 present at RAs and FAs alike (Figure S1C) (Lock et al., 2018), did not result in more RAs (Figure 2A, B). Different coatings also changed the total amount of integrin αvβ5 on the bottom surface of cells (Figure S1D); however, this did not follow a clear relationship with the amount of RAs. To quantify differences in RA amounts in cells, we developed a metric called RA coverage, which measures the fraction of the area of the cell covered by integrin αvβ5 signal (excluding FAs).
RA coverage serves as a good metric to distinguish between large and nascent RAs and, crucially, it shows a clear correspondence between RA content and both FCL proportion and FN abundance in the ECM (see Figures 1B, 1D and 2B). Next, we used our substrate patterning strategy to check if the local FN effects on FCLs were also similar for RAs. Strikingly, cells plated on patterned FN revealed that RAs, akin to FCLs, were completely inhibited on cellular regions in contact with FN. Cellular regions in contact with non-coated surfaces displayed many FCLs colocalizing to RAs, while regions in contact with FN presented no RAs or FCLs (Figure 2C). Interestingly, in these patterned substrates, most of the integrin αvβ5 signal segregated to non-coated regions, forming typical RAs (Figure 2C, D). This contrasts with cells plated in fully coated dishes (Figure 2A), where integrin αvβ5 can be seen in both RAs and FAs. Hence, the inhibitory effects of FN on FCLs affect RAs in a similar manner (Figure 2E).

The effect of fibronectin on FCLs and RAs is clear in various cell lines
Next, we checked if the effects we see in U2Os cells are also true for other cell lines. To avoid problems of overexpression, we endogenously tagged AP2 with either Halo tag or GFP in various human cell lines: HeLa (epithelial, cervical carcinoma), MCF7 (epithelial, breast cancer), HDF (dermal fibroblast, non-cancerous), Caco2 (epithelial, colon carcinoma) and hMEC (human mammary epithelial cells). These cell lines presented a large variation in the amount of FCLs and the morphology of RAs. Importantly, these cells could be divided into two groups in terms of endogenous FN secretion, and this division clearly correlated with the amount of FCLs and RAs (Figure 3A, B). U2Os, HeLa and MCF7 composed the group of low FN-secretion cells. U2Os formed large RAs on non-coated dishes, whereas HeLa formed multiple dot-like nascent RAs (which colocalized with FCLs), with bigger RAs found more rarely (Figure 3A). MCF7 cells formed many FCLs and large RAs covering almost the entire cell area (Figure 3A). None of the high FN-secreting cell lines (HDF, Caco2 and hMEC) formed large RAs (Figure 3A, B). In these high FN-producing cells, small FCL/RA dots were often found in areas with less deposited FN (Figure 3A). We next evaluated the response of these cell lines to FN pre-coating. In low FN-producing cell lines (U2Os, HeLa, and MCF7), RA coverage dropped significantly (Figure 3C-E). Among the high FN-producing cell lines, only Caco2 reduced its RA coverage on FN-coated dishes (Figure 3C, D). As expected, HDF and hMEC, which had low RA coverage without coating, did not show a significant response to FN coating (Figure 3C, E). For all experiments so far, we used media supplemented with serum, which is known to contain ECM components, including FN. Given that our cells are left to attach overnight in this medium, it would be reasonable to expect that the FN present in serum would coat the dishes and completely mask our results. To test why this does not seem to happen (Figures 1C and 3A), we compared the amount of FN deposited on the glass surface in different conditions: dishes were coated for 24 h with 10, 5 and 1 µg/ml of FN (diluted in PBS), 100% fetal bovine serum (FBS), media with 10% FBS, or PBS as a control. After coating, U2Os cells were plated and left to attach for 16 h before being fixed and stained for FN.
Surprisingly, our results revealed that very little FN was deposited on glass in dishes "coated" with full media or pure FBS (Figure S1F-G). These results are in line with similar experiments performed 30 years ago (Steele et al., 1992). We hypothesize that this phenomenon occurs due to the high concentration of BSA in serum (40 mg/ml), which rapidly saturates the surface of culture dishes, thereby acting as a blocking agent for the binding of serum FN. Taken together, these results show that FN inhibits the formation of FCLs and RAs in different cell lines (Figure 3F), suggesting a common ̶ and general ̶ mechanism for the establishment of these structures.

The CME machinery is essential for RA formation
Next, we set out to dissect the relationship between the formation of FCLs and RAs. It has been shown that integrin αvβ5 is required for the establishment of FCLs (Baschieri et al., 2018; Zuidema et al., 2018). We confirmed this observation by silencing integrin β5 in U2Os AP2-GFP cells plated on non-coated dishes and, indeed, they displayed a significantly lower FCL proportion compared to control cells (Figures S2A, B). Further, while integrin β5-silenced cells were unable to form RAs, they did form FAs (Figure S2C, D). The dependency of FCL formation on integrin β5 was further confirmed using Cilengitide, an inhibitor of integrin αvβ5 (Desgrosellier and Cheresh, 2010), as the treatment led to a rapid disassembly of FCLs and RAs (Figures S2E-F). While all FCLs colocalize to RAs, FCL-free areas of larger RAs are rather common (e.g. Figures 2A, 3A, 3C and S2D), which may give the impression that FCLs are formed on pre-existing RAs. Nevertheless, the fact that both structures are independently inhibited by FN suggests a deeper relationship and led us to ask if RAs can exist without the CME machinery. To answer this question, we quantified the RA coverage in U2Os-AP2-GFP cells silenced for the clathrin adaptor AP2 complex subunit alpha 1 (AP2A1) or sigma 1 (AP2S1) and plated on non-coated dishes, a condition where we observe large RAs. Consistent with an important role played by the CME machinery in RA formation, AP2A1- or AP2S1-silenced cells (easily recognizable as cells with little to no AP2-GFP signal) did not display RAs. Instead, integrin αvβ5 localized to FAs (Figure 4A, B). To confirm these results, we expressed the AP180 C-terminal fragment (AP180ct), which acts as a strong dominant negative of CME (Ford et al., 2001). AP180ct-positive U2Os-AP2-GFP cells plated on non-coated dishes displayed low AP2 signal at the membrane and, akin to AP2-silenced cells, RAs were largely absent with integrin αvβ5 localized to FAs, whereas AP180ct-negative cells displayed typical FCLs and RAs (Figure 4C, D). Thus, the CME machinery is required for the formation of RAs (Figure 4E). Next, we set out to visualize the dynamics of AP2 during RA formation. For that, we generated a double U2Os knock-in cell line ̶ AP2-GFP and integrin β5 (ITGB5)-mScarlet. RAs are remarkably stable structures (Lock et al., 2018) and their de novo formation is rare, making it rather difficult to capture such events. To minimize this challenge, we optimized the conditions for Cilengitide treatment to disassemble RAs, followed by a washout, after which RAs could start reforming (Figures S3A, B).
Using these washout conditions, we were able to capture events showing that the formation and growth of ITGB5-positive structures are accompanied by the formation of FCLs. When following the formation of these nascent structures (similar to the dot-like structures we see in many cells), we noticed that the establishment of an FCL was typically accompanied by an increase in ITGB5 fluorescence (Figures 5B, S3C and S3D and Supplementary video 3). Importantly, ITGB5-positive structures which did not colocalise with an FCL rapidly disappeared. In many cases, this disappearance was preceded by bona fide CME events (short-lived AP2-GFP signals), likely representing CME-mediated ITGB5-cluster disassembly (Figures 5B and S3D). Taken together, our results show that the relationship between FCLs and RAs is beyond a simple colocalization. In fact, our data reveal a strict co-dependency, where FCLs are required for the stabilization and growth of integrin αvβ5 clusters, thereby establishing RAs (Figure 5C).

The inhibitory effect of fibronectin on FCL and RA formation is mediated by integrin α5β1
To understand the mechanism controlling the co-assembly of FCLs and RAs, we turned our attention back to FN. While integrin αvβ5 binds to VTN at FAs and RAs, the major FN receptor is integrin α5β1 (Humphries et al., 2006). First, we acutely interfered with integrin β1 binding to FN using the function-blocking antibody mab13. U2Os-AP2-GFP cells seeded on FN-coated dishes were treated with mab13 and monitored for the acute formation of FCLs and RAs. Over the time course of 45 min, mab13 induced the relocalization of integrin αvβ5 from FAs to small, newly formed RAs (Figure 6A, B). Further supporting the role of FCLs in RA assembly, these newly formed RAs completely colocalized with FCLs (bright AP2 signals) (Figure 6A, C). A similar experiment followed by live-cell imaging confirmed these results and showed a gradual increase in FCL proportions after mab13 treatment (Figure 6D). In line with these results, integrin β1 silencing in U2Os-AP2-GFP cells plated on FN resulted in a high FCL proportion and large, prominent RAs (Figure 7A-D and S4A-C). Despite the striking increase of integrin αvβ5 on the bottom surface of silenced cells, this increase was not reflected in expression levels, indicating that the stimulation of RA formation leads to a change in the trafficking of this integrin dimer (Figure S4C). A significant increase in RAs was also seen in cells silenced for integrin α5, the alpha subunit which pairs with integrin β1 for FN binding (Figure 7C, E and S4D, E). Taken together, these results show that the inhibitory activity of FN on RAs and FCLs occurs via the activation of integrin α5β1 (Figure 7F).

Activation of integrin α5β1 at fibrillar adhesions controls RA and FCL formation
When bound to FN, integrin α5β1 can slide centripetally on the cell membrane, translocating from FAs to form elongated structures called fibrillar adhesions (FB). This movement generates long FN fibrils in a process called FN fibrillogenesis and is mediated by the cytoskeleton scaffolding protein Tensin1 (Pankov et al., 2000). To determine which type of adhesion structure active integrin α5β1 localizes to under our experimental conditions, we plated U2Os-AP2-GFP cells on FN and non-coated dishes and stained them with an active
integrin β1-specific antibody (12G10) and Tensin1 or p-Pax to mark FBs or FAs, respectively. The staining revealed that in the FN-coated dishes, active integrin β1 was colocalizing to FBs (Tensin1) ( Figure 8A). As expected, active integrin β1 and Tensin1-positive adhesions were largely absent in non-coated dishes ( Figure 8A). Next, to determine which active integrin β1 pool is more important for the inhibition of FCLs and RAs, we silenced FAs and FB components on U2Os-AP2-GFP cells and plated them on FN. In accordance with the higher accumulation of active integrin β1 in FBs, silencing of Tensin1 led to a marked increase of RAs and FCLs accompanied by a reduction in the presence of active integrin β1 on the membrane (evidence by 12G10 antibody staining) ( Figure 8B, C and S5A-C). Silencing of the FA component Talin-1 also led to increased RAs and FCLs and a reduction of active integrin β1 on the membrane ( Figure S5D-F). Given the strong phenotype on Tensin1 knockdown, this result was expected as FAs are precursors of FBs. FB formation indicates activated and migratory cell phenotypes. Indeed, active sliding of integrin α5β1 and Tensin1 bound to FN along central actin stress fibers increases traction forces (Georgiadou and Ivaska, 2017;Pankov et al., 2000) and is required in cell migration during development and cancer metastasis (Efthymiou et al., 2020;Schwarzbauer and DeSimone, 2011). If the extension of active integrin α5β1 into FBs is indeed required for the inhibition of FCLs and RAs, we hypothesized that physical confinement of cells -which inhibits cell migration -would also inhibit the sliding of FB from FAs. In turn, the absence of integrin α5β1 in FBs would favor FCLs and RAs, even if cells were plated on an FN-rich matrix. To test this possibility, we turned to single cell micropatterns. In contrast to the patterned coatings we used in Figures 1 and 2, these micropatterns do not allow cells to attach outside the defined areas on a coverslip. Given the small size of these areas (1100 µm 2 ), cells are laterally confined. U2Os-AP2-GFP cells were plated on slides with arrow-and H-shaped micropatterns either precoated with FN or not and stained for integrin αvβ5 and p-PAX and imaged to measure RA coverage. In addition, to measure integrin β1 activation, U2Os-AP2-GFP-ITGB5-mScarlet cells were plated similarly and stained for active integrin β1. Supporting our hypothesis, we could detect clear FCL and RAs in FN-coated micropatterns ( Figure 9A, B, S5K). On arrows, FCLs and RAs developed on the shaft of the pattern, rather than the arched area. In the H-patterns, FCLs and RA developed all over the pattern. Cells on non-coated patterns made large FCLs and RAs often extending throughout the pattern ( Figure 9A, B). Crucially, the RA coverage was not significantly different between coated or non-coated patterns ( Figure 9B). As expected, staining with active integrin β1 (12G10) showed a clear difference in signal between FN-coated and non-coated patterns ( Figure 9C, S5L). Importantly, further supporting a need for FB formation to inhibit FCLs and RAs, 12G10 signal was not organized as elongated, central FBs but rather confined at the cell periphery ( Figure S5K). Thus, the inhibitory role of FN on FCLs and RA formation occurs primarily via the activation of integrin α5β1 on Tensin1-positive fibrillar adhesions. The disassembly of FCL/RA is coupled to cell migration As physical restriction favored FCLs and RAs, we wondered if inducing migration will have the opposite effect. 
To test this hypothesis, we monitored FCLs and RAs in a classic wound healing assay. U2Os-AP2-GFP-ITGB5-mScarlet cells were plated on non-coated dishes and allowed to grow to full confluency for 2 days. Cultures were then wounded and cells were allowed to migrate. At 0 minutes (i.e. just after wounding), FCLs and RAs were abundant and equally distributed at the edge and away from the wound (Figure 9D). Within 80 minutes, the cells at the migration front had lost most of their FCLs and RAs, while cells further away from the edge maintained their FCLs and RAs (Figure 9D). At 4 hours, as the migratory front grew larger, the loss of FCLs and RAs also extended away from the wound (Figure 9D). In full accordance with the results we presented above, the disappearance of FCLs and RAs was preceded by an increase in FN secretion by the cells at the edge of the wound (Figure 9E, F). Together, these results place the resolution of FCLs and RAs as an intrinsic part of the cascade of events triggering cell migration (Figure 9G).

Discussion
The extracellular environment is a key regulator of cellular physiology, with integrins playing a key role translating the chemical composition of the extracellular milieu into intracellular signals. Among various mechanisms controlling integrin function, integrin trafficking via endocytosis and exocytosis plays a major role (Moreno-Layseca et al., 2019). Thus far, the relationship between integrin-based matrix adhesions and endocytosis has been considered primarily antagonistic, with endocytosis playing a role in the disassembly of said adhesive structures (Ezratty et al., 2009). Here we provide evidence, for the first time, of a constructive relationship between the endocytic machinery and cellular adhesions, where the CME machinery, in the form of FCLs, is key for the formation of integrin αvβ5 RAs. Moreover, we show that FCL-mediated αvβ5 RA formation is counteracted by the activation of a distinct integrin heterodimer, α5β1, in distinct adhesion structures, FBs, revealing an interesting mechanism of inter-adhesion crosstalk. Our results support the idea that FCLs and RAs are two sides of the same structure (Lock et al., 2019; Zuidema et al., 2020). Previous studies have demonstrated the importance of integrin αvβ5 at RAs in the formation of FCLs (Zuidema et al., 2018, 2022; Lock et al., 2019). Here, we show that this relationship is also crucial in the other direction, with FCLs being required for the formation of integrin αvβ5 RAs. Therefore, we believe the previously suggested term clathrin-containing adhesion complexes (or CCAC for short) is a more appropriate terminology to refer to these structures.

The mechanism of FCL-mediated RA formation
We observed that FCL-mediated RA formation events are rare, which led us to use non-physiological conditions ̶ a Cilengitide washout experiment ̶ to detect them. Therefore, the physiological trigger leading to the formation of FCLs and establishment of RAs remains to be understood. VTN, the ligand for integrin αvβ5, could be considered a good candidate. However, as we show in Figure 2E and as reported by others (Zuidema et al., 2022), integrin αvβ5 binds VTN equally on FAs and RAs. While it is clear that the presence of VTN is important as an extracellular tether for the formation of integrin αvβ5 adhesions (Zuidema et al., 2018, 2022; Lock et al., 2018), the switch between these adhesion types is likely an inside-out mechanism.
We did not detect an increase in integrin αvβ5 RAs on VTN-coated dishes (Figure 1B). This could seem counterintuitive, but VTN ̶ which was initially called "serum spreading factor" (Hayman et al., 1983) ̶ is readily secreted by cells during attachment. Therefore, we advise caution when drawing conclusions on the role of this ECM component on RA coverage based on the results of non-coated and VTN-coated dishes. Further work is necessary to shed light on this issue. Recent evidence showed that EGFR activation led to the enlargement of FCLs in an integrin β5 phosphorylation-dependent manner (Alfonzo-Méndez et al., 2022), pointing to a possible mechanism for the initial co-assembly of FCLs and RAs. This possibility is further reinforced by the fact that the relationship between growth factor receptors and integrins has been established in multiple contexts (Ivaska and Heino, 2011). Another key unknown aspect of FCL-mediated RA formation concerns how these structures can be molecularly differentiated from canonical endocytic events. The connection between integrin αvβ5 located in RAs and FCLs occurs primarily via the endocytic adaptors ARH and NUMB (Zuidema et al., 2018). Importantly, these adaptors also participate in integrin endocytosis (Ezratty et al., 2009; Nishimura and Kaibuchi, 2007), suggesting that other mechanisms may be required to define the identity of FCLs. Recently, a correlation was found between the presence of clathrin plaques and an alternatively spliced isoform of clathrin containing exon 31 in myotubes (Moulay et al., 2020). We did not detect any changes in clathrin splicing in our experimental system, which was not surprising given that the effects we see are contact-dependent and could not be explained by transcriptional changes. In addition, we cannot ensure that the clathrin plaques detected in myotubes are equivalent to the FCLs we observe here. Nonetheless, it is possible that the abundance of the exon 31-positive clathrin isoform works as a dial that changes the probability, speed or efficiency by which cells form FCLs.

Another unusual function for the clathrin machinery
In addition to its endocytic function, the clathrin machinery has been shown to participate in other processes. For example, clathrin helps to stabilize the mitotic spindle by binding to microtubules (Royle, 2012) and, during E. coli infection, the CME machinery is co-opted to form a clathrin-based, actin-rich adhesive structure for the bacteria called a pedestal (Veiga et al., 2007). Furthermore, a clathrin/AP2 tubular lattice was recently described to envelop collagen fibers during cell migration (Elkhatib et al., 2017). The results we present here add to this list of non-endocytic functions of CME components with an important twist. FCLs can also be disassembled into individual endocytic events (Lampe et al., 2016; Tagiltsev et al., 2021; Maupin and Pollard, 1983), providing an elegant and efficient mechanism for cells to switch the same machinery from an adhesion assembly to an adhesion disassembly function.

Inhibition of clathrin-containing complexes (CCAC) and its relationship to cell migration
In addition to defining FCLs as key factors in the establishment of CCACs, our work has also revealed many interesting aspects of the inhibition and disassembly of these structures. We show that activation of integrin α5β1 by FN and the capacity of this integrin heterodimer to slide on the plasma membrane to form fibrillar adhesions are both essential conditions for the inhibition and disassembly of CCAC.
In a classical wound healing assay, we observed that as cells start to migrate they secrete FN, leading to the disappearance of CCACs. However, using laterally confined cells ̶ which cannot form fibrillar adhesions ̶ we observed that the mere presence of FN is not enough to inhibit CCACs. A recent study showed that high levels of activated myosin light chain (p-MLC) correlated with integrin αvβ5 localizing to FAs over RAs (Zuidema et al., 2022). Moreover, overexpression of a constitutively active RhoA mutant in a cell line with low p-MLC levels promoted integrin αvβ5 localization to FAs (Zuidema et al. 2022). As Integrin α5β1-mediated FN fibrillogenesis is required for optimal activation of the RhoA-MLC pathway, which in turn increase actin stress fiber-based migration along fibrillar adhesions (Gagné et al., 2020;Huveneers et al., 2008;Danen et al., 2002), these findings perfectly complement our data. Together, these results suggest that the disappearance of CCACs is the result of a computation of multiple cellular signals occurring during the cell migration process. Whether the disassembly of CCACs occurs actively or is a mere consequence of a nonpermissive environment for the de novo formation of new adhesions is still unknown. Given the fact that RAs are long-lasting cellular adhesion structures, it is tempting to hypothesize that these structures act as a "parking brake" for a cell. As the cell is triggered to migrate, this "brake" needs to be released for efficient cell movement. This process would be analogous to the loss of cell-cell contacts which happens during epithelial to mesenchymal transition (Kalluri and Weinberg, 2009), but instead of happening between cells, it would happen between the cell and the ECM. Therefore, we propose that disassembly of RAs is an intrinsic process during cells migration. We showed that FN regulates CCAC assembly in all the cell lines we tested. However, how these in vitro findings will operate in vivo is still unknown. Even though the ECM composition in tissues is complex, the FN effect on CCAC formation is local and strictly contact dependent, which opens the possibility that, in vivo, tissues may use focal changes in ECM composition to control these structures. FN patterning To study local vs. global effects of FN, FN was mixed with 50 ng/ml of Alexa647-labelled BSA and used to precoat the imaging dishes overnight in +37°C. The coated surface was subsequently scratched with a needle to allow partial reappearance of non-coated surface. After scratching, the dishes were heavily rinsed with PBS. 20 000 U2Os-AP2-GFP cells were seeded on patterned imaging dishes to ensure sufficient single-cell attachment to border areas. Overexpression of mammalian proteins The clathrin inhibitor AP180 c-terminal fragment (AP180ct; amino acids 516-898) cDNA, from rat origin, was described previously (Ford et al., 2001). This construct was cloned into Gateway compatible pCI vectors, containing an N-terminal monomeric EGFP using the Gateway system. The donor template sequence was: where C terminal tagging with GFP is in green (codon-optimized) + short linker in purple. 150 bp homology arms (orange) were incorporated via PCR amplification from a synthesized (IDT), codon-optimized monomeric EGFP. dsPCR product was purified and 150 ng was used directly for transfection together with gRNAs. 70-80% confluent 24-well plates of U2Os cells were transfected with 2 µg PEI (1 µg/ml), 150 ng of plasmid and 150 ng of the PCR product. 
In addition, hMEC-AP2-GFP were treated with 1 μM DNA-PKc inhibitor NU7441 for 48 h post transfection. Two days after transfection cells were treated with puromycin (1 µg /ml) to enrich for successfully transfected cells. After expansion, GFP-positive cells were sorted by FACS, and single clones were expanded and genotyped. Generating the U2Os-AP2-halo and HeLa-AP2-halo cell lines U2Os-AP2-halo and HeLa-AP2-halo cell lines were generated with the same protocol as the U2Os-AP2-GFP cell line. The donor template sequence was: where C terminal tagging with halo is in green (codon-optimized) and short linker in purple. 150 bp homology arms (orange) were incorporated via PCR amplification from a synthesized (IDT), codon-optimized monomeric halo tag. Generating the HeLa-AP2-GFP, MCF7-AP2-halo, HDF-AP2-halo and CAco2-AP2-halo cell lines The most effective gRNA (TGCTACAGTCCCTGGAGTGA) was ordered as sgRNA from Synthego and Cas9 protein (purified in the lab) was used instead of plasmid. The donor templates for these cell lines to insert either EGFP or Halo tag were same as above, respectively. Cell lines were then treated with 1 µM DNA-PKc inhibitor NU7441 for 48 hours post nucleofection. Generating the U2Os-AP2-GFP-ITGB5-mScarlet cell line This cell line was produced by the same protocol as the U2Os-AP2-GFP cells with the following changes: The gRNA sequence was CAAATCCTACAATGGCACTG, and the donor template was: GGTTTGAGTGTGTGAGCTAACATGTGTCCTCATCCTCTTCCCCGCCGTGTTCTGTAGGCTTCAAATCCATTATACAGAAAGCCTATCTCCACGCACACTGTGGACTTCA CCTTCAACAAGTTCAACAAATCATATAACGGCACTGTTGACGGAAGTGCATCTGGGAGCTCAGGCGCTAGTGGTTCAGCGAGCGGGGTGAGCAAGGGCGAGGC AGTGATCAAGGAGTTCATGCGGTTCAAGGTGCACATGGAGGGCTCCATGAACGGCCACGAGTTCGAGATCGAGGGCGAGGGCGAGGGCCGCCCCTACGAGG GCACCCAGACCGCCAAGCTGAAGGTGACCAAGGGTGGCCCCCTGCCCTTCTCCTGGGACATCCTGTCCCCTCAGTTCATGTACGGCTCCAGGGCCTTCATCAAG CACCCCGCCGACATCCCCGACTACTATAAGCAGTCCTTCCCCGAGGGCTTCAAGTGGGAGCGCGTGATGAACTTCGAGGACGGCGGCGCCGTGACCGTGACC CAGGACACCTCCCTGGAGGACGGCACCCTGATCTACAAGGTGAAGCTCCGCGGCACCAACTTCCCTCCTGACGGCCCCGTAATGCAGAAGAAGACAATGGGC TGGGAAGCATCCACCGAGCGGTTGTACCCCGAGGACGGCGTGCTGAAGGGCGACATTAAGATGGCCCTGCGCCTGAAGGACGGCGGCCGCTACCTGGCGG ACTTCAAGACCACCTACAAGGCCAAGAAGCCCGTGCAGATGCCCGGCGCCTACAACGTCGACCGCAAGTTGGACATCACCTCCCACAACGAGGACTACACCG TGGTGGAACAGTACGAACGCTCCGAGGGCCGCCACTCCACCGGCGGCATGGACGAGCTGTACAAGTAATGTTTCCTTCTCCGAGGGGCTGGAGCGGGGATCT GATGAAAAGGTCAGACTGAAACGCCTTGCACGGCTGCTCGGCTTGATCACAGCTCCCTAGGTAGGCACCACAGAGAAGACCTTCTAGTGAGCCTGGGCCAGGA GCCCACAGTGCCT where A=silent mutations in 5' HA, linker region is in purple, and mScarlet is in orange. Lentiviral shRNA production and transduction Lentiviruses for shRNA production were produced using packaging plasmids pCMVR and pMD2.g and specific shRNAs in pLKO.1 vector as follows: 80% confluent HEK293T cells in DMEM supplemented with 10% FBS and 100U penicillin-streptomycin were transfected using PEI MAX transfection reagent. 5 hours later, the medium was changed to DMEM supplemented with 4% FBS and 25mM HEPES. U2Os-AP2-GFP cells were transduced with lentiviral media expressing respective shRNAs in the presence of Polybren 8 µg/ml (Sigma-Aldrich, TR-1003) for 5 hours, and replaced with culture medium. 48 h later, puromycin (1 µg/ml) was added for 24 h to allow selection Microscopy All live videos and images from fixed samples were acquired with the ONI Nanoimager microscope equipped with 405, 488, 561 and 647 lasers, an Olympus 1.49NA 100x super achromatic objective and a Hamamatsu sCMOS Orca flash 4 V3 camera. 
The ONI nanoimager microscope set to TIRF angle was used to acquire AP2 lifetimes at the cell membrane from 300 frames (1 frame/s) with the exposure time of 330 ms. Each video represents endocytic events from 2-3 cells (total field of view). Acute manipulation of integrin activity Integrin β1 blocking Acute modulation of ligand binding activity for integrin β1 was achieved using the function blocking antibody mab13 (0.3 µg/ml). U2Os-AP2-GFP cells were plated on FN as explained above, and 16-20 h later subjected to live TIRF imaging. 0 min sample has no mab13 added, to control base-line FCL proportions. Immediately after mab13 addition, 5 min time lapses were continuously collected until 35 min, control videos (time point 0) had no mab13 added. Integrin β5 blocking To acutely induce the inhibition of integrin αvβ5 we used the small molecular inhibitor Cilengitide (MedChem Express HY-16141, 10 µM). U2Os-AP2-GFP cells plated on non-coated imaging dishes were treated with Cilengitide for 15 or 45 min, fixed, stained, and imaged with the ONI nanoimager microscope at TIRF angle, and analyzed for the resulting reduction of RA coverage. Cilengitide washout U2Os-AP2-GFP-ITGB5-mScarlet cells were plated on non-coated imaging dishes and 1 d later confluent monolayers were treated with 1 µM Cilengitide for 15-25 min, during which most FCLs and RAs were dissociated from the cell membrane. Samples were then washed twice and immediately subjected to live TIRF imaging to detect the de novo formation of FCLs and RAs. 1 h time-lapses were acquired with the ONI nanoimager microscope at TIRF angle, at 30 s intervals, with an exposure time of 100 ms for AP2 and 300 ms for integrin β5. Immunofluorescent staining and imaging For immunofluorescence experiments, cells were fixed with 4% paraformaldehyde-PBS for 15 min in a +37°C incubator, washed with PBS and blocked with 1% BSA-PBS. Primary antibodies diluted in 1% BSA-PBS were incubated for 1 h, samples were washed with PBS, and secondary antibodies diluted in 1% BSA-PBS were let to bind for 30 min. Samples were imaged with the ONI nanoimager microscope using TIRF angle and exposure times of 500 ms or 1000 ms. CME lifetime analyses (FCL proportion) To track CME events and measure lifetimes we used 'u-track 2.0' multiple-particle tracking MATLAB software at default settings (Jaqaman et al., 2008). For all experiments, "n" refers to a movie, which contained 2-4 cells. To determine the proportion of FCLs, we used the output from u-track to count the number of pits (events lasting longer than 20 s and shorter than 120 s) and the number of FCLs (events lasting longer than 120 seconds, as described in (Saffarian et al., 2009) in all frames. We took a conservative approach to identify CCPs, where events that were present at the start or lasted beyond the end of the movies were not counted as CCPs. This approach artificially led to higher FCL proportions in the first and final 120 movie frames. Therefore, FCL proportions are presented as the average FCL proportion from frames 120 to 175 for each movie. Other analyses With the exception of FCL proportions, all image analyses were performed using ImageJ. Simple fluorescence measurements were done manually. Others were performed using custom scripts as shown below: RA coverage Individual cells were marked and ITGB5 (or αvβ5) and p-Pax channels were segmented using the Robust Automatic Threshold Selection function. RAs were defined as ITGB5 (or αvβ5) signals not colocalizing with p-Pax. 
The area of RAs was then divided by the area of each marked cell to obtain RA coverage. Data is presented as percentage of the cell area covering RAs. For the wound healing experiment, a line on the migration front of each image was manually drawn. This line was then used as a reference to automatically draw a box, 100 pixels in width (11.7 µm). RA coverage (as above) was calculated for this box, which was then moved inward in the culture in 50 pixels steps, where the RA coverage analysis was repeated. Values are normalized to the average RA coverage on the three innermost areas in the culture. AP2-ITGB5 dynamics Events showing the appearance of both AP2 and ITGB5 were identified by visual inspection of videos. For the generation of graphs, we selected only events where we could unambiguously ensure that significant FCLs and ITGB5 signals were not present in the region for at least 3 minutes. Time zero was defined as the frame where AP2 signal appeared and fluorescence Intensity from a 10 µm x 10 µm region around each event was measured for 3 min before (six frames) and 5 min after (10 frames). Fluorescence was normalised to the highest value in these frames. AP2 intensity per colocalisation status AP2, ITGB5 and p-Pax channels were segmented using the Robust Automatic Threshold Selection function. Each segmented AP2 spot had the fluorescence intensity measured from the original image and classified for its colocalisation with either marker (ITGB5 or p-Pax). We used full images for these analyses. RAs colocalising to AP2 Individual cells were marked and AP2, ITGB5 and p-Pax channels were segmented using the Robust Automatic Threshold Selection function. RAs were defined as ITGB5 signals not colocalizing with p-Pax. In the conditions used for these experiments, (FN + mab13) RAs 23 were primarily individual spots. The colocalisation of RAs to AP2 was classified by measuring the intensity of each RA region at the segmented AP2 channel. FN intensity vs. AP2 intensity AP2, ITGB5 and P-PAX channels were segmented using the Robust Automatic Threshold Selection function. Each segmented AP2 spot had the fluorescence intensity measured from the original image. A 3 µm x 3 µm region was drawn around each AP2 spot and used to measure the intensity of FN from the original image. Data is presented as the fluorescence for each AP2 spot. Statistics Figure legends state the exact n-values and individual repeats used in analyses. For multiple comparisons one-way ANOVA was performed followed by Tukey's multiple comparison. Pairwise comparisons were performed using two-tailed Student's t-test with equal variance. All graphs and statistical calculations were performed with GraphPad Prism 9.
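The lifetime-based classification and frame-window averaging used above for the FCL proportion can be condensed into a short script. The sketch below is illustrative only: it assumes a simplified track format (one start frame and one end frame per event) rather than the actual u-track 2.0 output structure, and the function name and example tracks are hypothetical.

```python
import numpy as np

def fcl_proportion(tracks, n_frames=300, pit_min=20, pit_max=120, window=(120, 175)):
    """Average per-frame FCL proportion over the analysis window.

    Pits (CCPs): events lasting more than 20 s and less than 120 s that are fully
    contained in the movie (a conservative definition excluding events touching the
    first or last frame). FCLs: events lasting longer than 120 s. At 1 frame/s,
    frames and seconds are interchangeable.
    """
    pits = np.zeros(n_frames)
    fcls = np.zeros(n_frames)
    for start, end in tracks:                 # 1-based start/end frames
        lifetime = end - start + 1
        span = slice(start - 1, end)
        if lifetime > pit_max:
            fcls[span] += 1
        elif lifetime > pit_min and start > 1 and end < n_frames:
            pits[span] += 1
    total = pits + fcls
    prop = np.divide(fcls, total, out=np.full(n_frames, np.nan), where=total > 0)
    lo, hi = window
    return np.nanmean(prop[lo - 1:hi])        # average over frames 120-175

# Hypothetical example: five tracks given as (start, end) frames
print(fcl_proportion([(5, 40), (50, 200), (130, 170), (1, 90), (100, 280)]))
```

Averaging only frames 120-175 mirrors the rationale given above: boundary-touching events are excluded from the CCP count, which inflates the apparent FCL proportion near the start and end of each movie.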
The Tumor Suppressor Role of Zinc Finger Protein 671 (ZNF671) in Multiple Tumors Based on Cancer Single-Cell Sequencing In humans, zinc finger protein 671 (ZNF671) is a type of transcription factor. However, the contribution of tumor heterogeneity to the functional role of ZNF671 remains unknown. The present study aimed to determine the functional states of ZNF671 in cancer single cells based on single-cell sequencing datasets (scRNA-seq). We collected cancer-related ZNF671 scRNA-seq datasets and analyzed ZNF671 in the datasets. We evaluated 14 functional states of ZNF671 in cancers and performed ZNF671 expression and function state correlation analysis. We further applied t-distributed stochastic neighbor embedding to describe the distribution of cancer cells and to explore the functional state of ZNF671 in cancer subgroups. We found that ZNF671 was downregulated in eight cancer-related ZNF671 scRNA-seq datasets. Functional analysis identified that ZNF671 might play a tumor suppressor role in cancer. The heterogeneous functional states of cell subgroups and correlation analysis showed that ZNF671 played tumor suppressor roles in heterogeneous cancer cell populations. Western blot and transwell assays identified that ZNF671 inhibited EMT, migration, and invasion of CNS cancers, lung cancer, melanoma, and breast carcinoma in vitro. These results from cancer single-cell sequencing indicated that ZNF671 played a tumor suppressor role in multiple tumors and may provide us with new insights into the role of ZNF671 for cancer treatment. INTRODUCTION Cancer is a complex ecosystem composed of cells with heterogeneous functional states, leading to both therapeutic resistance, and frequent cancer recurrence or metastasis, which poses a major obstacle to cancer diagnosis and treatment (1)(2)(3). Some tumor cells have high proliferative or apoptotic capacity, some have invasion and metastasis activities, some show stem-like properties, and some exhibit a quiescent state (4,5). These functionally heterogeneous cancer cells act cooperatively or competitively during tumor progression or metastasis, leading to distinct tumor phenotypes (6)(7)(8). Therefore, it is essential to systematically and comprehensively identify the functional states of cancer cells. Single-cell mRNA-sequencing (scRNA-seq) provides a powerful tool for characterizing the omic-scale features of heterogeneous cell populations (9,10). ScRNA-seq technologies permit the dissection of primary tumor cells, metastatic tumor cells, cancer stem cells (CSC), circulating tumor cells (CTC), and disseminated tumor cells in a comprehensive and unbiased manner, with no need of any prior knowledge of the cell population. ScRNA-seq has become a reference tool for analyzing the composition of cancer tissues and for establishing the characteristics of the cellular microenvironment (11). Thus, understanding single cancer cells will advance our understanding of not only therapeutic resistance but all facets of cell biology. Furthermore, the application of scRNA-seq in the clinic has the potential to change our approach to cancer management fundamentally (12). In this study, we analyzed the expression of ZNF671 in cancer scRNA-seq datasets systematically. We explored the functional role of ZNF671 in solid tumors and analyzed its expression and functional correlation in tumors. We further described the distribution of cancer single cells and explored their functional relevance in different tumor cell subgroups. 
Our results provide important insights into tumor heterogeneity and enhance knowledge of the tumor suppressor role of ZNF671 in solid tumors. Data Collection Data were collected based on the following keywords: ("single cells" OR "single cell" OR "single-cell" OR "single-cells") AND ("transcriptome" OR "transcriptomics" OR "scRNA-seq" OR "scRNA seq" OR "RNA-sequencing" OR "RNA-seq" OR"RNA sequencing") AND ("carcinoma" OR "tumor" OR "tumor" OR "cancer" OR "neoplasm" OR "neoplastic"). According to the method used by Yuan et al. (28), three human data sets from Array Express, Sequence Read Archive (SRA), and Gene Expression Omnibus (GEO) datasets were collected and all single-cell data in these datasets were analyzed via expression quantification, quality control, and characterization of functional states. Data Processing Transcript expression quantification was performed using Salmon (version 0.9.1) with the optional parameter k (k = 31 for long reads and k = 15 for short reads). The GENCODE (Release 28, GRCh38) reference transcriptome was used to detect gcBias, seqBias, and other default parameters in the quasi-mappingbased mode. For scRNA-seq datasets with only an expression matrix, we directly converted the expression values to transcripts per million (TPM)/counts per million (CPM) values using a custom script. Expression values were log2 transformed with an offset of 1. Dimensionality Reduction Using t-distributed Stochastic Neighbor Embedding (t-SNE) Analysis According to the method used by Li et al. (35), donor files were imported into R, and expression matrices containing measured intensities at the single-cell level were extracted from the flowCore package. A subset of cells was selected for each donor at random and merged into a single expression matrix before t-SNE analysis. The beads, viability, center, offset, residual, event length, intercalator, and time channels were removed from the expression matrix. The ZNF671 protein marker was the only factor included in the t-SNE analysis, and ZNF671 intensities were transformed using the inverse hyperbolic sine (arcsinh) function. T-SNE calculations were performed with 1,000 iterations, a perplexity parameter of 30, and a trade-off θ of 0.5, which was used to visualize similarities and the proximity of cells in a twodimensional plot. T-SNE maps were generated by plotting each event of the t-SNE dimensions in a dot-plot. ZNF671 intensities were overlaid on the dot-plot to show the expression in different cell islands and to facilitate the assignment of cell subsets to these islands. The t-SNE dimensions were characterized by t-SNE1 and t-SNE2 in the given graphs. The software is available at https://github.com/KlugerLab/FIt-SNE. ZNF671 Expression and Functional State Correlation Analysis The expression level statistics of ZNF671 in each cell were converted to normalized ranks and Next, the Kolmogorov-Smirnov liker random walk statistic, similar to the GSEA method, was used to summarize the ZNF671 expression-level rank statistics of a given signature gene set into a final enrichment score, which was used to characterize the signature activity. The enrichments of 14 signatures across cells in the scRNA-seq data were calculated, and only cells with detectable expression of ZNF671 were used. Correlations between ZNF671 expression and functional state activities were assessed using correlation analysis with false discovery rate (FDR) corrections for multiple comparisons (FDR < 0.05 and P < 0.05). 
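A minimal sketch of this pipeline is given below, assuming a cells-by-genes TPM matrix and precomputed functional-state scores. The arrays are simulated placeholders, scikit-learn's TSNE stands in for the FIt-SNE implementation used here, and the choice of Spearman correlation is an assumption (the text does not name the exact correlation statistic); none of this is the authors' actual code.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests
from sklearn.manifold import TSNE

# Placeholder data: 500 cells x 2,000 genes (TPM) and 14 functional-state scores
rng = np.random.default_rng(1)
expr = rng.gamma(2.0, 50.0, size=(500, 2000))
states = rng.normal(size=(500, 14))
znf671 = expr[:, 0]                       # stand-in for the ZNF671 column

log_expr = np.log2(expr + 1)              # log2 transform with an offset of 1
arcsinh_znf = np.arcsinh(znf671)          # arcsinh transform applied before t-SNE

# 2-D embedding (the paper runs FIt-SNE with 1,000 iterations, perplexity 30 and
# theta 0.5; sklearn's Barnes-Hut TSNE with matching perplexity and angle stands in)
emb = TSNE(n_components=2, perplexity=30, angle=0.5,
           random_state=0).fit_transform(log_expr)
print(emb.shape, arcsinh_znf[:3])

# Correlate ZNF671 with each functional state in ZNF671-expressing cells,
# then FDR-correct the p-values (significant if FDR < 0.05)
detected = znf671 > 0
results = [spearmanr(znf671[detected], states[detected, k]) for k in range(states.shape[1])]
pvals = [p for _, p in results]
passed_fdr = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]
print(passed_fdr)
```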
Migration and Invasion Assays Transwell plates (8-µm pores) (Costar/Corning, Lowell, MA) were used for Transwell migration or invasion assays. 5 × 10 4 (migration assay) or 1 × 10 5 (invasion assay) cells resuspended in serum-free medium were placed in the upper chamber of each insert, either uncoated or coated with Matrigel (BD Biosciences). The lower chamber contained culture medium with 10% FBS to act as a chemoattractant. The cells were incubated for 12 or 24 h and were then fixed and stained. Cells on the undersides of the filters were observed and counted under 200× magnification. Statistical Analysis Statistical analysis was performed using SPSS version 17.0 (SPSS Inc., Chicago, IL, USA). Differences between two groups were analyzed using the two-tailed unpaired Student's t-test; P < 0.05 was considered statistically significant. ZNF671 Functional States in the scRNA-seq Datasets Expression analysis showed that ZNF671 was obviously downregulated in GBM, glioma, AST, ODG, LUAD, MEL, and BRCA (Figure 1), which indicated that ZNF671 might play an important role in tumor progression. To further explore the functional role of ZNF671 in different cancers, 14 crucial functional states of cancer cells, including angiogenesis, apoptosis, cell cycle, differentiation, DNA damage, DNA repair, EMT, hypoxia, inflammation, invasion, metastasis, proliferation, quiescence, and stemness were summarized and analyzed. As shown in Figure 2, the expression of ZNF671 and the activity of each functional state across single-cell datasets in different cancers were explored using an interactive bubble chart. The upper bar plot shows a summary of the association between the functional state and the number of single-cell datasets. We found that the expression of ZNF671 had a significant negative regulation for angiogenesis, apoptosis, EMT, hypoxia, invasion, and quiescence, which was consistent with our previous research (26,27). These results indicate that ZNF671 might play a suppressor role in tumor development. The Different Roles of ZNF671 in Different Cell Groups To determine the functionally heterogeneous roles of ZNF671 in cancer cells, we inferred that single cells exhibited widespread heterogeneity in terms of their functional states in cancer. We applied t-SNE to reduce the non-linear dimensionality of the cancer cell data and placed different cell clusters on a t-SNE map (Figure 5), which indicated that the cell groups might be associated with the functional heterogeneity of cancer. To reveal the roles of ZNF671 in different cell groups, we further the explored functional roles and correlations of ZNF671 in different cancer subgroups. As shown in Figure 6, ZNF671 expression was positively associated with DNA repair, DNA damage, and apoptosis but negatively associated with angiogenesis, differentiation, and proliferation in MGH30 cell groups of GBM, while ZNF671 expression was positively associated with proliferation in MGH31 cell groups of GBM. In glioma (brain), ZNF671 expression was negatively correlated with angiogenesis in MUV1, with DNA repair, DNA damage, and cell cycle in MUV5, with DNA repair in BCH836, and with apoptosis in BCH869. ZNF671 expression was positively correlated with hypoxia in MUV10, BCH836, and BCH869 in glioma (brain). In glioma (PDX), ZNF671 expression in BCH869 correlated negatively not only with hypoxia but also with EMT, apoptosis, angiogenesis, and quiescence. 
In AST, ZNF671 expression was positively correlated with stemness in MGH45 and MGH56, with invasion in MGH61, and with inflammation in MGH64, and it was negatively correlated with cell cycle and invasion in MGH45, with angiogenesis in MGH57, and with invasion in MGH64. In ODG, ZNF671 expression was positively correlated with metastasis, hypoxia, inflammation, and apoptosis in MGH36 and with inflammation in MGH60 but negatively correlated with apoptosis in MGH54 and with quiescence in MGH93. Similarly, in MEL, ZNF671 expression was positively correlated with stemness in tumor78, with proliferation and stemness in tumor79, with proliferation and differentiation in tumor88, and with inflammation in tumor89. However, ZNF671 expression was negatively correlated with DNA repair in tumor78, DNA damage and angiogenesis in tumor80, and cell cycle in tumor89. In LUAD, ZNF671 expression was positively correlated with DNA repair in MBT15 but negatively correlated with metastasis and invasion in PT45. In BRCA, ZNF671 expression was only positively correlated with DNA damage in CSL KO xenograft tumor (Figure 6, all * P < 0.05; * * P < 0.01). ZNF671 Inhibits Cell EMT, Migration, and Invasion in vitro To determine the functional roles of ZNF671 in cancer cells, we performed Western blot assay and migration and invasion assays using U87, U251, A375, MDA-MB-231, and BT-549 cell lines transfected with ZNF671 or vector plasmids. As shown in Figure 7A, Western blot analysis validated that ZNF671 protein was obviously upregulated after transfection of ZNF671 plasmid. Furthermore, the overexpression of ZNF671 was associated with increased expression of the epithelial marker Ecadherin and decreased expression of the mesenchymal marker Vimentin. Transwell assays showed that overexpression of ZNF671 inhibited cancer cell migration and invasion in vitro (Figures 7B-D). These findings indicate that ZNF671 inhibits the EMT, migration, and invasion of U87, U251, A375, MDA-MB-231, and BT-549 cells in vitro. In this study, we found a total of eight solid tumorrelated ZNF671 scRNA-seq datasets, including GBM, glioma, AST, ODG, LUAD, MEL, and BRCA. ScRNA-seq functional state analysis showed that ZNF671 played a tumor suppressor role and/or an oncogenic role in angiogenesis, apoptosis, cell cycle, differentiation, DNA damage, DNA repair, EMT, hypoxia, inflammation, invasion, metastasis, proliferation, quiescence, and stemness. The different functional states in tumors may be associated with the inherent heterogeneity of the tumor. However, the synthetic analysis of eight solid tumors showed that ZNF671 was negatively associated with angiogenesis, apoptosis, EMT, hypoxia, invasion, and quiescence. Western blot and transwell assays showed that ZNF671 inhibited EMT, migration, and invasion of CNS cancers, lung cancer, melanoma, and breast carcinoma in vitro. These results suggested a crucial tumor suppressor role for ZNF671 in the progression of these cancers, which was consistent with our previous studies (26,27). To further explore the heterogeneous functional state of ZNF671 in cancers, we applied t-SNE to describe the distribution of cells. We found different cell clusters on a t-SNE map and proposed that these cell subgroups might lead to cancer functional heterogeneity. Functional analysis of the cancer cell subgroups validated that the heterogeneous cell populations had different roles in cancer progression and development, which provided us with a fine level of resolution for cancer treatment. 
However, this study still has several limitations. First, it was based on currently available scRNA-seq datasets, and several of these datasets only contain data for hundreds of single cells, so more cells should be considered in future analyses. Second, although the scRNA-seq analysis indicated that ZNF671 inhibits angiogenesis, apoptosis, EMT, hypoxia, invasion, and quiescence in CNS cancers, lung cancer, melanoma, and breast carcinoma, we only validated experimentally that ZNF671 suppresses cell EMT, migration, and invasion in vitro. The angiogenesis, apoptosis, hypoxia, and quiescence functional states still need to be verified, and the suppressor role of ZNF671 in vivo needs to be explored further.

In conclusion, this study systematically evaluated the tumor suppressor role of ZNF671 based on scRNA-seq datasets. Our findings revealed that ZNF671 is a tumor suppressor in LUAD, BRCA, GBM, glioma, AST, ODG, and MEL. However, the mechanism of ZNF671's tumor suppressor role remains unknown, and further studies are needed to clarify this issue. Our results provide new insights into the role of ZNF671 in multiple tumors and identify ZNF671 as a novel target for cancer treatment.
Assessment of Tubal Patency with Selective Chromopertubation at Office Hysteroscopy versus Modified Minilaparoscopy in Infertile Women Objectives: Tubal factor is the leading cause of female infertility. Diagnostic hysterolaparoscopy with chromopertubation plays a pivotal role in its evaluation. Office hysteroscopy (OH) has gained popularity as the outpatient procedure for diagnostic purposes. OH being a less invasive approach, the current study was undertaken to compare the accuracy of assessment of tubal patency with chromopertubation at OH with modified minilaparoscopy in infertile patients. Materials and Methods: The present study was a pilot study conducted from March 2017 to August 2018. Eighty patients were recruited. OH was done without anesthesia. Diluted methylene blue dye was injected. The eddy current of blue dye, “Visualizable flow” at ostium, and disappearance of blue dye from the uterine cavity through ostium was documented as evidence of patent tubal ostium. In case of tubal occlusion, uterine cavity became blue due to backflow of dye. After OH, minilaparoscopy with chromopertubation was performed under general anesthesia. Both tubes were assessed separately for tubal patency. Results: All patients underwent OH followed by minilaparoscopy in the same sitting. OH was 87.5% sensitive with positive predictive value of 95.2%. Compared to minilaparoscopy, OH is 85.6% accurate in predicting tubal patency. The area under receiver operating curve was 0.96 (SE is 0.15 with 95% confidence interval of 0.93–0.99, P < 0.001). It implies that, OH should correctly identify all laparoscopic cases with probability of 0.96. Conclusion: OH chromopertubation can be used as an alternative to laparoscopy for assessing tubal patency with added advantages of lack of requirement of anesthesia, minimal cost, and better patient acceptance. Moreover, the procedure is less time-consuming and less invasive with high sensitivity and moderate specificity. MaterIals and Methods The present study was undertaken as a pilot study from March 2017 to August 2018 in the Department of Obstetrics & Gynecology. Ethical clearance was obtained from the Institute Ethics Committee for Postgraduate Research, AIIMS, New Delhi, India (IECPG-568/08.12.2016, approved on 22.3.17). Informed consent was obtained from all the patients. Eighty patients who fulfilled the inclusion criteria (infertile women posted for hysterolaparoscopy whose tubal status was not known or had confirmed cornual block on hysterosalpingography [HSG]) were recruited. Patients with confirmed tubal block on laparoscopy, diagnosed hydrosalpinx, and presence of acute pelvic inflammatory disease were excluded from the study. The procedure was performed by a single experienced surgeon between postmenstrual day 5-10 to achieve best visualization. OH was done using 2.9 mm telescope (compact OH) with 0° optic and without anesthesia by vaginoscopic approach without cervical dilatation. Normal saline was used as a distension medium. The uterine cavity, tubal ostia, fundal contour, and cervical canal were assessed. Diluted methylene blue dye, 2-10 ml was injected slowly. Each tubal ostium was assessed separately. The eddy current of blue dye, "Visualizable flow" at the ostium and disappearance of blue dye from the uterine cavity through the ostium was documented as evidence of patent tubal ostium. In case of tubal occlusion, uterine cavity became blue due to backflow of the dye. After tubal patency evaluation, blue dye got self-cleared within 3-4 s. 
The same procedure was repeated on the other side. After OH, general anesthesia was given. Minilaparoscopy was performed with 2.9 mm telescope and 3 mm accessory port in all the patients. The uterus, bilateral tubes, and ovaries were assessed. The presence of methylene blue dye in Pouch of Doughlas was noted before instillation of dye to help us correlate the findings of OH. Dye (20-30 ml) was injected through the intrauterine Foley's catheter. Both tubes were assessed separately for tubal patency. Port site skin suture was not applied, and adhesive plaster was used to approximate skin edges. Operative time (OT) was documented. For OH, OT was the time from the insertion of hysteroscope through the vagina till the completion of hysteroscopy, whereas for minilaparoscopy, OT was time from skin incision to the application of adhesive plaster. Prophylactic antibiotic dose was given half an hour before surgery and continued till postoperative day 5. The Visual Analog Scale (VAS) score was assessed 2 h postoperatively and at time of discharge. Subsequent follow-up was done at 1 week to assess wound healing. Statistical analysis Data analysis was carried out using software STATA version 12.0 (StataCorp, Texas, USA). All continuous variables were tested for the normality assumption using the Kolmogorov-Smirnov test. Descriptive statistics such as mean, standard deviation, median, and range values were calculated for the variables following normality assumption. The comparison of mean values between the subgroups was tested using the Student's t independent test. Frequency data were presented as numbers and percentages. Frequency data across categories were compared using the Chi-square or Fisher's exact test. Receiver operator curve analysis was carried out to find the area under curve for OH. Confidence interval 95% was calculated for all the diagnostic measures for all statistical tests as two-sided probability of P < 0.05 was considered as statistically significant. results Selective chromopertubation with OH followed by minilaparoscopy was performed in all the eighty patients in the same sitting. The baseline characteristics of all the patients are tabulated in Table 1. The mean age was 28.23 ± 3.97 years, and mean body mass index (BMI) was 24.17 ± 1.46 kg/m 2 . Primary infertility was present in 80% of the patients. In 95% of cases, husband semen analysis was normal. On OH, uterine cavity, endometrium, patency of ostia, and image quality were assessed [ Table 2]. The size of the image and image quality was satisfactory in both OH and minilaparoscopy group in all the cases. On minilaparoscopy, status of uterus and bilateral tubes and ovaries, bilateral tubal patency, and any pathological finding were noted [ Table 3]. The uterus was normal in shape and size in 69 cases and ovaries were normal in 67 cases. Fallopian tubes were healthy looking in 62 cases, peritubal adhesions were present in 4 cases, hydrosalpinx in 7, and beaded or tortuous tubes in 7 patients. On OH, eddy current or visualizable flow was seen in 77.5% through right ostium and 78.5% through left ostium. Delayed or absent visualization of flow on OH was seen in 22.5% on right and 21.25% on left side. On minilaparoscopy, dye spillage was seen in 83.75% on the right side and 86.25% on the left side. Unilateral tubal block was seen in six patients and bilateral in nine patients. Each fallopian tube was counted as an independent case. Patency was evaluated in 160 tubes. In 119/136 cases (87.5%), tubes were patent with both methods. 
Tubal block was diagnosed in 24/160 cases (15%) by minilaparoscopy and in 35/160 cases (21.88%) by OH. Tubal block was confirmed by both OH and minilaparoscopy in 18/160 cases (11.25%). In 17 cases tubal patency was observed by minilaparoscopy but identified as blocked by OH. Out of 24 cases diagnosed with tubal block on minilaparoscopy, 6 were identified as patent on OH making the specificity of OH as 75% with a positive predictive value (PPV) of 95.2% and negative predictive value (NPV) of 51.4% in predicting tubal patency as compared to minilaparoscopy. One hundred and nineteen cases of patent tubes and 18 cases with blocked tubes on OH were concordant with minilaparoscopy, reaching a sensitivity of 87.5% in infertile patients with proximal tubal occlusion [ Table 4]. Compared to minilaparoscopy, OH was 85.6% accurate in predicting tubal patency. Receiver operating curve (ROC) analysis was carried out to detect the accuracy of OH compared to minilaparoscopy. The area under curve was 0.96 (SE was 0.15 with 95% confidence interval is 0.93-0.99, P < 0.001). It implies that, OH would correctly identify all laparoscopic cases with probability of 0.96. On minilaparoscopy, pearly white bulky ovaries were seen in six patients who underwent ovarian drilling by harmonic in the same sitting. Adhesiolysis was done in seven patients with distorted tubo-ovarian relationship due to adhesions. The mean OT was 3.45 ± 0.73 min in OH and 7.5 ± 2.24 min during minilaparoscopy which was statistically significant (P < 0.001). No major or minor complication occurred in any of the patient. None of the patient had severe pain. The mean VAS score at 2 h postoperatively was 4.65 ± 1.00 and at discharge was 2.78 ± 0.66, and there was statistically significant decrease in VAS score from 2 h postoperatively to discharge. None of the patient had wound infection. Satisfaction rate was 100%. The mean hospital stay was 2.45 ± 0.49 h. dIscussIon Primary infertility is defined as the inability to ever become pregnant after 12 months of regular timed unprotected intercourse or therapeutic donor insemination. Secondary infertility is the inability to conceive further when there is prior conception irrespective of the outcome of prior pregnancy. [1] According to the CDC National Survey of Family Growth data statistics 2011-2015, approximately 12% of married women aged 15-44 years are infertile. [2] Leading causes of infertility include tuboperitoneal disease (40%-50%), disorders of ovulation (30%-40%), uterine factors (15%-20%), and male infertility (30%-40%). [3,4] Functional fallopian tubes have an important role to play in the reproduction by capturing the ova and transportation of embryos. [5] The role of tubal block or dysfunction in infertility is rising, contributing to 30%-35% of all cases of infertility worldwide. [6] Hence, assessment of the uterine cavity and tubal patency is an important step in the assessment of female infertility. HSG is still the preferred test by many gynecologists as the first step to evaluate tubal patency as it is a day care procedure with no need of anesthesia along with a therapeutic effect of oil soluble contrast media. However, the major disadvantages of HSG are the cornual spasm which gives around 10%-20% false picture of tubal obstruction, painful procedure, radiation exposure, and risk of infection. In a study by Hortu et al., [7] it was found that PPV of HSG was 81.1% but NPV was only 53.2%. In view of the low NPV of HSG, laparoscopy should be done to confirm tubal obstruction. 
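The diagnostic indices reported above follow directly from the tube-level counts in the text (119 concordant patent tubes, 18 concordant blocked tubes, 17 patent on minilaparoscopy but blocked on OH, and 6 blocked on minilaparoscopy but patent on OH). The short Python sketch below simply re-derives them, treating "patent on OH" as the positive test result and minilaparoscopy as the reference standard.

```python
# Tube-level 2x2 table from the study (minilaparoscopy as reference)
tp, tn, fn, fp = 119, 18, 17, 6   # "positive" = tube patent on office hysteroscopy

sensitivity = tp / (tp + fn)                 # 119/136 = 0.875
specificity = tn / (tn + fp)                 # 18/24  = 0.750
ppv = tp / (tp + fp)                         # 119/125 = 0.952
npv = tn / (tn + fn)                         # 18/35  = 0.514
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 137/160 = 0.856

for name, value in [("Sensitivity", sensitivity), ("Specificity", specificity),
                    ("PPV", ppv), ("NPV", npv), ("Accuracy", accuracy)]:
    print(f"{name}: {100 * value:.1f}%")
```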
DHL with chromopertubation plays a pivotal role and considered as a major tool in the gynecologist's armamentarium in the evaluation of female infertility. [8][9][10] With advancement of minimally invasive surgery, conventional laparoscope has been gradually replaced by smaller diameter telescopes with advantages of smaller incision, reduced risk of injury to pelvic organs, less anesthesia requirement, sutureless procedure, better cosmesis, less postoperative discomfort, shorter hospital stay, faster recovery, and reduced risk of adhesion formation, wound infection, and incisional hernia. [11][12][13][14] Minilaparoscopy, i.e., laparoscopy with smaller diameter endoscope is defined by the diameter of telescope by various criteria like O' Donovan criteria [15] or Unify criteria. [12] Haeusler et al. found that microlaparoscope is as accurate as conventional laparoscope (10 mm). [16] Roy et al. [17] in their study compared 5 mm telescope with 2.9 mm telescope for diagnostic laparoscopy and found both of them comparable in terms of OT, pain in the postoperative period and duration of hospitalization. No stitch was applied in 2.9 mm group. Hence, looking at the additional advantages of smaller telescope, we used 2.9 mm laparoscope in this study for doing laparoscopic chromopertubation. OH has gained popularity as an outpatient procedure for the diagnostic purposes with distinct advantages of vaginoscopic approach, no anesthesia requirement, reduced postoperative pain increased cost-effectiveness, safety, patient acceptance, and compliance. [18,19] Vaginoscopic approach during hysteroscopy further reduces the pain and discomfort to the patient. [20] Hence, OH-guided chromopertubation is evolving as less invasive modality than minilaparoscopy for assessing tubal patency. Our study was a pilot study to compare the efficacy of OH with minilaparoscopy (2.9 mm) in infertile patients for the assessment of tubal patency. BMI is an important variable in minilaparoscopy in view of operative difficulty and feasibility. In a study by Roy et al., comparing 2.9 mm and 5 mm telescopes in infertile patients, operating time increases as BMI increases. [17] In our study, mean BMI was 24.17 ± 1.46 kg/m 2 and no difficulty was encountered in any of the patient. Risquez et al. and Bauer et al. found objective reduction of picture size and clarity with minilaparoscopy. [21,22] Bauer et al. also compared microlaparoscope with conventional laparoscope and found that extent of abdominal interventions using smaller diameter laparoscopes would be a matter of experience. [22] Roy et al. compared 5 mm with 2.9 mm laparoscope and found them comparable with respect to OT with a satisfactory image quality. [17] In our study, diagnostic evaluation by minilaparoscopy with chromopertubation with 2.9 mm telescope was accomplished without any difficulty. The mean OT for OH was 3.45 ± 0.73 min, whereas it was 7.5 ± 2.24 min for minilaparoscopy which is statistically significant (P < 0.001). O' Bauer et al. found that patients who underwent diagnostic minilaparoscopy were highly satisfied and reported less post-procedural discomfort as compared to conventional laparoscopy. [22] Narrower hysteroscopes tend to lower the incidence of pain associated with OH. [23] In our study also, mean VAS score at 2 h postoperatively was 4.65 ± 1.0 suggestive of moderate pain and 2.78 ± 0.66 at discharge with a statistically significant reduction, P < 0.001. In a Roy et al.'s study, [17] no wound infection occurred in either group. 
In our study also, no suture was applied over abdominal wound and wound healing was good at 1-week follow-up. Incision site was barely visible and all patients accepted the procedure well. Various studies have found the role of diagnostic hysteroscopy in the assessment of tubal patency. Hysteroscopic tubal patency assessment can be done by various techniques such as determination of shift in culde sac volume pre hysteroscopy to posthysteroscopy by ultrasonography, [24,25] Parryscope method using air infusion at time of hysteroscopy which generates air bubbling effect confirming tubal patency, [25,26] selective tubal perturbation [27,28] and visualizable flow effect of hysteroscopic fluid at level of tubal ostia. [29] Torok and Major in 2012 showed that OH-guided selective chromopertubation is an effective highly reproducible technique compared to conventional laparoscopy. [27] Torok and Major [27] in a case series of 35 patients conducted an office-based study, where patients underwent OH-guided chromopertubation with methylene blue dye passed through plastic catheter, tip of which being placed at ostium with the idea that patent tube will allow dye to pass through and no blue fluid will be seen in the uterine cavity. The uterine cavity will turn blue if tubes are blocked. The findings were confirmed with laparoscopy-guided chromopertubation. They reported an accuracy of 83% with a PPV of 87.5% and NPV of 76.7% compared to conventional laparoscopy. [27] Pary et al. conducted OH with air infusion into saline (Parryscope technique) in 435 infertile patients. Rapid flow of stream of air bubbles or single large air bubble through the ostia was indicative of tubal patency. If rapid flow of air bubbles was not seen, another 40-60 s observation time was devoted to differentiate transient spasm from occlusion. It showed a high sensitivity of 98.3% for tubal patency and specificity of 69.5%-83.7% depending on the force used during chromopertubation compared with standard chromopertubation. [25] Similarly, Promberger et al. retrospectively reviewed the records of 511 patients and compared visualizable flow of saline on hysteroscopy with the outcome of laparoscopic chromopertubation. They found a sensitivity of 86.4% and specificity of 77.6% of hysteroscopy for predicting tubal patency. [29] Ott et al. compared the assessment of tubal patency at diagnostic hysteroscopy and laparoscopic chromopertubation and found that hysteroscopic flow through ostia is a reliable marker of tubal patency. Flow of air bubbles or saline toward ostium Table 5. [24][25][26][27][28][29][30][31] Direct observation of ostia and high intrauterine pressures during hysteroscopy minimizes the false-positive results secondary to spasm as compared to HSG. Promberger et al. also found that if tubes come into contact with cool saline, especially before laparoscopic chromopertubation, ostia may go into spasm leading to higher tubal occlusion rate during chromopertubation, and hence, a higher false-positive hysteroscopic flow rate. [29] Parry et al. [26] found the tubal spasm during Parryscope technique. Even pain can lead to spasm which can be overcome by the use of smaller sized hysteroscope without high pressure distension of the uterine cavity. conclusIon OH chromopertubation was found comparable to modified minilaparoscopy in diagnostic accuracy to assess tubal patency in patients with cornual block on HSG. It was 87.5% sensitive with a PPV of 95.2%. 
OH chromopertubation correctly identified laparoscopy-confirmed tubal status with a probability of 0.96 on ROC analysis. Hence, OH chromopertubation alone is an effective, precise, and minimally invasive approach, with no complications, for assessing tubal patency in infertile patients with cornual block. OH chromopertubation can be used as an alternative to laparoscopy for assessing tubal patency, with the added advantages of no requirement for anaesthesia, minimal cost, use of a non-allergenic contrast, better patient acceptance, and a shorter, less invasive procedure with high sensitivity and moderate specificity.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
Full-text publications of presentations at neuroanesthesia meetings of India: A 5-year audit and analysis Backgroud and Aims: Conference presentations provide an opportunity to rapidly share findings of new research despite limitations of details and reach. Earlier studies have examined publication rates of conference presentations in anesthesia. However, conversion rate of neuroanesthesia meeting presentations to publications is unknown. We assessed the publication rate of neuroanesthesia conference presentations from India over a 5-year period and identified factors contributing to subsequent publications. Material and Methods: Conference abstracts of the Indian Society of Neuroanaesthesiology and Critical Care (ISNACC) from 2014 to 2018 were studied with regard to conversion to full-length publications. Details of presentations were obtained from abstracts published in the journal of ISNACC and details of publications were collected by searching Google and PubMed using title and author details. Results: Only 17.5% (40/229) of the abstracts presented at ISNACC conferences over a 5-year period resulted in subsequent full-text publications in peer-reviewed journals. Prospective cohort studies (OR [95% CI] 2.84 [1.05–8.56], P = 0.048), randomized trials (OR [95% CI] 2.69 [1.04 to 7.9], P = 0.053), and abstracts from public institutions (OR [95% CI] 3.44 [1.4 to 10.42], P = 0.014) were significantly associated with publications after conference presentations. Conclusion: The conversion rate of conference presentations of neuroanesthesia society of India into journal publications is significantly low. There is need for neuroanesthesia community of India to work together to improve the translation of presentations into publications. Introduction The annual medical conferences provide opportunities for academicians, clinicians, and researchers to share their new research findings with the participating audience without delay. Abstracts submitted to the conferences are mostly read by participants of the conference or/and by the readers of the respective society journals, if the conference abstracts are published. Publications of full research work in scientific journals allow meticulous peer-review, enhance credibility, result in wider dissemination and are usually considered as the end-point of research efforts. However, not all conference abstracts result in subsequent full paper publications thereby limiting the scope of meaningful application of research findings in clinical practice. Earlier studies have shown varied conversion rates of conference presentations to scientific publications. Within the specialty of anesthesia, the publication rates differed significantly between abstracts presented at Indian (5%) and American Society of Anesthesiologists meetings (22%). [1] Currently, knowledge gap exists regarding the rate of conversion of neuroanesthesia conference presentations into publications and factors that contribute to consequent publications. To understand these aspects, this study was conducted. The Indian Society of Neuroanaesthesiology and Critical Care (ISNACC) conducts its annual conference in January-February and is attended by about 400 anesthesiologists with interest in providing anesthesia and critical care services for neurosurgical patients. The conference provides a platform for trainees and practitioners to present their research work. 
The primary objective of this study was to assess the rate of conversion of neuroanesthesia conference presentations from India into scientific publications in peer-reviewed journals. Our secondary objective was to identify factors related to conference presentations that contributed to subsequent publications. Material and Methods The National Institute of Mental Health and Neurosciences human ethics committee granted waiver vide letter NIMHANS/IEC/2020-2021 dated 18 May 2020 as this study involved retrospective extraction of information from publically available resources. We extracted details of the conference abstracts of ISNACC for a 5-year period from 2014 to 2018 from the online archives of the Journal of Neuroanaesthesiology and Critical Care (JNACC), the official journal of ISNACC. We searched for publications till March 2020. The 2-year time period after the last conference (January 2018) was considered appropriate, as previous studies have reported median time of 18 months for publication of conference presentations. [1] The details of abstracts of conference presentations were extracted into a Microsoft Excel worksheet for analysis. The data included title of the presentation, names of first and corresponding authors, year of presentation, type of research work (case report or original research), hospital name, type of hospital (academic or nonacademic), designation of the presenting author (trainee or consultant), place of research (public or private), funding status and broad area of research. Next, we used PubMed and Google to search for full-text publications of the conference presentations using (1) title of abstract of the conference presentation and (2) full names of first and corresponding authors. We extracted data regarding publication as follows: name of the journal, time for publication from presentation, PubMed indexing of the journal and its impact factor and citations for the published papers. We explored whether certain factors such as designation of the presenting author (trainee vs. consultant), hospital type (academic vs. nonacademic), work setup (public vs. private), outcome of randomized control trial (RCT) (positive, negative or neutral) and type of presentation (case report vs. original research) were likely to be associated with conversion of conference presentation to subsequent full-text journal publication. Since this study was exploratory in nature, no formal sample size calculation was performed. A 5-year data was deemed as adequate to provide reasonable estimate of conversion rate of presentations to publications based on similar previous studies. [2,3] Data were analyzed using R software version 3.5.2. Interval scale variables are presented as mean ± standard deviation, while nominal variables are presented as frequencies and percentages. Binary logistic regression was used for prediction of publication, and results presented as estimates and odds ratios (OR) with 95% confidence intervals (CI). A value of P < 0.05 was considered as level for statistical significance. in PubMed indexed journals. The distribution of publications was almost similar in Indian and International journals 18 (45%) and 22 (55%), respectively. The mean impact factor of journals that published these papers and the average citation counts for published manuscripts were 0.65 ± 0.63 and 2.1 ± 2.74, respectively. The majority of the published RCTs reported positive outcomes compared to negative or neutral outcomes 11 (64.7%) vs. 6 (35.3%) [ Table 1]. 
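As a rough illustration of the binary logistic regression described in the Methods, the sketch below fits such a model with statsmodels and converts the coefficients to odds ratios with 95% confidence intervals. The data frame, column names, and predictor coding are entirely hypothetical stand-ins for the real abstract-level data, so the output is for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data: one row per conference abstract (values are random)
rng = np.random.default_rng(42)
n = 229
df = pd.DataFrame({
    "published":   rng.integers(0, 2, n),
    "prospective": rng.integers(0, 2, n),   # prospective cohort vs. other design
    "rct":         rng.integers(0, 2, n),   # randomized trial vs. other design
    "public_inst": rng.integers(0, 2, n),   # public vs. private institution
    "trainee":     rng.integers(0, 2, n),   # trainee vs. consultant presenter
})

X = sm.add_constant(df[["prospective", "rct", "public_inst", "trainee"]])
model = sm.Logit(df["published"], X).fit(disp=False)

# Odds ratios with 95% confidence intervals, as reported in the paper
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(or_table.round(3))
```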
Results The characteristics of presentations (study type, study design, nature of the hospital, setting, designation of the first author, and funding status) that resulted in subsequent publications and of those that remained unpublished at the time of assessment are shown in Table 2. Table 3]. The key areas of research in neuroanesthesia and neurocritical care in India as evidenced by presentations over a 5-year period and their subsequent publications are shown in Figure 2. Among the specific areas, neuropharmacology, neurovascular diseases and neuromonitoring were the top three areas of research that were presented during the ISNACC conferences in the 5 years that we studied. Amongst the same presentations, topics related to drugs, traumatic brain injury, monitoring, and airway were the top four specific research areas that resulted in subsequent publications. Discussion A small proportion of abstracts presented at ISNACC conferences resulted in subsequent publication as complete Anesthetists' Society in 1985 that resulted in publication was 44% at 3 years and 50% at 5 years after presentation. [4] In another study reviewing the abstracts of survey research from the annual meetings of the ASA, Association of Anaesthetists of Great Britain and Ireland and IARS from 2011 to 2014, the authors observed that 43/99 (43%), 0/76 (0%) and 7/30 (23%) abstracts, respectively, were subsequently published. [5] The publication rate of abstracts of presentations at the Turkish Society of Anaesthesiology and Reanimation (TARD) congresses between 2011 and 2014 was 42.3%. [6] The rate of publications of abstracts of Society for Obstetric Anaesthesia and Perinatology annual meetings from 2010 to 2014 was noted to be lower at 26.8%. [7] The publication rate for veterinary anesthesia conference was however high at 73.5% with the average time for publication of 24 months from presentation. [8] Another study evaluated the publication rates of abstracts of German Anaesthesia Congress (GAC) and European Society of Anaesthesiologists (ESA) meeting in 2000 and 2005. This study observed improvement in publication rates from 39% to 47% during the 5-year period for GAC but not for ESA meeting (34% and 32%). [9] In our study, we observed decrease in publication rate from 21% for presentations made in 2014 to 8% for presentations made in 2018. An earlier study evaluating publication of abstracts of ESA meeting of 1995 had observed a 42% (199/472) conversion rate with mean time to publication of 16.8 months. [10] The same group of authors evaluated the publication rates of abstracts of the Spanish Society of Anaesthesiology conference in 1992, and noted that only 17% (84/491) abstracts were published, with an average time to publication of 1.8 years. [11] Our findings demonstrate that the rate of conversion of abstracts of ISNACC meetings into publications is significantly lower than that of most meetings of other anesthesia societies. In a review of abstracts of RCTs presented at ASA meetings from 2001-2004, the authors observed that 564/1052 (53.6%) presentations proceeded to publication. Abstracts with positive study outcomes were associated with publication suggesting possibility of publication bias. [12] Majority (85%) of the publications of ISNACC conference presentations were clinical studies and the rest were case reports. The proportion of clinical studies was 73% while case reports were 2.5% in a study evaluating publication of abstracts of TARD conferences. 
[6] In our study, publications in national journals were 45% while for TARD abstracts this was 26%. [6] Prospective cohort and RCT designs were more likely to be published in our study than case reports. This is likely due to editorial policies of many anesthesia journals which do not publish case reports. Similarly, conversion rate of abstracts from publicly-funded hospitals was more compared to private hospitals. The probable reasons could be that publication in peer-reviewed journal forms an essential component of career advancement in publicly-funded institutions while such requirements are not applicable in private hospitals. Apart from the factors studied, the authors believe that, there could be other reasons for poor conversion rate of neuroanesthesia conference presentations to publications such as poor quality of research work, or lack of interest, incentive or time for authors in pursuing publication after conference presentation. To improve publication rates, ISNACC can initiate research methodology workshops for authors and researchers to improve their understanding of conducting good research. Secondly, scientific committees of neuroanesthesia conferences should perform rigorous peer-review of abstracts especially with regard to methodological aspects and adherence to guidelines for conference abstracts. Thirdly, providing grants for research and assistance in manuscript writing will help authors in performing good quality studies and facilitate This is the first study to assess rate of conversion of conference presentations of neuroanesthesia specialty into scientific publications. This study also assessed and informs how conference presentations and their subsequent publications have taken place over a 5-year period and factors contributing to subsequent publications. Our study however has certain limitations. Firstly, it is possible that few conference presentations might still be under consideration for publication, especially of latter years. We, however, had provided a 2-year window considering this is as adequate time for publication from presentation based on the findings in previous studies. Secondly, there could be some publications that are not available on either PubMed or Google Scholar, and hence may have been missed by us. Thirdly, we searched two search engines (Google and PubMed) independently and separately using 1] the title and 2] author names [both first and corresponding author] for identifying and matching the publication of a conference presentation. The first assessment was again verified by a second person. These measures were undertaken to ensure correct identification of publication attributable to the conference presentation and also to not miss any publication arising from presentation. Despite these intensive measures, it is possible that some publications may be missed if the first and corresponding author names were changed during subsequent publication or the title was completely different. To conclude, conversion rate of conference presentations of neuroanesthesia society of India into full-text publications in scientific journals is significantly low compared to other anesthesia societies. Prospective research and research conducted in publicly-funded institutions were more likely to result in subsequent publications. 
A low rate of conversion of presentations into publications results in wastage of resources, duplication of scientific research, and delays in the availability of new knowledge; hence, corrective measures are needed to address this issue.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
Evaluation of the Factors that Influence the EU Automobile Industry during the Period of Financial Crisis

Since the EU automobile industry has not completely recovered after the recent financial crisis, it is worthwhile to identify which factors could have determined the recession of the EU automobile industry. The article is aimed at the evaluation of the factors that influence the automobile industry in the EU during the period of financial crisis. The methods of the research include correlation analysis and multifaceted regression analysis. The research has made it possible to establish the impact of macroeconomic factors on EU automobile production, whereas the factors that influence EU automobile demand have been researched only partly due to non-stationarity of the statistical data. Although the data was differentiated to make it stationary, the differentiation changed data values and correlation coefficients too significantly to allow reliable conclusions. For a comprehensive analysis, the Vector Error Correction Model (VECM) should be applied.

Introduction

The automobile industry is extremely important for the EU economy since it provides nearly 13 million jobs and is considered to be the biggest investor in R&D. However, due to the impact of the financial crisis, this industry has experienced significant losses. In the scientific literature, the automobile industry has been analysed from the following perspectives: characteristics and historical development (Haugh et al., 2010; Ding, Akoorie, 2013; Drauz, 2013 and others); automobile sales (Dargay, 2001; Muhammad et al., 2012; Erdem, Nazlioglu, 2013 and others); and the impact of particular factors on the automobile industry, such as oil price (Kumar, Maheswaran, 2013; Busse et al., 2009) and short-run macroeconomic factors (Smusin, Makayeva, 2009 and others). However, there is a lack of comprehensive research on the macroeconomic factors that influence the EU automobile industry. That is why this article is aimed at the evaluation of the factors that influence the EU automobile industry during the period of financial crisis. The objectives of the research are as follows: 1) to identify the macroeconomic factors that influence the automobile industry; 2) to present the methodology of the research; 3) to evaluate the factors that influence the EU automobile industry during the period of financial crisis. The object of the research is the EU automobile industry. The methods of the research include correlation analysis and multifaceted regression analysis.

Table 1. Macroeconomic factors influencing automobile production and demand (source: compiled by the authors)

Factors influencing automobile production:
- GDP (Madlani, Ulvestad, 2012; Haugh et al., 2010)
- Governmental policy (Madlani, Ulvestad, 2012; Drauz, 2013)
- Exchange rate (Madlani, Ulvestad, 2012; Drauz, 2013)
- Price of raw material (Madlani, Ulvestad, 2012; Ford Motor Company, 2012)
- Petroleum price (Ford Motor Company, 2012; Kumar, Maheswaran, 2013)
- Interest rate (Ford Motor Company, 2012)
- Public debt (Ford Motor Company, 2012)
- Demand (European Commission, 2008)

Factors influencing automobile demand:
- GDP (Muhammad et al., 2012; Ding, Akoorie, 2013)
- GDP per capita (Haugh et al., 2010; APEC Automotive Dialogue, 2002)
- Fuel prices (Muhammad et al., 2012; Busse et al., 2009)
- Interest rate (Muhammad et al., 2012; Haugh et al., 2010; Erdem, Nazlioglu, 2013)
- Unemployment rate (Muhammad et al., 2012)
- Income level (Dargay, 2001; Smusin, Makayeva, 2009)
- Inflation (Muhammad et al., 2012; APEC Automotive Dialogue, 2002)
- Private sector consumption (Haugh et al., 2010)
- Petroleum price (Haugh et al., 2010; Abu-Eisheh, Mannering, 2002)
- Financial state of the markets (Haugh et al., 2010; Ding, Akoorie, 2013)
- Uncertainty about the future (Haugh et al., 2010; APEC Automotive Dialogue, 2002)
- Customers' priorities (Erdem, Nazlioglu, 2013)
- International trade (Erdem, Nazlioglu, 2013)
- Manufacturing (Erdem, Nazlioglu, 2013; Smusin, Makayeva, 2009)
- Exchange rate (Ludvigson, 1998; APEC Automotive Dialogue, 2002)
- Real estate price (Smusin, Makayeva, 2009)

As can be seen in Table 1, automobile production is influenced by GDP, governmental policy, the exchange rate, the price of raw materials, the petroleum price, the interest rate, public debt and demand. Among the factors that have a significant impact on automobile demand, the main ones emphasized in the scientific literature are the interest rate, petroleum prices and income level. In addition, the scientific literature contains research on the strength of the links between particular macroeconomic factors and automobile demand; this research is summarized in Table 2, which pairs the analysed factors with their sources: petroleum price (Ludvigson, 1998); exchange rate (Busse et al., 2009); petroleum price, income level and exchange rate (Dargay, 2001); and income level.

Table 2 shows that the automobile industry moves along with the economy. However, the variation of automobile production is bigger than that of GDP; new automobile registration is strongly positively linked with GDP and employment, while the link between sales and income level, as well as the one between sales and the exchange rate, is medium-strength positive. A negative medium-strength link was established between sales and petroleum prices, while automobile demand and the interest rate showed a strong negative link. Finally, automobile demand is strongly positively influenced by income level. Summarizing, it can be stated that GDP has a positive impact on the automobile industry, since a growing economy increases consumption and production. An income increase has a positive impact on sales, while inflation causes automobile demand to decrease. Demand also decreases due to the negative impact of the unemployment rate, petroleum prices and interest rates. Rising foreign currency rates reduce automobile demand, while a strong domestic currency raises demand due to cheaper imports. Private consumption positively affects automobile demand. All these factors can have an indirect impact on automobile production, since production depends on consumption.

The methodology of the research

The analysis was performed in the following stages: 1) assessment of the collected data; 2) correlation analysis; 3) model preparation and evaluation. In the first stage, the data was collected and data normality, i.e. normal distribution, was verified. Then data stationarity was verified applying the autocorrelation function (ACF). For data significance verification, Q statistics, particularly the Ljung-Box (LB) test, was applied. If the coefficients of Q statistics were significant, the variable was differentiated to obtain stationarity. If ACF stationarity verification failed, the Augmented Dickey-Fuller (ADF) test was applied. In the second stage, the links between dependent and independent variables were verified applying regression and correlation.
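The first-stage stationarity check can be illustrated with a short sketch. This is not the authors' Eviews workflow: the file name and series are hypothetical, and only the ADF test with differencing is shown (the Ljung-Box Q statistics step is omitted).

```python
# Minimal sketch of the first-stage stationarity check (ADF test) and
# differencing described above. The series name and CSV file are
# hypothetical; the original analysis was carried out in Eviews.
import pandas as pd
from statsmodels.tsa.stattools import adfuller

series = pd.read_csv("eu_auto_production.csv", index_col=0).squeeze("columns")

def is_stationary(s, alpha=0.05):
    """Augmented Dickey-Fuller test: reject the unit-root null if p < alpha."""
    stat, pvalue, *_ = adfuller(s.dropna())
    return pvalue < alpha

d = 0
while not is_stationary(series) and d < 2:
    series = series.diff().dropna()   # first- and, if needed, second-grade differences
    d += 1

print(f"order of differencing needed: {d}")
```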
Researching the correlation, automobile production and demand volumes were treated as dependent variables (y) while macroeconomic factors were treated as independent variables (x). Linear correlation was calculated using Pearson correlation coefficient. For correlation verification, t (Student's) criterion was applied. In the third stage, regression model was prepared and verified. Significance of the model was verified applying Fisher criterion. Durbin-Watson statistics was applied verifying autocorrelation of the model. Also, residual errors, their normality and possible heteroscedasticity (i.e. whether the errors do not correlate with independent variables) were verified. The analysis was carried out using "Eviews" software. Evaluation of the factors that influence the EU automobile industry during the period of financial crisis The empirical research has enabled to identify the factors that influence the EU automobile production and demand during the period of financial crisis. The statistics of automobile production has been presented in annual data; the data of the period of 1997 -2012, i.e. 16 observants, has been considered.Since the values of Jaeque-Bera (JB) criterion (0.93) are higher than 0.05, the data has normal distribution. Since JB criterion for GDP is lower than 0.05, this criterion will not be further analyzed. Q statistics for automobile production and 1 month EURIBOR are higher than 0.05, so these variables are stationary, and they will not be differentiated as well as the variable of new automobile registration since their probabilities are also close to 0.05. In the second stage of the analysis, the correlation matrix has been formed (see Table 3). With the reliability level equal to 0.05, and the variables including 15 observants (one observant was lost due to differentiation), the calculated critical value of t criterion was 2.160. Comparing this value with t values for independent variables, it was established that 3 variables have lower t values than the critical one. Thus, their correlation is insignificant and the variables of real exchange rate (0.338), 1 month EURIBOR interest rate (1.656) and loan interest rate (1.817) will not be used for further research. Values t for the other variables are higher than the critical value, and probabilities lower than 5 per cent, so their correlations with the dependent variable are significant. Thus, the rest part of the variables have medium or strong link with the dependent variable. New automobile registration strongly correlates with automobile production. Independent variables GDP (0.653) and petroleum price (0.653) have a positive medium-strength link with automobile production. Growing economics determines automobile production growth. However, the positive link between petroleum price and automobile production is illogical since petroleum is one of main raw materials in this industry. What is more, higher petroleum prices cause fuel prices to rise, which, in turn, leads to automobile demand and production decline. That is why the variable of petroleum price will not be used for further research as well as the variable of steal price index which has strong correlation (0.573) with automobile production. Steal is also an important raw material in automobile industry, so higher steal prices cause more expensive automobile production, which, in turn, discourages customers from buying automobiles. Public debt is the only of the variables that is negatively linked with automobile production. 
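The significance screening just described, in which each correlation's t value is compared with the critical value (2.160 for 15 observations), can be illustrated with a short sketch. The data file and column names are hypothetical placeholders; the original computations were done in Eviews.

```python
# Sketch of the second-stage screening: Pearson correlation of each
# macroeconomic factor with automobile production, with a t test of
# the correlation's significance. Column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("eu_auto_annual.csv")        # hypothetical annual data file
y = df["production"]
factors = ["new_registrations", "gdp", "petroleum_price", "steel_price", "public_debt"]

for name in factors:
    r, p = stats.pearsonr(df[name], y)        # correlation and its two-sided p-value
    n = len(y)
    t = r * np.sqrt((n - 2) / (1 - r**2))     # t statistic compared with the critical value
    print(f"{name}: r = {r:.3f}, t = {t:.3f}, p = {p:.4f}")
```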
The link between public debt and automobile production is also medium-strength (-0.522). Increasing public debt causes the decline in automobile production. High public debt can impede economic growth and so negatively affect automobile production. As it can be seen in the correlation matrix presented above, the variables of GDP and public debt show strong correlation (the value is higher than 0.7). That is why both of the variables cannot be used in one model. The variable of public debt has been eliminated. The completed model has been presented in Table 4. Table 4 reveals that the model is significant since the probability of F statistics is lower than 0.05. The coefficients of independent variables are significant as well. Although the probability of t statistics for GDP is slightly higher than 0.05, it is very close to this number, so it is treated as significant. Model's determination coefficient R2shows that the variables of new automobile registration and GDP explain 65.4 per cent changes of the dependent variable -automobile production. The determination coefficient, adjusted considering the variables, show that the variables of new automobile registration and GDP explain automobile production by 59.6 per cent. The coefficient of the variables in the model reveals that registration of one new automobile would increase automobile production by 0.52 automobile. The influence of GDP is difficult to estimate since this variable was differentiated. Autocorrelation in the model has been verified applying Durbin-Watson statistic. Considering the number of independent variables and observants in the model, the values of dL and dUwere selected. The model includes 2 independent variables and 15 observants; dL=0.95, and dU=1.54. In the model, d=0.81. Since this value is lower than both critical values, the model contains positive autocorrelation of errors. That is why model's determination coefficient cannot be treated as reliable. Summarizing the results, it can be stated that automobile production in the EU is strongly positively influenced by automobile demand (new automobile registration) and GDP. Also, the factor of public debt has medium-strength correlation with automobile production. Medium-strength correlation of petroleum price and steal price index with automobile production cannot not be logically substantiated. The model of multifaceted regression that includes the variables of GDP and new automobile registration explains automobile production by 59.6 per cent. However, the model contains autocorrelation, so the results are not completely reliable. For the research of automobile demand factors, the data of the quarters during the period of 2003 -2012, i.e. 40 observants, was analysed. The probability of JB criterion for the dependent variable -new automobile registration -is higher than 0.05, so the data of this variable has normal distribution. The probability of JB criterion for the other two variables -GDP and GDP per capita -is lower than 0.05, i.e. the data of these variables does not have any normal distribution; the variables have negative values and cannot be computed as a logarithm. That is why the variables of GDP and GDP per capita will not be used for further research. Verifying stationarity of the variables, it has been estimated that probabilities of Q statistics for all variables are lower than 0.05, so all the variables are non-stationary. 
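The production model discussed above, with new automobile registration and GDP as regressors and F, t and Durbin-Watson checks, can be reproduced in outline as follows. The variable names are hypothetical and the original model was estimated in Eviews, so this is only a sketch of the procedure, not the authors' code.

```python
# Outline of the regression model for automobile production with the
# Durbin-Watson autocorrelation check discussed above. Variable names
# are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("eu_auto_annual.csv")                   # hypothetical annual data file
X = sm.add_constant(df[["new_registrations", "gdp_diff"]])
model = sm.OLS(df["production"], X).fit()

print(model.summary())                                    # F statistic, R^2, coefficient t tests
print("Durbin-Watson:", durbin_watson(model.resid))       # compare with the dL and dU bounds
```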
First grade differentiation has enabled to reach stationarity for six variables: gross wages, GDP per capita, vehicle price index, petroleum price, real efficient currency exchange and household consumption expenditure. What is more, other six variables (GDP, loan interest rate, 1 month EURIBOR, unemployment rate, consumer confidence index and manufacturing index) had to be differentiated by the second grade in order to reach their stationarity. However, five variables did not reach their stationarity by Q statistics even after the second grade differentiation. Then another method, i.e. Augmented Dickey-Fuller (ADF) test, was applied. The variable of disposable income growth, differentiated by first grade ADF, reached its stationarity while the other four variables (disposable income, long-term goods consumption expenditure, consumer price index and new automobile registration) were differentiated by second grade. In the second stage of the research, the correlation matrix has been formed (see Table 5). With the reliability level equal to 0.05, and the variable including 38 observants (2 observants were lost due to differentiation), the calculated critical value of t criterion was 2.0281. Only one variable, i.e. long-term goods consumption expenditure, significantly correlates with automobile demand (new automobile registration). However, the correlation is weak (0.321). It shows that the increasing expenditure for long-term goods partly determines new automobile registration. In addition, probability of t statistics for consumer price index (0.0594) is close to 0.05, thus, this variable can be treated as significant. However, correlation of this variable with new automobile registration is illogical since increasing prices should not determine higher demand for automobiles. Thus, there remains only one significant variable which fails the formation of a multifaceted regression model, and the model of pair regression shows very low determination coefficient. Summarizing, the analysis of the macroeconomic factors that have the impact on the EU automobile demand has partly failed due to non-stationarity of the statistical data. Data differentiation significantly changes its values and correlation coefficients. Thus, in the future, the analysis of this kind should be carried out applying the Vector Error Correlation Model (VECM). Conclusions Researching the influence of macroeconomic factors, it has been established that new automobile registration and GDP have significant positive impact on the EU automobile production. Public debt has a positive medium-strength correlation with it. The factors of GDP and new automobile registration can explain 60 per cent of the changes of automobile production in the EU. The factors that influence the EU automobile demand have been researched only partly due to non-stationarity of the statistical data. Although the data was differentiated to make it stationary, the differentiation too significantly changed data values and correlation coefficients to make reliable conclusions. For the comprehensive analysis, the Vector Error Correlation Model (VECM) should be applied.
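The conclusions recommend a vector error correction model (VECM) for the demand-side data. A minimal sketch of such a model in statsmodels is given below; the data file and its columns are hypothetical placeholders for the quarterly series used in the study, and the lag and deterministic settings are illustrative choices rather than the authors' specification.

```python
# Minimal sketch of the vector error correction model (VECM) suggested
# for the demand-side analysis. The DataFrame and its columns are
# hypothetical placeholders for the quarterly series.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

data = pd.read_csv("eu_auto_quarterly.csv", index_col=0)   # e.g. registrations, wages, fuel price
rank = select_coint_rank(data, det_order=0, k_ar_diff=2)    # Johansen test for cointegration rank
model = VECM(data, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci")
result = model.fit()
print(result.summary())
```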
Statistical Analysis of Hie (Cold Sensation) and Hiesho (Cold Disorder) in Kampo Clinic A cold sensation (hie) is common in Japanese women and is an important treatment target in Kampo medicine. Physicians diagnose patients as having hiesho (cold disorder) when hie disturbs their daily activity. However, differences between hie and hiesho in men and women are not well described. Hie can be of three types depending on body part where patients feel hie. We aimed to clarify the characteristics of patients with hie and hiesho by analyzing data from new patients seen at the Kampo Clinic at Keio University Hospital between 2008 and 2013. We collected information about patients' subjective symptoms and their severity using visual analogue scales. Of 4,016 new patients, 2,344 complained about hie and 524 of those were diagnosed with hiesho. Hie was most common in legs/feet and combined with hands or lower back, rather than the whole body. Almost 30% of patients with hie felt upper body heat symptoms like hot flushes. Cold sensation was stronger in hiesho than non-hiesho patients. Patients with hie had more complaints. Men with hiesho had the same distribution of hie and had symptoms similar to women. The results of our study may increase awareness of hiesho and help doctors treat hie and other symptoms. Introduction In Japan, hie (cold sensation) and hiesho (cold disorder) are different terms. While hie is used to describe the subjective, uncomfortable feeling of coldness, hiesho is the diagnosis given by physicians to patients with cold sensations that disturb their daily living. Therefore, the first distinction to make is one between normal and hie groups. Those who experience hie can further be subdivided into hiesho and nonhiesho categories (Figure 1). Hiesho is the most common diagnosis given in Japanese Kampo clinics [1]. In Japanese Kampo medicine, hiesho is treated as a unique pathological condition. In contrast, cold sensation is only one of many symptoms asked about in a review of systems in Western medicine. One definition of hiesho for diagnosis is an "abnormal, subjective sensitivity to coldness in the lower back, the extremities, other localized regions of the body, or the whole body despite ambient temperatures. It lasts throughout the year for most patients, and disturbs their daily living" [2]. Hie as a subjective symptom is common in Japanese people [1] and is more common in women [3]. However, the epidemiology of this symptom is not clear in Western people. One report comparing Japanese with Brazilians indicated that 57% of Brazilian pregnant women were aware of cold sensations [4]. We think it may be common symptom in other populations as well. In 1987, Kondo and Okamura reported demographic data of 318 Japanese women with hie but had no data for men [5]. They reported that hie accompanied other uncomfortable symptoms such as shoulder stiffness, constipation, lumbago, fatigue, and hot flushes. In Kampo medicine, treatments not only target hie, but also 2 Evidence-Based Complementary and Alternative Medicine Normal Hie Hiesho Nonhiesho Figure 1: Hie and hiesho. In Japan hie (cold sensation) and hiesho (cold disorder) are different terms. While hie is the term used to describe the subjective, uncomfortable feeling of coldness, hiesho is the diagnosis given by physician to patients with cold sensation disturbing their daily living. Therefore, the first distinction is between normal and hie group. The hie group is subdivided into hiesho and non-hiesho. these accompanied symptoms. 
Subsequently, there are many Kampo formulas for treating hiesho. Hie has been categorized into three types based on the body part where the symptoms are experienced. We assume different pathophysiology for each type. The first type of hie is a general type due to decreased heat production from a loss of muscle volume or decreased basal metabolism. The second type of hie is peripheral, due to a disturbance of heat distribution related to decreased peripheral blood flow. The third type of hie is upper body heat-lower body coldness with associated vasomotor abnormalities. However, epidemiological information regarding these classifications are unknown. Keio University first introduced a browser-based questionnaire in 2008 that collects patient's subjective symptoms and changes in symptom severity via visual analogue scales (VAS), life styles, Western and Kampo diagnoses, and prescribed Kampo formulas. Here, we report results from the analysis of data from male and female patients and attempt to clarify the characteristics associated with hie and hiesho. We especially focus on classification of hie and accompanied symptoms because this information is important for considering the pathophysiology of hie and the appropriate Kampo formulas for treating patients with hiesho. Patient Enrollment. Patients who made their first visit to the Kampo Clinic at Keio University Hospital between May 2008 and March 2013 were included from this study. Exclusion criteria were unwillingness to enter the study and missing data regarding age and/or sex. Patients who answered only about their lifestyle or who were diagnosed as having hiesho but did not answer regarding the part of the body where they felt hie were excluded. All registered patients provided written informed consent. Patient Grouping. In this analysis, we divided patients into three groups: patients with hie with a diagnosis of hiesho (hiesho group), patients with hie without a diagnosis of hiesho (non-hiesho group), and patients without hie (Normal group). Our dataset did not include information about how physicians diagnosed patients with hiesho ( Figure 1). Assessment of Subjective Symptoms. We collected information about patients' subjective symptoms using a 128question binary questionnaire (Table 1). Among these 128 questions, 106 also had VAS when patients answered yes on the binary questionnaire. The VAS was a horizontal line, 100 mm in length, where the left-most side (0 mm) represented no symptoms and right-most side (100 mm) represented the severest symptoms. To normalize within each patient, we divided each patient's VAS by the maximum VAS possible. This is because VAS scores were different from patient to patient. In other words, each patient's original VAS values ranged from 0 to 100 but were transformed to 0 to 1 for easier comparison. Between Group Comparisons. We focused on symptoms directory related to hie to clarify the differences between the hiesho and non-hiesho groups. Here, we choose six symptoms from the directory related to hie: hie of the whole body, hie of the hands, hie of legs/feet, hie of the lower back, cold intolerance, and tendency to get frostbite. We also analyzed body part combinations where patients felt hie and five heat-related symptoms to get epidemiological information regarding hie classification. The five heat-related symptoms were as follows: heat intolerance, hot flush, heat sensation of the face, heat sensation of the hands, and heat sensation of legs/feet. 
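The per-patient VAS rescaling described above can be sketched as follows. This is one reading of the normalization (dividing each patient's scores by that patient's own maximum so values lie between 0 and 1); the data layout is hypothetical and the original analysis was carried out in R.

```python
# Sketch of the per-patient VAS normalization described above: each
# patient's scores (0-100 mm) are divided by that patient's maximum,
# giving values between 0 and 1. The DataFrame layout is hypothetical.
import pandas as pd

vas = pd.read_csv("vas_scores.csv", index_col="patient_id")  # one column per symptom, mm values

row_max = vas.max(axis=1)
vas_normalized = vas.div(row_max.where(row_max > 0), axis=0)  # zero maxima become NaN, avoiding division by zero

print(vas_normalized.head())
```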
Finally, we focused on accompanying symptoms and compared men and women to clarify differences between these groups. Statistical Analysis. All statistical analyses were conducted using R software, version 2.15.2 (The R Foundation for Statistical Computing; October 26, 2012). Characteristics were compared using Wilcoxon's rank sum test, two-sampletest, and test for equal proportions. We used Wilcoxon's rank sum test to compare the VAS of hie because normality did not hold. We used a significant level of 5% for all tests. Participant Information. Participants included 4,057 registered patients, 41 of whom were excluded because of missing values (one due to missing age, 19 failed to report anything regarding subjective symptoms, and 21 were missing data on the part of the body where they felt hie in spite of a diagnosis of hiesho). We used data from 4,016 patients in this analysis, including 2,344 patients with hie, and 524 of those who were diagnosed as having hiesho. 3.2. Age and Sex. We compared age and sex of patients with hie with the diagnosis of hiesho (hiesho group, = 524) and patients with hie but no diagnosis of hiesho (non-hiesho group, = 1, 820) to patients without hie (Normal group, = 1, 672). The mean age was 51.6 ± 1.5 years old for Figure 2: Rate of non-hiesho and hiesho groups in each age group. Hie (cold sensation) and hiesho (cold disorder) were uncommon in children, but almost similarly present among young and old patients. We also can see that hie and hiesho were more common in women. members of the hiesho group, 47.1 ± 0.8 years old for the nonhiesho group, and 46.2 ± 1.0 years old for the Normal group. Participant mean age in the hiesho group was significantly higher than the non-hiesho and Normal groups according to results of a -test. The number of patients in each group who fell within each age group is shown in Figure 2. Hie and hiesho were uncommon in children and rates were similar for young and old patients. With regard to sex, there were 94 men and 430 women (percentage of women: 82.1%) in the hiesho group, 342 men and 1,478 women (percentage of women: 81.2%) in the nonhiesho group, and 675 men and 997 women (percentage of women: 59.6%) in the Normal group. A test for equal proportions showed significantly more women in both the hiesho and non-hiesho groups than in the Normal group. Differences between Hiesho and Non-Hiesho Groups. We compared the location where hie symptoms occurred between the three groups. The frequencies of binary answers for the four parts of the body where patients felt hie for hiesho and non-hiesho groups are as follow: hie of the whole body: hiesho 40.1%, non-hiesho 22.4%; hie of the hands: hiesho 42.2%, non-hiesho 35.1%; hie of the legs/feet: hiesho 75.6%, non-hiesho 77.0%; and hie of the lower back: hiesho 22.3%, non-hiesho 13.8%. Except for legs/feet, the frequencies of hie were significantly higher for the hiesho group based on results of the test for equal proportions (Figure 3 upper). There were no clear differences seen regarding the distribution of hie based on patient age or sex. The frequencies of binary answers of the other two hie related symptoms for all three groups are as follows: cold intolerance: hiesho 77.7%, non-hiesho 58.0%, and Normal 16.1%; and tendency to get frostbite: hiesho 10.7%, non-hiesho 6.3%, and Normal 1.5%. 
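Group frequencies such as these were compared with a test for equal proportions. A small sketch of that comparison is given below; it uses a chi-square test on a 2x2 table (the approach behind R's prop.test), and the counts are reconstructed approximately from the reported percentages and group sizes, so they are illustrative only.

```python
# Sketch of the "test for equal proportions" used in the study, applied
# to the cold-intolerance frequencies. Counts are reconstructed
# approximately from the reported percentages and group sizes
# (hiesho n = 524, non-hiesho n = 1820) and are illustrative only.
from scipy.stats import chi2_contingency

hiesho_yes, hiesho_n = round(0.777 * 524), 524            # cold intolerance, hiesho group
non_hiesho_yes, non_hiesho_n = round(0.580 * 1820), 1820  # cold intolerance, non-hiesho group

table = [
    [hiesho_yes, hiesho_n - hiesho_yes],
    [non_hiesho_yes, non_hiesho_n - non_hiesho_yes],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2g}")
```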
The frequencies of binary answers for these two symptoms were significantly higher in the hiesho group than in the non-hiesho group, and both were higher than in the Normal group, as determined by the test for equal proportions. We also compared the differences in VAS scores for hie of each body part between members of the hiesho and non-hiesho groups using Wilcoxon's rank sum test. For every part of the body, hie was significantly worse for members of the hiesho group (Figure 3, lower). VAS scores for cold intolerance in the hiesho group also were higher than those in the non-hiesho group, which in turn were higher than those in the Normal group (Figure 4).

Body Part Combinations of Hie and Frequencies of Heat-Related Symptoms. Table 2 shows the combinations of body parts where patients with hie (n = 2,344, including the hiesho and non-hiesho groups) experienced their symptoms. In total, 30.8% felt hie in both their hands and legs/feet; that is, 84.2% of patients who felt hie in their hands also felt hie in their legs/feet. Similarly, 12.2% felt hie in both their lower back and legs/feet; that is, 77.7% of patients who felt hie in their lower back also felt hie in their legs/feet. In contrast, 11.3% of patients felt hie throughout their whole body and in their legs/feet; that is, 43% of patients who felt hie throughout the whole body also felt hie in their legs/feet, and this ratio was significantly lower than the former two. We did not separate these groups by sex, as men in the hiesho and non-hiesho groups had higher frequencies of hot flush or heat sensation of the face, the same as women in the hiesho and non-hiesho groups. The frequency of heat intolerance was significantly lower for the hiesho group compared with the Normal group. In contrast, hot flush and heat sensation of the face were significantly more frequent for members of the hiesho and non-hiesho groups compared with the Normal group, as indicated by the test for equal proportions.

Accompanying Symptoms. We also compared accompanying symptoms in members of the three groups. Of 122 symptoms, after removing the 6 hie-related symptoms, the mean number of subjective symptoms reported was 22.9 ± 1.0 for members of the hiesho group, 24.5 ± 0.6 for the non-hiesho group, and 15.8 ± 0.5 for the Normal group (Table 3). The mean number of subjective symptoms for both the hiesho and non-hiesho groups was significantly higher than for the Normal group as indicated by the t-test, and almost all symptoms were more common in the hiesho and non-hiesho groups than in the Normal group; these results may be affected by the number of symptoms reported by patients. We sorted symptoms by reporting frequency for the hiesho group. The top 10 common symptoms were as follows: shoulder stiffness, easily fatigued, neck stiffness, eyestrain, depressed mood, constipation, upper back stiffness, dry skin, flatulence, and forgetfulness. In women, menstrual pain also was common. The ranking of these symptoms was almost the same for members of both sexes and all three groups.
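The VAS comparisons between the hiesho and non-hiesho groups used Wilcoxon's rank sum test because normality did not hold. A minimal sketch of that comparison in Python is shown below; the original analysis used R, and the normalized VAS vectors here are hypothetical placeholders rather than the study's data.

```python
# Sketch of the VAS comparison between hiesho and non-hiesho groups using
# a rank-sum test (the original analysis used R's wilcox.test).
# The normalized VAS vectors below are hypothetical placeholders.
import numpy as np
from scipy.stats import mannwhitneyu   # rank-sum test for two independent samples

vas_hiesho = np.array([0.9, 0.8, 1.0, 0.7, 0.85])       # hie of legs/feet, hiesho group
vas_non_hiesho = np.array([0.5, 0.6, 0.4, 0.7, 0.55])   # hie of legs/feet, non-hiesho group

stat, p = mannwhitneyu(vas_hiesho, vas_non_hiesho, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```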
Discussion Kampo physicians diagnose patients as having hiesho (cold disorder), when hie (cold sensation) and its associated symptoms cause disturbance in daily living. In Japanese Kampo medicine, hiesho is treated as a unique pathological condition and there are many Kampo formulas to treat it. When choosing Kampo formulas, the part of the body where hie and its accompanied symptoms are felt is important. This is why the present study has focused on the classification of hie and its comorbid symptoms. Fundamental parts of our dataset were consistent with previous reports and supported the generalizability of our data, despite our population being recruited from a Kampo clinic. It has been reported that the subjective symptom of hie was common in Japanese people and a diagnosis of hiesho was common in Japanese Kampo clinics [1,6]. Consistent with these past reports, around 60% of patients in our study reported subjective feelings of hie, and hiesho was one of the most common diagnoses in the Kampo medicine clinic where our study was conducted. It also has been reported that hie and hiesho are common in women [3], which is Evidence-Based Complementary and Alternative Medicine 7 consistent with results of the present study. The frequency of patients in our study who reported experiencing hie in their extremities was also consistent with the results of past studies from an obstetrics-and-gynecology clinic in Japan [7] and on working women in Japan [5]. Ushiroyama mentioned that women developed hie because of the existence of their pelvic organs, which affected peripheral blood flow to the legs/feet and lower back [7]. Women's pelvic organs develop after puberty and may consume blood flow of lower body. However, according to our research, the legs/feet were the most common parts of the body affected by hie for both men and women of all age groups. Thus, explanations regarding the effect of pelvic organs do not help us understand lower body hie in men and postmenopausal women. We found that patients diagnosed as having hiesho reported more severe hie symptoms. The frequencies of hie of the whole body, hands, and lower back as well as reports of cold intolerance and a tendency to get frostbite were higher in the hiesho compared to non-hiesho group. Furthermore, patients in the hiesho group were more likely to have high VAS scores regarding hie for any body part and cold intolerance compared to their non-hiesho counterparts. There were no other symptoms for which patients in the hiesho group had higher VAS scores than those in the non-hiesho group. In addition, hypothyroidism was significantly more common in the hiesho than non-hiesho group (2.5% versus 0.7%); however, most patients in the hiesho group did not have organic diseases that might cause hie (data not shown). It might be important for us to not only treat hiesho, but also to study organic diseases that can cause hie, especially in members of the hiesho group. One classification categorizes hie into the three as per the areas of the body where people report experiencing it: general, peripheral, and upper body heat-lower body coldness. At the 51st annual meeting of the Japan Society for Oriental Medicine, Kako Watanabe et al. reported the efficacy of the cold-water challenge test to divide hie into these three types (not published). They put patients' hands into cold water at 4 ∘ C for 30 seconds and measured blood flow recovery. 
Patients with decreased metabolism complained of whole body hie after the cold-water challenge despite normal blood flow recovery, and patients with disturbed peripheral blood flow could not recover blood flow after the cold-water challenge test. In addition, patients with upper body heat-lower body coldness recovered blood flow with fluctuation due to autonomic imbalance. In Kampo theory, the pathophysiology of these three types of hiesho has been explained as qi deficiency, blood stagnation, and qi counterflow. Our results support this classification of hie. We observed that many patients who report feeling hie in their hands or lower back also felt it in their legs/feet, and these combinations were far more frequent than the combination of whole body and legs/feet. The result supports the first two types of hie (general and peripheral). Our results also suggest that the peripheral type might be further subdivided by the type of extremity (e.g., narrowly defined extremity type, which affected hands and legs/feet, and lower body type, which affected the lower back and legs/feet). The general type of hie is thought to be related to a loss of heat production from decreased muscle volume and/or basal metabolism, and peripheral hie may be due to disturbances in heat distribution due to blood stagnation. We also found that around 20-30% of patients with hie felt upper body heat sensations such as hot flushes and heat sensation of the face, and these symptoms were significantly more common in patients with hie. This supports the existence of upper body heat-lower body coldness. This type of hie may be related to a kind of autonomic imbalance that causes vasomotor disturbances. We assume representative Western diagnosis for these three types of hie. First, one of the organic diseases that causes general hie is hypothyroidism. Due to low metabolism, patients complain about feeling cold or cold intolerance, which sometimes may be comorbid with objectively cool peripheral extremities [8]. Based on a randomized crossover trial, thyroxin did not appear effective for patients with normal thyroid function tests and symptoms of hypothyroidism including intolerance to cold [9]. Next, one of the organic diseases that causes peripheral hie is peripheral arterial disease due to arteriosclerosis [10]. It is a good adaptation of Western intervention when patients feel acute coldness with resting pain in their foot and toes by critical limb ischemia such as occlusion of an artery where blood flow cannot accommodate basal nutritional needs of the tissues [11]. However, the majority of patients feel chronic cold sensations in their legs/feet without gait disturbances and it is difficult to treat such patients in Western medicine. Finally, one of the organic diseases that causes upper body heat-lower body coldness is perimenopausal disturbance. Hot flushes with lower extremity coldness due to vasomotor disturbance is common for peri-or postmenopausal women [12]. Treatment options are limited for some patients due to side effects of hormone replacement therapy. Kampo medicine may be one treatment option for such patients and we try to apply the appropriate Kampo formulas. Our data supported that patients with hie experienced many uncomfortable symptoms, which may be aggravated by hie. It has been reported that women with hie and hiesho experienced other uncomfortable symptoms such as shoulder stiffness, constipation, lumbago, fatigue, hot flush, headache, and edema in the leg [3,5]. 
Our findings support these results for both sexes; menstrual pain often was found in women with hie. Thus, treatment of hie may lead not only to its improvement, but also to the improvement of other symptoms. However, the number of symptoms experienced by patients might affect our results, as patients with hie reported about 10 more symptoms than those without hie. This suggests that patients with hie had 1.6-1.8 times more symptoms than patients without hie. We also can assume that hie is an indicator of patients with many symptoms. Thus, we may obtain more information by segregating patients with hie according to their comorbid symptoms.

Conclusion

The present study is important because it clarifies some of the epidemiological characteristics of patients with hie and hiesho. Specifically, we have learned the following. (1) Hiesho patients are those who suffer from severe hie. (2) Patients with hie may be classified roughly into three types. (3) Patients with hie experience many comorbid symptoms. (4) Men and women with hiesho have almost the same distribution of hie and its associated symptoms. Appropriate treatment options for hiesho are not available in Western medicine. Therefore, if we are more aware of hiesho, we can use Kampo formulas to treat not only the patients' hie, but their comorbid symptoms as well.
Sodium permeability of dog red blood cell membranes. I. Identification of regulatory sites. Divalent cations and group-specific chemical modifiers were used to modify sodium efflux in order to probe the molecular structure of sodium channels in dog red blood cells. Hg++, Ni++, Co++, and PCMBS (parachloromercuribenzene sulfonic acid), a sulfhydryl reactive reagent, induce large increases in Na+ permeability and their effects can be described by a curve which assumes 2:1 binding with the sodium channel. The sequence of affinities, as measured by the dissociation constants, reflects the reactivity of these divalent cations with sulfhydryl groups. In addition, the effects of Hg++ and PCMBS can be reversed by the addition of dithiothreitol, an SH-containing compound, to the medium. Much smaller increases in Na+ permeability are produced by Zn++ and the amino-specific reagents, TNBS (2,4,6-trinitrobenzene sulfonic acid) and SITS (4-acetamido-4'-isothiocyano-stilbene-2-2'-disulfonic acid). The Zn++ effect can be described by a curve which assumes bimolecular binding with the channel, and its effect on Na+ permeability can be reversed by the addition of glycine to the medium. The effects of Ni++ and SITS can be completely reversed by washing the cells in 0.16 M NaCl while TNBS binding is partially irreversible. Measurements of mean cell volumes (MCV) indicate that the modifier-induced increases in Na+ permeability are not caused by shrinkage of the cells. It is concluded that the movement of sodium ions through ionic channels in dog red blood cells can be enhanced by modification of amino and sulfhydryl groups. Zn++, TNBS, and SITS increase Na+ permeability by modifying amino groups in the channel while Hg++, Ni++, Co++, and PCMBS act on sulfhydryl groups. INTRODUCTION The permeability characteristics of canine red blood cells are of considerable interest because of the uniqueness of the cell type. Dog erythrocytes differ from human and most other mammalian red cells in their ionic composition and transport mechanisms. Human red cells have a high potassium-low sodium content and maintain large electrochemical gradients for these ions with a ouabain-sensitive pump. In contrast, dog erythrocytes are of a low potassium-THE JOURNAL OF GENERAL PHYSIOLOGY " VOLUME 67, 1976 " pages 563-578 563 high sodium type, and these electrolytes are more nearly in Donnan equilibrium with the plasma. No ouabain-sensitive cation flux has been measured in dog red cells (J. Hoffman, 1966;Miles and Lee, 1972;, and no (Na + K)sensitive ATPase has been found in their membranes (Chan et al., 1964). In most mammalian red blood cells osmotic swelling is prevented by the active extrusion of sodium and accumulation of potassium by such a ouabain-sensitive pump (Tosteson and Hoffman, 1960). The apparent absence of such a mechanism in dog red cells presents a problem as to the maintenance of homeostasis in these cells. Parker and Hoffman (1965) have reported that the permeability of dog red cells is volume dependent, i.e., Na ÷ permeability is decreased and K ÷ permeability is increased when the cells are swollen, while the opposite is true in shrunken cells. Miles and Lee (1972) found that cation permeability in dog red cells is energy dependent, i.e., Na + permeability decreases, while K + permeability increases in ATP-depleted cells. Yet, they found cell volume to be unaffected by energy depletion. Recently, Romualdez et al. 
(1972) observed that the effect of cell volume on membrane permeability is abolished by treatment with phloretin, an inhibitor of lactate production. They concluded that volume regulation in canine erythrocytes is accomplished by an energy-dependent cation carrier system. Parker (1973 b) appears to have identified this energy-dependent mechanism for the extrusion of sodium and maintenance of cell volume as a pump requiring external calcium. However, at normal cell volume this calcium-dependent sodium movement represents only 3% of the total sodium efflux. Therefore, much of the sodium transport in these cells is passive and presumably through ionic channels, i.e., sodium channels, and the regulation of sodium transport through these channels is of considerable interest. The present study is an attempt to characterize the nature of the functional groups of proteins which are located within the ionic channels and which control passive sodium permeability. One approach to ideritifying sites within a channel is to pharmacologically modify these groups and observe the effects of such modification on ionic movement. In this investigation two different m6thods of pharmacological modification were used: heavy metal ions and chemical modifiers. The effects of each of these forms of protein modification on sodium efflux were measured. An abstract containing some of the results reported here has appeared elsewhere (Castranova and Miles, 1973). MATERIALS AND METHODS Sodium Efflux Measurements Blood was obtained from adult dogs anesthetized with sodium pentobarbital (40 mg/kg body weight) by direct heart puncture. Sodium heparin (1,000 U/liter of blood) was used to prevent clotting. The blood was immediately centrifuged, and the plasma and bully coat of while blood cells were removed by suction. The red cells were then washed three times by alternate resuspension and centrifugation in ice-cold 0.16 M NaCI. The erythrocytes were loaded with radioactive sodium by incubating the cells for 3 h at 37°C in a medium containing 2~Na. The composition of this medium (pH = 7.4) was: Na + (167.33 mM), Cl-(152 mM), K + (5 raM), HPO4 = (9.32 mM), H2PO4-(1.65 raM), and glucose (5.55 raM). When loaded, the red cells were removed from the incubation medium and washed four times with ice-cold 0.16 M NaC1 as before. The loaded red cells were then placed in Erlenmeyer flasks containing a buffered solution of the following composition: Na + (146 raM), CI-(151 raM), K ÷ (5 raM), glucose (2.77 mM), and Na PIPES [piperazine-N-N'-bis (2-ethane sulfonic acid)] (5 raM). The hematocrit was always less than 5%. Na PIPES was used rather than a phosphate buffer since PIPES has been shown to have a negligible binding affinity for divalent cations such as those used here as modifiers (Good et al., 1966). The pH of the medium was set at 6.5 for all experiments because the divalent cations used precipitate, probably as the hydroxide, at pH levels above 7.0. Treatment of the red cells with divalent cations or chemical modifiers consisted of the addition of minute amounts (final concentrations of 0.005-3 raM) of the modifier to the various flasks of PIPES buffered medium before the addition of the red cells. It was found that the sodium efflux measured for red cells incubated in PIPES buffer at pH = 6.5 is comparable to that measured in phosphate buffer at the same pH. A measure of sodium permeability was obtained by determining the rate constant for sodium efflux. 
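As described in the following paragraphs, the rate constant is obtained from the slope of a semilogarithmic plot of ln(1 - Pt/P∞) against time after 30 min. A rough sketch of that calculation is given here; the count data are hypothetical, not the study's measurements.

```python
# Rough sketch of extracting the rate constant k21 for the slowly
# exchanging sodium compartment, as described in the following paragraphs:
# ln(1 - Pt/Pinf) is plotted against time and k21 is minus the slope of the
# linear portion after 30 min. The count data below are hypothetical.
import numpy as np

t_min = np.array([30, 60, 90, 120, 150, 180])    # sampling times (min)
Pt = np.array([310, 520, 700, 855, 990, 1105])    # cpm in supernatant (hypothetical)
Pinf = 2400.0                                     # cpm of the whole suspension (hypothetical)

y = np.log(1.0 - Pt / Pinf)
slope, intercept = np.polyfit(t_min, y, 1)        # least-squares line
k21 = -slope                                      # rate constant, per minute
print(f"k21 = {k21:.4f} per min")
```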
The counts per minute of the 22Na in the medium at time t (Pt) were obtained by taking supernatant samples at various times and measuring the radioactivity of these samples. The counts per minute in the medium at time infinity (P∞) were estimated from the number of counts per minute in a sample of the whole suspension. The optical densities of these samples were measured at 540 nm to correct for the counts per minute due to any hemolysis which may have occurred (Crosby et al., 1954). The results indicate that sodium efflux from dog red blood cells behaves as a three-compartment system. In such a system Sha'afi and Lieb (1967) have shown that the cell interior consists of two compartments. Efflux from the first intracellular compartment (5% of the total cell sodium) is very rapid and reaches equilibrium with the plasma within 30 min. The major portion of sodium efflux is from the second intracellular compartment (95% of the total cell sodium) and is described by the equation:

ln(1 - Pt/P∞) = -k21 t,

where k21 is the rate constant for efflux from the major intracellular compartment. Thus, in the present study, ln(1 - Pt/P∞) was plotted against time and the rate constant for the major portion of sodium efflux was obtained from the slope of this line after 30 min. Lee and Miles (1973) were unable to demonstrate exchange diffusion of sodium in puppy red blood cells. In addition, we have been unable to demonstrate any exchange diffusion in adult dog cells (unpublished results). Therefore, the contribution of an exchange diffusion mechanism to the sodium efflux measurements is negligible. The rate constant has been shown to be a measure of sodium efflux by Sha'afi (1965), since M_o^Na = k21 [Na+]_i, where M_o^Na is the Na efflux and [Na+]_i is the internal Na concentration. It must be emphasized that these symbols refer only to the slowly exchanging component of sodium efflux. If the driving force remains constant, i.e., the cells are in the steady state, then the rate constant is also a measure of Na permeability (P_Na), since M_o^Na = P_Na × (driving force). In this study the sodium concentration of all cells (normal and modified) was found to remain constant during the time of the experiment. This indicates that these cells are in the steady state during the time that the flux measurements are made, and the driving force for sodium is probably constant since large changes in membrane potential are unlikely (see Discussion). Therefore, the rate constant is indeed a measure of Na+ permeability. The effect of pharmacological modification was measured by comparing the rate constants for Na+ efflux in the presence of modifier to the rate constants in unmodified cells (controls).

Mean Cell Volume Measurements

The effect of the various divalent cations and chemical modifiers on mean cell volume (MCV) was determined. MCVs were calculated in the usual manner by dividing the hematocrit by the cell count. Washed red cells were incubated at 37°C in PIPES-buffered medium which contained either the divalent cation or the chemical modifier. The hematocrit was always less than 5%, as in the case of the sodium efflux measurements. These incubations were carried out for a period of 2 h, which is well into the time interval during which the rate constants for sodium efflux were measured. Then the red cells were spun down and supernatant was removed until the hematocrit was approximately 30-50%. The cells were then resuspended in the remaining medium and mean cell volume measurements were made using this suspension.
Hematocrits were measured and the cell counts were determined by using a Coulter Counter (model B, Coulter Electronics, Inc., Hialeah, Fla.). The MCVs were expressed in cubic microns.

Concentration-Effect Relationships

Curves relating relative Na+ efflux to the concentration of divalent cation or chemical modifier added to the medium were constructed by applying the following analysis. As a first approximation, it was assumed that a certain number (n) of modifier molecules (Y) bind to regulatory sites (X) within the sodium channel as described in the reaction:

nY + X ⇌ YnX. (1)

The dissociation constant (KD) is then defined as:

KD = [Y]^n [X] / [YnX], (2)

where the brackets denote molar concentrations. The fraction of channel sites occupied by the modifier has been designated as α and is given by:

α = [YnX] / ([X] + [YnX]). (3)

This equation can be rearranged to give the following:

α = [Y]^n / ([Y]^n + KD). (4)

The data from these experiments were plotted in two different ways. First, a saturation curve is obtained when α, which is measured as the fraction of the maximal change in relative sodium efflux, is plotted against the concentration of the modifier added to the medium. In order to obtain the best curve, the experimental points were fitted to a theoretical saturation curve, given by Eq. 4, by determining the binding type (n) from the procedure described below (Hill plot), and by picking a dissociation constant (KD) such that the sum of the squares of the deviations between the experimental points and the theoretical curve was a minimum. The equation for a saturation curve can be linearized to the form

log((1/α) - 1) = log KD - n log [Y]. (5)

Plotting log((1/α) - 1) against log [Y] results in a straight line such that the slope (-n) indicates the binding type and the y intercept is the log of the dissociation constant. This analysis is virtually identical to that for a Hill plot (Van Holde, 1971). In this investigation the data were fitted to the best line by least squares. The binding type and KD obtained from the Hill plots were found to be similar to those determined from the theoretical saturation curves, so numbers from these two different plots are used interchangeably in the text. In this investigation it would have been ideal to correlate the changes in Na+ efflux with the fraction of channel sites modified. However, for most modifiers it would be difficult to measure actual binding to the Na+ channels due to the reversibility of their effects, i.e., the modifiers used in these experiments are easily washed out. In addition, there are probably so few binding sites associated with the permeability change that non-specific binding would obscure the results. It must be emphasized that this analysis of the concentration-effect relationships is used only as an approximation to the actual binding. We have assumed that a number (n) of modifier molecules react simultaneously with the Na+ channel and that intermediates such as Y(n-1)X do not exist, which may be unlikely. Indeed, many kinetic models may fit our data. However, this analysis was used merely to distinguish 1:1 binding from other types of binding. It must also be pointed out that the absolute values of the dissociation constants are not important; rather, the emphasis should be placed on the sequence of apparent affinities of the modifiers for the Na+ channel.

Modification of the Sodium Channel with Divalent Cations

The effect of certain heavy metal ions on the sodium permeability of canine red blood cells is illustrated in Fig. 1.
This figure shows sodium efflux from untreated red cells (control) and from erythrocytes treated with heavy metal ions in concentrations which produce maximum changes in sodium permeability. Note that sodium permeability is increased to varying degrees by all the divalent cations listed in this figure. However, other divalent cations were found to have no effect on Na + efflux, e.g., Ba ++, UOz ++, and Ca ++ did not alter sodium permeability. None of the divalent cations tested caused a decrease in sodium permeability. A summary of these effects is given in Table I. The relative sodium efflux shown is the maximum increase in the rate constant for sodium efflux in treated red cells over that measured for untreated erythrocytes. For example, the rate constant in the presence of 0.5 mM Ni ++ is 14.3 times greater than the rate constant with no divalent cation added. The sequence for the increase in relative sodium permeability, i.e., Ni ++ > Cu ++ > Hg ++ > Co ++ > Zn ++, is similar to but does not follow exactly the sequence for the crystal ionic radii of these ions, i.e., Ni ++ < Cu ++ = Co ++ < Zn ÷+ < Hg ++ (Weast and Selby, 1967). In order to investigate more fully the interaction between these heavy metal ions and the channel, the concentration-effect relationships for these divalent The concentrations of divalent cations used to achieve maximum increases in Na + efflux were: 0.5 mM Ni ++, 0.1 mM Cu ++, 0.1 mM Hg ++, 3.0 mM Co ++, 0.75 mM Zn ++. cations were studied. This was done by measuring the increase in sodium efflux from red ceils treated with various concentrations of each divalent cation. It was assumed that such increases in sodium efflux result from the binding of the divalent cations to groups which affect sodium permeability as described in the Methods. The objective of these experiments was to obtain a binding number (n), i.e., the number of divalent cations which bind to each regulatory group within the channels, and to obtain a measure of the apparent affinity of the heavy metal ions for the channel, i.e., the dissociation constant (KD). An example of the type of plot used to determine n and KD is shown in Fig. 2 A. In this figure a represents the fraction of channel sites bound, and it is measured as the fraction of maximal Na + efflux achieved. This curve is a Hill plot drawn for points representing the mean values from three experiments with Co ++ . The slope of this line was used to determine the type of binding (n) while the y intercept gives the KD. The slope of -1.7 is taken to indicate that two Co ++ ions bind to each regulatory site in the sodium channel while the y intercept gives an apparent KD of 0.3 mM 2. The curve in Fig. 2 B is the theoretical saturation curve of best fit for Co ++ , drawn by assuming that two Co ++ ions bind to one site in the sodium channel with an apparent Ko of 0.25 mM z. Note that the dissociation constants derived from both types of plots are similar. The binding number and the Ko were determined for each heavy metal ion by using this type of analysis. The dissociation constants and the type of binding between each divalent cation and the sodium channel are listed in Table II. Hg ++, Ni ++, and Co ++ exhibit 2:1 binding, i.e., two heavy metal ions modify one regulatory site in the sodium channel, while Zn ++ binding follows a 1:1 relationship. 
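The Hill-plot determination of n and KD described above, in which log((1/α) - 1) is regressed on log [Y] so that the slope gives -n and the intercept gives log KD, can be sketched as follows. The concentration-effect values are hypothetical and chosen only to illustrate a roughly 2:1 binding pattern; they are not the measurements reported here.

```python
# Sketch of the Hill-plot analysis described above: log((1/alpha) - 1) is
# regressed on log[Y]; the slope gives -n (binding type) and the intercept
# gives log(KD). The concentration-effect values below are hypothetical.
import numpy as np

conc_mM = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])    # modifier concentration [Y], mM
alpha = np.array([0.05, 0.17, 0.52, 0.78, 0.93, 0.98])  # fraction of maximal effect

x = np.log10(conc_mM)
y = np.log10(1.0 / alpha - 1.0)

slope, intercept = np.polyfit(x, y, 1)                   # least-squares line, as in the analysis above
n = -slope                                               # binding number
KD = 10 ** intercept                                     # apparent dissociation constant (mM^n)
print(f"n = {n:.2f}, KD = {KD:.3f}")
```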
Since Hg ++ is known to react strongly with sulfhydryl groups (Weed et al., 1962;Vallee and Walker, 1970), and since Ni ++ and Co ++ also exhibit second-order binding, it is not unreasonable to assume that these three cations bind to SH groups in the channel. Further evidence for this is that the sequence of apparent affinities for the divalent cations exhibiting 2:1 binding, i.e., Hg ++ < Ni +÷ < Co ++, is identical to the sequence of affinities for the binding of these divalent cations with cysteine (Martell and Sill6n, 1964) and is unrelated to the sequence of crystal ionic radii for these cations. Zn ++, on the other hand, seems to increase Na + permeability by binding with some other ligand in the channel since it exhibits 1 : 1 binding even though it is a smaller ion than Hg ++. One possibility is that the site of Zn ++ modification may be at an amino group within the sodium channel. The following experiments were designed to test this hypothesis. The dissociation constants and binding types were determined from Hill plots of the mean values from three experiments for each divalent cation, r is the correlation coefficient between the fitted line and the data. (The affinities, which were calculated from the concentration of modifier added to the medium, are apparent affinities.) Modification of the Sodium Channel with Amino and Sulfhydryl Reagents Another approach to identifying functional groups within the sodium channel involves the chemical modification of these sites with group-specific reagents. PCMBS is a reagent which binds specifically to sulfhydryl groups (Sutherland et al., 1967), while TNBS, SITS, and 2-methoxy-5-nitrotropone (MNT) are used as amino-specific reagents (Satake et al., 1960;Maddy, 1964;and Tamaoki et al., 1967). It should be noted that TNBS and SITS reactions with other groups (e.g., sulfhydryl groups) may be possible. A summary of the effects of these group-specific reagents on Na + permeability is given in Table III. As in Table I, the concentration of modifier used is that necessary to produce a maximal effect, and relative Na + efflux indicates the maximum increase in the rate constant for sodium efflux in treated red cells over the control level. Note that sodium permeability is increased to varying degrees by all of these reagents indicating that both sulfhydryl and amino groups are involved in regulating the movement of sodium through the channel. However, the MNT effect is a small one and may not be significant due to a possible effect of this reagent on cell volume, which was not measured (see section on cell volume). This small effect may also be due to limitations in the solubility of MNT. The sulfhydryl-reactive PCMBS is more effective in increasing Na + efflux than the amino reagents, TNBS and SITS. These data tend to support the previous conclusion that Ni ++, Co ++, and Hg ++, which are the most effective heavy metal ions, increase Na + permeability by interacting with sulfhydryl groups. PCMBS also exhibits 2:1 binding with the channel as do Ni ++, Co ++, and Hg ++. This suggests that the thiol regulatory unit consists of two sulfhydryl groups. On the other hand, Zn ++, which exhibits 1:1 binding with the channel and is less effective than the sulfhydryl reactive heavy metal ions, is in the same range of effectiveness as the amino reactive reagents, TNBS and SITS. Experiments indicate that the binding of SITS is most closely described by 1:1 binding, while that of TNBS is 2:1. 
Reversal of Binding It has been assumed that divalent cations and chemical modifiers increase Na + permeability by binding with either amino or sulfhydryl groups within the channel. If this binding is electrostatic, as seems likely for divalent cations, then it should be possible to reverse the effects of modification by dissociation of this reagent-channel complex. This type of experiment should contribute to the identification of the groups involved. One way to attempt uncoupling of the modifier from the channel is to add to the medium an excess of the functional group with which the modifier reacts. For example, the binding of PCMBS to red cell membranes has been reversed by adding either the sulfhydryl containing amino acid, cysteine (Sutherland et al., 1967), or dithiothreitol (DTT) (P. Hoffman, 1969), to the medium. In this study the reagents used to dissociate the modifier-channel complex were either an excess of sulfhydryl-containing DTT or amino-containing glycine. The experimental procedure was modified slightly for the reversal experiments. Dog red cells were incubated in a medium containing the divalent cation or chemical modifier for 1 h. Then either 5 mM DTT or 5 mM glycine was added to the medium. A half hour later the cells were washed three times in icecold 0.16 M NaCI and placed in a fresh medium free of any type of modifier. Sodium efflux was then measured in the normal manner. Since the cells are washed free of all modifiers and reagents before sodium efflux is measured, it is necessary to look also at the effect of simply washing the cells free of chemical modifiers and divalent cations. The results of the reversal experiments are summarized in Table IV. The effect of washing alone, DTT and washing, or glycine and washing on relative sodium efflux is given as the relative control value. Note that DTT treatment increases relative Na + efflux 1.2 times while glycine causes a decrease in efflux to 0.90 of that value for untreated cells. However, these are small changes and are unimportant in this type of experiment, since each column of relative Na + efflux values is related to its own control value which has been corrected to unity. Note also that washing the cells causes some dissociation of the modifier-channel complex, i.e., removal of modifier from the medium causes a decrease in the effects of all the modifiers on Na efflux. Treatment with DTT causes complete reversal in the case of PCMBS and Hg ++, but does not cause a change in the binding of MNT or Zn ++. Glycine, on the other hand, causes a reversal of the Zn ++ effect but does not affect PCMBS or Hg ++. It should be pointed out that the effect of washing with glycine on the MNT-induced increase in Na efftux is not significant. These data are in agreement with that mentioned previously, indicating the involvement of both amino and sulfhydryl groups in the regulation of sodium movement, It also confirms that PCMBS and Hg +÷ affect Na + permeability by modifying sulfhydryl groups while Zn ++ acts on amino groups in the sodium channel. Reversal of the effects of SITS, Ni ++, TNBS, and Co ++ was also attempted with the following results. SITS and Ni ÷+ effects are completely reversed by washing with ice-cold 0.16 M NaCI, suggesting very weak electrostatic binding with the channel, or incomplete bond formation in the case of SITS due to the inability of SITS to penetrate into the membrane (Maddy, I964;Knauf and Rothstein, 1971). Therefore, in these cases the action of DTT or glycine treatment has no meaning. 
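The normalization used in the reversal experiments, expressing each treatment relative to its own washing, DTT, or glycine control with that control corrected to unity, can be sketched as follows. All of the rate constants below are made-up placeholders, not measured values.

```python
# Each column (wash alone, DTT + wash, glycine + wash) is normalized to its own
# control, so "relative Na+ efflux" is always 1.0 for the untreated cells.
rate_constants = {                            # hypothetical rate constants, h^-1
    "wash":    {"control": 0.020, "PCMBS": 0.180, "Zn": 0.060},
    "DTT":     {"control": 0.024, "PCMBS": 0.025, "Zn": 0.058},
    "glycine": {"control": 0.018, "PCMBS": 0.170, "Zn": 0.019},
}

for column, values in rate_constants.items():
    control = values["control"]
    relative = {name: k / control for name, k in values.items()}
    print(column, {name: round(v, 2) for name, v in relative.items()})
```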
The TNBS-channel complex is not reversed by either DTT or glycine, indicating that the binding is very strong and possibly covalent. Covalent binding of TNBS to amino groups has been previously reported (Barker, 1971). Co++ data are not given in Table IV because when DTT and Co++ are used together the cells seem to clump.

Parker and Hoffman (1965) have shown that the cation permeability of canine red blood cells is dependent upon cell volume. They have reported that sodium permeability increases and potassium permeability decreases when dog red cells shrink, while the opposite is true in swollen cells. One possibility, then, is that these modifiers increase sodium permeability by causing the cells to shrink rather than by directly modifying a site within the channel. In order to investigate this possibility, MCVs were measured for erythrocytes which were incubated with either a divalent cation or a chemical modifier in the medium for 2 h, i.e., duplicating actual experimental conditions. The results were expressed as percent change in the MCV of chemically modified cells from that of control cells.

Effect of Divalent Cations and Chemical Modifiers on Cell Volume
The results of the MCV measurements are summarized in Table V. (In Table V the percent change in the MCV for each modifier is expressed relative to that with no modifier present; minus signs indicate cell shrinkage and plus signs cell swelling. For example, SITS increased the MCV by +10.5 ± 3.7% (n = 6), whereas the control value was 0.) Most of the divalent cations and chemical modifiers actually cause the cells to swell rather than shrink, and this should cause a decrease rather than an increase in sodium permeability. Therefore, it can be concluded that most of these modifiers cause an increase in Na+ permeability which does not result from a change in cell volume. One would probably expect swelling of the cells to be the result of large increases in Na+ permeability, i.e., sodium and water should enter the cells. However, there seems to be no correlation between the change in permeability and the volume change. TNBS is a conspicuous example in that it causes a large increase in MCV and yet only a 3.5-fold increase in Na+ permeability. The reason for this exceptionally large increase in volume is not yet understood. Two divalent cations, Cu++ and Hg++, cause the cells to shrink. However, in the case of Hg++ this shrinkage is very slight and probably has no significant effect on Na+ permeability. Romualdez et al. (1972) have reported that a 3% decrease in cell volume would result in an increase in sodium uptake equivalent to a relative sodium permeability of less than 1.5. Such a change is far too small to account for the 7.6-fold increase in Na+ permeability in response to Hg++ treatment. Part of the increase in Na+ permeability caused by Cu++ can probably be attributed to cell shrinkage. For this reason Cu++ was not used in most of the experiments reported here. Finally, in all of the modified cells the intracellular sodium concentration was approximately the same as that in control cells, so the driving force on sodium was approximately the same in each case. In summary, the results of the MCV measurements indicate that the enhancement of sodium permeability caused by the modifiers used in this study cannot be attributed to cell shrinkage.

Other Possible Effects of Pharmacological Modification
A basic assumption in this study is that the pharmacological modifiers act to increase sodium permeability by combining with sites within the sodium channels.
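As a rough numerical check of the argument above, the snippet below compares the volume-related permeability increase allowed by the bound of Romualdez et al. (1972) with the 7.6-fold increase observed for Hg++. The linear per-percent scaling and the assumed degree of Hg++ shrinkage are simplifications introduced here purely for illustration; they are not claims from the paper.

```python
# Back-of-the-envelope check: could the slight Hg++ shrinkage explain the
# observed permeability increase?  Only the 3% / <1.5-fold and 7.6-fold values
# come from the text; the 1% shrinkage and the linear scaling are assumptions.
shrinkage_bound_pct = 3.0          # % volume decrease considered by Romualdez et al.
perm_bound = 1.5                   # relative permeability attributable to that shrinkage
assumed_hg_shrinkage_pct = 1.0     # placeholder for the "very slight" Hg++ shrinkage
observed_hg_perm = 7.6             # relative Na+ permeability with Hg++ (Table I)

volume_related = 1.0 + (perm_bound - 1.0) * (assumed_hg_shrinkage_pct / shrinkage_bound_pct)
print(f"permeability increase plausibly due to shrinkage: <= {volume_related:.2f}-fold")
print(f"observed increase with Hg++: {observed_hg_perm:.1f}-fold")
```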
It is imperative, therefore, that one eliminate any other actions of these modifiers that may have an effect on sodium flux. One possibility is that the modifiers cause a decrease in cell volume and in this manner increase Na ÷ permeability. However, the MCV measurements shown in Table V indicate that the enhancement of sodium permeability caused by these modifiers cannot be attributed to cell shrinkage. In addition, preliminary studies in our laboratory have also shown that potassium efflux is increased by the same kinds of pharmacological modification of the cell membrane which enhance sodium efflux. However, Parker and Hoffman (1965) have reported that the sodium and potassium permeabilities vary in opposite directions in response to a given change in cell volume. Therefore, these findings also indicate that membrane modification in this study does not increase permeability by changing cell volume. PCMBS and Hg ++ have been reported to inhibit glucose transport in human red blood cells (Weed et al., 1962). Thus, a second possibility is that the modifierinduced increases in sodium permeability reported in the present study are secondary to energy depletion due to the inhibition of glucose transport by these modifiers. However, Miles and Lee (1972) have studied transport in energydepleted canine red blood cells and have reported that sodium permeability is decreased. Yet, PCMBS, Hg ++, and the other modifiers used in the present study increase rather than decrease sodium permeability. Therefore, an effect on glucose transport cannot explain the observed changes in sodium permeability. Another possibility is that pharmacological modifiers increase sodium permeability by accelerating some active transport process. Indeed, PCMBS has been reported to affect ATPase activities (Godin and Schrier, 1972) and to alter the ouabain-dependent active transport of sodium and potassium in human erythrocytes (Rega et al., 1967). But these reported effects of PCMBS are inhibitory, i.e., PCMBS causes a decrease in ATPase activity and a decrease in active transport. Recently, Parker (1973 b) has identified a ouabain-insensitive sodium pump in dog red cells which extrudes sodium. One might suggest that cell modification may be affecting this mechanism. Yet preliminary studies from our laboratory (not reported here) indicate that cell modification induces a bidirectional increase in sodium flux, i.e., sodium influx as well as sodium efflux is enhanced. Increases in potassium efflux and influx have also been found. These bidirectional increases in sodium and potassium fluxes could not be expected if alteration of an active pump were involved since active pumps are usually unidirectional. The bidirectional increase in sodium flux induced by these modifiers could be caused by an exchange diffusion mechanism. Even though we could not demonstrate exchange diffusion in normal cells it is possible that the modifiers are capable of turning on such a mechanism. However, in two experiments (not reported here) we found that Zn ++, Hg ++, PCMBS, TNBS, and SITS increase Na ÷ efflux to the same extent in Na+-free medium as in a medium of normal Na + concentration. These experiments indicate that the modifiers do not cause an accelerated exchange diffusion process. Therefore, it is likely that pharmacological modifiers are altering the movement of sodium through some passive permeation pathway. 
Hoffman and Laris (1974) have accurately measured the membrane potential of erythrocytes by using a dye whose fluorescence is proportional to membrane potential. They have reported that the membrane potential of human and amphiuma red cells is some combination of the Nernst potentials for chloride and potassium with chloride being the dominant ion. Knauf and Rothstein (1971) have reported that amino reagents decrease anion permeability. As mentioned earlier, it has been found in our laboratory that membrane modification enhances potassium permeability. These shifts in ionic permeabilities could alter the membrane potential and alter the driving force on sodium. This would result in a change in sodium flux which would not require a corresponding change in sodium permeability, i.e., sodium channels need not be modified. Using medium and cell electrolyte concentrations for dog erythrocytes reported by , the Nernst potential for chloride is -11.3 mV while that for potassium is -14.0 mV. Therefore, taking the most extreme case of a shift from a totally chloride-dominated membrane to a totally potassium membrane, one would expect only a 1.24-fold change in sodium flux. This change is too small to explain the increases in sodium flux reported in the present study. Thus, it seems that the sodium channels are being modified, and that pharmacological modifiers can be used to identify ligands within these channels. One final possibility is that rather than affecting normal sodium channels these modifiers open up nonspecific holes in the membrane, especially since potassium permeability is also increased. It should be mentioned that although TNBS, SITS, and PCMBS increase sodium and potassium permeability, each of them causes a decrease in sulfate permeability (V. Castranova, unpublished results). This would not be expected if these reagents formed holes in the membrane. Thus, we take this to mean that these modifiers increase Na + permeability by acting on normal cation pathways although formation of new pathways has not been absolutely eliminated. Possible Modes of Modifier Action Sulfhydryl-specific PCMBS, Hg ++, Ni ++, and Co ++ all seem to enhance Na + permeability by affecting sulfhydryl groups. Furthermore, these modifiers all follow second-order kinetics. We take this to mean that there are two SH sites which must be modified in order to cause an increase in Na + permeability. It is believed that these sulfhydryl groups are involved in maintaining the structure of the sodium channel. Indeed, sulfhydryl groups have been reported to be capable of forming hydrogen bonds (Barker, 1971), and modification of sulfhydryl groups with PCMBS, Hg ++, Ni ++, or Co ++ can result in the removal of the hydrogen ion from the sulfur (Means and Feeney, 1971;Vallee and Walker, 1970) and thus, the elimination of these hydrogen bonds. This could result in some distortion of the sodium channel which causes an increase in sodium permeability. Sulfhydryl groups have also been shown to be involved in hydrophobic interactions of proteins (Heitmann, 1968). Such interactions have been suggested by Carter (1973) as a mechanism for the maintenance of membrane structure in human erythrocytes and may indeed be involved here. One could also imagine that these sulfhydryl reagents react with disulfide groups to increase Na + permeability, especially since disulfide groups are of great importance in the maintenance of many protein structures. 
For example, Hg ++ has been shown to react with such disulfide groups (Vallee and Walker, 1970). Yet it is unlikely that these groups are involved in the regulation of sodium permeability since DTT, a reagent specific foi' disulfides, has little effect on sodium flux. Finally, the results in Table I show a great variability in the maximal effects of SH-reactive agents. The reason for this variability is hot yet understood, but experiments are currently in progress to study this phenomenon. The proposed reaction mechanisms for amino reagents (Barker, 1971) indicate that the increase in sodium permeability resulting from amino modification is associated with the loss of the positive charge from the amino sites. Thus, it is possible that amino groups may normally limit sodium permeability by the electrostatic repulsion of sodium ions. Such a mechanism for ionic regulation, i.e., the limitation of sodium movement by the presence of positive amino groups in a channel, has been proposed for human red cells by Passow (1969). These amino sites are thought to lie deep within the channel since SITS, a very large molecule, has no effect on Na + permeability (Knauf and Rothstein,197 I). This is not the case in dog red cells, however, where the amino sites seem to be located superficially since SITS is effective. In conclusion, pharmacological modification of the sodium channels with heavy metal ions and chemical modifiers increases the sodium permeability of dog red blood cells. Reversibility and binding studies as well as the relative effectiveness of each modifier indicate that PCMBS, Hg ++, Ni ++, and Co ++ act on sulfhydryl sites while TNBS, SITS, and Zn ++ enhance sodium permeability by affecting amino sites within the sodium channel. Thus, these amino and sulfhydryl sites normally act as barriers to limit the movement of sodium. The amino barrier seems superficial and may be due to the electrostatic repulsion of sodium ions while sulfhydryl groups may limit sodium movement by physically constraining the channel. We are grateful to Dr. William J. Canady and Dr. Ping Lee for helpful discussions during the course of this work.
Bleeding risk of small intracranial aneurysms in a population treated in a reference center ABSTRACT Large multicenter studies have shown that small intracranial aneurysms are associated with a minimal risk of bleeding. Nevertheless, other large series have shown that most ruptured aneurysms are, in fact, the smaller ones. In the present study, we questioned whether small aneurysms are indeed not dangerous. Methods: We enrolled 290 patients with newly-diagnosed aneurysms at our institution over a six-year period (43.7% ruptured). We performed multivariate analyses addressing epidemiological issues, cardiovascular diseases, and three angiographic parameters (largest aneurysm diameter, neck diameter and diameter of the nutrition vessel). Risk estimates were calculated using a logistic regression model. Aneurysm size parameters were stratified according to receiver operating characteristic (ROC) curves. Finally, we calculated odds ratios for rupture based on the ROC analysis. Results: The mean largest diameter for the ruptured versus unruptured groups was 13.3 ± 1.7 mm versus 22.2 ± 2.2 mm (p < 0.001). Multivariate analysis revealed a positive correlation between rupture and arterial hypertension (p < 0.001) and an inverse correlation with all three angiographic measurements (all p < 0.01). Aneurysms from the anterior cerebral artery bled more often (p < 0.05). According to the ROC curves, at the largest diameter of 15 mm, the sensitivity and specificity to predict rupture were 83% and 36%, respectively. Based on this stratification, we calculated the chance of rupture for aneurysms smaller than 15 mm as 46%, which dropped to 25% for larger aneurysms. Conclusion: In the population studied at our institution, small aneurysms were more prone to bleeding. Therefore, the need for intervention for small aneurysms should not be overlooked. than 5 mm 3,4,5,6,7,8 . The prevalence of intracranial aneurysms varies from 3.7% in prospective autopsy studies to 6.0% in prospective angiographic studies 9 . Moreover, they lead to a relatively high rate of morbidity-mortality, with a rate of SAH of about 1.4% per year 10 , which is in turn associated with a mortality rate of up to 50% 11 ; furthermore, half the survivors sustain irreversible brain damage 12 . Known predictors for rupture include age, hypertension, history of SAH, aneurysm size and geographic location. Certain populations, like the Finns and Japanese, have considerably higher risks of rupture (3.6-fold and 2.8-fold, respectively) 10 than other populations, and some genetic predispositions, such as polymorphisms on the SOX17 transcriptor regulation gene, endothelin receptor A gene, or cyclin-dependent kinase inhibitor genes, have been recently implicated in aneurysm formation 13 . To date, the best evidence for risk of SAH from unruptured intracranial aneurysms is derived from the International Study of Unruptured Intracranial Aneurysms (ISUIA) 14 . Based on this and other studies 2,5,9,15 , the current recommendation is that asymptomatic patients harboring aneurysms smaller than 7 mm in diameter, without a previous history of rupture, should not receive treatment 16,17 . By contrast, other studies have suggested that aneurysms from the anterior communicating complex (ACoA), as well as those with large maximal diameter/neck diameter ratios, are more prone to bleed and must, therefore, be treated differently than that recommended by the guidelines 18,19,20 . 
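For orientation, the annual SAH rate of about 1.4% per year quoted above can be compounded over a follow-up horizon as in the short sketch below. Assuming a constant, year-to-year independent risk is a simplification made here for illustration and is not asserted by the cited studies.

```python
# Compound a constant annual rupture (SAH) risk over several follow-up horizons.
# The 1.4%/year figure is the rate quoted in the text; the horizons are arbitrary.
annual_risk = 0.014

for years in (5, 10, 20, 30):
    cumulative = 1.0 - (1.0 - annual_risk) ** years
    print(f"{years:>2} years: cumulative rupture risk ~ {cumulative:.1%}")
```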
Additionally, many retrospective series from large reference centers have reported a high proportion of small aneurysms among patients admitted with SAH 3,21,22,23,24 . More specifically, up to 88% of patients with SAH had aneurysms smaller than 10 mm in diameter at the time of the original diagnosis 23 . These data support the idea that the risk of bleeding associated with small aneurysms should not be underestimated. Several authors have tried to explain the discrepancy between the ISUIA results and large single-center experiences. We suggest an important variable in this discrepancy may be selection bias. In the present study, we questioned the premise that small aneurysms do not carry a risk of rupture at the time of diagnosis. To this end, we retrospectively evaluated 290 patients referred to the University of Tübingen (Germany) over a six-year period. We propose a new method to address the relationship between aneurysm size and prevalence of rupture, based on logistic regression. Aneurysms were classified by size (small and large) based on the receiver operating characteristic (ROC) curve stratification instead of by arbitrarily defining size thresholds. METHODS For this study, we obtained clinical and radiological information for all patients diagnosed with intracranial aneurysms at the University of Tübingen, which included patient age, sex, tobacco dependence, comorbidities (e.g., diabetes mellitus, arterial hypertension or other vascular diseases), and location of aneurysm. Patients were excluded from the study if they had non-aneurysmal SAH. When patients had multiple aneurysms, ruptured intracranial aneurysms were identified by angiographic determinants or by direct observation during surgery. Digital subtraction angiography and three-dimensional computed tomography angiography were used to evaluate the morphology of intracranial aneurysms. The following measurements were independently performed by two experienced neurosurgeons (C.A.F.L. and S.T.): 1) largest aneurysm diameter; 2) neck diameter; and 3) diameter of the nutrition vessel ( Figure 1). Agreement between the two observers was quantified with the κ test. For further calculations, we used the mean values from both observers. Continuous data were expressed as means ± standard errors. To assess data distribution, we applied the Shapiro-Wilk W test. Since all measured diameters obeyed normal distribution, mean comparisons were performed with Student's t-tests. To identify the independent parameters that had significant correlations with rupture, a multivariate logistic regression analysis was performed for all aneurysms (this is shown in a clustered color map). Categorical data were compared using a contingency analysis and Pearson's chi-square test. These data are shown graphically in mosaic plots. To address the relationship between size measurements and rupture at the time of diagnosis, we applied a logistic regression model and calculated the logistic probability of rupture. The resulting curve fit the model, and we present the curve, equation and curve coefficients. Additionally, we performed a ROC analysis to determine the optimal threshold value for each of the three measurements recorded (the area under the curve [AUC] reflects the goodness of the predictor). Based on this analysis, we calculated the sensitivity and specificity of the measures at the threshold. Finally, data were categorized into two groups (large and small values) according to the ROC analysis. 
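The statistical workflow just described, a logistic model for rupture, ROC-based threshold selection, and then a stratified comparison, can be sketched in Python as below. The original analysis was carried out in JMP; the synthetic data, the use of Youden's J to pick the threshold (the paper does not state which criterion was used), and the mapping of the probability threshold back to a diameter cut-off are all assumptions made for illustration only.

```python
# Schematic re-implementation of the described pipeline on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 300
largest_diameter = rng.gamma(shape=3.0, scale=4.0, size=n)        # mm, synthetic
# synthetic outcome: smaller aneurysms get a higher rupture probability,
# mirroring the direction of the association reported in the paper
ruptured = (rng.random(n) < 1 / (1 + np.exp(0.05 * largest_diameter))).astype(int)

X = largest_diameter.reshape(-1, 1)
model = LogisticRegression().fit(X, ruptured)
p_rupture = model.predict_proba(X)[:, 1]

auc = roc_auc_score(ruptured, p_rupture)
fpr, tpr, thresholds = roc_curve(ruptured, p_rupture)
j = tpr - fpr
j[0] = -np.inf                                  # skip the artificial first threshold
best = int(np.argmax(j))                        # maximum of Youden's J
sens, spec = tpr[best], 1 - fpr[best]

# map the probability threshold back to a diameter cut-off via the fitted model
logit = np.log(thresholds[best] / (1 - thresholds[best]))
cut_mm = float((logit - model.intercept_[0]) / model.coef_[0, 0])

print(f"AUC = {auc:.3f}; sensitivity = {sens:.2f}, specificity = {spec:.2f}")
print(f"diameter cut-off implied by the ROC threshold: {cut_mm:.1f} mm")

small = largest_diameter <= cut_mm
print(f"rupture rate, <= cut-off: {ruptured[small].mean():.2%}")
print(f"rupture rate, >  cut-off: {ruptured[~small].mean():.2%}")
```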
We then compared the prevalence of rupture and other clinical features between groups. For statistic computation and graphic representations, we used the JMP 11.1.1 software (SAS Institute Inc., NC, USA). Series characterization From January 2006 to December 2011, we collected clinical and radiological data from 290 consecutive and newlydiagnosed patients harboring intracranial aneurysms (346 aneurysms); 69% were female and 31% were male, with a mean age of 53.9 ± 14.4 years. Of the aneurysms, 43.7% were ruptured and 56.3% were unruptured. During this period, 30% of the patients were treated with endovascular procedures, while 70% were treated surgically. Arterial hypertension was the most frequent comorbidity encountered (26.3%), and tobacco dependence the second (6.2%). Forty-two patients out of 290 presented with more than one aneurysm. The most frequent localization was the middle cerebral artery (35.7%), followed by anterior cerebral artery/anterior communicating artery complex (ACA/ACoA, [33.4%]), then the internal carotid artery (including the communicating posterior segment, 18.3%), and finally, the vertebrobasilar system (11.1%). In total, 21.8% of the aneurysms in this series measured less than 5 mm in maximal diameter, 23.4% measured between 5-7 mm, 14.7% between 7-10 mm, and 34.4% were larger than 10 mm. Clinical factors associated with rupture The multivariate analysis ( Figure 2A) revealed a positive correlation between arterial hypertension and rupture (Pearson's r = 0.2276, p < 0.001), tobacco dependence and hypertension (r = 0.1576, p < 0.01), other cardiovascular diseases and hypertension (r = 0.2061, p < 0.001), age and arterial hypertension (r = 0.2824, p < 0.001). We also show in Figure 2B and Figure 2C a contingency analysis for aneurysm localization and arterial hypertension versus chance of rupture. Aneurysms of the anterior cerebral artery and patients with arterial hypertension were at higher risk of bleeding (p < 0.05 and < 0.001, respectively). In the ruptured group, 42.6% of aneurysms occurred in the ACA/ACoA, 16.3% in the internal carotid artery, 31.8% in the middle cerebral artery, and 9.3% in the vertebrobasilar system. In the unruptured group, these rates were 27.3% in the ACA/ACoA, 22.7% in the internal carotid artery, 36.9% in the middle cerebral artery, and 13.1% in the vertebrobasilar system. Therefore, there was a higher proportion of ACA/ACoA aneurysms in the ruptured group (p < 0.05, Pearson' s chi-square). Pairwise analysis revealed that the chance of rupture at the time of the original diagnosis was significantly higher for patients with a previous diagnosis of hypertension (Pearson's chi-square p < 0.001) and for aneurysms located in the ACA/ACoA complex (Pearson's chi-square p < 0.05). In the present series, we did not observe any relationship between rupture and diabetes mellitus, tobacco dependence, age, sex, or cardiovascular diseases other than hypertension. Rupture at the time of diagnosis, according to the logistic regression model To evaluate the influence of each angiographic size parameter on the chance of rupture at the time of the original diagnosis, we applied logistic regression. According to our model, the chance of rupture (r = 1) obeys the following equation (1): Where p LD refers to the chance of rupture as a function of the largest aneurysm diameter (LD is the largest diameter). The curve coefficients were -0.1628 ± 0.1431 and 0.0154 ± 0.0057 (whole model test, p < 0.05). 
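Because the printed bodies of equations (1) and (2) are incomplete in this copy, the sketch below assumes that all three fitted curves share the form printed for equation (3), p = 1 - 1/(1 + e^(c0 - c1*x)), and simply evaluates them at two illustrative sizes using the reported coefficients. The shared functional form, the coefficient names c0 and c1, and the chosen evaluation points are assumptions made here for illustration, not a re-derivation of the authors' model.

```python
# Evaluate the reported logistic rupture-probability curves at illustrative sizes.
# Functional form is taken from equation (3) as printed; treating it as the form
# of equations (1) and (2) as well is an assumption.
import math

def p_rupture(x_mm, c0, c1):
    """Estimated probability of presenting ruptured as a function of one
    angiographic measurement x_mm (in mm)."""
    return 1.0 - 1.0 / (1.0 + math.exp(c0 - c1 * x_mm))

coeffs = {
    "largest diameter":             (-0.1628, 0.0154),   # equation (1)
    "neck diameter":                (-0.1082, 0.0354),   # equation (2)
    "diameter of nutrition vessel": (-0.1329, 0.0480),   # equation (3)
}

for name, (c0, c1) in coeffs.items():
    print(f"{name}: p(5 mm) = {p_rupture(5.0, c0, c1):.2f}, "
          f"p(20 mm) = {p_rupture(20.0, c0, c1):.2f}")
```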
As can be seen in Figure 3B, the chance of rupture at the time of diagnosis decreased as the largest diameter increased. Similarly, we also found a negative association between aneurysm neck diameter and chance of rupture. Accordingly, the chance of rupture is described by the following equation (2):

pND (r = 1) = 1 - (1 / (1 + e^(-0.1082 - 0.0354 * ND))) (2)

Where pND refers to the chance of rupture as a function of the aneurysm neck diameter (ND is the neck diameter). The curve coefficients were -0.1082 ± 0.1501 and 0.0354 ± 0.0121 (whole model test, p < 0.001) (Figure 3D). Finally, we also examined the chance of rupture as a function of the diameter of the nutrition vessel, which yielded the following equation (3):

pDNV (r = 1) = 1 - (1 / (1 + e^(-0.1329 - 0.0480 * DNV))) (3)

Where pDNV refers to the chance of rupture as a function of the diameter of the nutrition vessel (DNV is the diameter of the nutrition vessel). The curve coefficients were -0.1329 ± 0.1460 and 0.0480 ± 0.0160 (whole model test, p < 0.01) (Figure 3F). On the left side of Figure 3 we demonstrate logistic probability plots of the largest aneurysm diameter (Figure 3A), neck diameter (Figure 3C), and diameter of the nutrition vessel (Figure 3E).

Sample stratification according to size
In order to address the chance of rupture in the study population, we stratified the sample into two groups: large and small diameter aneurysms. Differently from what has previously been published, the threshold value used for this measurement was based on the ROC analysis using rupture as the outcome variable (Figure 4). First, the AUCs show how well the variables predict rupture: for the largest diameter, the AUC was 0.5812; for the neck diameter, 0.6150; and for the diameter of the nutrition vessel, 0.5881. Since all values were lower than 0.7, we concluded that aneurysm size was, in fact, not a strong predictor of rupture in this population. From the ROC tables, we calculated the values of the three diameter variables that were associated with the highest sensitivity and specificity. For the variable "largest diameter", the cut-off point was 14.97 mm; at this value, we found a sensitivity of 82.95% and a specificity of 35.53% for rupture prediction. Similarly, the cut-off value for the neck diameter was 4.72 mm, with a sensitivity of 79.83% and a specificity of 41.9%. Finally, for the diameter of the nutrition vessel, the cut-off was 12.04 mm, which was associated with a sensitivity of 91.41% and a specificity of 28.93%. The ROC analysis allows a more precise definition of representative groups within the sample population. Based on this analysis, we divided the sample into aneurysms with largest diameter > and ≤ 15 mm, neck diameter > and ≤ 5 mm, and diameter of the nutrition vessel > and ≤ 12 mm (Figure 5). We observed that the chance of rupture at the time of diagnosis was 45.5% if the aneurysm measured 15 mm or less at its largest diameter and only 24.7% if it was larger than 15 mm (p < 0.001, odds ratio [OR] = 0.3937, 95%CI = 0.2301 to 0.6735).
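The odds ratios reported here and in the next paragraph come from 2 × 2 contingency tables of rupture status by size group. The sketch below shows the standard calculation with a Woolf-type confidence interval; the cell counts are invented placeholders, since the paper reports only the percentages and the resulting OR and CI.

```python
# Odds ratio from a 2x2 table, with a Woolf (log-based) 95% confidence interval.
# The counts below are hypothetical placeholders, not the study's raw data.
import math

def odds_ratio(a, b, c, d):
    """a,b = ruptured/unruptured in group 1; c,d = ruptured/unruptured in group 2."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# group 1: large aneurysms (> 15 mm); group 2: small aneurysms (<= 15 mm)
or_, ci = odds_ratio(a=30, b=90, c=120, d=140)
print(f"OR = {or_:.3f}, 95% CI = {ci[0]:.3f} to {ci[1]:.3f}")
```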
Similarly, the chance was 47.5% for aneurysms with necks smaller than 5 mm and 25.0% if the neck was larger than 5 mm (p < 0.001, OR = 0.3688, 95%CI = 0.2167 to 0.6276), and 45.3% for aneurysms whose nutrition vessel was smaller than 12 mm and only 17.4% for aneurysms whose nutrition vessel was larger than 12 mm (p < 0.001, OR = 0.2541, 95%CI = 0.1301 to 0.4962).

Figure 5. Contingency analysis for the probability of rupture in two groups of patients (small and large aneurysms), defined based on the ROC analysis (Figure 4). In A, note a higher probability of rupture at the time of admission for patients harboring small aneurysms (largest diameter less than 15 mm). For small aneurysms, the probability of rupture was 45.5%, which dropped to 24.7% if the aneurysm was larger than 15 mm in its maximal diameter (Pearson's chi-square, p < 0.001). In B, a similar analysis was conducted for neck diameter; here, the probability of rupture was 47.5% for narrower necks (< 5 mm) and 25.0% for wider necks (Pearson's chi-square, p < 0.001). Finally, in C (nutrition vessel), a smaller nutrition vessel (< 12 mm) was associated with a 45.3% probability of rupture at the time of admission, much higher than the 17.4% observed for patients with larger nutrition vessels (Pearson's chi-square, p < 0.001). Next, we calculated the odds ratio for rupture in relation to each of the measurements: for largest diameter, OR = 0.3937 (95%CI = 0.2301 to 0.6735); for neck diameter, OR = 0.3688 (95%CI = 0.2167 to 0.6276); and for diameter of the nutrition vessel, OR = 0.2541 (95%CI = 0.1301 to 0.4962). Odds ratios below 1 indicate that, in our series, large sizes were in fact protective against rupture.

DISCUSSION
The prevalence of intracranial aneurysms is relatively high (2-6%, depending on the diagnostic method used) 9 . Furthermore, SAH is a potentially fatal condition, with a reported mortality rate of 52% 16 to 83% 15 . The retrospective arm of the ISUIA reported that the risk of rupture of an aneurysm smaller than 10 mm in a patient with no previous SAH was only 0.05%. Compared with other studies 6,15 , the reported risk of rupture was 10 to 12 times lower than previously estimated. Based on this, the recommendation was to manage aneurysms measuring less than 10 mm expectantly. Later, detailed analyses of that study suggested that it may have suffered from methodological issues, such as selection bias 25 . In the ISUIA Part 2 2 , higher rupture rates for small aneurysms were reported. Nevertheless, the major concerns regarding selection criteria remain. The annual rupture rates for aneurysms 7 mm to 12 mm in size were 0.5% in the anterior circulation and 2.9% in the posterior circulation. It is important to note that, in that study, aneurysms arising from the posterior communicating segment of the carotid were grouped under "posterior circulation". This might have contributed to the increased rupture risk in this group. Based on the above report, mathematical algorithms have been developed to help the decision-making process regarding treatment necessity 26,27,28 . According to Mitchell and Jakubowski 26 , intervention in patients with aneurysms smaller than 10 mm with no SAH was not justified (assumption based on lost life-years in the risk calculation). These authors concluded that intervention in unruptured aneurysms was justified only for patients up to 50 years of age (see also Vindlacheruvu et al. 27 ).
According to Yoshimoto 28 , mathematical risk models suggest that prophylactic treatment of unruptured aneurysms may produce some benefit for large aneurysms. Moreover, given the low treatment-related morbidity-mortality in young populations harboring small aneurysms, intervention might be justified in this group 28 . Further analysis of the available data on unruptured aneurysms culminated with the following recommendations made by the Stroke Council of the American Heart Association 16 : 1) Asymptomatic intracavernous aneurysms should not be treated. In large symptomatic ones, the treatment should be individualized. 2) All symptomatic intradural aneurysms should be treated. 3) Incidental aneurysms with a diameter less than 10 mm should not be treated. Nevertheless, lesions approaching 10 mm, those with daughter aneurysm formation, those in young patients or in individuals with a family history of SAH, deserve special consideration for treatment. 4) Aneurysms found in association with a ruptured lesion and those with a diameter greater than 10 mm deserve strong consideration for treatment, especially in young patients. In the United States, it was reported that 80% of the 28,000 aneurysmal SAHs occurred in lesions smaller than 10 mm, indicating a 0.72% to 1.36% annual rupture rate for this group 30 . Inagawa 20 reported 24% rupture in aneurysms smaller than 5 mm, 48% for aneurysms measuring between 5 mm and 10 mm, and only 28% for those larger than 10 mm from a sample of 285 patients studied. Lai et al. 31 observed even more drastic numbers: according to their experience, SAH originating from aneurysms smaller than 5 mm occurred in 68% of patients (n = 267). Joo et al. 22 observed that 71.8% of their 627 cases of ruptured aneurysms presented with lesions smaller than 7 mm in diameter, and 87.9% were smaller than 10 mm. The high mortality rate in cases of rupture 8 highlights the importance of carefully deciding whether to treat patients harboring small aneurysms. To explain the discrepancy between the reported reduced risk of rupture in small aneurysms and the higher prevalence of small aneurysms among those that rupture, one hypothesis suggests that aneurysms shrink after bleeding 21 . However, this hypothesis is not widely accepted, and some authors believe that aneurysms may even grow before rupturing 32,33 . Another hypothesis is that there seems to be a high-risk period soon after aneurysm formation, which is followed by a period of risk stabilization 26 . Although these phenomena may partially explain the controversy, we cannot ignore the possibility that methodological issues may have played an important role in the results reported in the ISUIA studies 34,35 . Selection bias has been pointed out as a major drawback in the ISUIA studies, with preferential treatment and exclusion of patients who were symptomatic, or who had aneurysms with certain morphological characteristics (e.g., daughter sacs or irregular borders), and those with a family history of SAH 1,4,5,8 . The study by Juvela et al. 5 , though observational, provides first line evidence about the natural history of unruptured intracranial aneurysms. Aneurysm size and patient age were significant predictors of aneurysmal SAH, as was active cigarette smoking (all p < 0.05). Rinkel et al. 9 published a meta-analysis about the natural history of unruptured intracranial aneurysms. 
Among nine studies including a total of 3,907 patients, the overall risk of rupture was 1.9% per year (0.7% for unruptured aneurysms < 10 mm and 4% for intact lesions > 10 mm). According to Dickey et al. 36 , the fact that the ISUIA and other studies mostly involved investigators from busy neurovascular centers, who routinely treated patients with larger aneurysms, created the false impression that the smaller aneurysms in their practice had a lower rupture rate 36 . Because rupture risk reflects a biological problem, it is unlikely that risk distribution changes abruptly beyond a certain threshold of aneurysm size. Supporting this idea, Dickey and Kailasnath 36 reported the "diameter-cube hypothesis, " a mathematical model that describes the rupture potential of any given aneurysm as continuously increasing on the basis of the maximum diameter of the aneurysm cubed. Here, we propose a different way to calculate risk in a sample population, based on logistic regression. Our model predicts that risk decreases logarithmically as a function of size. The low p values observed for the whole model-test support its robustness. Moreover, it has been proposed that the diameter of the nutrition vessel, or the neck diameter, either individually or in relation to the maximal diameter (size ratio), represent more reliable predictors of rupture 18,19,33 . Additionally, the ROC analysis revealed that these measurements have similar predictive power for ruptures (see AUCs). However, none of the parameters proved to be strong predictors in the present sample (AUC < 0.7). Notably, size stratification in this study was performed according to the ROC analysis, and large aneurysms with a reduced risk of bleeding (such as intracavernous, or those with calcified walls) were not excluded from the analysis. Exclusion would have generated higher risk values for large aneurysms but would have created selection bias, rendering the risk calculation inexact. In conclusion, we have presented a mathematical model to describe the chance of aneurysm rupture at the time of diagnosis, based on logarithmic regression. Our model predicts a logarithmic decay of the chance of rupture as a function of aneurysm diameter. Size stratification according to the ROC analysis revealed that, at least for the population referred to our center in southern Germany, small aneurysms are especially worrisome. It remains to be determined whether this statement is also applicable to other populations.
Auto-regulation in the powerhouse Mitochondrial flashes have a central role in ensuring that ATP levels remain constant in heart cells. denosine triphosphate, or ATP for short, provides the energy that is needed for countless processes in the body. It is vital that the level of ATP in cells remains constant, especially when the demand for energy increases. This is particularly true in the heart, where energy demand can increase by a factor of 5-10 during stressful situations, yet the ATP concentration remains remarkably consistent (Balaban et al., 1986;Neely et al., 1973;Matthews et al., 1981;Allue et al., 1996). Despite decades of research, it has remained unclear how cells keep their ATP levels stable. Now, in eLife, Heping Cheng of Peking University and colleagues -with Xianhua Wang, Xing Zhang and Di Wu as joint first authors -report that a process termed mitochondrial flash or mitoflash plays a critical role in regulating ATP concentration in the heart (Wang et al., 2017). ATP production occurs in several stages inside mitochondria, where a flow of electrons down the mitochondrial 'electron transport chain' creates an electro-chemical gradient across the inner membrane of mitochondria. In the first stage, calcium is transported across the inner membrane into the inner mitochondrial chamber or matrix, and used in a process known as the citric acid cycle to generate high-energy electrons. These electrons then enter the electron transport chain and travel along the inner membrane to an enzyme called ATP-synthase that produces ATP. As a by-product of this process, molecules called reactive oxygen species are formed when electrons that leak from the electron transport chain go on to react with oxygen molecules. Although high levels of reactive oxygen species lead to cell death and disease, low levels of these species are important for regulating normal cell processes. Recent research has shown that mitochondria exhibit brief events called mitoflashes. These involve multiple concurrent changes within the mitochondria, including a burst in the production of reactive oxygen species, as well as changes in the pH of the mitochondrial matrix, the oxidative redox state and the membrane potential (Wang et al., 2008(Wang et al., , 2016. Mitoflashes depend on an intact electron transport chain and are thought to help regulate energy metabolism. Using cells isolated from mouse heart muscle, Wang et al. demonstrated that it is the frequency of the mitoflashes -rather than the amplitude -that regulates ATP production. When the heart cells were exposed to drivers of the citric acid cycle to mimic increased energy metabolism, the mitoflashes occurred more frequently, while ATP production remained constant. However, when antioxidants were applied, the frequency of mitoflashes decreased, which led to an increase in ATP production. These findings suggest that mitoflash activity responds to changes in energy metabolism to negatively regulate ATP production. When electrical stimuli were applied to make the heart cells contract more quickly and increase the demand for ATP, the frequency of mitoflashes decreased, while the cellular ATP content remained constant. It appears that when a lot of energy is needed, changes in the frequency of the mitoflashes regulate ATP production in a way that supports survival. Indeed, the results revealed that when mitoflash frequency decreased, the ATP concentration or set-point increased. 
This suggests that mitoflash activity may act as an ATP set-point regulator that responds to changes in energy supply and demand in order to maintain ATP homeostasis in the heart (see Figure 6 in Wang et al., 2017). Wang et al. provide the first mechanistic insight into a potential trigger that links changes in mitoflash frequency and regulation of the ATP set-point in the heart. Previous studies have identified three possible triggers of mitoflashes: calcium located in the mitochondrial matrix, reactive oxygen species and protons (Hou et al., 2013;Wang et al., 2016). Wang et al. propose that calcium is unlikely to play a significant role in the regulation of mitoflash frequency. And since electrical stimulation did not significantly change the amount of reactive oxygen species produced by the mitochondria, they focused their attention on protons as a trigger of mitoflashes. It is known that protons can leak through the ATP-synthase and return to the mitochondrial matrix, and it has been shown that a pro-survival protein called Bcl-xL plays a role in regulating this proton leak Chen et al., 2011). Now, Wang et al. show that an increase in Bcl-xL prevents proton leaks and reduces the frequency of mitoflashes, while the ATP set-point increases. When there is a decrease in Bcl-xL protein, the opposite occurs. Based on these findings, Wang et al. propose that proton leaks may be a bi-directional trigger of mitoflashes and cellular ATP homeostasis. Overall, Wang et al. demonstrate that mitoflash frequency negatively regulates ATP production in a compensatory, pro-survival manner, and that a high ATP demand induces a small and brief increase in calcium. These results are consistent with previous work characterizing mitoflashes (Wang et al., 2008) and the role of mitochondria in the development of diseases in the heart (Viola et al., 2007;Seenarain et al., 2010). Future studies may provide more insight into how mitoflashes regulate ATP homeostasis during the development of heart diseases. Helena M Viola is in the School of Human Sciences, The University of Western Australia, Crawley, Australia Livia C Hool is in the School of Human Sciences, The University of Western Australia, Crawley, Australia, and the Victor Chang Cardiac Research Institute, Sydney, Australia livia.hool@uwa.edu.au http://orcid.org/0000-0001-7758-5252 Competing interests: The authors declare that no competing interests exist. Published 10 July 2017
Paeonol Induces Protective Autophagy in Retinal Photoreceptor Cells Background: Retinal photoreceptor (RP) cells are widely involved in retina-related diseases, and oxidative stress plays a critical role in retinal secondary damage. Herein, we investigated the effectiveness and potential mechanisms of autophagy of paeonol (Pae) in terms of oxidation resistance. Methods: The animal model was induced by light damage (LD) in vivo, whereas the in vitro model was established by H2O2 stimulation. The effectiveness of Pae was evaluated by hematoxylin and eosin, terminal deoxynucleotidyl transferase dUTP nick end labeling assay, immunofluorescence, transmission electron microscopy, electroretinogram, and Western blot analysis in vivo, and the underlying mechanisms of Pae were assessed by Cell Counting Kit-8 assay, reactive oxygen species (ROS) assay, and Western blot analysis in 661W cells. We mainly evaluated the effects of Pae on apoptosis and autophagy. Results: Increased apoptosis of the LD-induced and decreased autophagy of RPs were mitigated by Pae treatment. Pea, which increased the expression of mitochondrial functional protein cytochrome c, reversed the decreased cell viability and autophagy induced by oxidative stress in 661W cells. Experiments showed that autophagy was downregulated in PINK1/Parkin dependent and the BNIP3L/Nix dependent pathways under H2O2 stimulation and was upregulated by Pae treatment. Pae increased the cell viability and reduced ROS levels through autophagy. Conclusion: Pretreatment with Pae preserved RP cells by enhancing autophagy, which protected retinal function. INTRODUCTION Retinal photoreceptor cells (RPs) are essential in the process of normal visual transduction and are associated with a broad variety of vision-threating diseases such as glaucoma, age-related macular degeneration, retinopathy of prematurity and so on. The injury of RPs could be mostly attributed to constant exposure to highly oxidative environment owing to high metabolic activity, large consumption of oxygen and photochemical damage from excess light in retina (Sundermeier and Palczewski, 2016). The imbalance between elevated oxidative stress and antioxidant defense mechanisms could cause dysfunction of mitochondria and other intracellular organelles in RPs (Lefevere et al., 2017) and even trigger unreversible cell death, which highlight the importance of developing therapeutic interventions to attenuate oxidative damage and protect RPs. Paeonol (Pae; 2′-hydroxy-4′-methoxyacetophenone) is a major phenolic acid compound derived from the root bark of the Moutan Cortex and serves as a natural active ingredient in Chinese herbal medicines (Choy et al., 2018). It has been found to possess pharmacological effects including sedation, analgesia and immunoregulatory, and exert anti-tumor (Ramachandhiran et al., 2019) and anti-inflammatory response (Chen et al., 2012). Zhao et al. reported that Pae exhibited neuroprotective effect in a subacute/chronic cerebral ischemia rat model by effectively alleviating neurological impairment and neuronal loss (Cai et al., 2014). Zhou et al. also found that Paeonol intervention slowed down the pathogenic processes in a rat model of Alzheimer's disease (Zhou et al., 2011), suggesting a potential anti-oxidant effect of Pae. Another study observed therapeutic effects of Pae on Parkinson's disease in mice by decreasing the damage from oxidative stress . To the best of our knowledge, there have been no reports demonstrating the effects of Pae on RPs damage. 
It is known that autophagy plays essential roles both in physiological processes, such as cell growth, cell differentiation and cell death, and in pathophysiological processes, including the adaptation to oxidative stress and the maintenance of cell homeostasis. Autophagy is a cellular degradation and recycling process which provides raw materials for the reconstruction of intracellular components by recycling dysfunctional organelles and misfolded proteins. Moreover, mitochondrial related autophagy, which refers to the selective removal of mitochondria by autophagy, is of increasing interest to researchers, as mitochondrial dysfunction is closely related to various retinal diseases. Prior studies have proposed that autophagy participated in cytoprotective response to damage of RPs (Shi et al., 2016), and autophagy was also highly enriched in RPs (Boya et al., 2016). PINK1/Parkin, Nix/BNIP3L and FUNDC1 are three main signal pathways involved in the process of autophagy, with Nix and FUNDC1 promoting autophagy in response to hypoxia (Li et al., 2018;McWilliams et al., 2019), and PINK1/Parkin mediating autophagy to cope with oxidative stress and other non-hypoxic stressors. Besides, Beclin-1 is involved in the control of autophagy by regulating the initiation and nucleation phases of autophagosome formation and the process of phagocytosis and endocytic trafficking. This study sought to answer the following specific research questions: whether Pae possesses anti-oxidant effect and neuroprotective effect on RPs both in vivo and in vitro and whether the underlying mechanisms are related to autophagy in RPs. The study could provide a potential approach to protect RPs. Animal Experiments Design Animal experiments were conducted in accordance with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 2011) and the guidelines on the ethical use of animals of Fudan University. Adult male Sprague-Dawley (SD) rats weighing approximately 180 g (SLAC Laboratory Animal Co., Ltd., Shanghai, China) were used in the experiment. The rats were housed with a daily 12-h light/12-h dark cycle and had free access to food and water. A total of 45 SD rats were randomized into the normal control (NC) group (n 10), the light damage (LD) group (n 10), the LD + vehicle group (n 10, daily intraperitoneal injection of 1 μl DMSO), and the LD + Pae group (n 15, daily intraperitoneal injection of 1 μl Pae, 80 mg/kg). After one week of Pae intraperitoneal injection, the electroretinogram (ERG) function was detected 3 days after 24-h blue light exposure (wavelength of 400-440 nm). The rats were then sacrificed with excessive abdominal anesthesia of chloral hydrate (600 mg/kg), and eyeballs were harvested for either immediate or future use. Hematoxylin and Eosin Staining The eyeballs from each group were harvested, enucleated and immediately fixed in 4% paraformaldehyde for 24 h. Then the samples were embedded with paraffin, serially sectioned (5-μm/ section) and stained with hematoxylin and eosin (Takara: C0105S). The sections were observed using a light microscope (Leica, Wetzlar, Germany), and the slices cutting crosssectionally through the eyeball and the optic nerve were selected. The images photographed were all location matched, and the thickness of different retinal layers were measured at the points located approximately 250 μm from the optic nerve. 
Each selected section was measured five times and averaged to obtain the values for one sample. Fluorescence Staining 4% Paraformaldehyde-fixed and optimal cutting temperature compound-embedded rat eyes of every groups were sectioned at a thickness of 10 μm and subjected to TUNEL assay (Takara: C1089) to detect apoptotic cells. TUNEL staining was performed in accordance with the manufacturer's instructions (Zhu et al., 2010). After counterstaining with DAPI (1:2000; Life Technologies, Carlsbad, CA, United States) for 10 min, the sections were observed using a confocal microscope (Leica SP8, Hamburg, Germany). Electron Microscopy A piece of posterior pole tissue (1 mm × 1 mm × 1 mm) of eyes from each group was rapidly dissected on ice after enucleation, and fixed via 2.5% glutaraldehyde in 0.1 M phosphate buffer at 4°C for 2-4 h. After washing in 0.1 M phosphate buffer for 3 times, the samples were post-fixed with 1% osmic acid at 4°C for 2 h and washed again. The samples were then dehydrated using an ascending alcohol series, infiltrated with a 1:1 mixture of resin and acetone and embedded in epoxy resin. After being polymerized in a 60°C oven for 48 h, 60-80 nm ultrathin sections were obtained by using an ultramicrotome (Leica, Leica UC7). The sections were double stained with lead citrate and uranyl acetate (each for 15 min) and examined under a transmission electron microscope (TEM; HITACHI, HT7700). Measurement of Electroretinogram The ERG was measured to evaluate the RP function of rats before intraperitoneal injection and three days after retinal light damages separately and was recorded by an Espion Diagnosys System (Diagnosys, Littleton, MA, United States). As described in Miyai et al. (2019), after 24 h of dark adaptation, the rats were given intraperitoneal anesthesia, and their pupils were dilated with phenylephrine hydrochloride and tropicamide (0.5%). Two wire loop electrodes were placed on the corneal surface of the eyes and served as the ERG signal-recording electrodes. In addition, two subdermal needle electrodes were inserted into the base of the tail and nasal part and separately served as the ground electrode and the common reference electrode. Retinal responses were recorded for 30 min. Light stimulation was performed using a white LED following the protocol described in Miyai et al. (2019). Dark and light adaptation was performed in four steps, and the light intensity was switched from weak to strong. Electroretinographic waveforms were recorded and sampled, and the data were analyzed by using a Diagnosys digital acquisition system. The waveforms of ERG were measured from trough to peak (Lazarou et al., 2015), and the values of ERG amplitudes were compared among the four groups. Cell Counting Assay The 661W cells were seeded in 96-well plates at a density of 1 × 10 4 cells per well, and grouped as follows: control cells, cells exposed to H 2 O 2 , cells treated with Pae, and cells incubated with both H 2 O 2 and Pae. The cells were treated with different concentrations of H 2 O 2 (0-200 μM) and Pae (0-200 μM) for different time periods (0, 24, 48, 72, 96 h). After the cells were grown to approximately 80-90% confluency, Cell Counting Kit-8 (CCK-8) (Dojindo: CK04) assay was performed following the manufacturer's protocol. The absorbance of each well was read at 450 nm by a microplate reader (BioTek, United States). 
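The CCK-8 readout described above is typically converted to a viability percentage by subtracting a cell-free blank and normalizing to the untreated control. The paper does not spell out its exact formula, so the computation and all OD450 readings below are illustrative assumptions rather than the authors' data.

```python
# Usual CCK-8 viability calculation: blank-corrected OD450, normalized to control.
# All optical-density readings are invented placeholders.
import statistics as stats

od_blank   = [0.045, 0.047, 0.046]            # medium + CCK-8, no cells
od_control = [1.210, 1.185, 1.232]            # untreated 661W cells
od_treated = {
    "25 uM H2O2":              [1.150, 1.175, 1.120],
    "100 uM H2O2":             [0.610, 0.640, 0.595],
    "100 uM H2O2 + 50 uM Pae": [0.930, 0.905, 0.950],
}

blank = stats.mean(od_blank)
control = stats.mean(od_control) - blank

for condition, wells in od_treated.items():
    viability = 100.0 * (stats.mean(wells) - blank) / control
    print(f"{condition}: {viability:.1f}% of control")
```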
Western Blot Analysis
Retinal tissues and cultured cells were lysed in radioimmunoprecipitation assay buffer (ASPEN, China) containing a protease inhibitor cocktail (ROCHE). All sample extracts were separated by SDS-PAGE (ASPEN, China) and electrophoretically transferred to PVDF membranes. The membranes were blocked with 5% skim milk at room temperature for 1 h, incubated with primary antibody diluent at 4°C overnight and then incubated with horseradish peroxidase-coupled secondary antibodies. The blots were developed using enhanced chemiluminescence (ECL) and the signals were captured in a dark room. The immunoreactive bands were analyzed in triplicate with ImageJ, with GAPDH used as a loading control.

Mito-Sox Assay
The 661W cells were seeded in 6-well plates at a density of 1 × 10⁵ cells per well and, after growing to approximately 80% confluency, were treated with H₂O₂ (100 μM) and/or Pae (50 μM) for 24 h. Mitochondrial reactive oxygen species (ROS) were measured with a ROS Detection Kit (Takara: S0033S) according to the manufacturer's instructions.

Statistical Analyses
All experiments were repeated more than three times. Data are presented as mean ± standard deviation. One-way ANOVA with Bonferroni's multiple-comparisons test was used to compare between-group differences. All statistical tests were performed with SPSS version 20 (IBM, United States), and p < 0.05 was considered statistically significant.

Blue Light-Induced Retinal Photoreceptor Loss and Retinal Dysfunction Were Mitigated by Paeonol Treatment
Exposure to blue light successfully induced the LD model in SD rats. Compared with the control group, significantly thinned outer nuclear layers (ONL) were observed in the LD group and the LD + vehicle group, whereas ONL thickness was effectively preserved after the administration of Pae (Figures 1A,B). Because apoptosis and autophagy have been reported to accelerate markedly three days after light injury, retinal apoptosis was examined by TUNEL assay three days after LD modeling. Apoptotic activity was almost absent in the control group, whereas an increased number of TUNEL-positive cells was detected in the retinas of the LD group and the LD + vehicle group, particularly in the ONL. Pae treatment effectively attenuated light-induced RP apoptosis in the LD + Pae group (Figure 1C). The ERG reflects broad-scale retinal function, with the a- and b-waves indicating photoreceptor and second-order cell responses, respectively. As shown in Figure 2, the decreases in a- and b-wave amplitudes on scotopic ERG caused by LD were significantly mitigated by Pae treatment. These results indicate that Pae exerts protective effects on RPs by ameliorating the morphological and functional damage induced by blue light exposure. The visualization of double-membrane compartments by TEM is considered the gold standard for identifying autophagosomes (Pickford et al., 2008). As shown in Figure 3A, cells from the NC group contained healthy mitochondria that were easily recognizable in the normal cytoplasm. In contrast, numerous double-membrane vacuoles accompanied by a few accumulated autophagic compartments were observed in retinas examined 3 days after LD. In Pae-treated retinas, both newly formed and mature autophagosomes could be detected. Compared with the increased autophagosome formation in the LD + Pae group, autophagy in the LD retina appeared to be inhibited through reduced autophagy induction rather than through an increased autophagic flux.
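The between-group comparisons reported here and in the following sections (ONL thickness, TUNEL counts, ERG amplitudes and protein levels) follow the procedure given under Statistical Analyses: one-way ANOVA followed by Bonferroni-corrected pairwise comparisons. The Python sketch below illustrates that workflow on hypothetical b-wave amplitudes; it uses SciPy rather than SPSS and is an approximation of the described analysis, not the authors' actual pipeline.

```python
# Minimal sketch: one-way ANOVA followed by Bonferroni-corrected pairwise t-tests,
# approximating the analysis described under Statistical Analyses.
# The amplitude values below are hypothetical placeholders, not study data.
from itertools import combinations
from scipy import stats

groups = {
    "NC":         [410, 395, 420, 405, 398, 412],
    "LD":         [180, 165, 200, 175, 190, 170],
    "LD+vehicle": [185, 170, 195, 180, 175, 188],
    "LD+Pae":     [320, 305, 335, 310, 325, 315],
}

f_stat, p_overall = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_overall:.4g}")

n_comparisons = len(list(combinations(groups, 2)))
for name_a, name_b in combinations(groups, 2):
    t, p = stats.ttest_ind(groups[name_a], groups[name_b])
    p_bonf = min(p * n_comparisons, 1.0)          # Bonferroni correction
    flag = "significant" if p_bonf < 0.05 else "ns"
    print(f"{name_a} vs {name_b}: corrected p = {p_bonf:.4g} ({flag})")
```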
Light Damage-Induced Apoptosis in the Retina Was Attenuated by Paeonol Treatment
As shown in Figure 3B, Western blot analysis was used to evaluate apoptosis in the retina. Cleaved caspase-3 levels in the retinas of the LD and LD + vehicle groups were remarkably higher than those in the NC group, and this increase was reversed by Pae treatment. Likewise, Bax expression levels in the LD and LD + vehicle groups were markedly higher than those in the NC group, whereas Bcl-2 levels were significantly reduced in the retinas of the LD and LD + vehicle groups compared with the NC group. Pae significantly inhibited the up-regulation of Bax and the down-regulation of Bcl-2. These results indicate that Pae may exert anti-apoptotic effects in the retina, consistent with the results of the TUNEL assay (Figure 1C).

Autophagy Was Up-Regulated by Paeonol Treatment in the Retina of Light Damage Rats
LC3-II and Beclin-1 are two major autophagy markers, and the PINK1/Parkin pathway is one of the important pathways in autophagy. The expression of LC3 and Beclin-1 in the retina was measured to evaluate the alteration of autophagy (Figure 4). In the LD and LD + vehicle groups, the expression levels of LC3-I and LC3-II were notably downregulated; in particular, the conversion rate from LC3-I to LC3-II in both groups was significantly decreased compared with that in the NC group, indicating a marked reduction of both LC3-I and LC3-II. The ratio of LC3-II to LC3-I was higher in the LD + Pae group than in the LD + vehicle group. PINK1 and Parkin were remarkably downregulated in the LD and LD + vehicle groups compared with the NC group, and Beclin-1, a key regulatory protein of autophagy, was correspondingly reduced. These alterations of autophagy markers and of the autophagy pathway were significantly mitigated by Pae treatment. Taken together, these results indicate that LD impaired autophagosome formation, decreased autophagic flux and blocked autophagy in the retina, all of which were improved by Pae treatment.

Decreased Cell Viability and Increased Apoptosis Induced by H₂O₂ Were Mitigated by Paeonol Treatment in 661W Cells
To determine the optimal dosage of H₂O₂, 661W cells were cultivated with 0, 25, 50, 100, or 200 μM H₂O₂, and cell viability was measured with the CCK-8 kit. Compared with the control group, cell viability was dramatically decreased by incubation with 50, 100 and 200 μM H₂O₂, whereas no significant decrease in viability was observed in cells treated with 25 μM H₂O₂.

(Figure 4 caption: Western blot analysis of LC3, and of PINK1 and Parkin (A,B), in light-injured retina, showing notable downregulation in the LD and LD + vehicle groups compared with the NC group that was largely reversed by Pae treatment; the ratio of LC3-II to LC3-I is shown in (C). Data show mean ± SD (n = 6-9 per group); *p < 0.05, **p < 0.01, ***p < 0.001.)
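Band intensities underlying these Western blot comparisons were quantified in ImageJ and normalized to the GAPDH loading control, with the LC3-II to LC3-I ratio derived from the normalized intensities. The Python sketch below illustrates this normalization and ratio computation; the intensity values are hypothetical placeholders, not densitometry readings from the study.

```python
# Minimal sketch: normalise Western blot band intensities to the GAPDH loading control
# and derive the LC3-II/LC3-I conversion ratio. Intensity values are hypothetical
# placeholders standing in for ImageJ densitometry readings, not study data.
bands = {
    # sample: (LC3-I, LC3-II, GAPDH) raw densitometry values
    "NC":     (1500.0, 1200.0, 2000.0),
    "LD":     ( 900.0,  450.0, 1950.0),
    "LD+Pae": (1300.0, 1450.0, 2020.0),
}

for sample, (lc3_i, lc3_ii, gapdh) in bands.items():
    lc3_i_norm = lc3_i / gapdh        # loading-control normalisation
    lc3_ii_norm = lc3_ii / gapdh
    ratio = lc3_ii_norm / lc3_i_norm  # LC3-II / LC3-I conversion ratio
    print(f"{sample}: LC3-II/LC3-I = {ratio:.2f}")
```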
Then, we examined the apoptosis-associated proteins, including cleaved caspase-3, Bax and Bcl-2. The expression levels of pro-apoptotic cleaved caspase-3 and Bax were upregulated, whereas that of anti-apoptotic Bcl-2 was downregulated, by H₂O₂ stimulation compared with the control group. These apoptosis-related alterations were reversed by Pae treatment (Figures 5D,E). The results demonstrate that apoptosis induced by H₂O₂ can be significantly alleviated by Pae treatment in 661W cells.

Paeonol Showed a Potential Antioxidative Effect in 661W Cells
Subsequent experiments were conducted to determine whether Pae affected oxidative stress and autophagy in H₂O₂-stimulated 661W cells. Using the Mito-Sox assay to detect mitochondrial ROS, we found that the fluorescence intensity was significantly higher in the H₂O₂ group than in the control group and noticeably decreased in the H₂O₂ + Pae group, suggesting an antioxidative and mitochondria-protective effect of Pae in 661W cells (Figure 6A). Furthermore, the expression of major autophagy marker proteins was measured (Figures 6B,C). After H₂O₂ stimulation, the expression levels of LC3-I, LC3-II and Beclin-1 were significantly downregulated, and the expression level of p62 was elevated, compared with the control group (all p < 0.01). These changes were reversed by the administration of Pae (all p < 0.01), implying that Pae can activate autophagosome formation and enhance autophagic flux in H₂O₂-stimulated 661W cells. Notably, the ratio of LC3-II to LC3-I was higher in the Pae group than in the control group, and higher in the H₂O₂ + Pae group than in the H₂O₂ group. In combination with the expression of p62 and Beclin-1, it is possible that overactivation of autophagy occurred.

Both the PINK1/Parkin-Dependent and BNIP3L/Nix-Dependent Autophagy Were Activated by Paeonol in 661W Cells
Based on the evidence obtained from the above experiments, we speculated that Pae might regulate autophagy in 661W cells. Thus, we examined the PINK1/Parkin-dependent and BNIP3L/Nix-dependent pathways of autophagy separately. Western blot analysis showed that the expression of PINK1 and Parkin was significantly reduced after H₂O₂ stimulation (both p < 0.01) but notably augmented after Pae treatment (both p < 0.01). It is worth noting that PINK1 and Parkin were dramatically upregulated by Pae treatment alone compared with the control group (both p < 0.01) (Figure 7A). We further examined critical downstream proteins of the PINK1/Parkin pathway and found that the H₂O₂-induced decreases of NDP52 and Optineurin were significantly reversed by Pae (both p < 0.01). In accordance with the findings for PINK1 and Parkin, Pae treatment alone significantly augmented Optineurin (Figure 7A). These results show that Pae can activate autophagy via the PINK1/Parkin pathway and might act on Optineurin directly. In addition, we evaluated the BNIP3L/Nix-dependent pathway of autophagy. Notably, the expression levels of BNIP3 and BNIP3L/Nix were both decreased in the H₂O₂ group (both p < 0.01) but increased after Pae treatment (both p < 0.01) (Figure 7B), indicating that Pae can also activate autophagy via the BNIP3L/Nix-dependent pathway in 661W cells.
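The Mito-Sox comparison above rests on fluorescence intensity; one simple way to summarize such images numerically is the mean intensity of pixels above a background threshold. The Python sketch below illustrates that idea on a synthetic array standing in for an acquired image; the threshold and the array are assumptions for illustration and do not reproduce the study's acquisition or analysis settings.

```python
# Minimal sketch: summarise a Mito-Sox image as mean fluorescence intensity above a
# background threshold. The synthetic array stands in for an acquired image; the
# threshold and values are illustrative assumptions, not the study's pipeline.
import numpy as np

rng = np.random.default_rng(0)
# synthetic 8-bit "image": mostly background with a brighter mitochondrial signal patch
image = rng.integers(0, 20, size=(256, 256)).astype(float)
image[100:150, 100:150] += 120.0

background_threshold = 30.0
signal_pixels = image[image > background_threshold]

mfi = signal_pixels.mean() if signal_pixels.size else 0.0
print(f"mean fluorescence intensity above background: {mfi:.1f} (a.u.)")
```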
DISCUSSION
As light-sensitive neurons, RPs are essential to the formation of vision, and the death and dysfunction of RPs can lead to irreversible vision loss and even blindness. However, the efficacy of several current strategies to protect photosensitive neurons remains unsatisfactory. RPs are known to have the highest density of mitochondria in the outer retina (Salminen et al., 2013), and mitochondria play an important role both in physiological functions, including cellular metabolism, cell survival and intracellular homeostasis, and in various pathological conditions. Thus, we aimed to explore whether Paeonol, an antioxidant drug, could protect RPs from light-induced and H₂O₂-induced oxidative stress by regulating apoptosis and autophagy.

In this study, we found that Pae protected photoreceptors from oxidative stress-induced damage and preserved their number and function. Consistent with previous studies (Russo et al., 2011; D'Adamo et al., 2020), the thickness of the ONL was reduced and the number of surviving RPs was decreased in rats after LD stimulation. It can be assumed that Pae maintained the thickness of the ONL by increasing the number of surviving RPs. Moreover, TUNEL staining demonstrated that the percentage of apoptotic cells was decreased after Pae treatment, and TEM indicated that Pae-treated RPs had healthier mitochondria and a greater number of mitophagosomes than LD-injured RPs. The amplitudes of the a-wave and b-wave correlate well with ONL thickness and provide a direct, objective assessment of variation in RP function (Jiang and Steinle, 2010; Scholz et al., 2015). Pae showed a protective effect on RP function, with the reduced a- and b-wave amplitudes in LD eyes being ameliorated after the administration of Pae.

(Figure caption: the ratio of LC3-II to LC3-I was higher in the Pae group than in the control group and higher in the H₂O₂ + Pae group than in the H₂O₂ group. Data show mean ± SD (n = 6-9 per group); *p < 0.05, **p < 0.01, ***p < 0.001. Scale bar: 100 μm.)

We further explored the potential mechanism of these protective effects. Oxidative stress is a key factor in the secondary pathological process of outer retinal diseases. Considering that ROS accumulation can result in oxidative stress, and that H₂O₂ readily diffuses through biological membranes via aquaporins (Tan et al., 2004), we induced oxidative stress in cells using H₂O₂. Oxidative stress-induced apoptosis leads to further injury and dysfunction of photosensitive neurons in outer retinal diseases. As an RP-derived cell line sensitive to oxidative stress, 661W cells undergo apoptosis when stimulated with H₂O₂ (Weymouth and Vingrys, 2008); we therefore used 661W cells to examine the effects of Pae in vitro. The results showed that Pae can mitigate the cellular apoptosis and impaired viability caused by H₂O₂ stimulation. In the current study, the expression levels of Bax and cleaved caspase-3 were elevated and the expression of Bcl-2 was downregulated after H₂O₂ stimulation, consistent with the Bcl-2-regulated mitochondrial apoptosis pathway (Klionsky et al., 2016). Pae mitigated these changes, indicating that Pae decreased apoptosis by restraining the intrinsic apoptosis pathway in 661W cells. In contrast to our findings, one earlier study described Pae as a dose-dependent inducer of apoptosis in tumor cells (Stone et al., 2008). Therefore, Pae should be employed with caution, and further relevant research is required.
Mitochondrial damage is strongly associated with oxidative stress (Kim et al., 2020). To investigate whether the damage was related to mitochondrial function, we detected ROS levels; ROS are mainly produced in the mitochondrial electron transport chain during the transition from state III to state IV respiration. In the present study, ROS was significantly increased under H₂O₂ stimulation and largely alleviated by Pae treatment in vitro. Furthermore, we found that autophagy can be stimulated by Pae through the PINK1/Parkin-dependent pathway and the BNIP3L/Nix-dependent pathway in 661W cells. Various pathways have been identified for different autophagy contexts, such as the FUNDC1 pathway, the BNIP3L/Nix pathway and the PINK1/Parkin pathway. In models of selective autophagy, receptors, the major ones being Optineurin and NDP52, link cargo to autophagosomal membranes, and ubiquitinated cargo is recognized by these selective receptors. In our study, the expression levels of PINK1 and Parkin were upregulated, which could in turn trigger the overexpression of downstream proteins. As expected, SQSTM1/p62, Optineurin and NDP52 were all downregulated under H₂O₂-induced injury and upregulated by Pae treatment, indicating that autophagy was increased through a non-hypoxia-induced pathway in 661W cells. In addition, the expression levels of FUNDC1, BNIP3, and BNIP3L/