Management of Congenital Chylothorax of the Newborn
Dear Editor, Bellini et al. [1] raise concerns regarding the timetable of the management algorithm for congenital chylothorax (CCT) of the newborn recently published in the Journal [2]. Additionally, the authors warn about nonreversible measures, including pleurodesis and ligation of the thoracic duct. Furthermore, they discuss what is causally associated with CCT and what is a consequence of CCT.
Before going into detail, I want to note that we collected and analyzed only cases of CCT diagnosed within the first 28 days of life, and not any form of chylothorax related to surgery or injury of any other origin [2]. Prenatal intervention improved the perinatal condition and postnatal outcome of CCT in infants <35 weeks of gestational age; thus, our recommendation was that in experienced centers prenatal interventions, including pleurodesis, might be justified [3, 4].
Most cases of CCT have quite variable courses of disease. In particular, infants with high-output pleural fluid losses of more than 100 mL per day, struggling on the ventilator and requiring pleural drainage and total parenteral nutrition, are at high risk of infectious and metabolic complications. This group consists mostly of preterm neonates. Thus, overly prolonged conservative management of CCT might be harmful and might increase the risk of death; mortality rates are reported to be as high as 30-70% [2, 5].
At the same time as our systematic analysis of cases between 1990 and 2018 was published, Rocha et al. [6] published a comparable review including algorithms for both congenital and posttraumatic chylothorax. They state that an invasive approach is recommended in case of no response after 1 week of conservative treatment with drainage >10 mL/kg/day, persistent drainage of large volumes (>100 mL per day) for a period of 5 consecutive days, or severe metabolic and nutritional complications that are difficult to control. Invasive options include thoracic duct ligation, pleural abrasion, pleurodesis, thoracic duct embolization, pleuro-peritoneal shunts, and diaphragmatic fenestration. Thereafter, conservative measures should be gradually reduced, provided surgical treatment was successful. Progression in the invasiveness of treatment options is determined by the response to previous treatments [4]. Al-Tawil et al. [7] likewise recommend that surgery be considered if conservative management of CCT fails after 4-5 weeks. Another recent review advises, in cases of severe and long-lasting CCT, surgical intervention involving unilateral or bilateral pleurectomy and thoracic duct ligation, with or without pleurodesis [8]. The authors state that early identification and successful treatment are warranted by a timely cross-disciplinary approach to care. Hence, we feel that our recommendation regarding the timing of surgical interventions is realistic and correct in the context of a potentially life-threatening disease.
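Purely as an illustration of the escalation criteria summarized from Rocha et al. [6] above, the thresholds can be written as a simple decision rule. This is a hypothetical sketch, not clinical software: the function and variable names are invented, and only the thresholds come from the text.

```python
def invasive_approach_indicated(days_of_conservative_treatment: int,
                                drainage_ml_per_kg_per_day: float,
                                consecutive_days_over_100_ml: int,
                                severe_metabolic_or_nutritional: bool) -> bool:
    """Escalation criteria as summarized from Rocha et al. [6] in the text."""
    # No response after 1 week of conservative treatment, drainage >10 mL/kg/day
    no_response = (days_of_conservative_treatment >= 7
                   and drainage_ml_per_kg_per_day > 10)
    # Persistent drainage of large volumes (>100 mL/day) for 5 consecutive days
    persistent_high_output = consecutive_days_over_100_ml >= 5
    # Or severe metabolic/nutritional complications that are difficult to control
    return no_response or persistent_high_output or severe_metabolic_or_nutritional

print(invasive_approach_indicated(8, 12.0, 3, False))  # True (no-response branch)
```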
We collected all diagnoses associated with CCT, irrespective of whether they were causal or a consequence. In addition, the authors correctly state that bronchopulmonary dysplasia (BPD) is a consequence of long-term mechanical ventilation in a preterm infant with complicated CCT and not a direct consequence of CCT [1].
There is limited experience with lymphatic studies in neonates, and lymphatic investigations are not universally available [5]. Interestingly, the group of Bellini et al. [1] is the only one experienced in lymphatic studies in neonatal CCT. In conclusion, we feel that our systematic analysis of all newborns with CCT led to a stringent timetable and an algorithm quite helpful for the clinician faced with this rare condition.
Conflict of Interest Statement
The author has no conflicts of interest to declare.
Funding Sources
The author declares that there was no funding.
Author Contributions
Prof. Dr. Bernhard Resch wrote the manuscript without any other support as a reply to a letter from Bellini et al.
"year": 2022,
"sha1": "49847fa3805c7caf5a9c5afdcadd454cf89de1ca",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/525377",
"oa_status": "HYBRID",
"pdf_src": "Karger",
"pdf_hash": "a1360b46880cf1d6c43e5d586c770865cdcbddb7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Main Directions and Normative Areas of Agricultural Land Protection
The subject of the considerations presented in this article is the shape of modern regulations on the protection of agricultural land. The analysis concerns primarily the protective provisions on so-called quantitative and qualitative protection of agricultural land, as well as the economic instruments correlated with these provisions. Other regulatory areas are also addressed, most notably the existing and debated solutions of European Union law aimed at linking agricultural producer support instruments more closely with environmental protection requirements, within the scope of so-called conditionality and greening. The study offers not only an analysis of the traditional directions of protecting agricultural land as a means of production in agriculture, but also an attempt to determine the areas and directions of further development of this regulation.
In the contemporary debate on the causes, complexity and importance of anthropogenic impact on biosphere resources, attention is increasingly drawn to the complex relationship between the natural environment and agriculture. On the one hand, this seems to be primarily due to the growing awareness that the limited character of natural resources constitutes the main barrier to increasing agricultural production; on the other hand, it stems from awareness of the intensity of agriculture's influence on the natural environment 1. The significance of this issue is increasing in view of the not only expected but already observed deterioration of climate conditions for agricultural activities on a global scale, with a simultaneous increase in the human population 2, which obviously translates into an increase in the demand for agricultural products. The continuous expansion, on a global scale, of agricultural land takes place mainly at the expense of natural resources, and the growing intensification of agricultural production and the concentration of crops of the most desirable plant species (e.g. soybeans or wheat) inevitably conflict with the protection of biodiversity. Attention should also be paid to the surprisingly high share of agriculture in the global greenhouse gas emissions contributing to climate change (which, incidentally, also significantly affects agriculture itself) 3. As a consequence, agricultural issues seem to be among the most important and most current ecological, sociological and economic problems of modern times. This issue is extremely broad and multifaceted, as it covers not only the general laws of the biosphere and the social and economic principles of agriculture, but also the complex relationships between them. An example of these relations is the protection of agricultural land, which is both one of the most important and, so far, irreplaceable means of agricultural production and an important element of the natural environment.

The axiological foundations of modern legal regulations on the protection of agricultural land were primarily concerned with ensuring an appropriate area of land of adequate quality for conducting agricultural production. Both the overall acreage of such land and its quality, understood as usefulness for production activity in agriculture, determine so-called food security, understood as creating the conditions for ensuring the right quantity and quality of food products for humanity 4. At the same time, agricultural land, being the most characteristic and in principle irreplaceable means of production for agriculture, is also an important element of the natural environment. As such, the protection of agricultural land is also justified by the legislator's pursuit of a number of values indicated in the Polish Constitution 5, such as environmental protection, the principle of sustainable development, or the pursuit of policies ensuring ecological security for present and future generations 6.

1 Agricultural production is estimated to be responsible for 92% of the global freshwater footprint, understood as the amount of water consumed or contaminated taking into account the full chain of the production process of a given product or service, i.e. the amount of water necessary to produce it (including direct and indirect water consumption of the product).
2 A. Kagan, Oddziaływanie rolnictwa na środowisko naturalne, "Zagadnienia Ekonomiki Rolnej" 2011, nr 3, pp. 99-115 and literature cited therein.
3 The share of agriculture (including forestry and non-agricultural land use) in global greenhouse gas emissions in 2015 was 24%; see J. Pawlak, Poziom i struktura emisji gazów cieplarnianych w rolnictwie, "Problemy Inżynierii Rolniczej" 2017, z. 4, p. 56.
Progressing industrialization and urbanization, both on a national and a global scale, as well as natural (or quasi-natural, since largely anthropogenic) processes, such as the desertification of large areas due to global warming or the increased frequency of natural disasters (droughts, floods and other violent meteorological phenomena), have a negative impact on the natural environment and at the same time reduce the area of land suitable for agricultural production. Meanwhile, demographic processes, the constant civilizational development of societies, intensifying industrialization and urbanization, and in recent decades also globalization, have been conducive to a steady increase in demand not only for agricultural food products but also for agricultural products for other purposes. For example, the development of renewable energy based on subsidies common in highly developed countries, including subsidies for so-called energy crops, contributes to a further increase in the demand for arable land. Rapeseed production for esters added to diesel oil or for so-called biodiesel, as well as the production of bioethanol or biomass (used for so-called co-combustion or as feedstock for biogas plants), requires the allocation of large areas of arable land. It is worth noting that in some countries supporting the development of solar energy, the construction of large solar power plants also often happens at the expense of arable land.
Referring to the demographic phenomena causing a steady increase in the demand for food, the basic agricultural product, it should be pointed out that the population of Poland increased over the last 40 years by about 4 million people. According to Central Statistical Office data, Poland had about 34.5 million inhabitants in 1976 and 38.4 million in 2016 7. Over a similar period, the percentage share of arable land in the total land area of Poland dropped from 60.3% to 51.6% 8. In turn, the world's population increased from 4.453 billion in 1980 to 7.550 billion in 2017 9, an increase of almost 3.1 billion people (i.e., by about 70%). A clear trend of shrinking arable land worldwide is noticeable over an even shorter period: in 2005-2015 alone, the global arable land area decreased from 4,940 to 4,868 million ha 10, i.e., by as much as 72 million ha 11.

At present, several major directions of agricultural land protection can be distinguished in Polish legislation. The basic legal act regulating the protection of agricultural and forest land is the Act of 3 February 1995 on the Protection of Agricultural and Forest Land. Normative grounds for the protection of agricultural land can also be found in the Act on Environmental Protection and in the Act on Nature Protection. It should be noted, however, that legal regulations on the protection of agricultural and forest land, as the evolution of normative solutions indicates, are not directly subordinated to legal regulations on nature and environmental protection 12. Aside from the protection constituted by the Act on the Protection of Agricultural and Forest Land, whose provisions focus on so-called quantitative and qualitative protection, the protection of agricultural structures is also important; it consists not so much in protecting the land itself as in maintaining the right conditions for its rational use, consistent with the adopted system of policy assumptions regarding the agricultural sector. From this point of view, the grounds for the protection of agricultural land also include the provisions of the Act on Shaping the Agricultural System, the provisions of the Act of 21 August 1997 on Real Estate Management 13 concerning, among others, restrictions on the division of agricultural real estate, and Article 213 of the Civil Code concerning the dissolution of joint ownership of agricultural properties. The protection of agricultural land is also addressed by regulations promoting the proper use of agricultural land; an example is the provisions implementing so-called cross-compliance, which conditions the disbursement of direct payments on maintaining land in good condition. Finally, regulations regarding the protection of agricultural land can also be found in specialized sectoral laws related to the functioning of the mining industry and to the implementation of public investments of particular importance. It should be noted that the legal protection of agricultural land uses many solutions similar to those characteristic of environmental protection regulations.
This applies, above all, to the problem of counteracting the dominance of the negative factors of urbanization and industrialization. However, these solutions do not always converge. Environmental protection regulations favor the development of organic farming, based on natural methods of cultivation and taking into account biological progress, but not directly related to the protection of the soil's productive properties. On the other hand, the protection of cultivated species and the pursuit of greater efficiency of agricultural production require the use of agrotechnical measures, including agrochemical ones; these entail not only the use of fertilizers or plant protection products but, above all, the creation of monocultures, which are natural from the point of view of agricultural management yet clearly contradict the pursuit of biodiversity protection. This example highlights the differences between legal regulations on environmental protection and those on the protection of agricultural land. The objectives of the two sets of regulations seem very similar, but they are not the same: the protection of agricultural land is primarily concerned with protecting land productivity, while in principle abstracting from other issues characteristic of environmental and nature protection regulations.
On the systemic level, it is justified to claim that regulations on the protection of agricultural land constitute a lex specialis in relation to legal regulations in the field of environmental protection. At the same time, to the extent that they regulate the use of agricultural land for non-agricultural purposes, they seem more closely related to the provisions of the Act on Spatial Planning. In contrast to the environmental legal regulations of the above-mentioned statutes, however, this regulation is established in principle solely for the purpose of protecting the land resources of Polish agriculture.
Despite the significant impact of European Union law on environmental law 14, to date the protection of agricultural land has not been covered by a comprehensive EU legal act regulating land as an agricultural means of production. Although work was undertaken in the past on a draft directive of the European Parliament and of the Council establishing a single framework for soil protection 15, it was finally decided to adopt a partial solution: a regulation whose main subject is the protection of soil against industrial pollution, leaving the protection of soil against other hazards, in particular the unfavourable transformation of landmass and the allocation of agricultural land for non-agricultural purposes, outside the scope of regulation. In this context, however, the adoption of Directive 2010/75/EU of 24 November 2010 on industrial emissions 16, which also covers soil protection against pollution, should be noted. This directive introduces, for the first time in EU law, a definition of soil, understood as the top layer of the earth's crust, situated between the parent rock and the surface, consisting of mineral particles, organic matter, water, air and living organisms (Article 3 (21) of the Directive) 17. Although the scope of the directive is quite wide, it should be emphasized that the main goal of the EU legislator was to harmonize national provisions on soil protection mainly against pollution originating from industrial emissions 18; the protection of soil in its capacity as an agricultural means of production still remains outside the scope of comprehensive regulation in EU law 19.
The legal protection of agricultural land in Poland has a relatively short tradition; apart from earlier fragmentary regulations, it was initiated by the entry into force of dedicated protective legislation. The legal protection of agricultural land resulting from the Act on the Protection of Agricultural and Forest Land is functional and comprehensive, although it should be noted that the provisions of this Act are closely linked to the provisions of the Act on Spatial Planning and Development. As a consequence, arable and forest land is protected under a separate legal regulation, the aim of which is to protect land primarily as a means of production.
The subject of protection under the provisions of the Act on the Protection of Agricultural and Forestry Land is agricultural land. The statutory definition of this concept, formulated in Article 2 of the Act, abstracts from the ownership criterion and refers primarily to the physicochemical features of the soil and its administrative and legal classification in the land registry. It should be noted that, when determining the subject of protection, the legislator intentionally did not use the concept of real estate formulated in the Civil Code. The concept of agricultural land within the meaning of the Act does not contain elements indicating forms of ownership; it is an expression of a supra-proprietary approach to the concept of agricultural land. This is mainly because the currently applicable legal regulations (like previous ones) protect agricultural and forest land against the effects of urbanizing and industrializing factors with the help of administrative and legal mechanisms, regardless of who owns the land, regardless of which production units it belongs to, and indeed regardless of whether the land is part of any production unit at all. As a consequence, the view seems justified that the concept of arable land within the meaning of the protective legislation is superior to the concept of "agricultural land".

According to the Act, the protection covers: land specified in the land register as arable land 25; land under fishponds and other water reservoirs used solely for the needs of agriculture; land under residential buildings constituting part of farms and under other buildings and equipment used exclusively for agricultural production and agri-food processing; and land under buildings and equipment used directly for agricultural production recognized as a special department, in accordance with the provisions on personal income tax and corporate income tax. The Act also protects: land of rural parks and land under wooded fields and shrubs, including windbreaks and anti-erosion devices; land of allotment gardens and botanical gardens; land under devices for water drainage, flood and fire protection, agricultural water supply, sewage, and the utilization of sewage and waste for agriculture and rural residents; land reclaimed for agriculture; peat bogs and ponds; and land under access roads to agricultural land.

Legal protection of agricultural land under the Act on the Protection of Agricultural and Forestry Land is implemented in two areas: 1) quantitative protection and 2) so-called qualitative protection. Quantitative protection is directed at regulating activities that may result in a decrease in the area of arable land as a result of its allocation for non-agricultural purposes. This category of protective measures also includes measures aimed at the reclamation of agricultural land previously used for other purposes. In turn, activities in the field of qualitative protection are mainly aimed at preventing the deterioration (degradation) or loss (devastation) of the value in use of agricultural land, in particular as a result of the deterioration of natural conditions, changes in the environment, industrial activity, or defective agricultural activity.
Qualitative protection also includes activities promoting remediation, understood as improving the physical and chemical properties of the soil, regulating water relations, restoring soil, strengthening slopes, and rebuilding necessary access roads to agricultural land. Activities intended to preserve peat bogs and waterholes as natural water reservoirs, as well as to limit changes in the natural surface formation of the earth, fall into the same category 26.

26 Activities aimed at protecting forest land can be systematized in a similar way. Also in relation to these lands, actions are taken, on the one hand, to reduce their use for non-forest purposes (quantitative protection) and, on the other, to prevent the degradation and devastation of forest land and damage to forest stands and forest production resulting from non-forest activities and mass land movements, to restore the utility value of land that has lost the character of forest land as a result of non-forest activities, to improve its value in use, to prevent decreases in the productivity of such land, and to limit changes in the natural shape of the earth's surface (qualitative protection).
The provisions on quantitative protection introduce a special control regime regarding the allocation of agricultural land for non-agricultural purposes, supplementing the provisions of the Act on Spatial Planning and Development 27 in this respect. In particular, the legislator pointed out that non-agricultural designation may be granted primarily for land qualified as wasteland in the land register, and if there are none -other lands with the lowest production suitability. Detailed legal regulation was also applied to the procedure for changing the use of agricultural land for other purposes, as well as measures aimed at the actual exclusion of land from agricultural production. The regulatory model for quantitative protection introduced by the Act, therefore, includes two stages of control: 1) control over land use for another purpose and 2) control over land exclusion from agricultural production.
It should also be noted that the intensity of protection is a derivative of the quality of the land being protected, while the measure of this quality, understood as usefulness for agricultural production, is the so-called soil quality class 28. Thus, as a rule 29, changes in the purpose of agricultural land constituting arable land of classes I-III, as well as of forest land, can be made only under local spatial development plans adopted on the basis and in the manner provided for by the Act on Spatial Planning. Admittedly, both the adoption and amendment of local spatial development plans, which constitute acts of local law, fall within the competence of municipal bodies 30, referred to as the planning authority of the municipality; it is, however, under the provisions of the Act on the Protection of Agricultural and Forest Land that the scope of this power with respect to the best agricultural land is significantly limited. According to Article 7 (2) of the Act, the use of agricultural land constituting arable land of classes I-III for non-agricultural purposes requires the consent of the minister competent for rural development 31. Such consent is expressed in the form of an administrative decision 32. The application is submitted by the head of the municipality acting on behalf of the municipality and should contain a detailed justification of the need to change the designation of the land for non-agricultural purposes, a list of the areas of such land taking into account the bonitation classes of agricultural land and land habitat types, and an economic justification of the proposed purpose, taking into account in particular the sum of receivables and annual fees for land designated for non-agricultural and non-forest purposes, and the expected extent of the losses that agriculture will incur as a result of the negative impact of investments located on land designated for non-agricultural and non-forest purposes.
28 Soil classification of lands is carried out on the basis of the provisions of the regulation of the Council of Ministers of 12 September 2012 on Soil Classification of Lands (Journal of Laws 2012, item 1246).
29 The exception to this rule concerns the so-called compact development areas (in Polish: obszar zwartej zabudowy) (Article 7 (2a)).
30 These competences are divided between the head of the commune, mayor or city president, who is primarily responsible for developing the draft plan and conducting proceedings aimed at agreeing the proposed solutions with the competent authorities, and the council of the municipality (city), whose competences include primarily initiating the planning procedure and, above all, adopting the resolution approving the local spatial development plan.

Municipal bodies have much more freedom in making decisions regarding the allocation of agricultural land with lower agricultural suitability for non-agricultural purposes (i.e. agricultural land of lower valuation classes) 33. A change in the purpose of such land for non-agricultural purposes may be made by means of an administrative act: a decision on the location of a public purpose investment or a zoning (outline planning) decision. In this case, the basis for making decisions in this regard will be the provisions of Chapter 5 of the Act on Spatial Planning and Development.
As mentioned, the impact of protective mechanisms in the field of quantitative protection is not limited to controlling changes in the use of agricultural land for other purposes. The subject of legal regulation is also the conditions under which, even after a formal change in the use of arable land for non-agricultural purposes, land use other than agricultural may actually commence. Here too, the legislator differentiates the scope of protection according to the quality of the protected land: the exclusion from production of land developed from soils of mineral and organic origin classified in classes I-IIIb, and of agricultural land of classes IV-VI developed from soils of organic origin, may take place only after the competent starosta 34 has issued a decision authorizing such exclusion 35.
Protective measures in the field of quantitative protection of agricultural land include not only strictly administrative rationing, understood as permit-or-refuse decisions. Quantitative protection against the exclusion of agricultural land from production is also implemented through the economic instruments specified in the Act: the recipient of a permit to exclude land from agricultural production is obliged to pay the one-off payments and annual fees specified in the Act. Their amount and method of calculation were shaped so that, even where the exclusion of land from agricultural production is formally possible (i.e., after the land's designation has been changed to non-agricultural and permission has been obtained to start non-agricultural activities on it), the person intending to make such an exclusion must assess its merits from an economic point of view. Depending on the type of use and the bonitation class, the one-off payment for excluding 1 ha of agricultural land from production may vary from approx. PLN 88,000 (class VI land) to approx. PLN 440,000 (class I land), so these fees are very steep. However, the rigor of this solution is mitigated where the land excluded from production has a high market value (after its exclusion). Pursuant to the provisions of the Act, the one-off payment is reduced by the value of the land on the day it is excluded from agricultural production. Consequently, if the land has a high market value, the payment can be significantly reduced, and if the value of the land excluded from production is at least equal to the amount of the payment, no fee will be due at all. This mechanism is intended to prevent the exclusion from agricultural production of land whose market value (after exclusion from production) would be lower than the amounts indicated in the Act. It should be noted, however, that regardless of whether the one-off payment is due in a given case, the person excluding agricultural land from production is obliged to pay so-called annual fees for the non-agricultural (and non-forest) use of the excluded land, in the amount of 10% of the one-off payment. These fees are payable for 10 years in the event of permanent exclusion of land from production and, in the event of temporary exclusion, for the period of the exclusion, which should be no longer than 20 years 36.

36 More broadly, see J. Bieluk, Instrumenty finansowe ochrony gruntów rolnych i leśnych, "Acta Universitatis Wratislaviensis. Prawo" 2015, nr 3656, pp. 13-24.
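To make the interaction of these instruments concrete, the sketch below encodes the mechanism as described in the paragraph above. It is a minimal illustration under stated assumptions, not legal advice: the per-hectare rates vary by soil class as quoted, and whether the 10% annual fee is computed from the reduced or the unreduced one-off amount is not resolved in the text, so the unreduced amount is assumed here.

```python
def exclusion_fees(one_off_fee_pln: float,
                   market_value_pln: float,
                   permanent: bool,
                   exclusion_years: int = 0):
    """Sketch of the fee mechanism described above (assumed reading of the Act).

    one_off_fee_pln: statutory one-off payment for the excluded area, which
    varies by use type and bonitation class (approx. PLN 88,000/ha for class VI
    up to approx. PLN 440,000/ha for class I).
    """
    # The one-off payment is reduced by the land's market value on the day of
    # exclusion; if the value is at least equal to the fee, nothing is due.
    one_off_due = max(one_off_fee_pln - market_value_pln, 0.0)
    # Annual fee: 10% of the one-off payment (assumed here to mean the
    # unreduced statutory amount), payable for 10 years on permanent
    # exclusion, or for the exclusion period (at most 20 years) otherwise.
    annual_fee = 0.10 * one_off_fee_pln
    years_payable = 10 if permanent else min(exclusion_years, 20)
    return one_off_due, annual_fee, years_payable

# Example: permanently excluding 1 ha of class I land worth PLN 300,000.
print(exclusion_fees(440_000, 300_000, permanent=True))
# -> (140000.0, 44000.0, 10)
```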
Finally, it should be mentioned that, irrespective of the above-mentioned fees, the starosta issuing the permit to exclude land from agricultural production may impose on the person making the exclusion the obligation to remove the layer of humus soil from agricultural land of classes I, II, IIIa, IIIb, III, IVa and IV and from peat bogs, and to use it to improve the value in use of other land.
Another group of protective measures provided for in the Act on the Protection of Agricultural and Forestry Land is so-called qualitative protection. It consists mainly in preventing soil degradation and provides the basis for issuing decisions ordering reclamation measures. Article 20 (1) of the Act introduces the principle that the reclamation of agricultural land is primarily the responsibility of the person who caused the loss or reduction of the value in use of the land. Reclamation of land devastated or degraded by unknown persons, or as a result of natural disasters or mass earth movements, is carried out by the starosta.
The obligation to counteract soil degradation, in particular to prevent erosion and mass soil movements, rests primarily with the land owner. As part of activities aimed at protecting agricultural land, the starosta may order the land owner to afforest or shrub the land or to establish permanent grassland on it. Owners of land with anti-erosion facilities are required to maintain them and keep them in working order. It should also be noted that the provisions of the Act on the Protection of Agricultural Land provide the basis for determining specific management rules for agricultural land located in areas of limited use, which can be designated around plants polluting the environment 37. For this type of land, a management plan is developed at the cost of the operator of the industrial facility (a person whose activity may result in the loss or reduction of the value in use of the land). If the owner of land in the protection zone suffered damage in the form of a reduction in the level of agricultural production, the operator of the industrial facility is obliged to pay appropriate compensation. In these zones, periodic testing of soil and plant contamination levels is also carried out, and contaminated land is excluded from production, with the consequences of such a decision charged to the operator of the industrial facility responsible for the contamination.

37 The rules for creating restricted use areas are currently regulated by the Act of 27 April 2001 - Environmental Protection Law (Journal of Laws 2008, No. 25, item 150, as amended). Pursuant to the provisions of this Act, a limited use area is created around specific plants or facilities if the ecological review, the environmental impact assessment of the project, or the post-implementation analysis shows that, despite the application of available technical, technological and organizational solutions, environmental quality standards outside the plant or other facility cannot be met.
As the above review shows, the main directions and regulatory areas of agricultural land protection in Poland are determined by the Act on the Protection of Agricultural and Forest Land. As regards the protection of agricultural land from the point of view of what it provides as part of agricultural production, this regulation is undoubtedly comprehensive and functionally consistent. Worth emphasizing, too, is its relative stability: it has been in force for almost 25 years, and the changes introduced during this period have not altered its basic assumptions. At the same time, it should be noted that the protection of agricultural land, also in its productive aspect, is the subject of other regulations as well. In addition to the above-mentioned provisions on the protection of agricultural structures, one can indicate the provisions related to direct payments to agricultural producers, including so-called cross-compliance, as well as provisions regarding so-called greening, i.e., the use by agricultural producers of agricultural practices beneficial for the climate and the environment 38. Greening provisions are a mandatory component of the current direct payment system, introduced to improve its environmental performance.
Greening is carried out by diversifying crops, maintaining permanent grasslands, and maintaining ecological focus areas. These activities are undoubtedly convergent with the assumptions of agricultural land protection, but here the protection clearly takes into account not only the strictly productive features of agricultural land but also, to a greater extent, the needs of environmental protection. Similar solutions are also to be maintained in the new financial perspective of the European Union. Under its assumptions, the new "conditionality" framework of the future CAP is intended to support better soil quality and protection, as well as increased carbon absorption through better land use and crop management. It should be expected that beneficiary farmers' obligations to comply with Good Agricultural and Environmental Condition (GAEC) standards, as a condition of direct income support, will be maintained. The GAEC standards are to include the protection of peat bogs and wetlands, crop rotation (which is to replace crop diversification), reduced tillage to limit soil degradation, and soil cover. It seems that, apart from maintaining the current regulation of the Act on the Protection of Agricultural and Forest Land in the "traditional" directions of agricultural land protection, i.e., quantitative and qualitative protection, further development of legal regulation in this field should be expected to link it more closely with environmental protection regulations, at least as regards more environmentally friendly and more sustainable ways of conducting agricultural production. At the same time, taking into account the impact of EU law on the legal situation of agricultural producers, and thus of owners and holders of agricultural land, it should be expected that further development of regulations concerning the protection of agricultural land will take place precisely in the area of EU law.
"year": 2020,
"sha1": "c43197f6209c843b9294026f6fcab3c8ab59f11c",
"oa_license": "CCBY",
"oa_url": "https://journals.umcs.pl/sil/article/download/10468/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ce11541e234d7db1da77a6328c0300d97a9f274c",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
Excess Mortality Due to External Causes in Women in the South African Mining Industry: 2013–2015
Mining is a recognized high-risk industry with a relatively high occurrence of occupational injuries and disease. In this study, we looked at the differences in mortality between male and female miners in South Africa. Data from Statistics South Africa on occupation and cause of death for the combined years 2013–2015 were analyzed. Proportional mortality ratios (PMRs) were calculated to investigate excess mortality due to external causes of death by sex in miners and in manufacturing laborers. Results: Women miners died at a significantly younger average age (44 years) than all women (60 years), women manufacturers (53 years), and male miners (55 years). There was a significantly increased proportion of deaths due to external causes in women miners (12.4%) compared to all women (4.8%) and women manufacturers (4.6%). Significantly increased PMRs were seen for car occupant accidents (467, 95% confidence interval (CI) 151–1447), firearm discharge (464, 95% CI 220–974), and contact with blunt objects (2220, 95% CI 833–5915). Conclusion: This descriptive study showed excess deaths in women miners due to external causes; PMRs for road accidents, firearm discharge, and contact with blunt objects were significantly increased. Further research is required to confirm the underlying reasons for these external causes of death and to develop recommendations to protect women miners.
Introduction
In 2002, the Mineral and Petroleum Resources Development Act (MPRDA) of 2002 and the Mine Health and Safety Act of 1996 opened mines in South Africa to women [1]. Following this, the 2004 South African Mining Charter recognised that in South Africa, women were historically prevented from participating in the mainstream economy. Thus, plans were put in place to "ensure higher levels of inclusiveness and advancement of women" [2]. In 2017, the Minerals Council South Africa Women in Mining report showed that 12% of miners were women, with 14.9% in top management [3].
Mining globally and in South Africa is considered a hazardous industry with a number of risks, from exposure to crystalline silica to accidents and ground movements [4,5]. In China, increased all-cause and cause-specific mortality of miners was found in a large follow-up cohort study of silica-exposed and silica-non-exposed workers [6]. Further, in the USA, fatal injury rates in mining were found to be four times higher than the average for all industries [7]. In South Africa, studies reported miners to be at increased risk of Tuberculosis (TB) and Human Immunodeficiency Virus (HIV) [8,9], respiratory diseases [10][11][12][13], and injuries [14][15][16]. In a 2008 mortality cohort study of platinum miners in South Africa, external causes were found to be the second most common cause of death after HIV [15]. Road traffic accidents were the most common cause of unnatural death in the study (38%), followed by homicide (30%) and occupational injuries (17%). Despite this finding, men in the cohort exhibited lower rates of unnatural deaths overall than the average South African population (Incidence rate ratio 0.89, 95% confidence interval (CI) 0.82-0.95) [15]. A recent study by Bloch et al. (2018) found that women miners did not have excess mortality compared to male miners. They reported an excess of mortality overall in miners which was 20% higher than the general public, although they found that the mortality rate decreased over time [16]. Although research on mortality in mining generally focuses on men, recent reports by Benya et al., 2019 highlighted the violence faced by women due to a masculine occupational culture in mining [17,18].
Thus, with the dangers associated with working in the mining industry and the additional vulnerability of women miners, we aimed to determine the extent (prevalence) and cause of mortality in women miners in South Africa using the national death registry data.
Materials and Methods
Mortality, usual occupation, and industry data, as captured from death certificates in South Africa from 2013 to 2015, were retrieved from the Statistics South Africa website (www.statssa.gov.za). The data consist of deaths by underlying cause, coded by Stats SA using the 10th revision of the International Classification of Diseases (ICD-10), and were used to calculate the proportions of deaths by main cause for each occupational group. Occupation was coded by Statistics South Africa using the South African Standard Classification of Occupations (SASCO) list [19]. The information for this field was obtained from the open question asked on page one of the DHA-1663A form, part A, question 19, i.e., "What was the usual occupation of the deceased (the type of work done during most of life)?" [20]. The analysis was limited to three years (2013-2015), as occupation was not coded in detail from 1997 to 2013, so miners could not be identified in earlier years; detailed occupation coding for the 2016 mortality data is not yet available. These data are publicly available, and an ethics waiver was received from the University of the Witwatersrand Human Ethics Committee for this secondary analysis.
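For orientation, the "external causes" analyzed later in the paper correspond to ICD-10 chapter XX (codes V01–Y98). A minimal, assumed prefix check for classifying an underlying-cause code as external might look like the sketch below; the codes shown are illustrative, and real Stats SA records would need more validation.

```python
def is_external_cause(icd10_code: str) -> bool:
    """Rough check for ICD-10 chapter XX, external causes (V01-Y98).

    Assumes a well-formed code of at least three characters, e.g. "X95".
    """
    letter, number = icd10_code[0].upper(), int(icd10_code[1:3])
    if letter in ("V", "W", "X"):
        return True          # all V, W, and X codes fall within V01-Y98
    if letter == "Y":
        return number <= 98  # Y00-Y98
    return False

codes = ["V43", "W22", "X95", "Y20", "J45", "P07"]
print([c for c in codes if is_external_cause(c)])  # ['V43', 'W22', 'X95', 'Y20']
```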
The miners group included mining supervisors, mining and mineral processing plant operators, metal processing operators, stationary plant operators, machinery mechanics, electrical installers (within the mining industry), building finishers (within the mining industry), mining laborers, and other elementary workers (within the mining industry). For occupational groups not specific to the mining industry, only those who also reported mining as their industry were included. The largest occupation reported in the mining industry in this dataset was mining and mineral processing (61%).
Male and female manufacturing laborers were chosen as a comparison group for miners as they were classified in the same occupation subgroup (construction, manufacturing, and mining are grouped together in an occupational subgroup in SASCO). These workers also often undertake manual labor and are exposed to chemical hazards in their work, although these hazards are of a different nature. Manufacturing workers are likely to be of similar social and socio-economic status in South Africa, thus, they may exhibit differences due to their occupations. For comparison, men and women who were reported to be unemployed were included, while those who presented no information regarding occupation were excluded.
Data Management
The South African mortality data from the years 2013-2015 were combined to investigate the mortality experience of women miners, owing to the small numbers of women working in the mines as miners. The variables used in the analysis were cleaned by recoding any unknown values in age, marital status, education, or sex to "missing" so that the numbers used to code this information were not included in calculations. Deaths of persons aged 14 and below were removed from the dataset, as this analysis focused on possible underlying causes of death linked to occupation or exposure from work; in South Africa, 15 years is the legal working age, so the study was limited to those who died at 15 years and older. A cut-off of 65 years (retirement age) was not used, to account for causes of death with long lag phases, such as cancer and pneumoconiosis. Marital status was condensed into ever married or never married. Education was summarized into four groups, namely, none, primary school, high school, or tertiary education (Table 1). Both the minor occupation (coded to three digits) and the industry were used to identify mining industry workers, manufacturing laborers, and those who were unemployed. The proportions by occupation for underlying group causes (ICD-10) with proportions above 1% for ages 15 and above are presented in Figure 1. The external causes of death analyses in Table 2 include all external causes of death reported in women miners and the total external causes of death, which also corresponded to the most common external causes of death in the entire dataset. Mortality in section P00-P96 was not reported, as this section is specific to the perinatal period.
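A sketch of these cleaning steps in pandas follows. The file names, column names, and category codings are hypothetical; the actual Stats SA release uses its own field names and numeric codes.

```python
import pandas as pd

# Combine the three annual releases (file names are hypothetical).
df = pd.concat([pd.read_csv(f"deaths_{y}.csv") for y in (2013, 2014, 2015)],
               ignore_index=True)

# Recode unknown codes to missing so they drop out of calculations.
df["age"] = pd.to_numeric(df["age"], errors="coerce")
df.loc[~df["sex"].isin(["male", "female"]), "sex"] = pd.NA

# Restrict to deaths at age 15+ (legal working age); no upper cut-off,
# to capture long-lag causes such as cancer and pneumoconiosis.
df = df[df["age"] >= 15]

# Condense marital status and education as described in the text.
df["ever_married"] = df["marital_status"].isin(["married", "widowed", "divorced"])
df["education4"] = df["education"].map({
    "none": "none", "primary": "primary school",
    "secondary": "high school", "tertiary": "tertiary"})
```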
Statistical Analysis
Data analysis was conducted using Stata software v16 (Stata Statistical Software: Release 16 SE, StataCorp LLC, College Station, TX, USA) and Microsoft Excel 2010. Student's t-tests were used to compare means, Wilcoxon rank-sum (RS) tests were used to compare medians, and proportion tests were used to compare proportions between groups. The alpha level was set at 0.05.
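The same comparisons can be reproduced outside Stata. A minimal Python sketch with invented numbers (not the study's data) follows.

```python
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical ages at death for two groups (not the study's data).
women_miners = [44, 39, 51, 47, 36, 58, 42]
all_women = [60, 58, 63, 55, 71, 49, 66]

print(stats.ttest_ind(women_miners, all_women, equal_var=False))  # compare means
print(stats.ranksums(women_miners, all_women))                    # compare medians

# Two-sample proportion test, e.g. deaths with tertiary education;
# counts and totals here are illustrative only.
z_stat, p_value = proportions_ztest(count=[30, 2100], nobs=[120, 41000])
print(z_stat, p_value)
```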
Proportional mortality ratios (PMRs) were calculated for the external causes of death for women miners, women manufacturing labourers, male miners, and male manufacturers using total deaths in all reported working women and all reported working men as denominators. The number of current workers in each of these categories is not currently available; thus, it was not possible to calculate mortality rates. The 95% confidence intervals (95% CI) were calculated for each estimate using the method for calculating 95% CIs for rates.
Proportional mortality = (number of deaths due to a specific cause in a population) / (total number of deaths from all causes in the same population) (1)

Proportional mortality ratio (PMR) = (proportional mortality for a specific cause in the population of interest) / (proportional mortality for the same cause in the general population) × 100 (2)

A PMR above 100 was considered increased, and significant if the 95% CI did not include 100. A multivariate logistic regression was performed for sensitivity analysis, using backward regression with the available variables. Variables were retained in the model if they were significant.
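Equations (1) and (2) translate directly into code. The paper specifies only "the method for calculating 95% CIs for rates"; the sketch below uses a common Poisson, log-scale approximation as an assumption, with invented counts.

```python
import math

def pmr_with_ci(cause_deaths: int, total_deaths: int,
                ref_cause_deaths: int, ref_total_deaths: int):
    """PMR per equations (1)-(2), with an approximate 95% CI.

    The CI uses a log-scale Poisson approximation, an assumption here;
    the paper only says the method for rates was used.
    """
    proportional_mortality = cause_deaths / total_deaths              # equation (1)
    ref_proportional_mortality = ref_cause_deaths / ref_total_deaths
    pmr = 100 * proportional_mortality / ref_proportional_mortality  # equation (2)
    half_width = 1.96 / math.sqrt(cause_deaths)
    return pmr, pmr * math.exp(-half_width), pmr * math.exp(half_width)

# Invented counts, not the study's data:
print(pmr_with_ci(12, 400, 2000, 310000))
# -> PMR ~465 with a wide CI, "significant" because the CI excludes 100
```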
Results
There were approximately 1,247,000 registered deaths from 2013 to 2015, among which 8362 (0.69%) deaths in miners were identified. The demographic characteristics of miners and manufacturing laborers were compared to those of all men and women of working age (Table 1). Generally, women died at a significantly older average age than men (difference of eight years, Wilcoxon RS p < 0.001). In contrast, female miners died younger than men overall (-9 years, Wilcoxon RS p < 0.0001), male miners (-11 years, Wilcoxon RS p < 0.0001), female manufacturing laborers (-9 years, t-test p < 0.0001), and male manufacturing laborers (-8 years, Wilcoxon RS p < 0.0001). Women miners died sixteen years younger on average than all women (p < 0.0001), while male miners died at an older age than men in general (1.4 years, Wilcoxon RS p = 0.0149) and were older than manufacturing laborers.
Women miners were more educated than women in general (proportions test, p < 0.0001), and achieved a higher level of education than male miners (proportions test p < 0.0001). Women miners were generally less likely to be married than men and significantly less likely than male miners (proportions test p < 0.0001). There was no significant difference in smoking history between women miners and women in general (proportions test p = 0.3044); however, women miners were less likely to smoke than men in general and male miners (proportions test p < 0.0001).
Underlying Causes of Death
All causes of death were investigated, i.e., not only occupational deaths, as occupation may have an impact on health over and above occupational exposures. The medium quality of the data also limited any specific analyses [21]. Investigating the underlying causes of death in female miners is complicated, as women in South Africa generally exhibit different mortality patterns to men and miners display different patterns to non-miners. Thus, Figure 1 compares the proportions of deaths in both women and male miners and non-miners.
Women miners and women manufacturing laborers showed different patterns regarding causes of death compared to women in general. Working women displayed increased proportions of deaths due to infectious diseases, with a 3.5% increase in women miners and 5.2% in women manufacturing laborers. Male miners and male manufacturing laborers also showed increases in infectious diseases deaths compared to all men. Male miners showed a 3.6% increase, which was similar to women miners, and male manufacturing laborers showed an increase of 5.5%, which was similar to women manufacturing laborers. Both groups of employed men and women exhibited reduced proportions of ill-defined deaths. Patterns showed a reduction of 6.4% in women miners and 8.3% in women manufacturing laborers, while male miners and male manufacturing laborers were 3.7% and 4.9% less likely to die of ill-defined causes, respectively. Following this pattern, women and male miners showed decreased proportions of deaths (6.4% and 1.7% reductions, respectively) due to diseases of the circulatory system compared to all men and women, while male and female manufacturing laborers exhibited smaller decreases (0.2% and 2.1% respectively).
Contrary to the diseases above, women and male miners both suffered increased deaths due to diseases of the respiratory system (3% and 2.6% increases, respectively). However, male and female manufacturing laborers showed slight decreases in respiratory disease deaths (0.5% and 0.8%, respectively).
Women miners showed an increased proportion of deaths due to external causes compared to all women (7.6% increase) and to women manufacturing laborers (7.7% increase), while male miners and male manufacturing laborers demonstrated similar proportions to all men. We reported all external causes of death, not only those reported as occupational incidents.
External causes of death accounted for 10.5% of overall deaths in this dataset, with men accounting for 78% of these deaths. Women miners suffered more than double the percentage of deaths due to external causes (12.4%) compared to all women (4.75%) and women manufacturing laborers (4.6%). The specific external causes of death were investigated further and are presented in Table 2.
We calculated proportional mortality ratios (PMRs) for all working women as the comparison group against women miners, women manufacturing laborers, and unemployed women. All working men were the comparison group for the PMR calculations for male miners, male manufacturing laborers, and unemployed men. The PMRs indicated whether the proportion of deaths due to a specific external cause was high or low for the particular industry compared to all industries.
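As a minimal sketch of this calculation, the function below computes a PMR from aggregate death counts, with a rough 95% confidence interval from a common log-normal approximation; the example counts are hypothetical, chosen only to mimic the proportions quoted in this section.

```python
import math

def pmr(cause_deaths, total_deaths, ref_cause_deaths, ref_total_deaths):
    """Proportional mortality ratio, scaled so that 100 means the group's
    cause-specific proportion of deaths equals the comparison group's."""
    expected = total_deaths * (ref_cause_deaths / ref_total_deaths)
    ratio = 100.0 * cause_deaths / expected
    # Approximate 95% CI: SE(ln PMR) ~ 1/sqrt(observed cause-specific deaths)
    se = 1.0 / math.sqrt(cause_deaths)
    return ratio, ratio * math.exp(-1.96 * se), ratio * math.exp(1.96 * se)

# Hypothetical: external-cause deaths in women miners vs. all working women
print(pmr(cause_deaths=40, total_deaths=322,
          ref_cause_deaths=1500, ref_total_deaths=22000))
```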
Women miners presented a significantly increased PMR of 183 for total external causes of death compared to all working women. In contrast, male miners, women and male manufacturing laborers, and unemployed women exhibited significantly decreased PMRs for external causes of death; only unemployed men showed a significantly increased PMR of 152.
The underlying causes were further investigated by focusing in detail on the causes reported in women miners. Women and male miners showed significantly increased PMRs for transport accidents, with the largest PMR of 467 reported for car occupant accidents among women miners and a PMR of 165 for male miners. These deaths showed non-significant or decreased PMRs in the comparison groups. Women miners also exhibited a significantly increased PMR of 243 for unspecified vehicle accidents, while unemployed men and women appeared protected and manufacturing laborers were similar to all employed workers in this regard. Unexpectedly, women miners had a significantly increased PMR of 464 for deaths due to firearm discharge. Firearm deaths were non-significantly reduced in male miners (PMR of 85), and non-significantly reduced in both women and male manufacturing laborers. The only other group with an increased PMR for firearm deaths (186) was unemployed men. The PMR for exposure to unspecified forces was significantly increased in women miners at 218, while reductions were observed in male miners, unemployed women, and male and female manufacturing workers. Again, only unemployed men showed a significant increase, with a PMR of 122.
For deaths due to contact with a blunt object, women miners exhibited a significantly increased PMR of 2220, as did unemployed women, who showed a PMR of 1180, while women manufacturing laborers, male miners, and male manufacturing laborers did not exhibit significant increases.
In the sensitivity analysis, presented in Table 3, the odds ratio for women miners compared to male miners increased after adjusting for age, marital status, and education, although model fit was poor.
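The model specification is not reported; the following is one plausible way such an adjusted odds ratio could be fitted with statsmodels, assuming a hypothetical record-level dataframe whose column names ('external', 'woman_miner', 'age', 'married', 'educ') are invented for illustration.

```python
import numpy as np
import statsmodels.formula.api as smf

def adjusted_or(df):
    # Logistic model of dying from an external cause, adjusted for covariates;
    # all column names are assumptions, not the authors' variable names.
    res = smf.logit("external ~ woman_miner + age + C(married) + C(educ)",
                    data=df).fit(disp=False)
    odds_ratios = np.exp(res.params)   # coefficients exponentiated to ORs
    conf_int = np.exp(res.conf_int())  # 95% CIs on the OR scale
    return odds_ratios, conf_int, res.prsquared  # pseudo-R^2 as a crude fit check
```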
Discussion
This study provided a profile of causes of death among women miners in South Africa with a focus on external causes of death. We reported excess unnatural deaths in women miners from transport accidents, firearm discharge, and contact with a blunt object at younger ages than both men and women in general.
Women have only recently been able to be employed as miners in the South African mining industry. It was, and still is, thought that mining is not a woman's job [17], although by 2017 at least 12% of miners were female. In a study by Bloch et al. (2018) using mining recruiter data, ex-miner mortality from 2001 to 2013 was investigated; 5.1% of the ex-miners were female [16]. In our study of registered deaths from 2013 to 2015, a similar proportion (5.7%) of miners were female. Bloch et al. (2018) found no excess mortality in women ex-miners, while an average of 20% excess mortality was found in men. Their finding of a lack of excess mortality in women miners was limited by the fact that the mortality rate in ex-miners decreased over the years of the study and women were only employed in the later years. Also, as women miners generally were not exposed to silica for the same length of time as male ex-miners, they were less likely to exhibit increased deaths due to respiratory disease. Bloch and colleagues (2011) did find greater excess mortality amongst the youngest miners; it was further suggested, based on the findings of Lim et al. (2011), that violence and accidents play an important role in the deaths in this group [15].
It is commonly accepted that women generally live longer than men in developed and many developing countries [22,23]. Some studies have demonstrated that this difference is likely to be mediated by social, behavioural, and environmental factors [22,24]. Contrary to these reports, this study found that South African women miners died at a significantly younger age, on average, than all South African working women, women manufacturing laborers, all working men, and male miners, while all women and all working women died at older ages than all men and all working men, following the expected trend. Tobacco consumption has been identified as the most contributory factor behind the sex differences in mortality observed in many countries [24]; however, men's participation in risky jobs was also recognised. Recently, these behaviours have been changing, as more women smoke and work in risky industries, which is expected to reduce the mortality difference between men and women [23]. Despite more women taking up smoking, this behaviour is unlikely to explain the reduction in longevity seen here in women miners, as the proportion of women miners who smoked was similar to that of all women and all working women, and significantly lower than that of men. Our finding of a reduction in median age at death to 44 years in women miners was similar to the findings of a recent autopsy study by Kgokong et al. (2018), who reported that South African women miners' mean ages at death were 37.8 and 39 years in the gold and platinum industries, respectively. They also reported a high incidence of unnatural deaths, which was similar to our findings [25]. The Pathaut database (hosted by the National Institute for Occupational Health), which collects autopsy results for South African miners whose organs are sent for autopsy, is biased toward miners who die during employment; despite this, the national women miner mortality data analysed here support the findings of the Pathaut database analysis.
In this national dataset, women miners suffered a high proportion of deaths due to infectious disease compared to all women and all men. This could be linked to the increased prevalence of HIV and TB in the mining industry, or to better access to health care, as seen in the reduction of ill-defined natural causes of death [9,26,27]. Women miners also suffered an increased proportion of deaths due to respiratory disease compared to all working women and women manufacturing laborers, while male miners showed a similar increase in respiratory disease deaths. This association between mining and respiratory disease is well described in a range of national and international literature [13,26-29]. Women and male miners appear to be protected from endocrine and metabolic diseases and circulatory system deaths compared to women and men manufacturing laborers and all women and all men; this may be partly due to the physical nature of the work [30,31]. Finally, women miners exhibited a substantial, significant increase in the proportion of deaths due to external causes compared to all women, working women, and women manufacturing laborers, while male miners showed a proportion similar to all men and male manufacturing laborers.
Injury-related mortality in South Africa accounted for 12% of deaths and 16% of years of life lost in the first Burden of Disease Study in 2000 [32], primarily due to high mortality rates from road traffic injuries and homicides, which were approximately twice and eight times higher than global figures, respectively [33,34]. The second Burden of Disease Study reported that 9.6% of deaths were due to injuries, which was similar to the 10.53% of deaths classified as external causes of mortality in our data [35]. In 2015, Matzopoulos et al. reported elevated age-standardised mortality based on forensic post-mortem investigations, with mean ages of 38.4 years for homicide (95% CI: 33.8-43.0) and 36.1 years for road traffic injury (95% CI: 30.9-41.3) in South Africans, which were similar to our findings for men and women miners [34]. Despite the reduction in deaths due to mining accidents seen in recent years in developed countries and in South Africa [15,36], mining has one of the highest rates of fatal occupational accidents among the industrial sectors [37].
Women miners exhibited increased PMRs for most types of vehicle-related accidents, firearm discharge deaths, and contact with a blunt object compared to women and men manufacturing laborers, male miners, and unemployed women. Transportation accidents have previously been found to be a leading cause of occupational fatalities in the mining sector compared to all other industries, and often in younger workers. Janicak et al. (2011) suggested that younger workers may have less experience and/or may be assigned more hazardous jobs [38]. This, combined with the risks inherent in traveling long distances as migrant workers visiting home for weekends and holidays, may explain this increase [15] and may contribute to the increased numbers of younger women miners appearing in our dataset. A recent Action Aid report (2019) described an increase in gender-based violence in mining communities as the industry attracts large numbers of migrant men as workers, which can force some women into transactional sex to survive and contributes to an unsettled community; the report identified substance abuse as one factor in mining community violence [39]. A recent review by Botha (2016) on the continued harassment and exploitation of women miners supports the validity of the increased PMRs for firearm discharge and contact with a blunt object in women miners in our study, along with the findings of the Action Aid report [40].
Women miners were less likely to die from unspecified natural causes, but more likely to die from unspecified external causes. Exposure to unspecified forces is the external-cause equivalent of an ill-defined natural cause, and should only be used when there is no information at all on the cause of death. The literature reports that women are at increased risk of sexual harassment and violence underground, and women are also often exposed to equipment not designed for them [3,18,40]. Vehicle accidents were the only external cause of death for which male miners exhibited a significantly increased PMR, which corresponds to the findings of Lim et al. (2011), who determined a high mortality rate from unnatural causes in their cohort of male miners at a platinum mine relative to the world average, but a lower rate than for South African men in general. Lim et al. also reported that road traffic accidents were the most common cause of unnatural death in the mining cohort.
Strengths and Limitations of This Study
The South African death registry provides coverage of the entire country, with approximately 93% of all deaths registered [41]. All death records with industry and occupation data were included in the study; therefore, the limitations regarding reporting of the underlying cause of death are expected to be similar across all occupation groups. The large number of records available allows for analysis of more specific occupation and industry groups and causes of death. This study provides baseline (surveillance) data regarding mortality in women miners and compares them to similarly employed women and men, thus reducing the impact of the healthy worker effect, as both groups are likely to suffer loss of employment among ill workers. The PMRs used were considered valid when the classification of death in the two populations was similar; this dataset demonstrated this when comparing miners to manufacturing laborers, as seen in the lower level of ill-defined deaths in both occupations. Mortality data are limited by poor reporting practices, over-reporting of deaths, and incomplete forms, and are therefore rated as medium quality [21,41,42]. A previous study validated the use of South African mortality data for occupational mortality studies [43]. Further limitations of the data include that the length of employment and other employment were not collected, although information on the usual or longest-held occupation and industry was available. Misclassification may have been a source of bias due to inaccurate reporting of usual occupation and industry and of cause of death. While the degree of misclassification of cause of death varies by disease, fatal chronic diseases such as lung cancer are more accurately classified than many other causes of death. Under-reporting of homicides in vital registration data also exists compared to forensic post-mortem data in South Africa [34].
Conclusions
In this descriptive study, the mortality patterns of women in the mining industry showed both similarities to and differences from those of men in the mining industry, possibly due not only to biological differences but also to social and workplace cultural differences. An excess of deaths from unnatural causes was seen in women miners; such deaths are generally considered largely preventable and occur in younger people, resulting in a substantial loss of potential life and a lower age at death in women miners. These deaths in women miners therefore need to be investigated to describe risk factors in order to develop controls and prevention efforts.
Author Contributions: All three authors contributed substantially to the study and the individual roles regarding the preparation of the manuscript were as follows: Conceptualization, K.S.W.; methodology, K.S.W., and T.K.; formal analysis, K.S.W.; writing-original draft preparation, K.S.W.; writing-review and editing, N.N. and T.K.; visualization, K.S.W.; supervision, N.N. All authors read and agreed to the published version of the manuscript.
Funding: This research received no external funding. | 2020-03-19T10:17:53.054Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "541201e583dfa22666a2b6b09732519113a440aa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/6/1875/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e9c4d350865c1e99a1e236ecdfe83755a3e4bd04",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59401037 | pes2o/s2orc | v3-fos-license | RILL AND GULLY EROSION RISK OF LATERITIC TERRAIN IN SOUTH-WESTERN BIRBHUM DISTRICT, WEST BENGAL, INDIA Risco a erosão em ravinas e voçorocas nos terrenos lateríticos de South-Western Birbhum District, West Bengal, India
It is a known fact that no part of the earth's surface is free from threat, and this applies to the lateritic terrain of Birbhum District, West Bengal, India as well. The existing terrain is shaped mainly by climatogenetic processes, and the impact of climate change is vital in the shaping of its lesser topographies. The study area is characterized by micro-landforms, e.g., rills, gullies, waterfalls, terraces, gorge-type features, and limestone-topography-type features. Denudational processes are very significant in the area in general, and differential erosion is evident in particular. The topography resembles that of the African and Brazilian Highlands. This paper interprets the rill and gully erosion risk in the lateritic terrain and its consequences for regional sustainable development and environmental management
INTRODUCTION
In response to today's worldwide issues of land degradation and sustainability, multidisciplinary geomorphic perceptions of river catchments or watersheds, using remote sensing techniques and the non-cyclic dynamic equilibrium concept, are being recognized to a wider extent. In India, increasing population growth, the worsening plight of the poor, a low land-man ratio, and urbanization, with the quest for immediate gains to meet growing demands, are responsible for a degraded landscape ecology. Degraded lands account for about 2 billion ha (15%) of the world, 39.0% in Asia, and about 9.4% in India. Degraded lands in India cover about 187.7 million ha, or 57.1% of its total area (Chandra, 2006). Moreover, economic development is still often achieved at environmental cost, or fails to match expectations.
Lateritic soilscapes are ecologically fragile because of their inherent constraints of acidity, nutrient loss, chemical impairment, crusting, water erosion, and poor water-holding capacity, as these are highly weathered and leached soils enriched with oxides of iron and aluminium in the tropics (Jha et al., 2008). Therefore, their recognition, spatial distribution, degradation status, and management at the basin, catchment, or watershed level are vital not only to restore already degraded lateritic terrain but also to prevent further degradation. The drainage basin or watershed is an ideal geomorphic unit for effective land-water resource management, controlling runoff and sediment yield, enhancing groundwater storage, mitigating erosion hazards and other natural disasters, and overall sustainable development. Hence, a drainage-basin-oriented applied geomorphic approach is essential for effective watershed planning and management.
OBJECTIVES
In the context of the above points of view, the present study aims at determining the rill and gully erosion risk of drainage basins in a lateritic landscape. The objectives are: morphometric characterization of each drainage basin and its lateritic confinement to infer erosion intensity; determination of the risk of rill and gully erosion hazards in terms of kind, extent, and degree, as manifested in the morphology and morphometric characteristics of geomorphic features in hydrogeomorphic units and in land-use practices within the lateritic confinement of the basins; rill-gully erosion risk based classification of the drainage basins; and geomorphic prioritization with preparation of an action plan.
DATA BASE AND METHODOLOGY
An integrated approach has been adopted using precision-geocoded P6 and LISS III imagery on 1:50,000 (December 2006); toposheets of the 73M and 73P series on 1:50,000 (SOI); daily, monthly, and annual rainfall data for a period of 10 years (basin-wise rainfall was computed from isohyet maps of the study area, as these data are available for only 7 substations); Census map and data, 2001; cadastral maps; the soil map of NBSS & LUP; the geological map (GSI); and field data from the pre- and post-monsoon periods.
Visual interpretation of satellite imagery, along with the collateral materials listed above, was applied for the identification and delineation of sample basins with varying extents of lateritic exposure, rill-gully networks, and land uses. Here, the lateritic exposure is itself one of the hydrogeomorphic units. 45 sub-catchments (42 third-order sub-basins and 3 second-order sub-basins) of tributary basins of the two main river systems of the study area (the Ajay R. and the Mayurakshi R.) have been taken into consideration. Morphometric analysis of the linear, areal, and relief aspects of each entire sample basin and its lateritic exposures was carried out on the basis of satellite imagery, toposheets, and field data. Sample basins were divided into grids of 1 km² and rill-gully affected lateritic patches into grids of 100 m²; the 100 m² grids were chosen to obtain better morphometric readings from the field. The Fournier index is also used as an erosion index. In addition, soil loss (t/ha) was estimated by the universal soil loss equation (USLE) for different non-arable land uses/covers (TAB. 1). Various thematic maps were prepared according to the obtained values of the morphometric attributes, annual erosion loss, and adverse land use of the rill-gully affected lateritic terrain, as obtained from satellite imagery and the field. All these maps of rill and gully erosion risk parameters were rated and integrated to generate a map of rill and gully erosion risk based classified basins with their priority status. Surveying instruments were also used.
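The USLE factor values used are not listed in the paper; the sketch below shows only the bare computation A = R x K x LS x C x P, with placeholder factors chosen so the output falls near the soil-loss magnitudes reported later for the study area.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr): R = rainfall erosivity, K = soil
    erodibility, LS = slope length-steepness, C = cover, P = practice."""
    return R * K * LS * C * P

# Placeholder factors for a sparsely covered lateritic patch (illustrative only)
A = usle_soil_loss(R=650, K=0.22, LS=0.8, C=0.2, P=1.0)
print(f"Estimated annual soil loss: {A:.1f} t/ha")  # about 22.9 t/ha
```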
STUDY AREA
The study area, lying between 23º04'27"N and 24º07'47"N and between 87º05'28"E and 87º50'30"E, forms a part of the lower Ganga plain, referred to as the shelf of lateritic alluvium locally known as Rarh Bengal (Spate, 1967; Biswas, p. 158, 2002; Jha, p. 20, 2005). It is bounded by Bardhaman district to the south, Murshidabad district to the north and east, and Jharkhand to the west. Administratively it comprises 7 CD blocks and 1167 villages under 10 police stations of the Suri and Bolpur subdivisions (FIG. 2). The area, with a mean annual temperature of 26ºC and mean annual rainfall of 1462.73 mm, is characterized by a sub-humid tropical/monsoon climate. The area is composed of the following geological formations: 1. Recent Alluvium (Kandi Formation); 2. Older Alluvium (Rampurhat Formation); 3. Laterite (Pliocene-Pleistocene); 4. Rajmahal Trap (Jurassic to Cretaceous); 5. Gondwana Supergroup (Dubrajpur, Raniganj, Barren Measures, and Barakar formations); and 6. Archaean-Proterozoic. An alluvial plain in the east and an erosional plain with a few hillocks in the west constitute its major physiography. The general elevation varies between 34 m and 157 m, with altitudes between 40 m and 80 m occupying most of the area. Most of the rill and gully affected lateritic exposures are found in this altitudinal zone. Altitudes higher than 120 m are confined to the western fringe of the Rajnagar and Khoyrasol blocks, which have insignificant lateritic exposures. The Ajay and Mayurakshi rivers and their tributaries drain the area, with a general slope from west to south-east. Laterite-lateritic soil, alluvium (older and younger), and red soils of varying texture are found in the area. Natural vegetation such as Sal (Shorea robusta), Palash (Butea monosperma), Arjun (Terminalia arjuna), Sonajhuri, Eucalyptus, Mango, and Bamboo commonly grows here.
The study area has considerable constraints of rill-gully erosion, especially in exposed lateritic patches, as noticed intensely in the Ajay-Mayurakshi interfluve: 45 sample sub-basins or micro-watersheds of the Kopai, Bakreswar, and Dwaraka basins (tributaries of the Mayurakshi R.) and the Hingla basin (tributary of the Ajay R.).
RESULTS AND ANALYSIS
The study area, belonging to Rarh Bengal, has a significant extent of lateritic landscape degraded by varying combinations of rill, gully, and stream networks. Lateritic exposures affected by rill and gully erosion are very distinct in the eastern part of the study area, particularly in the central Bolpur-Sriniketan, south-eastern Illambazar, eastern Dubrajpur, western Suri-I, and south-eastern Md. Bazar blocks (TAB. 2). These are mainly observed in the third- and second-order sub-basins of the Kopai N., Bakreswar N., Kuskarani N., Dwaraka N., and a few other very small sub-basins of the Ajay and Mayurakshi rivers (TAB. 2 & FIG. 3).
On the contrary, lateritic exposures are small and scattered in nature and mostly subjected to rill erosion, along with small or insignificant gullies, in the remaining part of the study area (TAB. 3). In terms of the extent of lateritic exposures, 18 of the 45 sample basins are affected by rill and gully erosion, whereas the remaining 27 sample basins are mainly subjected to rill erosion, as noted from satellite imagery and field survey (FIG. 9). Their propensity is high in non-arable lands such as protected and reserve forests, mining waste, barren terrain, and also marginal agricultural plots. Basin 1 possesses the maximum number of villages (16). The maximum number of lateritic villages (7) is noted in basin 33.
In the study area four types of gullies are identified: very shallow (less than 1.5 m), shallow (1.5 m-3.0 m), moderately deep (3.0 m-4.5 m), and deep (>4.5 m). All these types are distinctly found in the Bolpur-Sriniketan block. By contrast, the other sample basins present shallow and moderately deep gullies.
Hence the foregoing discussion makes it clear that there are variations in erosive potential in accordance with the extent of laterite exposure, along with the integrated effectiveness of the magnitudes of drainage attributes, soil loss, vegetation, and adverse land use caused by local people and government policies (FIG. 12). Moreover, human-modified lateritic basins with moderately fine first-order and overall drainage frequency produce greater sediment yield (FIG. 4, 5, 6, 7, 8 & 12).
Characteristics of sample basins with rill-induced lateritic confinement (devoid of significant gullying)
The majority of the sample basins (27 of the 45) are more susceptible to the process of rilling than to significant gullying. The lateritic coverage in these basins varies between 0.23 km² and 6.22 km²; it is insignificant (below 1 km²) in 8 basins (7, 8, 10, 29, 35, and others). Most of the basins susceptible to these processes in their lateritic enclosures occur on the gently undulating granite-gneiss plain, across a rainfall regime of 1400 mm and a rainfall intensity of more than 100 (Fournier index), lying in the western Dubrajpur, Khoyrasol, Md. Bazar, and Rajnagar blocks (FIG. 3, 6, 10 & 13). On average, these basins are characterised by low magnitudes of relative relief (1.82/100 m²-2.2/100 m²), dissection index (0.01/100 m²-0.02/100 m²), gently undulating slope (1.62%/100 m²-3.83%/100 m²), poor-moderate drainage frequency (1.65/km²-4.2/km²; first order 1.1/km²-4.52/km²; 2.13/100 m²-4.01/100 m²), coarse-moderate drainage density (1.34 km/km²-3.23 km/km²; 0.46 km/km²-1.96 km/km²; 0.48 m/100 m²-2.16 m/100 m²), moderate bifurcation ratio (2.0-4.6), elongation ratio (0.65-0.90), and very low relief ratio. All these morphometric magnitudes (FIG. 7 and TAB. 5), together with the dominance of sandy loam texture with its moderate erodibility and the low relief ratio, mostly produce moderate annual soil loss (12.45-23.13 t/ha) and a moderate state of erosion, as shown in TAB. 3 & 5 and FIG. 8 & 12. On average, these basins are relatively larger in size and more elongated in shape than basins in lateritic patches in proximity to the older and younger alluvium geomorphic units. Basins 16, 17, and 19 maintain their moderate erodibility in spite of the considerable depletion of protected forest and the frequent occurrence of barren and scrubby patches in the lateritic enclosure (Plate 1). This indicates the prevalence of considerable infiltration capacity and permeability and limited runoff yield, as reflected in their coarse-moderate morphometric magnitudes of relief and drainage attributes, elongated shape, and the light sandy loam texture of the lateritic profile. Consequently, most of these basins experience moderate soil loss or a moderate state of erosion (FIG. 7). The linear correlations with soil erodibility, drainage density, and mean annual precipitation (rainfall erosivity) on laterite are 0.51, 0.42, and 0.53, respectively. Rill-induced lateritic surfaces in the 27 basins vary between 4.04% and 97.73% of the total basin area. A few basins (7, 8, 9, 10, 11 & 45) register low rill erosion, having mean annual soil loss below 12 t/ha. Hence it can be said that basins with significant lateritic exposures, high rainfall, and heavier soil texture (clay/clay loam) are more susceptible to rills and gullies than basins with a relatively smaller extent of laterite, lower mean annual rainfall, and lighter (sandy loam) soil texture. Basins with considerable rills but insignificant gullies mostly attain a moderate state of erosion. The maximum number of rill and gully affected lateritic villages is located in basin 12.
Risk of Rill and Gully erosion
According to the foregoing analysis, the current status of rill and gully erosion of the sample basins can be classified into three categories, namely least, moderate, and severe erosion risk, reflecting varying type, extent, potentialities, and limitations (FIG. 10, 11, 12 & 13), as given below.
Least rill and gully erosion risk: 8 sample basins covering 11 villages belong to this category, having lateritic coverage between 0.23 km² and 3.20 km² and basin area between 1.02 km² and 9.22 km². They are characterized by stable terrain with the least mean annual soil loss (7.39-11.97 t/ha), the least affected area (non-agricultural land, between 4.04 km² and 19.61 km²), and the least terrain deformation. FIG. 13 shows their occurrence over both level and gentle slopes from east to west. The least risk of lateritic exposures coincides with low morphometric magnitude (per km² or per 100 m²), mean annual rainfall between 1300 mm and 1550 mm, a Fournier index between 99 and 103.22, and soil erodibility between 0.18 and 0.26. Lateritic exposures of these basins have the least limitation for land use. Their geomorphic priority is low, as the lateritic landscape under this category is economically viable and can even be used for agricultural purposes, other of course than paddy cultivation.
Moderate rill and gully erosion risk: The majority of the sample basins (26 in number), occupying 59 affected villages, attain moderate risk. Their basin area and lateritic coverage vary between 2.21 km² & 14.61 km² and between 5.13% & 97.73%, respectively. Basins under moderate risk are usually subjected to the process of rilling; in these cases, the process of gullying is insignificant. They are characterized by moderate morphometric magnitude to some extent, sandy loam soil texture, mean annual soil loss between 12.43 t/ha & 23.13 t/ha, area under combinations of rills and rills with insignificant gullies between 5.13% & 48.41%, and least to moderate terrain deformation. Some of these basins, like 16, 17, and 19, show human-induced alteration of the lateritic topography through clearing of protected forest, extraction of morrum from the lateritic profile, etc. Such erosion risk is mostly prevalent across the mean annual rainfall regime of 1450 mm-1500 mm and a Fournier index of about 99. These basins have moderate priority and limitations for their rehabilitation and land use. The economic viability of these basins can be increased by moderate levelling of rills and small insignificant gullies, social and agro-forestry on barren surfaces, and revegetation of depleting forest; a considerable area can be brought under dry farming after moderate reclamation.
Severe rill and gully erosion risk: 11 of the 45 sample basins, including 27 villages, are characterized by severe rill and gully erosion. This risk is prevalent in the sample basins lying on the right bank of the Kopai and Bakreswar nadi and the left bank of the Ajay and Mayurakshi rivers in the Bolpur-Sriniketan, Dubrajpur, and Md. Bazar blocks. These basins are very small in size (1.21 km²-4.34 km²) and less elongated or circular in shape. They have an appreciable extent of lateritic exposure (63.14%-100.0%). In particular, sample basins 3-6 in the Kopai catchment attain a spectacular dimension of severity, affecting 24.55%-76.42% of their area. Basin 4 contains the largest gully, acquiring the highest geomorphic magnitude in all respects (TAB. 6, FIG. 9, 10). Sample basins of high severity are characterized by severe mean annual soil loss (24.00-28.23 t/ha) and strong terrain deformation, corresponding to high mean annual rainfall and higher soil erodibility with clay-clay loam texture. These basins are actually a reflection of accelerated soil erosion. In fact, the inherent fragility of the lateritic landscape and the adverse impacts of forest degradation, the irrigation canal, urbanization, and morrum and china clay mining make them more severe in character. They are prone to backflow from the nearby river in the rainy season. These basins have high priority for reclamation in view of their severe terrain limitations, and can be managed by taking initiatives for horticulture, social forestry, and tourism. Hence, from the above discussion it is clear that the extent of laterite exposure, mean annual rainfall, drainage morphometric magnitude, soil erodibility, topsoil loss, and susceptibility to rill and gully erosion are directly related. The inherent characteristics of the lateritic landscape coupled with human intervention are responsible for the varying degrees of rill and gully erosion risk. Moderate erosion risk dominates the study area and is mainly related to lateritic landscape affected by rill erosion. By contrast, basins with both rill and gully networks carry severe erosion risk.
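The exact rating scheme behind the risk map is not given in the text; the following sketch illustrates the general rate-and-integrate step with assumed ordinal breaks and class thresholds, loosely echoing the parameter ranges quoted above.

```python
def rate(value, breaks):
    """Score a value from 1 to len(breaks)+1 against ascending class breaks."""
    score = 1
    for b in breaks:
        if value > b:
            score += 1
    return score

def classify_basin(soil_loss, drainage_density, affected_pct, fournier):
    # Sum simple 1-3 ratings of four risk parameters into a composite score;
    # breaks and thresholds are illustrative assumptions, not the authors' values.
    composite = (rate(soil_loss, [12, 24])          # t/ha/yr
                 + rate(drainage_density, [2, 3])   # km/km^2
                 + rate(affected_pct, [20, 50])     # % of basin under rills/gullies
                 + rate(fournier, [100, 110]))      # rainfall aggressiveness
    if composite <= 6:
        return "least"
    return "moderate" if composite <= 9 else "severe"

print(classify_basin(soil_loss=26.5, drainage_density=3.4,
                     affected_pct=70, fournier=112))  # -> "severe"
```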
FIGURE 2. Location of the study area.
FIGURE 3. Locations of sample drainage basins on the lateritic terrain.
FIGURE 4. Morphometric characteristics of sample basins with their varying extent of lateritic exposures.
FIGURE 5. First-order drainage characteristics of sample basins.
FIGURE 11. Erosion risk, annual top soil loss, and rill- and gully-affected lateritic surfaces in sample basins.
FIGURE 12. Landscape profiles and lateritic surface under varying severity of rills and gullies in sample basins.
FIGURE 13. Rill and gully erosion risk in sample drainage basins.
TABLE 1. Morphometric techniques used.
Slope (%): x/y x 100, where x is the vertical drop between successive contours and y is the horizontal distance at the respective scale (Schumm, 1956).
Drainage frequency of a basin: Nu/Au, where Nu is the total number of stream segments of all orders and Au is the basin area (Horton, 1932).
Drainage density of a basin: Lu/Au, where Lu is the total length of stream segments cumulated for each stream order and Au is the basin area (Horton, 1932).
Elongation ratio: d/Lb, where d is the diameter of the circle of the same area as the basin and Lb is the maximum basin length (Schumm, 1956).
Measurements were derived from toposheets and field survey on 1:50,000, and from satellite imagery with field survey at observation points identified on the imagery.
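For reference, the Table 1 measures translate directly into code; the functions below follow the table's notation, and the example inputs are illustrative values, not measurements from the study (taking Lb as the maximum basin length is an assumption, following Schumm's usual definition).

```python
import math

def slope_pct(x, y):
    """Slope (%) = x/y * 100; x = vertical drop between successive contours,
    y = horizontal distance at map scale."""
    return x / y * 100.0

def drainage_frequency(Nu, Au):
    """Stream segments of all orders (Nu) per unit basin area (Au)."""
    return Nu / Au

def drainage_density(Lu, Au):
    """Total stream length (Lu) per unit basin area (Au)."""
    return Lu / Au

def elongation_ratio(Au, Lb):
    """Schumm (1956): d/Lb, with d the diameter of a circle of the basin's area."""
    d = 2.0 * math.sqrt(Au / math.pi)
    return d / Lb

# Illustrative basin: area 4.1 km^2, total stream length 9.7 km, length 3.2 km
print(drainage_density(9.7, 4.1), elongation_ratio(4.1, 3.2))
```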
TABLE 2. Geomorphometric characteristics of sample drainage basins having a significant extent of rill-gully affected lateritic exposures.
TABLE 4. Characteristics of rill- and gully-induced lateritic surfaces in sample basins.
Magnitudes of morphometric attributes in combination with soil texture, soil erodibility, soil loss, and extent of rill- and gully-affected area.
FIGURE 8. Soil texture and soil erodibility of lateritic surfaces in sample basins.
FIGURE 9. Severe gully erosion risk.
FIGURE 10. Moderate gully erosion risk.
TABLE 6. Temporal changes in the largest gully and its sub-gullies confined to the Ballavpur lateritic patch. | 2018-12-22T00:12:54.820Z | 2009-08-01T00:00:00.000 | {
"year": 2009,
"sha1": "fa8ed127ecec1aa3c73d7ad9ef59d98844213967",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/sn/a/zR5njDVDsyn4LtqYVYJ8nkv/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4aa36b3cb7a4092ba656098e63216992c269b9b7",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Geology",
"Geography"
]
} |
7066021 | pes2o/s2orc | v3-fos-license | How Insightful Is ‘Insight’? New Caledonian Crows Do Not Attend to Object Weight during Spontaneous Stone Dropping
It is highly difficult to pinpoint what is going through an animal’s mind when it appears to solve a problem by ‘insight’. Here, we searched for an information processing error during the emergence of seemingly insightful stone dropping in New Caledonian crows. We presented these birds with the platform apparatus, where a heavy object needs to be dropped down a tube and onto a platform in order to trigger the release of food. Our results show New Caledonian crows exhibit a weight inattention error: they do not attend to the weight of an object when innovating stone dropping. This suggests that these crows do not use an understanding of force when solving the platform task in a seemingly insightful manner. Our findings showcase the power of the signature-testing approach, where experiments search for information processing biases, errors and limits, in order to make strong inferences about the functioning of animal minds.
Introduction
When faced with a difficult problem, humans can often spend a prolonged period of time trying to solve the problem without success, only for the solution to arrive suddenly and unexpectedly, often accompanied by a subjective 'aha' moment. Such 'insightful' problem solving is an implicit process, where the problem is restructured following the impasse [1,2], though it is not yet clear exactly what cognitive mechanisms humans use during this process [3,4]. Since Kohler's research with apes at the beginning of the 20th century [5], there has been debate over whether or not animals are also capable of insight [1,6,7]. How insight is defined in the animal cognition literature varies; one widely used definition, from Thorpe [8], states that insight occurs with "the sudden production of a new adaptive response not arrived at by trial-and-error behaviour", while more recent definitions are more mechanistic, emphasising the importance of concepts such as mental models, means-end understanding, and causal knowledge in producing insightful behaviour [9-11].
A number of different bird behaviours fit these definitions of insight, including string pulling [11], solutions of the Aesop's fable task [6] and solutions of the von Bayern paradigm [12]. In the string pulling task, birds have to pull up a string which is hanging from a perch in order to obtain the reward at the end of it. Some birds, such as keas [13], neo-tropical parrots [14], and corvids [11], can solve this task on their first trial. This fits Thorpe's definition, as adaptive behaviour is produced without any evidence of trial and error learning. However, an alternative explanation for this success is a perceptual-motor feedback loop [15]. Pulling and stepping on the string can act as a reinforcer, as it moves the food closer to the bird, and so provides reinforcement for the bird to repeat these behaviours. In one experiment exploring this alternative explanation [15], the crows' visual access to the food was restricted, and in another, the strings were laid horizontally and looped so that the initial pulls on the strings did not move the food [16]. In both experiments, the feedback loop was disrupted and the crows' performance on the task was drastically reduced.
Another example of potential animal insight is the Aesop's Fable paradigm, where animals have to raise the water level in a container via displacement in order to obtain a reward. Chimpanzees and orang-utans have been shown to solve this task by spitting additional water into the container [17,18], whilst rooks dropped stones into the container in order to raise the water level [6]. The cognitive mechanisms behind both the apes' and rooks' success are unclear. For the apes, it is difficult to know whether they planned a solution based on a causal understanding of displacement or tried out various behaviours in their behavioural repertoire until they happened upon a solution. For the rooks, it is unclear if subjects were not simply repeating a learned response, given they had learnt to stone-drop in a previous experiment [19].
Solution of the von Bayern paradigm is also an example of seemingly 'insightful' behaviour. Similarly to the results of the Aesop's fable experiments, after receiving experience of pushing down a platform with their beak, New Caledonian crows will then drop stones down a tube positioned above the platform in order to trigger it [12]. This spontaneous innovation of stone dropping can be described as 'insightful' as it emerges without trial and error learning and so fits Thorpe's definition. Two hypotheses have been proposed to explain the crows' performances [12,20,21]. One possibility is that the birds use an understanding of force; that is, they learn from pushing the platform with their beak that pressure needs to be applied to the platform for it to trigger. By coupling this understanding with knowledge that heavy, falling objects also exert sufficient force, the birds realise that stone dropping gains them the out-of-reach food. An alternative possibility is that the birds learn that contact between their beak and the platform leads to it triggering. They then attempt to recreate this contact by dropping an object external to their body onto the platform.
As can be seen from the above studies, there are a number of competing hypotheses for many recent examples of animal 'insight'. This has led to a growing call for comparative psychologists to move beyond describing seemingly 'insightful' behaviour in non-human animals [19,22,7,23]. Instead it has been suggested that researchers attempt to pinpoint the actual cognitive mechanisms being used by an animal during problem solving [21,[24][25][26]. One powerful way to do this is to use the 'signature-testing approach' [26]. Inspired by Alan Turing's work on machine intelligence [27], this attempts to make inferences about thought processes by not only examining problem solving successes, but also the information processing biases, errors and limits made by an individual. If a potential cognitive mechanism does not predict the presence of these observed signatures it is unlikely to be generating the behaviour in question. Thus the presence and absence of these signatures constrains the type of cognitive mechanism that can be producing behaviour, allowing for stronger inferences to be made about the type of cognitive process used by an animal during problem solving.
One particular cognitive signature, the 'weight inattention error', can be used to test between the two hypotheses explaining crows' 'insightful' performances on the von Bayern paradigm. The contact hypothesis predicts that crows should be insensitive to object weight, as their understanding of the task is based only on the need for contact between an object and the platform. In contrast, if the crows have an understanding of force, they should understand that the weight of an object is important for successfully collapsing the platform and so prefer heavy objects to light objects when innovating stone dropping. Therefore, determining whether this information processing error is present or absent in New Caledonian crows is a powerful way to explore the cognitive mechanisms behind crows' 'insightful' solutions of this task. In this study, we first gave New Caledonian crows experience of pushing a platform with their beaks, as in von Bayern et al. [12], before examining whether our subjects chose heavy, functional objects when they began spontaneously dropping stones onto a platform, or whether they made the weight inattention error.
Ethics statement
This study was conducted under approval from the University of Auckland ethics committee (reference no. R602). The Province Sud granted us permission to work on Grande Terre, New Caledonia and to capture and release crows (Permit No.2962-2015/ARR/DENV). All birds were released at their site of capture at the end of testing.
Subjects
Subjects were twelve wild-caught New Caledonian crows, captured from various sites across Grande Terre. The birds were housed in an eleven-cage outdoor aviary before being released. All cages were at least 2 m² x 3 m. The crows were housed in the aviary on three separate occasions: the first group from April to November 2014, the second group from April to August 2014, and the final group from April to August 2016. All crows were captured by whoosh nets. The area around the whoosh net was baited until groups of crows were feeding regularly on it. The whoosh net was then released when a family group was present. Sexing and ageing were carried out using methods from Kenward et al., 2004 [28]. Six of the twelve crows were adults. Nine of the birds were male: four juveniles and five adults. Three of the birds were female: two juveniles and one adult (see Table 1 for details). Crows were fed a diet of papaya, meat, dog biscuits, and egg daily, and had access to water ad libitum.
Materials
The platform apparatus was a Perspex box (180 x 110 x 85 mm) with a 90 mm tube (inner diameter = 40 mm) on top and a platform inside held up by a magnet. The apparatus functions as follows: when a heavy object is dropped down the tube it falls onto the platform, which causes the platform to collapse and the meat sitting on the platform to fall out of the apparatus (Fig 1B). The blocks were 30 mm x 20 mm x 10 mm, weighed either 15 g or 1 g, and were coloured either purple or pink. These colours were chosen because crows are tetrachromatic and so can discriminate colour [29-31], because pink and purple have no obvious ecological correlates that might induce preference or aversion, and because past work had shown the crows could distinguish between them (AHT, pers. obs.). The plastic tubes used to habituate the crows to moving the blocks were 120 mm x 120 mm x 60 mm, while the Perspex tubes used for the object preference pre-test were 150 mm high, with an inner diameter of 40 mm.
Procedure
Birds were randomly allocated to two groups. For Group 1 the purple blocks were heavy across all stages of our experiment, while for Group 2 the pink blocks were heavy. While the heavy blocks were of sufficient weight to trigger the platform apparatus, the light ones were not.
Habituation
Crows were habituated to the purple and pink blocks by placing food underneath two blocks, one of each colour. Habituation continued until neophobic responses towards the objects stopped. They were then habituated to moving the blocks with their beaks by placing the blocks in clear plastic containers with food underneath so that the birds had to move the blocks out of the way to access the food.
Object handling pre-test
To ensure the crows had no preference to handle blocks of one colour or weight, we gave them 20 object handling trials. Here, food was placed in a plastic bottle top with a 3 cm handle. This food holder was then placed in a vertical Perspex tube and three blocks of one colour were stacked on top of it (Fig 1A). Crows had to pull three blocks out of the tube to be able to pull the container out by its handle and so gain the food. A second tube was set up 30 cm away, with three blocks of the other colour stacked on top of the baited container. Block type was pseudo-randomised between tubes, across trials. While both tubes were always baited, the crows were only allowed to gain food from one tube. If the crows interacted with one tube and then attempted to interact with the other tube, the experimenter entered and removed the second tube before the crow could access the food. If crows had a preference for objects of a particular colour or weight, we expected them to choose the tube containing the objects they preferred at above chance levels.
von Bayern platform pushing procedure
Crows were tested following the methodology outlined in von Bayern et al. [12]. The birds were habituated to the platform apparatus by placing meat beside and on top of the apparatus. The length of time required for the birds to habituate to the apparatus varied among individuals but occurred over the course of 1-4 habituation sessions across 1-2 days. Crows received habituation trials until they were comfortable approaching the apparatus. After habituation to the platform apparatus, the crows were trained with this apparatus without the 90 mm tube on top. Here, crows had to push the platform with their beaks to gain food placed out-of-reach on the platform. Crows initially learnt to take food from the box when the platform was open and then closed. Once they were confident at taking food from the box, a small piece of meat was placed between the magnets holding the platform and another piece of meat was placed out of reach of the crows. The birds' pecking at the trapped meat caused the platform to collapse and allowed the crows access to both pieces of meat. As the birds became more comfortable collapsing the platform, the size of the meat placed between the magnets was reduced. When the birds were confidently collapsing the platform, the meat between the magnets was removed and the birds had to continue to push the platform with their beaks to retrieve the inaccessible meat. As in von Bayern et al. [12], once crows had pushed the platform with their beak to gain the reward 30 times, they began the experiment.
Block dropping experiment
During testing, the tube was placed back on the platform apparatus and, in each trial, 10 blocks, 5 of each colour, were arranged around it, with one block of each colour placed in alternating pairs (Fig 2). Within each pair the position of each block was randomised across trials. While the heavy objects were of sufficient weight to trigger the platform apparatus if dropped down the tube, the light objects were not. A test trial began when a crow landed on the table and ended when the crow got the food or after 3 minutes. Crows were given 3 trials initially. At the end of these 3 trials, if the crows had not solved the task, they were given another 5 trials of pushing the platform with their beak. They were then given an experimental trial. If they did not drop a block again, this pattern was then repeated a further time before testing ended. For crows that did solve the task, testing ended once they had gained the reward by dropping the block in 10 trials.
Stone dropping training and preference test
Crows that did not complete the experiment were subsequently given training where they learnt to drop a stone down the tube. Training consisted of the crows learning to nudge the stone from the edge of the tube onto the platform. Initially meat was placed on the stone so that, as the crows took the meat, the stone fell into the tube. Over time, as the crows learnt to nudge the stone, the stone was placed progressively further from the tube. When the crows were reliably nudging the stone onto the platform, the stone was placed on the ground so that the crows had to pick it up and drop it in. Once the crows had picked up the stone from the ground and used it to gain the reward 10 times, they were then given the experimental test again, to see if they had developed a preference for the heavy block after learning to stone drop.
Object handling pre-test
In the object handling pre-test, birds chose to pull the light blocks out of their tube slightly more often than blocks from the tube containing heavy blocks, doing so in 128 of the 240 trials given (20 trials per bird). This preference was not significant (Binomial test p = 0.167; Table 1). Individually, only one bird had a significant preference for either block, picking the heavy block 15 times out of 20 (Binomial test p = 0.041). The other birds had no significant preference for either block (see Table 1 for details).
Block dropping experiment
During the block dropping experiment, only two of the twelve crows produced the target block dropping behaviour. On her second trial, D4R dropped one light block down the tube, and then dropped one heavy block, triggering the platform. Across the 10 experimental trials this bird dropped the heavy object 12 times and the light object 22 times, and so showed a non-significant trend to drop the light block (Binomial test p = 0.062; see S1 Video for an example). D3B dropped the light object once on the first trial and then stopped stone dropping, though he did interact with the blocks on a further 5 trials. During these trials he interacted significantly more with the light block than the heavy block (18/24 contacts; Binomial test p = 0.012). The remaining ten birds did not drop the blocks down the tube across the experiment, though seven individuals did interact with them. Nineteen of the 30 total contacts with the blocks were with the light one (Binomial test p = 0.100). Individual crow interactions are summarized in Table 2.
Stone dropping training and preference test
In the subsequent test, all of the birds that had not completed the experiment learnt to stone drop after shaping. All of these birds then showed a significant preference for the heavy block after having successfully stone dropped 10 times, with three crows scoring 10/10 immediately (see Table 3 for details). As a group, the crows showed a significant preference to drop the heavy block, choosing it on 125/140 trials (Binomial test p < 0.001).
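For reproducibility, two-sided binomial tests of this kind are a one-liner in SciPy (1.7 or later); the example below uses the group-level counts quoted in the text.

```python
from scipy.stats import binomtest  # requires SciPy >= 1.7

# 125 heavy-block choices in 140 trials, tested against chance (p = 0.5)
result = binomtest(k=125, n=140, p=0.5, alternative="two-sided")
print(result.pvalue)  # far below the reported threshold of 0.001
```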
Discussion
The two birds in our study that spontaneously dropped blocks onto the platform to release food did not choose to drop heavy blocks onto the platform and so clearly did not attend to information on object weight (the weight inattention error). One bird dropped a block only once, likely because the initial dropping of the light block was not followed by reward. The other crow continued block dropping for the 10 trials of the experiment, without showing any preference for heavy blocks. In contrast, the ten birds that did not complete the experimental task all showed significant preferences for the heavy block after being trained to stone drop. These birds' behaviour demonstrates that the errors made by the two crows that innovated block dropping were not due to the crows being unable to detect a difference in weight between the two blocks provided, or being unable to inhibit responses towards the lighter blocks. After experience, these crows discriminated between the blocks and were able to immediately inhibit actions towards the non-functional one, with three crows scoring 10/10 at this task. Instead, it seems the crows lacked the necessary experience to know that block weight was an important piece of information that needed to be attended to. With experience of stone dropping they did attend to weight, and so made functional choices. Our findings therefore demonstrate that attention to weight is not necessary for the production of seemingly 'insightful' stone-dropping in New Caledonian crows. Instead, these crows make a specific information processing error when creating spontaneous stone dropping: the weight inattention error. The proportion of birds that showed spontaneous stone-dropping was lower in our experiment than in the original von Bayern et al. experiment [12]. One possible reason for this difference is that the crows in the original study may have learned to insert their beak into the tube during the stone dropping phase, due to having to insert their beak into a 3 cm tube in order to collapse the platform in the training phase. In contrast, in our study, the tube was removed from the apparatus when the birds had to collapse the platform with their beak. While this small difference in apparatus design should have had no effect on innovation rates if the crows were using an understanding of force, it could have if stone dropping is innovated through simpler mechanisms, as our results here suggest. Another possible reason for the difference in the number of birds which innovated stone dropping in the two experiments is that two of the three birds that spontaneously stone dropped in the von Bayern et al. experiment were raised in captivity. In a similar manner to the "captivity bias" seen in apes, where captive apes show greater use of tools than wild apes [32], it is possible that these captive crows experienced reduced task demands compared to the wild birds in our study. In particular, they may have devoted less of their cognitive resources to vigilance behaviour, due to their lack of experience of life in the wild, and thus found the task easier to solve. While stone dropping was only innovated by two of the twelve birds tested, these results do suggest that the anecdote reported in von Bayern et al. [12], of a crow dropping a feather down the tube, was not due to play or a lack of alternative options, but reflected a limit in their understanding of the task.
The presence of the weight inattention error suggests that the beak pushing experience given to the crows in our study did not lead to them developing a full causal understanding of the stone dropping task. Clearly, an understanding of force is not necessary for New Caledonian crows to spontaneously solve this problem. This raises the possibility that these crows lack the ability to reason about invisible forces more generally, though given the suggestive evidence that this species can reason about hidden causal agents ([33]; but see [34-37]), and the weight discriminations made by birds trained to stone drop here, further research is required. Caution is also required because of the role that ontogeny and experience may play in the development of an understanding of force. While we tested 12 birds, of which 6 were adult, only two juveniles innovated stone dropping. While these two juveniles both made the weight inattention error, it is possible that if an adult had innovated stone dropping, it may have attended to weight. Similarly, it is possible that, if the crows had had more experience of force, either through increased experience of beak pushing, or from having to attend to the force of their actions in other contexts, they might not have made the weight inattention error. Thus further work is required before we can make any strong conclusions about the crows' use of an understanding of force outside of the specific context of innovating stone dropping, where it is clear that such understanding is not necessary for this 'insightful' behaviour to emerge.
It is still unclear exactly what New Caledonian crows are thinking when innovating stone dropping. One possibility is that the crows pick up objects close to the apparatus and insert them into the tube in an attempt to increase their reach [38]. When they find the block is not a sufficiently long tool they drop it, thus triggering the platform. Another possibility is that the crows have an abstract concept of contact, in that they learn that contact between their beak and the platform leads to reward, and then generalise this relationship to objects external to their body. A third possibility is that the crows have a causal concept of contact, where they understand that contact between their beak and the platform causes a specific effect: that the platform will collapse and so lead to the release of food. Recent results suggesting that New Caledonian crows are not capable of producing causal interventions provide some evidence against this third possibility [39], though research is clearly needed to test between these possibilities.
Our findings show how the signature-testing approach can allow us to make stronger inferences about cognition than searching for 'insight'. By designing experiments that explicitly search for key cognitive signatures it is possible to constrain the types of cognitive mechanism that could potentially be producing a behaviour. This can allow us to rule out cognitive mechanisms that do not fit the patterns of observed signatures generated by a non-human animal. It also allows us to make stronger inferences about whether different kinds of minds think in the same way. Searching for the weight inattention error in children and primates, for example, would be a first step in examining how similar the cognition of these groups is to corvids during 'insightful' problem solving. Such research offers a highly promising avenue for exploring the degree to which intelligence evolves in a convergent manner across distantly related species.
Supporting Information
S1 Data Set. Data set for the birds' preferences and interactions with the heavy and light blocks. (XLSX)
S1 Video. Example of the weight inattention error. Video clip of D4R making the weight inattention error: the subject initially drops a light block into the apparatus (which is too light to collapse the platform) before dropping in a heavy block and causing the platform to collapse. (3GP)
"year": 2016,
"sha1": "7ad0ede91607850395c4cb2ae02790a9f12e85bf",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0167419&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a2ee3cc74e141e5f52c37a99d6fbce1b5d53188",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Membrane Binding and Conformational Properties of Peptides Representing the NH2 Terminus of Influenza HA-2
Synthetic peptides representing amino acid residues 1-16 and 1-20, a proposed fusogenic region of the HA-2 subunit of influenza virus hemagglutinin, bind to phosphatidylcholine vesicles with submicromolar dissociation constants. The 1-20, but not the 1-16, peptide appears to adopt a helical conformation when bound to vesicles and cooperatively promotes vesicle fusion.
Influenza viruses enter cells by receptor-mediated endocytosis followed by a low pH-induced fusion of viral and endosomal membranes (1). Both cell surface receptor binding (2) and membrane fusion activities (3) reside on a single large viral membrane glycoprotein, hemagglutinin (HA), the amino acid sequence of which is known for many different strains (reviewed in Ref. 4). Enzymatic cleavage of HA from intact membranes (5) yields a soluble form (BHA) lacking the COOH-terminal amino acids 176-221 (6). BHA's crystal structure (7) shows it to be a trimer, each monomer containing the two subunits HA-1 and HA-2. The NH2 terminus of HA-2 is positioned within about 30 Å of the viral membrane attachment site and has been postulated to be involved in the membrane fusion activity of HA (8,9). One proposed mechanism for this (8) involves a conformational change in HA induced by the lowered pH (~5) within the endosome, which releases the NH2-terminal region of HA-2, allowing it to interact with the endosomal membrane. Demonstration of low pH-induced aggregation and exposure of cryptic proteolysis sites (8), as well as lipid vesicle binding (9) of BHA, support this mechanism, but uncertainty remains concerning the details involved in the HA conformational change and its relationship to subsequent membrane fusion. In particular, the exposed HA-2 NH2-terminal region could function solely to bind viral and endosomal membranes close enough to allow independent fusogenic processes, possibly involving other regions of the HA molecule, to become effective. Alternatively, the HA-2 NH2 terminus could itself be active in destabilizing either or both membranes, in effect acting as a fusogenic agent or facilitating the action of others.
Examination of the HA-2 NH2-terminal sequences from different strains of influenza (Fig. 1) shows that while the sequences of these peptides are somewhat variable (65% overall sequence homology), the nature of each residue is absolutely conserved. Hydrophobic residues (B in the consensus sequence, Fig. 1), hydrophilic (X, usually acidic, and if not acidic, neutral), and small (G, glycine or asparagine) residues occupy invariant positions in the aligned sequences. Further, in an α-helical conformation (Fig. 2), a spatial segregation of hydrophobic, hydrophilic, and small residues occurs which resembles that of known membrane-perturbing peptides (10-13). To further investigate this resemblance, we undertook a study of the membrane interaction properties of the HA-2 NH2 terminus in isolation from complicating influences of the parent HA molecule. Because helix formation in isolated peptides is known to be length dependent (14), we synthesized peptides H-16 and H-20, corresponding to the first 16 and 20 residues, respectively, of the "B" strain of viral HA-2 (Fig. 1). The results support our proposed structural analogy and provide indirect support to previously proposed mechanisms for the fusogenic activity of influenza virus hemagglutinin.
RESULTS
Peptide Synthesis—Initial attempts to prepare peptides from the NH2 terminus of HA-2 using the standard solid-phase method failed to yield peptides which could be purified to homogeneity. Consequently, the solution-phase strategy outlined in Scheme I was employed. Glycine residues at regular intervals allowed H-20 to be divided into three glycyl-terminated segments, resulting in rapid and racemization-free coupling. The segments were prepared using a p-nitrobenzophenone oxime resin and methodology described previously (16-19).

Binding to Phospholipid Bilayers—In aqueous solution H-16 and H-20 give fluorescence spectra similar to tryptophan dissolved in water, with a maximum at 346 nm (Fig. 3). Upon addition of sonicated small unilamellar 1-palmitoyl-2-oleyl phosphatidylcholine vesicles, the fluorescence emission maximum shifts to the blue for each of the two peptides, although the shift is greater for H-20 (17 nm) than for H-16 (12 nm). Blue shifts of this magnitude have been observed when amphiphilic tryptophan-containing peptides interact with phospholipid bilayers (22) and are consistent with the indole moiety becoming partially immersed in the membrane (23).
FIG. 1. NH2-terminal sequences of HA-2 from various strains of influenza hemagglutinin, aligned to give a consensus sequence in which B denotes a hydrophobic residue, G refers to a glycyl (or in one case asparagine) residue, and X a hydrophilic, usually acidic, residue. Boxes enclose the invariant residues.
FIG. 2. Schematic helical representations of the NH2-terminal 20 residues of the X-31 variant of influenza hemagglutinin HA-2 (the variant for which an x-ray structure exists). The cylindrical drawing (left) illustrates the positions from which specific amino acids would project if the sequence forms a helix. The right illustration is a helical net diagram of the same sequence. The domain indicated by vertical hatching is predominantly occupied by hydrophobic residues. The acidic domain is denoted by horizontal hatching, and the glycine-rich region is not hatched.

These values should be considered as lower limits since up to 2-fold reductions in ellipticity have been observed in the spectra of membrane-bound proteins (25,26). Thus, H-16 is calculated to be at most 10% helical, while H-20 is between 45% and fully helical when both are bound to vesicles at neutral pH.
Upon acidification to pH 5 with acetic acid, the ellipticity at 222 nm increased in magnitude by about 3000 deg cm2/dmol for both H-16 and H-20. This is consistent with them becoming slightly more structured at low pH. Fig. 4b shows that the fractional change in the ellipticity at 222 nm for H-20 (at neutral pH) as a function of the vesicle concentration closely follows the binding isotherm obtained fluorometrically. Thus, the change in the tryptophan environment occurs concomitantly with the conformational change which the molecule undergoes when it binds to vesicles. The change in ellipticity for H-16 when it binds to vesicles is too small to allow accurate determination of a binding isotherm.
Vesicle Fusion Activity—The ability of the peptides to promote fusion was assessed using 1-palmitoyl-2-oleyl phosphatidylcholine vesicles to which 0.3-0.6 mol % of the fluorescent probe lipids N-4-nitrobenzo-2-oxa-1,3-diazole phosphatidylethanolamine and N-(lissamine rhodamine B sulfonyl)-dioleyl phosphatidylethanolamine had been added (21). Upon addition of peptide, fusion begins with an initial burst, after which its extent slowly approaches a level which appears to be considerably below that expected for complete fusion. The rate of the initial burst, as well as the level attained within 30 min, depends on the concentration of peptide added. This suggests that the peptide loses its ability to rapidly fuse vesicles after the initial burst is complete. Further evidence for this was seen in experiments where either fresh vesicles or peptide were added subsequent to the initial burst. In these experiments, the concentration of vesicles and peptides was chosen so that the peptide would be virtually entirely bound and the peptide binding sites not entirely saturated. Addition of a second equivalent of vesicles failed to produce an increased fusion rate, whereas addition of a second aliquot of peptide did, provided that the first burst had not significantly depleted the unfused vesicle population. Fig. 8 illustrates the effects on the initial rate when H-16 (a) or H-20 (b) is added in increasing concentrations while maintaining the vesicle concentration constant. For comparison, the fractional saturation of the peptide binding sites on the vesicles is plotted along with the rate data. In contrast to the binding curve, which shows a typical noncooperative isotherm, the rate data for H-20 are sigmoidal, indicating a significant degree of cooperativity. The rate of vesicle fusion induced by the H-20 peptide saturates at high peptide concentrations as saturation of the binding sites for the peptides on the vesicles is approached, suggesting that the bound form of the peptide is responsible for the fusion. Analysis of the limited data in Fig. 8b indicates that the initial rate depends on between the third and the fourth power of the bound peptide concentration. In contrast with H-20, H-16 was found to be a very poor fusogenic agent, even at concentrations which saturate the vesicle's binding sites.
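The cooperativity analysis of Fig. 8b can be illustrated with a short numerical sketch. The following Python snippet is not the authors' analysis and its data points are invented placeholders; it fits a Hill-type rate law to initial-rate versus bound-peptide data, where a fitted exponent n between 3 and 4 would correspond to the third- to fourth-order dependence reported above.

import numpy as np
from scipy.optimize import curve_fit

def hill_rate(c_bound, v_max, k_half, n):
    """Initial fusion rate as a Hill function of bound-peptide concentration."""
    return v_max * c_bound**n / (k_half**n + c_bound**n)

# Hypothetical (bound peptide [uM], initial rate [%/min]) points.
c = np.array([0.2, 0.4, 0.8, 1.2, 1.6, 2.0, 2.5, 3.0])
rate = np.array([0.01, 0.08, 0.45, 1.1, 1.8, 2.3, 2.6, 2.7])

popt, _ = curve_fit(hill_rate, c, rate, p0=(3.0, 1.0, 3.0), maxfev=10000)
print(f"Vmax = {popt[0]:.2f} %/min, K = {popt[1]:.2f} uM, n = {popt[2]:.1f}")
# A fitted n between 3 and 4 would indicate that 3-4 bound peptides
# cooperate in each fusion event, consistent with the trimeric HA.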
Measurements of the rate of fusion at low pH were complicated by the fact that the H-20 peptide precipitates rapidly when added to aqueous solution at pH <6.0. Nevertheless, at low peptide concentrations (<3 μM) it was possible to measure a fusion rate for H-20 of 0.6%/min at pH 5.0 (pH 7 PBS buffer acidified with glacial acetic acid), which was indistinguishable from that measured at pH 7.1 under the same conditions (0.08 mM lipid). This, together with the observation that acidification caused no additional fusion in the experiment shown in Fig. 5 (Miniprint), indicates that H-20 fuses vesicles at a rate which is independent of pH between 5.0 and 7.0.
DISCUSSION
The general hypothesis that some physiologically relevant membrane fusion processes are mediated by hydrophobic segments of proteins has been reviewed previously (32). The results we have obtained with a specific peptide sequence from a protein of known fusogenic activity add to this body of evidence. Our finding that H-20 is fusogenic at both pH 5.0 and 7.0, whereas the parent protein is active only at pH <6.0, supports mechanisms (8,34) which involve pH-induced protein conformational changes as a major pH-dependent phenomenon involved in activation of HA and BHA. It has been hypothesized that the HA-2 NH2-terminal peptide is unavailable for interaction with membranes at neutral pH and only becomes available after the pH-induced conformational change. A similar mechanism has been proposed to explain the pH dependence of clathrin-induced fusion of small dioleyl phosphatidylcholine vesicles (33). We find it interesting that H-20 is fusogenic while H-16 is not. The circular dichroism results suggest a conformational explanation for the differences in the fusogenic potencies of H-16 and H-20; H-20 appears to bind in a helical conformation while H-16 probably binds in a more extended configuration(s), suggesting that helix formation is required for fusion activity. The high hydrophobicity of the last 4 residues might also contribute to the fusogenic activity of H-20, but this effect alone, as measured by the difference in H-16 and H-20 Kd values, appears to be small. The sequence of H-20 suggests why the full 20-residue peptide might be required for helix formation. The first 16 residues of this peptide contain 6 glycine residues, an amino acid which by any measure (28,29) is highly destabilizing to the helical conformation. Furthermore, helix formation is highly cooperative with respect to chain length (14). When the chain is extended from 16 to 20 residues, 3 residues which favor helix initiation (methionine, isoleucine, alanine) are added, and the chain length is increased; both changes act to increase the molecule's potential for helix formation. Also, these residues increase the length of the hydrophobic face of the helix and hence add to the helical stability at an apolar/water interface (19).
If a helix is indeed required for fusogenic activity, the question arises as to why the helix formed by H-20 contains so many helix-disrupting glycine residues. We believe the answer lies in the complex structural, dynamic, and functional requirements for this segment of HA-2. In the crystal structure of the high pH form of the protein the first 20 residues fold into a nonhelical conformation. In this conformation the hydrophobic residues are directed toward the interior of the protein and contribute to the hydrophobic structural core of the protein. The glycines appear to be important for stabilizing the turn conformations. When the pH of the medium is lowered, this peptide presumably must have enough conformational flexibility to disengage itself from the protein and adopt a helical conformation when it interacts with membranes. The sequence observed probably represents a compromise between the optimal sequences for each of these requirements.
The qualitative kinetics of vesicle fusion induced by H-20 provide some insight into the possible mechanisms by which this process occurs. For a given peptide concentration, the fusion does not go to completion but rather appears to approach an asymptotic limit which increases with peptide concentration. The initial "burst" phase of the kinetics does not appear to be due to the initial encounter of the peptide with the vesicles, as binding was essentially instantaneous in the fluorescence experiments used to determine binding constants. Whether the decrease in rate following the initial burst reflects a competing process (e.g. a conformational change) or product inhibition (e.g. the peptide binds preferentially to the fused versus unfused vesicles) has not yet been determined. In any case, the fact that the peptide is only transiently active as a fusogen is consistent with it playing a role in vivo in the fusion process. At low pH, hemagglutinin causes fusion of the viral membrane with the endosomal membrane but then does not appear to induce further fusogenic events which would be toxic to the cell. Also supportive of an in vivo role for the NH2-terminal region of HA-2 is the third- to fourth-order dependence of the rate of fusion on the bound peptide concentration, which demonstrates that the cooperation of at least 3-4 peptides is required to lower the energy barrier for fusion. This is consistent with the fact that hemagglutinin is a trimer of identical subunits.
The above results demonstrate that the NH2-terminal 20 residues of HA-2 can play an important role in the fusion of virus with endosomal membrane. However, the rate of vesicle fusion induced by this peptide is approximately one-fifth the rate reported for fusion of comparable concentrations of dioleyl phosphatidylcholine vesicles by the low pH form of the intact virus (27). This suggests that the NH2 terminus is a segment which works in concert with other portions of HA and accompanying proteins to effect fusion, a conclusion in accord with genetic studies (31).
Our results show that although the H-20 peptide is fusogenic at neutral pH, it does have significant pH-dependent properties which might contribute to a pH dependence of the physiological fusion process. It has an extremely high potential to aggregate and precipitate at pH 5 (explaining the formation of rosettes when BHA is acidified (9)), and its interactions with vesicles appear to be strengthened at this pH. This difference in affinity as the pH is lowered should be accentuated with membranes composed of more biologically representative (acidic) lipids. Such considerations could provide a rationale for the extreme steepness of the pH versus hemolysis curves reported by Daniels et al. (31). By superposing several pH-dependent events (conformational changes, heightened affinity for membranes, etc.), each with a pKa near 5.5, it is possible to create a highly cooperative transition with respect to pH.
In conclusion, our results support the idea that the NH2 terminus of HA, exposed by a low pH conformational change, plays an active role in the fusogenic process. In addition, we suggest that this process involves formation of a membrane-binding helix which helps promote fusion by destabilizing the membrane. This is consistent with the results of Gething and co-workers (34), who have used site-directed mutagenesis to alter the sequence of the NH2-terminal segment of HA-2. They find that replacement of the first two glycines with glutamates, which would disrupt the orderly segregation of hydrophobic, small, and acidic residues in the helical structure of Fig. 2, leads to proteins with lowered fusogenic activities.
"year": 1987,
"sha1": "eb3ed54aa1cb5e3a0a8c0247ad86e3ac83a9816a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(18)48270-1",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "681403d237b7226040f092c2a3d5dc19491f3f3b",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Analysis of whole genome-wide microRNA transcriptome profiling in invasive pituitary adenomas and non-invasive pituitary adenomas
Background Dysregulation of microRNAs (miRNAs) plays a critical role during the occurrence and progression of pituitary adenomas (PAs). However, the roles of miRNAs in the invasiveness of PAs are poorly understood. This study aims to define more comprehensively and specifically the relationship between altered miRNAs and PA invasion. Methods The differentially expressed miRNAs (DEMs) between invasive PAs (IPAs) and non-invasive PAs (NPAs) were explored by RNA sequencing, and their functions were analyzed with gene ontology (GO) as well as Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis. The miRNA-mRNA network was predicted with bioinformatics. Results We identified 31 upregulated miRNAs and 24 downregulated miRNAs in IPAs compared with NPAs. GO analysis and KEGG pathway analysis showed the DEMs were mainly associated with cell proliferation and the cell cycle pathway. In addition, based on the predicted miRNA-mRNA network, two hub miRNAs were identified. Conclusions Our results describe the miRNA-mRNA network in detail and suggest that miRNAs may be promising targets in the diagnosis and therapy of IPAs.
Background
PA is one of the most common intracranial tumors, with an incidence of 10-15% [1]. Generally, PAs are considered benign, but some of them are invasive and invade adjacent structures such as the sphenoid sinus, cavernous sinus, and diaphragma sellae [2-4]. Therefore, IPAs are not only more difficult to resect completely by surgery, but also more likely to recur after surgery.
MiRNAs are single-stranded non-coding RNAs of approximately 19-23 nt, which regulate gene expression at the post-transcriptional level [5,6]; they can also act as tumor suppressor genes or oncogenes in various tumors [7]. For instance, miR-193b exerts tumor-suppressive effects in human acute myeloid leukemia by inducing tumor cell apoptosis and G1/S arrest [8], while miR-210-3p plays an oncogenic role in prostate cancer by promoting cancer cell epithelial-mesenchymal transition and bone metastasis via the NF-κB signaling pathway [9]. Altered expression of many miRNAs has been described in PAs, and specific miRNA signatures are related to clinical and therapeutic characteristics of the tumors [10]. However, comprehensive and specific studies of the relationship between miRNAs and the invasiveness of PAs are still rare.
In order to better understand the mechanism of invasiveness in PAs, it is necessary to clarify the miRNA regulatory network in IPAs. In this study, we detected DEMs in IPAs and NPAs by RNA sequencing and established a co-expression network containing miRNAs and predicted target genes with Cytoscape. In addition, the most upregulated miRNA in IPAs, miR-665, and the most downregulated, miR-149-3p, were screened out. Moreover, we explored the potential functions of these two key miRNAs in the invasive behavior of PAs by GO analysis and KEGG pathway analysis.
Patients and samples
Seven tumor samples were obtained from patients with PAs who underwent surgery at the Department of Neurosurgery, 1st Affiliated Hospital of Kunming Medical University, for identification of miRNAs by high-throughput sequencing. None of these patients had received radiotherapy or chemotherapy before surgery. Tumor samples were divided into 2 groups according to invasive behavior proven by surgical findings and pathology: IPA and NPA. All patients gave informed consent approved by the Ethics Board of the 1st Affiliated Hospital of Kunming Medical University. Immediately following separation, the fresh tumor samples were placed in sterile, RNase-free 2.0-mL cryotubes. Then, samples were soaked in Trizol and stored at −80 °C for subsequent analysis.
RNA isolation, library preparation, and sequencing analysis
Total RNA was extracted from tissue samples with Trizol reagent. The integrity of total RNA was checked by agarose electrophoresis, and RNA was quantified with a NanoDrop spectrophotometer. Then, the sequencing library was constructed by the following steps: ribosomal RNA removal, fragmentation, first-strand complementary DNA (cDNA) synthesis, second-strand cDNA synthesis, terminal repair, 3′ terminal addition, ligation, and enrichment. The libraries were sequenced on an Illumina HiSeq 2500/2000 platform.
MiRNA expression analysis
MiRNA expression levels were estimated as TPM (transcripts per million) according to the following criterion: normalized expression = (mapped read count/total reads) × 10^6 [11]. All data were analyzed using the DESeq2 R package (1.8.3). |log2FC| > 1 and p < 0.05 were considered the cutoff values for DEM screening [12].
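As an illustration of the normalization and screening criteria just described, the following Python sketch applies the TPM formula and the |log2FC|/p-value cutoffs; the table layout and column names are assumptions, not part of the published pipeline (which used the DESeq2 R package).

import pandas as pd

def tpm_normalize(counts: pd.DataFrame) -> pd.DataFrame:
    """Normalized expression = mapped read count / total reads * 1e6."""
    return counts.div(counts.sum(axis=0), axis=1) * 1e6

def select_dems(stats: pd.DataFrame, lfc=1.0, alpha=0.05) -> pd.DataFrame:
    """Keep miRNAs with |log2FC| > lfc and p < alpha."""
    mask = (stats["log2FC"].abs() > lfc) & (stats["pvalue"] < alpha)
    return stats.loc[mask]

# Example with toy numbers (one sample per group, illustrative only):
counts = pd.DataFrame({"IPA_1": [120, 3, 40], "NPA_1": [10, 5, 38]},
                      index=["miR-665", "miR-abc", "miR-xyz"])
print(tpm_normalize(counts))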
MiRNA-mRNA network construction
Based on the results of the DEM analysis and target gene prediction, miRNA-mRNA pairs were extracted to construct the miRNA-mRNA regulatory network. The regulatory network was then visualized using Cytoscape_v3.5.1.
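A minimal Python sketch of the same construction (the published work used Cytoscape; the pair list below is a placeholder) shows how the network can be assembled from predicted miRNA-mRNA pairs and how hub miRNAs can be ranked by node degree:

import networkx as nx

# Predicted (miRNA, target gene) pairs; placeholders, not the real data.
pairs = [("miR-665", "GENE1"), ("miR-665", "GENE2"),
         ("miR-149-3p", "GENE2"), ("miR-149-3p", "GENE3"),
         ("miR-665", "GENE3")]

g = nx.Graph()
g.add_edges_from(pairs)

# Degree of each miRNA node = number of predicted target genes.
mirna_nodes = {m for m, _ in pairs}
hubs = sorted(((m, g.degree(m)) for m in mirna_nodes), key=lambda x: -x[1])
for mirna, deg in hubs:
    print(mirna, deg)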
Target gene prediction, gene ontology, and pathway enrichment analysis
Target genes of the DEMs were predicted using major online tools, including miRanda (http://miranda.org.uk/), PITA (http://genie.weizmann.ac.il/pubs/mir07/mir07_data.html), and RNAhybrid (https://bibiserv.cebitec.uni-bielefeld.de/rnahybrid/) [13]. In order to analyze the main functions of the predicted target genes of the DEMs, we performed GO analysis [14]. Moreover, KEGG [15] pathway enrichment analysis was used to identify significant pathways among the predicted target genes of the DEMs. A GO term or KEGG pathway with FDR < 0.05 was considered statistically significant. The top 10 enriched GO terms and pathways of the DEMs were ranked by enrichment score (−log10(p value)).
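For concreteness, an enrichment score of this kind can be computed with a hypergeometric over-representation test, as in the following Python sketch (the gene sets and background size are illustrative assumptions; the published analysis may differ in detail):

from math import log10
from scipy.stats import hypergeom

def enrichment(targets: set, pathway: set, background: int) -> float:
    """Return -log10(p) for over-representation of `pathway` in `targets`."""
    k = len(targets & pathway)  # overlap between target pool and gene set
    p = hypergeom.sf(k - 1, background, len(pathway), len(targets))
    return -log10(p) if p > 0 else float("inf")

targets = {"CCND1", "CDK4", "AKT1", "TP53"}          # hypothetical targets
cell_cycle = {"CCND1", "CDK4", "CDK6", "RB1"}        # hypothetical gene set
print(f"enrichment score = {enrichment(targets, cell_cycle, 20000):.2f}")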
Quality assessment of sequencing data
The results showed that the patterns of gene expression among the samples were similar, and the Pearson correlation between samples was similar in the two groups (Fig. 1a, b). Given that the length of human miRNAs is generally 19-23 nt, analysis of the read lengths of each sample showed that reads of 19-23 nt accounted for a high proportion in each of the 2 groups (Fig. 1c, d), and reads in this length region were included in the analysis. In addition, by comparing the error rates among all samples, we found that the read error rate at 19-23 nt was far less than 0.05% (Fig. 1e, f). All the above results indicate that the sequencing data are reliable for further bioinformatic analysis.
Apportionment and annotation of DEMs
After summarizing and classifying the sequence reads into different RNA categories, such as miRNA, sn/snoRNA, tRNA, and rRNA, pie charts were drawn to annotate and classify the total reads. The proportions of known miRNAs (NIN = 63.77%, INV = 56.92%) and newly discovered miRNAs (NIN = 0.04%, INV = 0.02%) were obtained (Fig. 2a, b).
Next, a Venn diagram of the DEMs in the two groups was drawn with FunRich 3.1.3. A total of 136 DEMs were found in IPA, and 187 DEMs were found in NPA (Fig. 2c).
Then, a volcano map was used to infer the overall distribution of miRNAs. |log2FC| > 1 and p < 0.05 were used as thresholds to screen the DEMs. A total of 31 significantly upregulated and 24 significantly downregulated miRNAs were screened out (Fig. 2d).
Finally, these 55 DEMs were used to construct a hierarchical clustering map (Fig. 2e).
GO and KEGG pathway analysis
In order to explore the functions of the DEMs, target genes of these miRNAs were predicted with miRanda, PITA, and RNAhybrid, and GO analysis and KEGG pathway analysis were applied to their target pool. The top 10 enriched biological process (BP) terms showed that the target genes of the DEMs were associated with cell proliferation, the cell cycle, and apoptosis. The top 10 KEGG pathways showed that the DEMs might be involved in the CDK4/6 signaling pathway, the PI3K-Akt signaling pathway, and apoptosis pathways (Fig. 3).
MiRNA-mRNA network
Based on the DEMs and the predicted target genes, the miRNA-mRNA network was generated by Cytoscape (v3.5.1). As is well known, hub nodes play important roles in biological networks. According to the degree of the DEMs calculated by Cytoscape, a total of 12 miRNAs with higher values were identified (Table 1). Then the miRNA-mRNA subnetwork was generated, which included 5 upregulated miRNAs, 7 downregulated miRNAs, and 258 predicted target genes (Fig. 4).
Discussion
Although PAs are classified into multiple subgroups based on histological structure, pathological type, and hormone secretion [16,17], the definition of clinically invasive PAs currently differs in the literature. According to the 4th edition of the WHO classification of endocrine tumors published in 2017, the invasiveness of PAs can be evaluated through the tumor proliferative capacity by mitotic count and Ki-67 index [18], indicating that tumor proliferative capacity is closely related to the invasion of PAs.
MiRNAs, which mediate post-transcriptional regulation, play an important role in epigenetic regulation. Their precursors are cleaved by the Dicer enzyme and then combine with AGO protein and other components to form the RISC (RNA-induced silencing complex), which plays a key role in silencing or degrading target mRNAs [19,20]. Some researchers have reported regulatory functions of miRNAs in PAs [21]. Upregulated miR-34a can significantly inhibit the proliferation of the PA cell line GH4C1 and promote apoptosis by regulating SOX7 [22]. Overexpression of miR-16 inhibits the proliferation of the PA cell line HP75 and promotes apoptosis by targeting HMGA2 expression [23]. However, the comprehensive and specific effects of miRNAs on PA invasive behavior are still rarely reported [24]. Thus, we identified 55 DEMs in IPAs by RNA sequencing analysis. Then, according to the Cytoscape calculation, a miRNA-mRNA co-expression network including 5 upregulated miRNAs, 7 downregulated miRNAs, and 258 predicted target genes was generated for IPAs.
To further understand the potential functions of the miRNAs, GO and KEGG pathway analyses were applied to examine their possible biological functions and signaling pathways in IPAs. The outcome of the GO analysis showed that the target genes of the DEMs in IPAs were enriched in genes associated with cell proliferation, the cell cycle, and apoptosis, which is consistent with the active proliferation characteristics of IPA cells. Meanwhile, KEGG pathway analysis showed that the DEMs might be involved in the CDK4/6 signaling pathway, the PI3K-Akt signaling pathway, and apoptosis pathways. It is noted that the enriched terms of the target genes of the DEMs from the KEGG analysis were consistent with the GO analysis results, and they are all closely related to cell proliferation. As mentioned above, tumor proliferative capacity is the most important feature of IPAs. Additionally, the miRNA-mRNA network was constructed, and the outcome showed that some hub miRNAs play important roles in IPAs. The hub miRNAs were identified by the degree of the DEMs calculated by Cytoscape. Intriguingly, we found that these miRNAs have similar biological functions. For instance, hsa-miR-665, the most upregulated node in IPAs, can promote tumor cell proliferation and cell cycle progression in hepatocellular carcinoma [25]. In contrast, it was found that overexpression of hsa-miR-149-3p, the most downregulated node in IPAs, can inhibit the proliferation and invasion of tumor cells in bladder cancer and renal epithelial cell carcinoma [26].

Fig. 2. Annotation and classification of total sRNA reads in the two groups (a, b). Venn distribution of differential miRNAs: 136 differentially expressed miRNAs in IPA and 187 differentially expressed miRNAs in NPA (c). Volcano map of differential miRNAs: blue dots, miRNAs with no significant difference; red, significantly upregulated miRNAs; green, significantly downregulated miRNAs (d). Cluster analysis of differentially expressed sRNAs: red, highly expressed miRNAs; blue, lowly expressed miRNAs (e).

Fig. 4. The subnetwork of differentially expressed miRNAs and targeted mRNAs: red, upregulated miRNAs; blue, downregulated miRNAs; yellow, targeted mRNAs.
Conclusions
In this study, we identified the DEMs related to the invasive behavior of PAs. Further study is needed to confirm the exact relationship between the DEMs and the invasive behavior of PAs and to clarify the molecular mechanisms by which miRNAs affect the invasion of PAs.
"year": 2019,
"sha1": "52964d0fcea120e0b05bfa62660a4d9286bf962a",
"oa_license": "CCBY",
"oa_url": "https://cnjournal.biomedcentral.com/track/pdf/10.1186/s41016-019-0177-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a99a1a6d7b74eb540250849819df93e899e30563",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Multi-layered NiOy/NbOx/NiOy fast drift-free threshold switch with high Ion/Ioff ratio for selector application
NbO2 has the potential for a variety of electronic applications due to its electrically induced insulator-to-metal transition (IMT) characteristic. In this study, we find that the IMT behavior of NbO2 follows field-induced nucleation by investigating the delay time dependency at various voltages and temperatures. Based on this investigation, we reveal that the origin of the leakage current in NbOx is partly due to insufficient Schottky barrier height originating from interface defects between the electrodes and the NbOx layer. The leakage current problem can be addressed by inserting thin NiOy barrier layers. The NiOy-inserted NbOx device is drift-free and exhibits a high Ion/Ioff ratio (>5400), fast switching speed (<2 ns), and high operating temperature (>453 K), characteristics which are highly suitable for selector application in x-point memory arrays. We show that a NbOx device with NiOy interlayers in series with a resistive random access memory (ReRAM) device demonstrates an improved readout margin (>2^9 word lines) suitable for x-point memory array application.
performance as a selector device. We revealed that an insufficient Schottky barrier height forms between the electrode and NbO2 as a result of interfacial defects, which increases the conductivity of the insulating state.
Interface defects were successfully suppressed and a higher Schottky barrier can be formed by inserting a thin NiO y barrier layer between electrode and sputtered-NbO x (W/NiO y /NbO x /NiO y /W). As a result, the leakage current of W/NiO y /NbO x /NiO y /W device was significantly decreased and the device exhibited high I on /I off ratio (>5400). The W/NiO y /NbO x /NiO y /W device exhibits very fast transition speed (<2 ns) and excellent operating thermal stability (>453 K). The W/NiO y /NbO x /NiO y /W device can have very fast delay time (<30 ns) and is drift-free, which are highly suitable attributes for selector application in x-point memory array.
Results and Discussion
The cross-sectional transmission electron microscopy (TEM) images show the film structure and crystalline state of both the MBE- and sputter-deposited films (Fig. 1). The films were analyzed by in-situ X-ray photoelectron spectroscopy (XPS) to confirm that the correct phase and composition were achieved. Details of the MBE growth and XPS phase identification are found in ref. 17, and XPS results for the sputtered films are shown in Supplementary Fig. S1 [18,19]. The MBE NbO2 film was polycrystalline with a lattice constant of 3.4 Å. This corresponds to the d-spacing of the (400) planes of the insulating body-centered tetragonal NbO2 phase [20]. On the other hand, the sputter-deposited NbOx film was amorphous as-grown.
In comparison with the sputter-deposited NbOx film, the electroforming process was mostly eliminated in the MBE-deposited NbO2 film, as shown in Supplementary Fig. S2. In the case of the sputter-deposited film, the pristine state of the film was amorphous; therefore, electroforming was needed to form crystalline tetragonal NbO2 regions within the amorphous matrix to exhibit IMT [16]. On the other hand, electroforming was not needed in the case of the MBE-deposited NbO2 film because the pristine state of the film was already polycrystalline tetragonal NbO2. The difference in the electroforming and IMT processes between sputter- and MBE-deposited films is summarized in Supplementary Fig. S3. Because there is no longer any need for electroforming, the IMT process in MBE-deposited NbO2 films can be precisely analyzed.
The mechanism of IMT of NbO 2 under E-field was widely interpreted by Joule-heating model, and this model suggests that electrically induced Joule-heating generate the sufficient heat over IMT temperature of NbO 2 (1080 K) 8,10 . However, the IMT temperature of NbO 2 (1080 K) is much higher than the temperature that can achieved by Joule-heating within the insulating state of NbO 2 9 . Therefore, several researches proposed that the mechanism of IMT under E-field is the result of thermal runaway model [13][14][15] . These researches simulated the conductivity of NbO 2 device as a function of temperature and E-field by fitting the I-V characteristics with Poole-Frenkel model. They showed that IMT can take place far below IMT temperature of NbO 2 (1080 K) by thermal runaway, which successfully resolved the main drawback of classical Joule-heating IMT model.
We take a different perspective by using field-induced nucleation theory to explain the IMT mechanism in this research. Devices that abruptly change their resistance at a certain electric field, such as phase change random access memory (PRAM) or VO2-based IMT devices, energetically favor metallic nuclei with a cylindrical shape upon nucleation via the applied electric field [21-25]. Similarly, the field-induced IMT of NbO2 is expected to result from a Peierls transition of conductive NbO2 (metallic, i.e. rutile NbO2) regions formed as cylindrically shaped nuclei within an insulating NbO2 matrix (tetragonal, distorted rutile NbO2) under the influence of an electric field [21,22]. The formation of nuclei is favorable and forms a conductive path through the insulating host material. The free energy of the system, ΔG, consists of surface, bulk, and electrostatic contributions:

ΔG = σA − μΩ − W_E.  (1)

Here, σ and μ are the surface tension and the chemical potential difference between the two NbO2 phases, respectively. The transition energy barrier is lowered by an external electric field through the electrostatic energy gain W_E = εE²Ω/(8πn), where ε is the dielectric constant of the host and n is the depolarizing factor (n = 1/3 for a sphere), as shown in Fig. 2(a). If we assume that spherical nuclei exist at zero field (W_E = 0), then the surface area and volume of the nuclei can be defined as A = 4πR² and Ω = 4πR³/3, respectively. By using the differential form of the free energy at zero field (dΔG/dR = 8πRσ − 4πR²μ), we can define the energy barrier at zero field (W₀ = 16πσ³/(3μ²)) and the equivalent radius of the nuclei (R₀ = 2σ/μ). However, because nuclei with a cylindrical shape are energetically more favorable than spherical ones when an E-field is applied [22], it is preferable to modify Equation (1) for a cylinder of radius R and height h:

ΔG = 2πR(R + h)σ − πR²hμ − W_E.  (2)

Following from Eq. (2), the reduced barrier energy W(E) under the E-field is, to first order in the applied field E = V_A/d,

W(E) ≈ W₀(1 − E/E₀).  (3)

Here, E₀ is the voltage acceleration factor of the first order, independent of the external voltage or temperature, and its conventional value is 1 MV/cm [22]; d is the thickness of the film; and α is the geometric factor relating the cylinder radius to the equivalent radius of the nuclei at zero field (R = αR₀), where 0.1 ≤ α < 0.5. We assume α = 0.5 because this value corresponds to the maximum barrier [23]. The theory predicts the delay time between application of the field and the switching event [24]:

τ_d = τ₀ exp[W(E)/(k_B T)].  (4)

The value of τ_d for the film was measured using rising ramp pulses, which minimize the RC delay effect and reveal the delay time at various voltages (V_A) and temperatures [26]. τ_d is defined as the point where I_D (V_D/50 Ω) suddenly increases, as shown in Fig. 2(b). Here, τ_d decreases exponentially with V_A and temperature. Figure 2(c) shows that the relation between temperature/V_A and τ_d can be described by an Arrhenius plot, which follows Equation (4). We found that the experimentally determined value of the zero-field barrier W₀, which is 47-63 meV, agrees well with the calculated minimum energy pathway (MEP) between rutile and tetragonal NbO2 during the Peierls transition, which is 43 meV [27]. The alternative mechanism of diffusion or electromigration of oxygen (vacancies) has also been discussed in terms of the IMT in niobium oxides [11,28,29]. Diffusion barrier heights of roughly 290-550 meV were deduced from the diffusion studies in Nb2O5 reported in ref. 30.
Also, the observed oxygen diffusion coefficient in NbO 2 is lower than that of the pentoxide (indicating a higher diffusion barrier for NbO 2 than for Nb 2 O 5 ). Therefore, we can conclude that oxygen diffusion is energetically less favorable than Peierls phase transition due to the high barrier for oxygen diffusion. Furthermore, the diffusion barrier at zero field is estimated to be reduced by only ~10 meV under electric field application considering the electric potential drop along a typical diffusion length of ~1 Å. Therefore, IMT of NbO 2 is likely due to a Peierls phase transition through field induced nucleation rather than oxygen electromigration. The events that occur during IMT are illustrated in Fig. 2(d).
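The extraction of W0 from the measured delay times can be illustrated with a short numerical sketch. The Python snippet below generates synthetic delay times from Eqs. (3) and (4) (with assumed parameter values, not the measured data) and then recovers W0 from the slopes of the Arrhenius plots and a linear extrapolation to zero voltage, mirroring the analysis behind Fig. 2(c):

import numpy as np

K_B = 8.617e-5                    # Boltzmann constant, eV/K
W0, V0, TAU0 = 0.055, 2.5, 1e-8   # assumed "true" values for the toy data

def tau_d(v, t):
    """Delay time from Eqs. (3)-(4) with a linear barrier reduction."""
    return TAU0 * np.exp(W0 * (1.0 - v / V0) / (K_B * t))

temps = np.array([300.0, 330.0, 360.0, 390.0])
volts = np.array([1.0, 1.4, 1.8])

w_of_e = []
for v in volts:
    # Slope of ln(tau_d) vs 1/(kB*T) equals the field-reduced barrier W(E).
    slope, _ = np.polyfit(1.0 / (K_B * temps), np.log(tau_d(v, temps)), 1)
    w_of_e.append(slope)

w0_fit = np.polyfit(volts, w_of_e, 1)[1]    # intercept at V_A = 0
print(f"recovered W0 = {w0_fit * 1e3:.0f} meV")   # -> ~55 meV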
The expected transition speed of the NbO2 film is quite fast because only short-range atomic rearrangement is needed for the transition (Peierls transition). Therefore, a NbO2-based IMT device is well suited as a selector device in an x-point memory array. However, sufficiently high resistivity in the insulating state of NbOx films has not yet been obtained. In principle, the off-current of a NbO2 film could be suppressed below 1 nA at 1 V (area = 50 × 50 nm2, thickness = 25 nm) because the conductivity of the insulating state of NbO2 is about 10^−4 S/cm [31]. The relatively high conductivity of the insulating state of NbOx likely originates from interface defects between the electrode and the NbOx layer. In fact, many defects (grain boundaries, dislocations, and point defects) are observed between the electrode and the MBE-deposited NbO2 film in TEM images (Supplementary Fig. S4). Additionally, these interface defects were observed in sputter-deposited NbOx devices in our previous research [32].
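A quick arithmetic check of this intrinsic limit (a sketch using only the numbers quoted above):

sigma = 1e-4           # S/cm, insulating-state NbO2 conductivity
t_cm = 25e-7           # film thickness: 25 nm in cm
area_cm2 = (50e-7)**2  # 50 x 50 nm^2 electrode area in cm^2

resistance = t_cm / (sigma * area_cm2)   # R = t / (sigma * A)
current_at_1v = 1.0 / resistance
print(f"R = {resistance:.2e} ohm, I(1 V) = {current_at_1v * 1e9:.2f} nA")
# -> about 1e9 ohm and ~1 nA, so any measured leakage well above this level
#    points to a parallel (interface-defect) conduction path.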
Such interface defects can pin the Fermi level between the electrode and the NbOx layer. As a result, the device does not have a sufficiently high Schottky barrier. These defects can be eliminated and a high Schottky barrier can be obtained by inserting a NiOy layer, which consists of NiO and Ni2O3 phases (Supplementary Fig. S1), between the electrode and the NbOx layer (W/NiOy/NbOx/NiOy/W), as shown in Supplementary Fig. S5 [33]. Based on DC I-V characteristics at various temperatures for both devices, the current-temperature dependencies at low field (V = 0.1 V, saturation region) follow the Richardson relation (Eq. (5)), and the effective Schottky barrier can be obtained from the slope of the Richardson plot (Supplementary Fig. S5). W/NiOy/NbOx/NiOy/W devices have a higher Schottky barrier height (ϕB ~ 0.25 eV) than W/NbOx/W devices (ϕB ~ 0.15 eV).
J₀ = A*T² exp(−qϕ_Bp/(kT)),  (5)

where J₀ is the current density in the saturation region, A* the Richardson constant, T the temperature, k the Boltzmann constant, q the electronic charge, and ϕ_Bp the effective Schottky barrier energy. Before comparing the device performance of the W/NiOy/NbOx/NiOy/W device with the W/NbOx/W device, we analyzed the delay time of the W/NiOy/NbOx/NiOy/W device. Interestingly, the zero-field barrier W0 of the W/NiOy/NbOx/NiOy/W device (42-70 meV), extracted from the delay-time Arrhenius plot, also corresponds well to the calculated minimum energy pathway (MEP) between rutile and tetragonal NbO2 during the Peierls transition, which is 43 meV. This value is the same as that obtained from the MBE film analysis (Fig. 2(c)). These results indicate that the barrier layers do not affect the transition mechanism of NbOx. Meanwhile, the interface structure of the NbOx device can control the conductivity of the insulating state of the device. Therefore, we can suppress the high conductivity of the NbOx film by simply inserting NiOy barrier layers without compromising the fast transition characteristics of the NbOx IMT layer. Figure 3(a) shows IMT characteristics after electroforming for both W/NbOx/W and W/NiOy/NbOx/NiOy/W devices. Compared with the W/NbOx/W device, the W/NiOy/NbOx/NiOy/W device exhibited decreased conductivity in the insulating state. The Ion/Ioff ratio improved from >480 for the W/NbOx/W device to >5400 for the W/NiOy/NbOx/NiOy/W device. Both devices have superior endurance, persisting over 10^8 AC cycles, as shown in Fig. 3(b). The W/NiOy/NbOx/NiOy/W device shows very uniform device-to-device and cycle-to-cycle stability over several DC I-V sweeps (Supplementary Fig. S6). Moreover, we measured the transition time and delay time of the W/NiOy/NbOx/NiOy/W device to investigate its temporal characteristics. The device has a transition time under 2 ns and a delay time down to 30 ns for variable voltage ramps (Supplementary Fig. S7). We expect that the delay time of the W/NiOy/NbOx/NiOy/W device can be even shorter for square pulses.
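Returning to the Richardson analysis above, the barrier-height extraction can be sketched numerically as follows (synthetic data generated from Eq. (5) with an assumed 0.25 eV barrier, not the measured current densities):

import numpy as np

K_B = 8.617e-5  # eV/K

def schottky_barrier(temps_K, j0):
    """Effective barrier phi_Bp [eV]: slope of ln(J0/T^2) vs 1/T is -q*phi/k."""
    t = np.asarray(temps_K)
    y = np.log(np.asarray(j0) / t**2)
    slope, _ = np.polyfit(1.0 / t, y, 1)
    return -slope * K_B

# Hypothetical saturation current densities at V = 0.1 V (arbitrary prefactor).
temps = [300.0, 340.0, 380.0, 420.0]
j0 = [2e-4 * T**2 * np.exp(-0.25 / (K_B * T)) for T in temps]
print(f"phi_Bp ~ {schottky_barrier(temps, j0):.2f} eV")   # -> ~0.25 eV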
Since IMT mechanism of NbO x is a second-order structural transition of the Peierls type and involves only very short range atomic displacements, the drift-free characteristic is available in NbO x based device. As a matter of fact, Fig. 4(a) shows that W/NiO y /NbO x /NiO y /W device can recover its insulating state under less than 10 ns. Figure 4(b) illustrates the drift-free operation of the W/NiO y /NbO x /NiO y /W device when V th does not change at different time intervals 34 . These results indicate that the W/NiO y /NbO x /NiO y /W device can be used for fast operating applications.
We also evaluated the feasibility of an x-point memory array using the novel W/NiOy/NbOx/NiOy/W device. The W/NiOy/NbOx/NiOy/W device was connected in series with a TiN/Ti/HfOx/TiN ReRAM device (ReRAM, 1R), which has the DC I-V characteristic shown in Fig. 5(a). The set voltage (Vset) and reset voltage (Vreset) of the ReRAM are about 0.6 V and −1.2 V, respectively. To prevent hard breakdown of the 1R, we set the compliance current to 500 μA during operation. Figure 5(b) shows the DC I-V characteristics of the 1S-1R device with superior DC endurance (>300 cycles). The Vset and Vreset of the 1S-1R were about 1.8 V and −2 V, respectively. The state of the device is determined by applying a read voltage (Vread) of 1.4 V.
We simulated the x-point memory using the novel W/NiOy/NbOx/NiOy/W selector based on the measurements in Fig. 5(b). Since the leakage current of an unselected cell at ½Vread is suppressed to about 300 nA in both the LRS and HRS states by adopting the W/NiOy/NbOx/NiOy/W selector, we demonstrate that the readout margin (Eq. (6)) can improve up to 2^9 vs. 2^1 word lines (W.L.), as shown in Fig. 5(d) [35,36].
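How the suppressed half-bias leakage translates into a larger readable array can be illustrated with a simplified one-bit-line pull-up model (a sketch only: the resistances below are illustrative placeholders, and the exact margin definition of Eq. (6) may differ from this simplification):

def readout_margin(n_wordlines, r_lrs, r_hrs, r_sneak_half, r_pullup):
    """Normalized |V_out(LRS) - V_out(HRS)| for an n x n array."""
    def v_out(r_cell):
        # Selected cell in parallel with (n-1) half-selected sneak paths.
        g = 1.0 / r_cell + (n_wordlines - 1) / r_sneak_half
        r_eff = 1.0 / g
        return r_eff / (r_eff + r_pullup)
    return abs(v_out(r_lrs) - v_out(r_hrs))

# Half-bias sneak resistance per cell from ~300 nA at 0.7 V (= 1/2 V_read).
r_sneak = 0.7 / 300e-9
for n in [2**k for k in range(1, 12)]:
    m = readout_margin(n, r_lrs=3e3, r_hrs=5e6,
                       r_sneak_half=r_sneak, r_pullup=3e3)
    print(n, f"{m:.3f}")
# The largest n that keeps the margin above a chosen threshold (e.g. 10%)
# gives the maximum readable number of word lines.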
Conclusion
We have successfully fabricated polycrystalline NbO2 films using MBE, which do not require an electroforming process. We find that the IMT in NbO2 proceeds as a Peierls phase transition through field-induced nucleation, with the formation of a conductive filament of rutile NbO2 in an insulating host matrix of tetragonal NbO2.
We also showed that the leakage current of the NbO2 IMT device originates from the insufficient Schottky barrier height between the electrode and the NbOx layer caused by interfacial defects. A sufficiently high Schottky barrier and improved IMT characteristics can be obtained by introducing a NiOy layer between the electrode and the NbOx layer. The novel W/NiOy/NbOx/NiOy/W device has a high Ion/Ioff ratio (>5400), high operating temperature (>453 K), fast transition speed (<2 ns), and drift-free operation. We employed the W/NiOy/NbOx/NiOy/W device as a selector on a ReRAM memory cell. Due to the excellent selector characteristics of the W/NiOy/NbOx/NiOy/W device, we show that a significantly improved readout margin (up to 2^9 word lines) is possible in a large x-point memory array.
Methods
First, to analyze the IMT mechanism under an E-field, we fabricated NbOx films using both MBE and RF sputtering. About 25 nm-thick NbOx films were deposited by MBE and RF sputtering on 50 × 50 nm2 TiN bottom electrodes (B.E.). The MBE NbO2 film was deposited at 700 °C; Nb metal was evaporated from an electron beam source, and molecular oxygen at a pressure of 5 × 10^−6 Torr was used. The sputter-deposited NbOx film was deposited at room temperature by RF reactive sputtering with a process gas of Ar/O2 (30 sccm/1.3 sccm), at a working pressure of 5 × 10^−3 Torr and a forward power of 100 W, using a 2-inch Nb metal target. After the NbOx films were deposited, positive photoresist was spin-coated at 3000 rpm for 35 s and baked at 100 °C for 90 s. The photoresist was exposed under a lithography mask with a 50 × 50 μm2 pattern and developed to allow deposition of a contactable top electrode. Afterwards, the W top electrode was deposited by RF sputtering at room temperature with a process gas of Ar (30 sccm), at a working pressure of 5 × 10^−3 Torr and a forward power of 100 W, using a 2-inch W metal target. Secondly, to reduce the leakage current of the NbOx film, a NiOy-barrier-inserted NbOx structure was deposited on a 50 × 50 nm2 W bottom electrode (B.E.). An approximately 2-3 nm-thick NiOy layer was additionally deposited by RF reactive sputtering with a process gas of Ar/O2 (30 sccm/2.0 sccm), at a working pressure of 5 × 10^−3 Torr and a forward power of 30 W, using a 2-inch Ni metal target, as a barrier layer between the W electrodes and the sputtered NbOx layer. The sputtering conditions for NbOx were the same as above. As a result, the W/NiOy/NbOx/NiOy/W device was fabricated, and its electrical characteristics were compared to those of a W/NbOx/W control sample.
"year": 2017,
"sha1": "28a98d37feeb6b79779159e0a01135156bb5b8b2",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-04529-4.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "28a98d37feeb6b79779159e0a01135156bb5b8b2",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": []
} |
Exit point in the strong field ionization process
We analyze the process of strong field ionization using the Bohmian approach. This allows retention of the concept of electron trajectories. We consider the tunnelling regime of ionization. We show that, in this regime, the coordinate distribution for the ionized electron has peaks near the points in space that can be interpreted as exit points. The interval of time during which ionization occurs is marked by a quick broadening of the coordinate distribution. The concept of the exit point in the tunneling regime, which has long been assumed for the description of strong field ionization, is justified by our analysis.
Theory
We recapitulate briefly a few facts constituting the basis of the Bohmian approach to quantum mechanics [19]. Substituting the polar form of the wave function of a system (we consider for simplicity a one-electron system), Ψ(r, t) = R(r, t) exp{iS(r, t)} with R(r, t) = |Ψ(r, t)| and S(r, t) = arg(Ψ(r, t)), into the time-dependent Schrödinger equation and taking real and imaginary parts, one obtains (in atomic units):

∂S/∂t + (∇S)²/2 + V + Q = 0,  (1)

∂R²/∂t + ∇·(R²∇S) = 0,  (2)

where the quantum potential and the velocity field are respectively defined as:

Q = −(1/2)(∇²R)/R,  (3)

v = ∇S.  (4)

The Bohmian interpretation involves assuming that the velocity field (4) generates a family of electron trajectories for an ensemble of particles. At the initial time, t = 0, the coordinates of the particles constituting the ensemble are distributed as prescribed by the usual R²(r, 0) rule of QM. Initial velocities of the particles of the ensemble are given by Eq. (4), evaluated at t = 0. Electron trajectories for t > 0 can be found by integrating Eq. (4) along each trajectory, provided that the velocity field, v(r, t), is known as a function of coordinates and time. Alternatively, one may note that Eq. (1) is a Hamilton-Jacobi equation for a system described by the quantum potential (3). One may, therefore, find Bohmian trajectories by solving Newton's equations of motion that are equivalent to the Hamilton-Jacobi equation (1), with the initial conditions specified above.
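As an illustration of Eqs. (3) and (4), the following Python sketch (one-dimensional for brevity, atomic units; not the production code used here) evaluates the Bohmian velocity field and quantum potential from a wave function sampled on a grid:

import numpy as np

def bohmian_fields(psi, dx):
    """Return (v, Q) on the grid for a complex wave function psi."""
    r = np.abs(psi)
    s = np.unwrap(np.angle(psi))
    v = np.gradient(s, dx)                                 # v = grad S
    q = -0.5 * np.gradient(np.gradient(r, dx), dx) / r     # Q = -R''/(2R)
    return v, q

# Example: a Gaussian wave packet with momentum k0 = 0.5 a.u.
x = np.linspace(-20, 20, 2001)
psi = np.exp(-x**2 / 4) * np.exp(1j * 0.5 * x)
v, q = bohmian_fields(psi, x[1] - x[0])
print(v[1000], q[1000])   # v ~ 0.5 a.u. at the packet centre, Q ~ 0.25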
We consider a hydrogen atom in the field of a laser pulse E z = E 0 f(t)cos ωt, polarized along the z-direction, which we use as a quantization axis. The pulse envelope function is f(t) = sin 2 (πt/T 1 ), where T 1 is the total pulse duration. We performed calculations for pulses with T 1 = 3T and T 1 = 4T, where T = 2π/ω is an optical cycle (o.c.) of the field. We present results for various field strengths and frequencies, corresponding to the tunnelling regime of ionization. The initial state of the system is the ground state of the hydrogen atom. To solve the fully three-dimensional time-dependent Schrödinger equation (TDSE), we employed the procedure described in the works 28,29 . The atom-laser field interaction is described using the length gauge.
Using the time-dependent wave function Ψ(r, t) provided by the TDSE, we can rewrite Eq. (4) in an equivalent way as:

v(r, t) = Re[Ψ*(r, t) p̂ Ψ(r, t)]/|Ψ(r, t)|²,  (5)

where p̂ is the momentum operator. This equation gives us the velocity field as a function of spatial coordinates and time. Since the wave function in our approach is defined on a spatial grid, we obtain the velocity field at the grid points. The velocity field at other points is found by means of the Lagrange interpolation procedure. Due to the symmetry of the problem with respect to rotations around the z-axis, it is sufficient to compute the velocity field in any plane containing the z-axis. We choose the (x, z)-plane for this purpose. For the initial ground state of the hydrogen atom, all the Bohmian trajectories launched at t = 0 have zero velocities. It is a well-known feature of Bohmian QM [19] that the velocity field in a state described by a real wave function is zero. The physical possibility of this state of motion in the Bohmian picture is due to the fact that the force corresponding to the quantum potential (3) vanishes for such states, allowing particles to stay at rest. Having obtained the velocity field v(r, t) in the (x, z)-plane, we launch an ensemble (≈ 5 × 10^5 trajectories) of electron trajectories. The evolution of the trajectories in time is found by numerically integrating the system of differential equations dr/dt = v(r, t) with the initial conditions x(0) = x0, z(0) = z0 in the (x, z)-plane. Some of the trajectories obtained in this way describe electrons remaining bound, while some describe ionized electrons. Two typical examples for different pairs of initial conditions, x0, z0, producing bound and ionized trajectories, are shown in Fig. 1.
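The trajectory propagation step can be sketched as follows (a simplified Python illustration with assumed array layouts; linear grid interpolation via SciPy stands in for the Lagrange interpolation used in the actual calculation):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def make_velocity(xs, zs, ts, v_grid):
    """v_grid: array of shape (nt, nx, nz, 2) holding (vx, vz) at grid points."""
    interps = [RegularGridInterpolator((ts, xs, zs), v_grid[..., k],
                                       bounds_error=False, fill_value=0.0)
               for k in range(2)]
    def v(t, r):
        # r: (m, 2) array of (x, z) positions for m trajectories.
        pts = np.column_stack([np.full(len(r), t), r[:, 0], r[:, 1]])
        return np.stack([f(pts) for f in interps], axis=-1)
    return v

def propagate(r0, v, t0, t1, dt):
    """RK4 integration of dr/dt = v(r, t) for all trajectories at once.
    Fixed step; assumes (t1 - t0) is a multiple of dt."""
    r, t = r0.astype(float).copy(), t0
    while t < t1:
        k1 = v(t, r)
        k2 = v(t + dt / 2, r + dt / 2 * k1)
        k3 = v(t + dt / 2, r + dt / 2 * k2)
        k4 = v(t + dt, r + dt * k3)
        r += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return r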
The overall character of the trajectories can be inferred from the inset in Fig. 1, where we left uncolored the region in the (x, z)-plane from which bound trajectories originate. We define 'ionized trajectories' here as those trajectories for which the distance of the electron from the atomic core at the end of the pulse exceeds a threshold value R min . We found that the particular value of R min is not important, as long as the value of this parameter exceeds atomic dimensions. We use below R min = 10 a.u.
As in ordinary statistical mechanics, an ensemble of particles can be described using distribution functions. At any time t1 > 0, a distribution function ρ(Ω, t1) giving the probability of detecting an electron with coordinates r and velocity v lying inside a region, Ω, of the electron's phase-space can be found as:

ρ(Ω, t1) = N Σ |φ0(x0, z0)|²,  (6)

where the sum runs over the trajectories with (r(t1), v(t1)) ∈ Ω, φ0(r) is the initial ground-state wave function of the hydrogen atom, and N is an overall normalization factor; only trajectories ending in Ω at t = t1 are included in the sum. We impose one further restriction on the trajectories included in the sum (6). We are interested in those members of our ensemble for which ionization has occurred. This means that we must separate the distribution function describing the ionized subsystem from the total ensemble. This separation is necessary if the ionization probability is small and the contribution of the ionized electrons is difficult to see. To separate the ionized trajectories, we use the same criteria we employed above, including in the sum in Eq. (6) only those trajectories for which the distance of the electron from the atomic core at the end of the pulse exceeds the value Rmin. We should note that, with this choice of the parameter Rmin, the electrons which end up in Rydberg atomic states after the end of the pulse are counted as ionized. This procedure agrees with the physical picture we are describing in the manuscript. Our aim is to follow the development of the ionization process in time. We must, therefore, take into account all the electron trajectories for which an ionization event occurred at least once during the time interval of the pulse duration. It has been suggested [30] that the dominant mechanism leading to the population of the Rydberg states in the tunnelling regime is frustrated tunnelling ionization (FTI), a two-step process including tunnelling and subsequent rescattering. The majority of the electrons ending up in Rydberg atomic states must, therefore, undergo ionization during the interval of the pulse duration. For practical computation of the sum in Eq. (6), we launch the trajectories at time t = 0 with the initial conditions in the phase-space region Ω0 = D0 × {0}, i.e. initial coordinates (x0, z0) in a region D0 of the (x, z)-plane, and zero velocities. For the region D0, we take a rectangle in the (x, z)-plane: |z| < 6 a.u., 0 < x < 6 a.u. The rectangle is divided into a number of squares, each with a side length of 0.01 a.u. Similarly, the phase-space volume at the time t = T1 at the end of the pulse is divided into a set of regions Ωi (with each Ωi being a direct product of squares with a side length of 0.01 a.u. in the momentum and coordinate spaces). With phase-space thus discretized, the discretized version of the distribution (6) can be obtained (apart from an overall normalization factor) as the number of trajectories arriving at the time t = T1 in a given region Ωi, weighted with the appropriate scaling factor depending on the coordinate probability distribution in the initial state. We checked that the results we obtain are stable with respect to variations of the discretization parameters.
From the point of view of statistical mechanics, the procedure encapsulated in Eq. (6) is equivalent to solving the Liouville equation for the distribution function describing the ensemble, $\partial \rho/\partial t = \{H, \rho\}$, where {A, B} is a Poisson bracket, $H = \mathbf{v}^2/2 + V(\mathbf{r}, t) + V_Q(\mathbf{r}, t)$ is the Hamiltonian with the quantum potential V_Q defined in Eq. (3), and the distribution function describing the ensemble at the initial time t = 0 is given by ρ(r, v, 0) = |φ_0(r)|² δ(v). We shall be interested not in the full distribution function, but in the reduced quantity W(z, t), describing the probability distribution of the electron's z-coordinate, obtained by integrating ρ over the remaining phase-space variables.
Results and Discussion
Distribution functions, W(z), calculated according to the recipe described above, are shown in Fig. 2 for the tunnelling regime of ionization (Keldysh parameter γ = 0.57 for the field parameters used to obtain the results presented in the figure). One can clearly see that the distribution obtained in the tunnelling regime has a double-peak structure for small t, meaning that ionization predominantly originates from two points along the polarization-vector direction, which can be interpreted as exit points. This observation agrees with FDM.
To see how an ionized electron moves in the laser field, we follow the evolution of W(z) in time. As the top panel of Fig. 2 shows, in the case of tunnelling ionization by a laser pulse with a total duration of 3 optical cycles, the distribution is a narrowly peaked function of z for all t except when the absolute value of the electric field of the pulse reaches a local maximum. In a narrow interval including the local maximum at t ≈ 1.5T, W(z) undergoes a qualitative change: it becomes a broad coordinate distribution, extending far into the region of large positive z-values. This long tail, which the distribution W(z) acquires, ensures that for a short period of time around the local field maximum the probability of finding an electron at distances from the atom exceeding typical atomic dimensions increases dramatically. A natural interpretation of this behavior is to consider it a signature of a burst of ionization in the Bohmian picture. This interpretation is further supported by the behavior of W(z) in the intervals of time containing the secondary field maxima at t = T and t = 2T. In these intervals the distribution W(z) undergoes similar qualitative changes, developing tails extending far into the region of large negative z-values, with the implication that the probability of finding an electron at large distances from the atom rises considerably. This behavior is consistent with the pulse shape shown in the inset in the top panel of Fig. 2: the electric field lowers the barrier in the positive z-direction at t ≈ 1.5T and in the negative z-direction at t = T and t = 2T, thus enabling the electron to escape in these directions.
We have performed analogous calculations for other field parameters (different pulse strengths and durations) and found that the described behavior is typical for the tunnelling regime of ionization, as evidenced by the results shown in Figs 2 and 3. In all cases, the tails in W(z) appear only in relatively short intervals of time, in agreement with the well-known fact that ionization predominantly occurs in short time intervals around the peak field strength. We may interpret the appearance of these tails in the coordinate distribution as a signature of ionization bursts in the Bohmian picture of ionization. The value of z immediately before the time when the tails appear may then be interpreted as the exit point, i.e. the z-value of the electron coordinate at the time when the electron exits from under the barrier. This value is to be understood in the probabilistic sense, as the distribution W(z) at the time before the electron's exit has a finite width. This width is, however, relatively small, which justifies the concept of a well-defined exit point often assumed in simulations 5,7. Figure 4 gives a more detailed view of the formation of the tails in the tunnelling regime, illustrating the evolution of the distribution W(z) for different times around the mid-point of the pulse shown in the inset in Fig. 2. One can observe the fast development of a tail in the coordinate distribution, with W(z) changing from a narrowly to a broadly peaked function of z in a short interval of time around the maximum peak field strength. This agrees with the probabilistic view of the exit time 10. As the present results show, the coordinate of the exit point should also be understood in a probabilistic sense.
To better understand the Bohmian perspective on the development of the ionization process in the tunnelling regime, let us consider in more detail the quantum potential given by Eq. (3). The Bohmian trajectories are real for the entire interval of the pulse duration. Tunnelling in the Bohmian picture occurs not because the trajectory at some point becomes complex (as in the quantum-orbits approach 31,32), but because the quantum potential (3) effectively removes the barrier. For electric fields that are not too strong, when depletion of the ground state can be neglected, the time-dependent wave-function of the system can be written as Ψ(r, t) = φ_0(r)e^{−iεt} + φ_i(r, t), where φ_0(r)e^{−iεt} is the time-evolved ground-state wave-function and φ_i(r, t) describes the ionized wave-packet. The SFA ionization amplitudes, which are the Fourier transforms φ̃(v) of φ_i(r, t), are essentially Gaussian functions of the velocity components 14. The main dependence of the amplitude on the velocity in the polarization direction (which interests us presently) is given, for a hydrogen atom, by the factor exp(−h(γ)v_z²/(2ω)) 14, where h(γ) = arcsinh γ − γ(1 + γ²)^{−1/2} and γ is the instantaneous value of the Keldysh parameter. The characteristic length on which φ̃(v_z) changes appreciably in v-space is therefore a ≈ (2ω)^{1/2} h(γ)^{−1/2}, and the characteristic length on which φ_i(z, t) changes in coordinate space (assuming a Gaussian character of φ̃(v_z)) is 2/a. From this estimate and from Eq. (3), defining the quantum potential, we may deduce that, as long as we stay close to the atomic core, so that |φ_i(r, t)| ≪ |φ_0(r)|, the contribution to the quantum potential due to the ionized wave-packet is confined to the region z ≲ 2a^{−1}. This point is illustrated in Fig. 5, where snapshots of the quantum potential computed according to Eq. (3) are given for several times around the maximum field strength for the pulse whose shape is shown in the inset in the top panel of Fig. 2. The quantum potential V_Q(r, t) = V_Q(0, 0, z, t), evaluated along the laser polarization axis, is shown.
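To make the role of the quantum potential concrete, the snippet below evaluates it on a one-dimensional grid, assuming the conventional Bohmian form V_Q = −(1/2)(d²|Ψ|/dz²)/|Ψ| in atomic units (presumably the content of Eq. (3), which is not reproduced here). The wave-function used, a bound-state caricature plus a weak outgoing packet, is purely illustrative.

```python
# Quantum potential on a 1D grid, assuming the standard Bohmian form
# V_Q = -(1/2) (d^2|psi|/dz^2) / |psi| in atomic units.
import numpy as np

def quantum_potential(psi, dz):
    amp = np.abs(psi)
    # Central-difference Laplacian; np.roll wraps at the edges, which is
    # harmless here because the amplitude decays well before the boundary.
    lap = (np.roll(amp, -1) - 2.0 * amp + np.roll(amp, 1)) / dz ** 2
    return -0.5 * lap / np.maximum(amp, 1e-12)   # guard against nodes

z = np.linspace(-20.0, 20.0, 4001)
dz = z[1] - z[0]
phi0 = np.exp(-np.abs(z))                        # bound-state caricature
packet = 0.05 * np.exp(-(z - 5.0) ** 2 / 8.0 + 1.2j * z)
v_q = quantum_potential(phi0 + packet, dz)
# Near the core V_Q is dominated by phi0; the packet's contribution is
# confined to z of order 2/a, consistent with the estimate in the text.
```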
One can see that, at time t_1 = 1.45T near the pulse midpoint, the quantum corrections effectively remove the barrier in the positive z-direction, so that a classical escape trajectory is possible. (This point is better illustrated in the bottom panel of Fig. 5, which shows the potential curves under magnification.) For times t_2 = 1.6T and t_3 = 1.35T, farther from the pulse midpoint, the barrier is closed. The quantum corrections to the potential are due to the part of the wave-function describing the ionized wave-packet and, using the estimate made above, we find that close to the atomic core these corrections manifest themselves in the region z ≲ 3 a.u. for the field parameters in Fig. 5. These corrections may become important again near the nodes of the wave-function, where the condition |φ_i(r, t)| ≪ |φ_0(r)| is not satisfied. Figure 5 supports these assertions. For z-values outside the range of the quantum corrections due to the ionized wave-packet, V_Q(z, t) is (up to an insignificant constant factor) just the potential E(t_1)z describing the electron's interaction with the instantaneous electric field of the pulse, provided the trajectory does not pass through a node of the wave-function. As Fig. 5 shows, this is the case for the trajectory escaping at t_1 = 1.45T in the positive z-direction. The electron motion along this trajectory after the ionization event can therefore be described considering only the laser field, which is a basic assumption made in the SMM. The SMM estimate for the coordinate of the exit point is z_e = I/E, where I is the ionization potential and E is the z-component of the electric field at the peak strength. This formula gives z_e ≈ 3.6 a.u. for the field parameters in Fig. 4, in good agreement with the location of the peak of the distribution W(z) at the instant when it broadens, which describes (in the picture developed above) the time of ionization.

To summarize, we have performed an analysis of the strong-field ionization process based on the Bohmian approach. An advantage offered by the Bohmian approach, which we have exploited in the present work, is the possibility of using a well-defined notion of an electron trajectory that is valid over the whole interval of the laser-atom interaction, including the interval of sub-barrier motion. After the ionization event, the Bohmian trajectories describing ionized electrons are essentially the classical trajectories describing electron motion in a laser field. This provides a connection between the Bohmian approach and the SMM. Using the Bohmian approach, we defined the distribution of the electron coordinate in the direction of the laser field, which sets the initial conditions for the subsequent classical motion. This distribution undergoes rapid changes at times when ionization occurs, remaining a sharply peaked function of z immediately prior to the ionization event. This can be interpreted as a justification of the notion of the electron coordinate at the exit point.
A question arises: can the results presented above be obtained using the prescriptions of conventional QM? To answer this question, one should note that the statement that the Bohmian approach leads to the same predictions as conventional QM requires qualification. This statement is literally true, i.e. there is a one-to-one correspondence between the predictions of Bohmian and conventional QM, only when the latter provides an unambiguous answer 33. Consider, as an example, the tunnelling-time problem, discussed recently in ref. 27, or the transmission-time problem, discussed in ref. 33. One can propose several definitions of the tunnelling time (e.g., the Larmor time, the Büttiker-Landauer time, the Eisenbud-Wigner time), based on various aspects of the description of motion in conventional QM 27. Analogously, several plausible definitions of the transmission time can be given within the framework of conventional QM 33. The Bohmian approach, on the other hand, leads to definitions of these concepts based on the so-called 'dwell time' (i.e. the time a particle spends inside a given region), which in the Bohmian picture is defined in a natural and essentially unique way. (For discussions of the relations between the Bohmian time and the tunnelling and transmission times, see refs 27 and 33, respectively.) This essential uniqueness is an attractive feature of the Bohmian approach. The situation with the exit point is similar: the Bohmian approach offers the possibility of defining this notion in a natural way. | 2018-04-03T00:33:13.752Z | 2017-01-06T00:00:00.000 | {
"year": 2017,
"sha1": "365472a6a61de2a0d68a2f6bf801d198a6abaaf9",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep39919.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "365472a6a61de2a0d68a2f6bf801d198a6abaaf9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
259263706 | pes2o/s2orc | v3-fos-license | Donanemab exposure and efficacy relationship using modeling in Alzheimer's disease
Abstract INTRODUCTION Donanemab is an amyloid‐targeting therapy that specifically targets brain amyloid plaques. The objective of these analyses was to characterize the relationship of donanemab exposure with plasma biomarkers and clinical efficacy through modeling. METHODS Data for the analyses were from participants with Alzheimer's disease from the phase 1 and TRAILBLAZER‐ALZ studies. Indirect‐response models were used to fit plasma phosphorylated tau 217 (p‐tau217) and plasma glial fibrillary acidic protein (GFAP) data over time. Disease‐progression models were developed using pharmacokinetic/pharmacodynamic modeling. RESULTS The plasma p‐tau217 and plasma GFAP models adequately predicted the change over time, with donanemab resulting in decreased plasma p‐tau217 and plasma GFAP concentrations. The disease‐progression models confirmed that donanemab significantly reduced the rate of clinical decline. Simulations revealed that donanemab slowed disease progression irrespective of baseline tau positron emission tomography (PET) level within the evaluated population. DISCUSSION The disease‐progression models show a clear treatment effect of donanemab on clinical efficacy regardless of baseline disease severity.
(2) neuroinflammation as measured by plasma glial fibrillary acidic protein (GFAP) as a marker for astrocytic activation or proliferation. 5 Exploratory post hoc analyses showed that donanemab treatment significantly reduced the concentrations of plasma p-tau217 and plasma GFAP compared to placebo. 6 Amyloid plaque removal and the subsequent downstream changes that result from donanemab treatment are presumably responsible for the clinical benefits, supporting the amyloid cascade hypothesis. In the TRAILBLAZER-ALZ trial, donanemab resulted in a 32% slowing of clinical decline on the Integrated Alzheimer's Disease Rating Scale (iADRS) after 76 weeks compared to placebo. 2 Donanemab dosing decisions for the phase 2 TRAILBLAZER-ALZ trial and the ongoing phase 3 TRAILBLAZER-ALZ 2 trial were based on the phase 1b trial and pharmacokinetic/pharmacodynamic (PK/PD) modeling. 7 In the population PK analysis, donanemab serum concentration-time profiles were best described using a two-compartment model with first-order elimination. In the exposure-response (amyloid plaque) model, the donanemab serum concentration associated with amyloid plaque reduction was found to be 4.43 μg/mL, and at least 80% of participants maintained serum concentrations above this threshold. Simulations showed that at least 75% of participants reached amyloid plaque clearance (<24.1 Centiloids) by 76 weeks of treatment. 8 Here, we further explore the PK/PD of donanemab by characterizing the relationships between (1) donanemab exposure and amyloid plaque with plasma p-tau217 and plasma GFAP, as well as (2)
Participants and study design
The PK/PD models in these analyses are based on data from the donanemab phase 1b study (NCT02624778) and the phase 2 TRAILBLAZER-ALZ study (NCT03367403). 2,7 Both were randomized, double-blind, placebo-controlled clinical trials. The phase 1b study enrolled participants with mild cognitive impairment due to AD or mild to moderate dementia due to AD. TRAILBLAZER-ALZ enrolled participants with early symptomatic AD (mild cognitive impairment or mild dementia due to AD).
Key inclusion criteria included a gradual and progressive change in memory for at least 6 months.
Consent statement
Written informed consent to participate was obtained from all participants or their legally authorized representatives or caregivers.
Biomarker analysis
Amyloid and tau PET scans, plasma p-tau217, and plasma GFAP concentrations were collected as described previously. 6
Clinical assessment
In TRAILBLAZER-ALZ, change in clinical symptoms (cognition and function) was measured using the iADRS and the Clinical Dementia Rating scale-Sum of Boxes (CDR-SB). 2 The iADRS is an integrated assessment of cognition and daily function comprising items from the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog13) and the Alzheimer's Disease Cooperative Study-instrumental Activities of Daily Living (ADCS-iADL). The total score ranges from 0 to 144, with lower scores reflecting greater impairment. 11 CDR-SB scores range from 0 to 18, with higher scores reflecting greater disease severity. 12
Plasma p-tau217 and GFAP model development
Individual participant-observed longitudinal donanemab exposure was used in the population PK/amyloid plaque model, 8 where it was found that maintaining donanemab exposure above a certain threshold produced an indirect response (i.e., an effect that can be time-lagged and/or persist even in the absence of serum exposure) of sustained amyloid plaque reduction over time, which in turn induced downstream reductions in plasma p-tau217 and GFAP. Individual participant parameters from the previously reported final population PK model and the exposure-response amyloid plaque model 8,9 were added to the plasma p-tau217 and plasma GFAP data sets. Two separate indirect-response models were used to fit the plasma p-tau217 and plasma GFAP data over time using mixed-effects non-linear regression with individual participant data from TRAILBLAZER-ALZ. Individual participant baseline plasma p-tau217 concentration and the estimated rate of plasma p-tau217 formation were parameters in the model. Two models were tested to predict the plasma p-tau217 reduction: a treatment-response model based on donanemab dosing and a model including the impact of change in amyloid levels.
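As a concrete illustration of this indirect-response structure, the sketch below simulates a plasma p-tau217 trajectory whose production rate is inhibited by the relative reduction in amyloid plaque. All rate constants, the baseline value, and the amyloid time course are invented for illustration and are not the estimates from the trial models, which were fitted in a population mixed-effects framework.

```python
# Indirect-response (turnover) sketch: amyloid reduction inhibits the
# production of plasma p-tau217. All values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

K_OUT = 0.05      # 1/week, first-order loss rate (illustrative)
BASELINE = 0.35   # baseline plasma p-tau217 (arbitrary units)
SLOPE = 1.0       # gain linking amyloid reduction to inhibition

def amyloid_reduction(t):
    """Illustrative relative change from baseline in amyloid plaque."""
    return 0.9 * (1.0 - np.exp(-t / 30.0))

def rhs(t, y):
    k_in = K_OUT * BASELINE                      # stationary at baseline
    inhibition = max(1.0 - SLOPE * amyloid_reduction(t), 0.0)
    return [k_in * inhibition - K_OUT * y[0]]

sol = solve_ivp(rhs, (0.0, 76.0), [BASELINE], dense_output=True)
print(np.round(sol.sol(np.arange(0, 77, 19))[0], 3))  # weeks 0, 19, ..., 76
```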
A basic indirect-response model in which donanemab treatment alters the production of plasma GFAP was used to fit the plasma GFAP data over time using the FOCEI method. The model was parameterized in terms of individual participant baseline GFAP concentration and the estimated rate of GFAP formation. Individual post hoc participant parameters from the final population PK and amyloid plaque models were added to the GFAP data set to obtain predicted drug concentrations and amyloid levels for individual participants. A treatment-effect model driven by donanemab dosing information and by the impact of change in amyloid PET (a relative change from baseline) was evaluated as a predictor reducing the rate of GFAP formation.
Final model development included covariate selection using a stepwise forward-inclusion, backward-deletion process. Linear, power, and exponential covariate relationships were evaluated. The forward-inclusion and backward-deletion criteria were p < 0.01 and p < 0.001, respectively. Covariates tested on the estimated baseline plasma p-tau217 concentration and treatment-effect parameters included entry age, entry weight, apolipoprotein E (APOE) ε4 carrier status, gender, race, treatment-emergent anti-drug antibody status, time since onset of AD symptoms, time since AD diagnosis, and baseline tau PET SUVR.
Covariates tested on the baseline GFAP concentration and treatment-effect parameters included entry age, gender, entry weight, estimated glomerular filtration rate (eGFR), baseline tau PET SUVR, and baseline plasma p-tau217.
The models were evaluated using standard goodness-of-fit plots and visual predictive checks.
Disease progression model development
Two disease-progression models were developed with TRAILBLAZER-ALZ data for CDR-SB and iADRS using an identical approach.
A Richards logistic model was used to describe the non-linear disease progression. 13 Beta regression was used to account for decreasing variance in the residual error as data approached the boundaries (0-144 for iADRS and 0-18 for CDR-SB). [13][14][15] A treatment-effect model using the donanemab dosing information was tested. Another model including the impact of change in amyloid levels was also tested, as described previously. 9 In the final model development, covariates were tested on the baseline score, disease-progression, and drug-effect parameters. Covariates tested included APOE ε4 carrier status, baseline tau level, age, gender, time since onset of AD symptoms, time since AD diagnosis, and baseline C-reactive protein level. In addition, the influence of anti-drug antibodies (ADAs) was tested on the treatment-effect term of the model but was not significant. Subsequently, a power calculation was conducted to explore the degree of effect that could be detected with the current data set, utilizing all available titer data and assuming that the impact on the treatment effect decreased linearly with the log of the titer. The parameter associated with the change in treatment effect with log(titer) was simulated as either 0.05 or 0.15. This analysis was conducted only with the iADRS model, as this was the primary endpoint of the study.
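A minimal sketch of such a Richards (generalized logistic) trajectory, with treatment modeled as a proportional reduction in the progression rate, is given below. Shape parameters and rates are illustrative; only the 32% slowing figure is borrowed from the trial report, and the beta-regression residual-error component is omitted.

```python
# Richards (generalized logistic) disease-progression sketch for CDR-SB
# (bounded 0-18); treatment proportionally reduces the progression rate.
import numpy as np

def richards(t, s0, s_max, rate, nu):
    """Trajectory with S(0) = s0 rising toward the asymptote s_max."""
    q = (s_max / s0) ** nu - 1.0
    return s_max / (1.0 + q * np.exp(-nu * rate * t)) ** (1.0 / nu)

weeks = np.array([0.0, 36.0, 76.0, 264.0])
placebo = richards(weeks, s0=3.5, s_max=18.0, rate=0.004, nu=0.8)
treated = richards(weeks, s0=3.5, s_max=18.0,
                   rate=0.004 * (1.0 - 0.32), nu=0.8)   # 32% slowing
print(np.round(placebo - treated, 2))  # placebo-treatment gap grows in time
```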
A stepwise forward-inclusion, backward-deletion process was used to select covariates. Linear, power, and exponential covariate relationships were evaluated. Forward-inclusion and backward-deletion criteria were both p < 0.01.
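The decision rule in each step of such a procedure can be expressed as a likelihood-ratio test on the change in the objective function (−2 log-likelihood). The sketch below, with invented objective-function values, illustrates a single forward-inclusion step at the stated α = 0.01.

```python
# One forward-inclusion step as a likelihood-ratio test: a one-parameter
# covariate is kept if the drop in the objective function (-2 log-likelihood)
# is significant at alpha = 0.01. The OFV numbers are invented.
from scipy.stats import chi2

ALPHA = 0.01

def lrt_p(ofv_reduced, ofv_full, df=1):
    return chi2.sf(ofv_reduced - ofv_full, df)

def forward_step(base_ofv, candidate_ofvs):
    best = min(candidate_ofvs, key=candidate_ofvs.get)
    return best if lrt_p(base_ofv, candidate_ofvs[best]) < ALPHA else None

# Adding 'age' drops the OFV by 9.1 points (p ~ 0.003 < 0.01), so it is kept.
print(forward_step(1520.4, {"age": 1511.3, "baseline_tau": 1519.0}))
```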
The model was evaluated using standard goodness-of-fit plots and visual predictive checks.
Participants
The data sets used in the population pharmacokinetic analyses included participants from the phase 1 or the phase 2 (TRAILBLAZER-ALZ) clinical trials assessing donanemab. 2,7 (Table 1: Baseline participant characteristics.)
Plasma p-tau217 reduction model
As reported previously, there was a statistically significant association between baseline tau PET SUVR and baseline plasma p-tau217 concentration (p < 0.001). 6 Higher baseline plasma p-tau217 concentrations were associated with higher baseline tau PET SUVR.
Therefore, baseline tau PET was included as a covariate in the plasma p-tau217 reduction model. None of the other covariates tested met the inclusion criteria.
A comparison of the observed plasma p-tau217 concentrations with the modeled data suggests that the model adequately describes the data (Figure S1). There was a small number of samples with high concentrations, which widens the 95th percentiles for both the placebo and treatment groups. The treatment effect is observed in all quartiles of baseline plasma p-tau217 (analyzed as a continuous variable), supported by the observation that the estimates of between-participant variability in baseline plasma p-tau217 and treatment effect are not correlated.
Plasma GFAP reduction model
An indirect-response model was found to adequately describe the time course of plasma GFAP concentrations in placebo- and donanemab-treated participants. The final model parameters are found in Table 2.
Both a treatment-effect model driven by donanemab dosing information and a model in which donanemab treatment decreased amyloid load (a relative change from baseline) and thereby reduced the rate of GFAP formation described the data well. Although there was no statistically significant decrease in the objective function with the latter model, it was selected as final due to the known positive correlation between amyloid load and plasma GFAP. 16 Statistically significant associations of age and body weight with baseline GFAP were identified (p < 0.001). No other covariates tested met the inclusion criteria for the final model.
A visual predictive check suggests that the model adequately describes the data (Figure S1). A bootstrap analysis suggests that the parameters are well estimated (Table 2). For the disease-progression analysis, a model in which donanemab treatment decreased amyloid load (a relative change from baseline) was evaluated as a predictor of disease progression on iADRS. 9 Parameter estimates of this model are presented in Table S1.
Simulations
Change in plasma biomarker concentration over time with placebo or donanemab treatment was simulated using the plasma biomarker models based on the change in amyloid (Figure 1). Simulations using the iADRS disease-progression model show that slowing of the disease-progression rate compared to placebo is maintained after stopping donanemab treatment (Figure 3A,B). Using the disease-progression model, simulations were carried out on the absolute iADRS scores for placebo and 76-week donanemab treatment (Figure 3C), which show that slowing of the disease-progression rate with donanemab treatment results in an increasing difference compared to placebo over time (simulation for 264 weeks) (Figure 3D).
DISCUSSION
The models used in these post hoc exploratory analyses of phase 2 data were built upon the framework of the previously described population PK and donanemab exposure-response (amyloid reduction) models. 8,9 Here, we have shown how modeling can be used to describe the relationship between donanemab exposure, AD biomarkers, and clinical scales measuring cognition and function.
Plasma p-tau217 is elevated in patients with AD. 18 We reported previously that a decrease in plasma p-tau217 was observed following donanemab treatment in TRAILBLAZER-ALZ. 6 Here, two models were evaluated to examine the relationship between donanemab treatment and plasma p-tau217 concentration. Between a model based on a simple treatment effect and one based on change in amyloid plaque levels, the latter better predicted the change in plasma p-tau217. The simple treatment-effect model, although describing the data, did not offer insight into the mechanism behind the observed reduction in plasma p-tau217. In the indirect-response model, amyloid plaque reduction, which can be time-lagged and/or persist over time, affects the synthesis rate of plasma p-tau217. This supports the hypothesis that plasma p-tau217 reduction is driven by the reduction in amyloid plaque level. This finding is consistent with donanemab's known mechanism of action, suggesting that donanemab-induced amyloid plaque reduction results in further downstream changes and disease modification. No covariates were identified that influenced the relationship between amyloid PET and plasma p-tau217 over time.
However, a statistically significant relationship was identified between baseline plasma p-tau217 and baseline tau PET SUVR (based on PERSI reference region), with higher plasma p-tau217 concentrations associated with higher values of tau SUVR, as published previously. 6 These results indicate that plasma p-tau217 concentrations are higher in more advanced disease states, suggesting that the decrease in plasma p-tau217 following donanemab treatment may represent slowing of disease progression.
Plasma GFAP is associated with AD pathology and has been shown to predict the progression of cognitive decline. 5,[19][20][21] It has also been reported that plasma GFAP levels decrease as eGFR levels and body weight increase, whereas plasma GFAP levels increase with age. 22 Here we show that an indirect-response model, where donanemab treatment decreased amyloid load (a relative change from baseline) and reduced the rate of GFAP formation, described the data well. A statistically significant relationship was identified between baseline plasma GFAP and age and weight at entry, with higher plasma GFAP concentrations associated with older age and lower weight. On forward search there was also a significant relationship between baseline plasma GFAP and eGFR and baseline p-tau217, but these were not retained in the final model due to these covariates not meeting the pre-specified criteria for reducing the between-participant estimate of baseline plasma GFAP. The baseline disease state was not evaluated as a covariate and is a limitation of this model, as plasma GFAP levels are altered early in the disease process. 23 These results indicate that baseline plasma GFAP concentrations are higher in older participants and are lower in participants with higher weight. Results suggest that the decrease in amyloid plaque load following donanemab treatment leads to a reduction in plasma GFAP.
In the TRAILBLAZER-ALZ trial, change in iADRS was the primary outcome measure and change in CDR-SB was a secondary clinical outcome. 2 Here, we compared the estimated treatment effect with another previously reported disease-progression model based on change in amyloid plaque level. 9 Both models indicate that donanemab slows the rate of disease progression, and a simulation up to 264 weeks predicts that donanemab-treated participants will decline more slowly than placebo-treated participants, with an increasing difference from placebo after completion of 76 weeks of treatment.
Using the iADRS disease-progression model based on amyloid plaque reduction, we reported previously that simulating the maximum percent decrease in amyloid plaque level would result in a significant reduction in the disease-progression rate. 9 Here we further use this model to show that slowing of the disease-progression rate compared to placebo is maintained after stopping donanemab treatment. This is possibly due to the slow re-accumulation of amyloid once it has been removed by donanemab. 9 The findings from these disease-progression models support the amyloid cascade hypothesis. If the buildup of amyloid plaques in the brain initiates a series of downstream changes that result in worsening cognition, then removal of amyloid plaques should result in slowing of disease progression, as predicted here. This finding will be evaluated further with data from TRAILBLAZER-ALZ 2, where the number of noncarriers is expected to be larger compared with the TRAILBLAZER-ALZ study.
There were limitations to the development of these models. First, these results are from post hoc exploratory analyses 6,9 and from a comparatively small number of participants; therefore, the results will need to be confirmed with data from the larger phase 3 trial. In addition, the disease-progression models were built only with data from the TRAILBLAZER-ALZ study, which used a single dosing regimen. This
| 2023-06-28T13:09:28.314Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "4404dcf16c22e222dc0ef8bf62d42de8dd458369",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "4404dcf16c22e222dc0ef8bf62d42de8dd458369",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
121492292 | pes2o/s2orc | v3-fos-license | Sensor/Actuator Networks and Networked Control Systems
Introduction
The last two decades have witnessed a great deal of interest in sensor/actuator networks (SANs) or, more generally, networked control systems (NCSs). Compared to conventional system architectures, NCSs have numerous advantages, such as reduced system wiring; easier design, diagnosis, and maintenance; lower cost; and increased flexibility, reliability, and safety. As a result, NCSs have been widely applied in many areas, for example, automobiles, aircraft, and spacecraft, autonomous vehicles, transportation systems, power systems, remote monitoring and data acquisition systems, chemical processes, and many manufacturing plants.
Research on NCSs has heretofore focused mainly on several basic communication constraints that can cause performance degradation or loss of stability, such as network-induced delays, packet dropouts, data corruption or disordering, and data-rate limitation and quantization effects. The medium access constraint is another important issue worthy of intensive investigation; it refers to the fact that the network cannot accommodate all the nodes (sensors, actuators, subsystems, etc.) simultaneously at any time. There are other open issues awaiting investigation, too.
As a special type of NCS, sensor/actuator networks typically comprise a large number of devices with advanced sensing, communication, computation, and mobility capabilities, which demand nontrivial energy. In addition, the batteries powering these devices have limited capacity and cannot be replaced or recharged conveniently. As a consequence, many interesting and challenging issues remain open, for example, localization, coverage, and routing of mobile sensor/actuator networks.
This special issue focuses on state-of-the-art research and development in the analysis and design of networked control systems, as well as theoretical and technological advances in sensor and actuator networks. The intention of this special issue was to provide researchers a forum to share their latest results on filtering, estimation, control, and other aspects of sensor/actuator networks and networked control systems, and to give readers an opportunity to overview the latest achievements on some of the interesting issues this field has been encountering.
In the next section, we give a brief description of the papers in this special issue.
An Overview of the Special Issue
This special issue comprises twenty-six papers, which were carefully selected from many submissions through a rigorous peer-review process. Roughly, the articles in this special issue can be classified into three topics: control and estimation of NCSs, cooperative control and filtering of multiagent systems, and theoretical and practical issues in sensor and actuator networks.
Falling within the topic area of control and estimation of networked systems are seven papers, dealing with stabilization or estimation problems of systems subject to constraints such as delay, dropout, and quantization, including applications in various industrial fields. To name one, the paper entitled "H∞ guaranteed cost control for networked control systems under scheduling policy based on predicted error," by Q. Zhu et al., studies a scheduling policy based on model prediction errors with the aim of reducing energy consumption and network conflicts at the actuator node. The actuator nodes are assumed to have limited energy and a high collision probability, and the sensor nodes are responsible for model-based state prediction. The transmission control strategy is essentially an event-triggered one; that is, the control signal is transmitted only when the prediction error violates a threshold set by the controller. The design method of the guaranteed cost controller is presented, taking parameter uncertainty and long time delays into account.
Another interesting paper on this topic is "Control and optimization of network in networked control system," by Z. Wang and H. Sun, which investigates the relationship between the quality of performance (QoP) of control systems and the quality of service (QoS) of the communication network. It presents an idea for avoiding network congestion from the viewpoint of control theory, which distinguishes this work from existing results. Specifically, the congestion and the bandwidth are regarded as the state and control variables, respectively. Using these variables, a linear time-invariant system model is established to describe the connection between the congestion state and the network bandwidth. A linear quadratic method is introduced to eliminate network congestion by allocating bandwidth dynamically.
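To make this formulation concrete, the toy sketch below treats congestion as a scalar state driven by bandwidth as the control and computes the linear quadratic feedback by iterating the discrete Riccati equation. The dynamics coefficients and cost weights are invented for illustration and are not taken from the paper.

```python
# Toy LQ bandwidth allocation: scalar congestion state x, bandwidth control
# u, dynamics x[k+1] = a*x[k] + b*u[k]. All coefficients are illustrative.
a, b = 1.05, -0.5   # congestion grows if untended; bandwidth relieves it
q, r = 1.0, 0.1     # weights on congestion and on bandwidth usage

p = q               # fixed-point iteration of the discrete Riccati equation
for _ in range(500):
    k = b * p * a / (r + b * p * b)
    p = q + a * p * a - a * p * b * k

x = 5.0             # initial congestion level
for step in range(5):
    u = -k * x      # dynamically allocated bandwidth
    x = a * x + b * u
    print(f"step {step}: congestion {x:.3f}, bandwidth {u:.3f}")
```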
On the other hand, a couple of papers focus on state estimation and filtering problems in NCSs or SANs. In "Distributed H∞ sampled-data filtering over sensor networks with Markovian switching topologies," by B. Yang et al., a distributed sampled-data filtering problem is studied for sensor networks with stochastically switching topologies and transmission delay. The network topology switching is triggered by a Markov chain. A distributed filter structure is given, in which each sensor has access to the measurements of its neighbor nodes. The distributed sampled-data filtering problem is transformed into the stability problem of a Markovian jump error system. Then, in the context of mean-square stability analysis and using the Lyapunov-Krasovskii method, a topology-dependent sampled-data filtering method is obtained.
Along this line, there is another paper, "Variance-constrained robust estimation for discrete-time systems with communication constraints," by B. Wang et al., which is concerned with the filtering problem in NCSs subject to measurement quantization, random transmission delay, and packet loss. The aim of this paper is to design a linear filter such that, for all the communication constraints, the error state of the filter is mean-square bounded and the steady-state variance of the estimation error of each state variable is no larger than a prescribed bound.
Another topic of this special issue regards theoretical developments in multiagent systems, with emphasis on consensus control and consensus filtering problems. There are in total four papers on this topic, focusing, respectively, on the latest advances in cooperative filtering, control and tracking of multiagent systems, and an application to an extrusion-machine production process. In "Consensus tracking of multiagent systems with time-varying reference state and exogenous disturbances," by H. Yang et al., tracking control of multiagent systems is studied. A path-following algorithm with a time-varying reference state is proposed, and the path-tracking performance of multiagent systems with exogenous disturbances is analyzed. A disturbance observer-based control method is developed, which can guarantee asymptotic consensus of multiagent systems under either fixed or switching topologies, regardless of the time-varying reference state and the exogenous disturbances.
Dr. Y. Liu et al. contributed a noteworthy work, "Distributed Kalman-consensus filtering for sparse signal estimation," which presents a Kalman filtering-based distributed algorithm for sparse signal estimation. The authors suggest a pseudomeasurement-embedded Kalman filter rebuilt in information form and an improved parameter-selection approach. By introducing the pseudomeasurement technique into the Kalman-consensus filtering problem, a distributed estimation algorithm is developed, which fuses the measurements from different nodes in the network and hence enables all filters to reach a consensus on the estimate of the sparse signals.
Furthermore, we have seven papers studying a variety of theoretical issues in sensor and actuator networks (including general complex networks) and eight papers discussing various applications. To save space, we mention only a few of them here. For instance, in "Energy efficient low-cost virtual backbone construction for optimal routing in wireless sensor networks," by K. M. Pitchai and B. Paramasivan, an efficient weighted connected dominating sets (CDS) algorithm is presented for constructing a low-cost virtual backbone with hop spanning ratio and a minimum number of dominators. The approach has three phases, with the initial phase revoking a partial CDS tree from a complete CDS tree and the second and final phases giving the CDS algorithm by determining the dominators using an iterative process.
K. Zhao et al. proposed, in "Cooperative transmission in mobile wireless sensor networks with multiple carrier frequency offsets: a double-differential approach," a relay selection-based double-differential cooperative transmission scheme, in which the best relay sensor node is selected to forward the source sensor node's signals to the destination node under the detect-and-forward (DetF) protocol. Assuming a Rayleigh fading environment, closed-form expressions for the outage probability and average bit error rate of the scheme are first derived. Then, expressions for the asymptotic outage probability and average bit error rate at large signal-to-noise ratio (SNR) are presented. Finally, a simple analytical solution to the optimal power allocation problem is obtained by minimizing the average bit error rate.
As transmission security is very important, in "Applying 3D polygonal mesh watermarking for transmission security protection through sensor networks," Dr. R. Hu et al. discuss the problem of copyright protection and digital rights management. In this paper, a blind watermarking algorithm is proposed for security protection when transmitting 3D polygonal meshes through sensor networks. The presented method is based on selecting prominent feature vertices (prongs) on the mesh and then embedding the same watermark into their neighborhood regions. The embedding algorithm is based on modifying the distribution of vertex norms by using quadratic programming. Decoding results are obtained by a majority-voting scheme over the neighborhood regions of the prongs. In situations where cropping cannot remove all prongs, robustness against the cropping attack is achieved both theoretically and experimentally, showing that the method provides a 3D polygonal watermarking solution with the potential to withstand a variety of attacks.
Concluding Remarks
Due to space limitation, we cannot introduce all the papers in this special issue one by one with more details.However, we do hope that this special issue contains useful information that can help motivate more researchers to contribute to this fascinating area. | 2018-12-20T15:28:36.358Z | 2014-08-27T00:00:00.000 | {
"year": 2014,
"sha1": "8693446a2df081d13f93e9ea5bf1cd4ef343c31d",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2014/805380.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8693446a2df081d13f93e9ea5bf1cd4ef343c31d",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
234059050 | pes2o/s2orc | v3-fos-license | Aquifer Monitoring Using Ambient Seismic Noise Recorded With Distributed Acoustic Sensing (DAS) Deployed on Dark Fiber
Groundwater is a critical resource for human activities worldwide, and a vital component of many natural ecosystems. However, the state and dynamics of water‐bearing aquifers remain uncertain, mostly due to the paucity of subsurface data at high spatial and temporal resolution. Here, we show that analysis of infrastructure‐generated ambient seismic noise acquired on distributed acoustic sensing (DAS) arrays has potential as a tool to track variations in seismic velocities (dv/v) caused by groundwater level fluctuations. We analyze 5 months of ambient noise acquired along an unused, 23 km‐long telecommunication fiber‐optic cable in the Sacramento Valley, CA, a so‐called “dark fiber." Three array subsections, ∼6 km apart, are processed and the stretching technique is applied to retrieve daily dv/v beneath each location. Near the Sacramento river, dv/v variations in the order of 2%–3% correlate with precipitation events and fluctuations in river stage of ∼1.5 m. In contrast, regions away (2.5 km) from the river do not experience large dv/v variations. These observations reveal short‐scale spatial variability in aquifer dynamics captured by this approach. Dispersion analysis and surface wave inversion of noise gathers reveal that seismic velocity perturbations occur at depths of 10–30 m. Rock physics modeling confirms that observed dv/v are linked to pore pressure changes at these depths, caused by groundwater table fluctuations. Our results suggest that DAS combined with ambient noise interferometry provides a means of tracking aquifer dynamics at high spatial and temporal resolutions at local to regional scales, relevant for effective groundwater resource management.
Within the last few years, seismic methods have emerged as an attractive tool for monitoring aquifer dynamics. In particular, several studies have demonstrated the feasibility of using continuous records of ambient seismic noise for monitoring small changes in subsurface seismic velocities through time that are related to variations of the groundwater table (Clements & Denolle, 2018; Lecocq et al., 2017; Tsai, 2011). Techniques such as coda wave interferometry (Sens-Schönfelder & Wegler, 2006; Snieder et al., 2002) have become a useful tool for detecting small induced changes; given that coda waves are multiply scattered, they are assumed to have sampled the medium many more times than ballistic waves, and are hence more sensitive to very small changes in velocities. These studies commonly use seismic stations sparsely distributed throughout hydrological basins, with separations of several kilometers, which resolves regional changes along the paths connecting pairs of stations. Ambient seismic noise generated by natural sources is commonly exploited; it is characterized by weak signals at low frequencies (< 4 Hz) that have to be stacked for long periods of time (i.e., days to months), yielding monthly or seasonal changes. A few other studies have demonstrated the potential of using infrastructure-generated noise (frequencies on the order of 5-30 Hz) for local monitoring of natural and artificial changes in groundwater levels with spatial resolution on the order of several tens of meters and temporal sampling of days (Fores et al., 2018; Voisin et al., 2016, 2017). Other techniques, such as seismic interferometry using ballistic waves as opposed to coda waves (Garambois et al., 2019), and single-station methods (Kim & Lekic, 2019), have recently emerged. One of the main objectives of these approaches is exploiting P-wave energy for better localization and depth resolution of velocity changes caused by aquifer dynamics.
Whereas all these approaches are successful in providing information on aquifer dynamics at intermediate to large scales, there is a clear need for a tool that enables efficient monitoring of groundwater dynamics at short spatial scales of a few meters and high temporal resolution of hours to days, while covering large areas for regional characterization. In this study, we show how distributed acoustic sensing (DAS) arrays deployed on unused telecommunication fiber-optic cables can bridge the gap between local and regional studies by continuously monitoring daily variations in groundwater table levels at scales of a few meters over distances of tens of kilometers. DAS systems probe conventional fiber-optic cables buried in the ground with coherent laser pulses and record the phase of the returning light, backscattered at impurities naturally occurring within the fiber. Changes in the optical phase of consecutive backscattered pulses are measured, which are proportional to changes in longitudinal strain along finite sections of the fiber, referred to as the gauge length (Parker et al., 2014). In this way, DAS technology effectively transforms fiber-optic cables into arrays of thousands of single-component seismic sensors by measuring strain-rate variations induced by vibrations impacting the cable. DAS enables recording of high-density (∼1 m) seismological data at sampling frequencies that range from the mHz to the kHz, with apertures of 20-30 km, all with a single cable. These capabilities highlight the advantages of using DAS as a seismic sensor in regional, long-term monitoring experiments where obtaining high-resolution subsurface data is critical, an endeavor that would be logistically complicated and prohibitively expensive with traditional sensors. An increasing number of geophysical studies have already successfully applied DAS to active-source imaging in borehole deployments (Daley et al., 2013, 2016; Mateeva et al., 2013) and to recording earthquakes and ambient seismic noise on horizontally deployed cables (e.g., Dou et al., 2017; Williams et al., 2019). Recent work has demonstrated that DAS can be successfully deployed on existing subsurface telecommunication fiber-optic cables that are currently not being used for data transmission, both onshore and offshore (Jousset et al., 2018; Lindsey et al., 2019; Sladen et al., 2019; Williams et al., 2019). These network components, referred to as "dark" fiber, are widespread and are often available to be leased or purchased from commercial providers. This opens up the ability to record continuous seismic data virtually everywhere fiber-optic cables exist, provided that the cables are buried and coupled to the ground and the DAS interrogator is in range and can be powered. The use of DAS on dark fiber networks significantly reduces the cost, time, and effort of deployment. In only a few hours, the system can be set up to continuously measure the seismic wavefield at tens of thousands of locations.
In this study, coda wave interferometry techniques are applied to 4 months of ambient noise data acquired on a ∼23 km-long DAS dark fiber array to the north of the city of Sacramento, in California's Central Valley. Changes in subsurface seismic velocities (dv/v) are estimated at three locations along the array with the goal of investigating not only temporal but also spatial variability within the basin. By exploiting the high spatial density provided by DAS (2 m sampling, 10 m gauge length) and the high frequencies contained in infrastructure noise records (∼4-20 Hz), changes in seismic velocities can be spatially localized beneath each subsection of the array under analysis. Correlations between seismic velocity changes and variations in groundwater table dynamics and related changes in pore pressure are investigated by comparing our seismic observations with precipitation and river-stage measurements, and validated by a rock physics modeling exercise. Our results suggest that ambient seismic noise interferometry applied to noise records acquired on DAS arrays, potentially deployed on dark fiber networks, is a promising tool for monitoring groundwater table variations at spatiotemporal scales relevant to achieving sustainable management of groundwater resources.
Study Site
The data analyzed in this study was acquired as part of the Fiber-optic Sacramento Seismic Array (FOSSA) experiment . This experiment took place between July 2017 and March 2018 within the southern portion of the Sacramento River Valley (California), which constitutes the northern branch of the Central Valley ( Figure 1a). The study site is situated between the cities of West Sacramento and Woodland, straddling urban and agricultural areas ( Figure 1b).
Geologically, our study area is located on the floodplain of the Sacramento River. The shallow stratigraphy is composed of recent Quaternary alluvial fan, river, flood basin, and stream channel deposits that constitute a mixture of fine sands, gravels, clays, and silts, typically less than 150 ft (∼45 m) thick (Olmsted & Davis, 1961). These deposits contain the shallow aquifer system within the Valley, which is typically unconfined and highly heterogeneous due to the interfingering of coarse- and fine-grained sediments. This alluvium blankets the Sacramento Valley, covering the deeper aquifer system contained in the Pliocene Tehama Formation, which becomes slightly more confined as depth increases. These deposits are the result of alluvial fans that formed at the mouths of creeks and streams as a consequence of the uplift of the Coast Ranges to the west. The fans broadened as they extended eastwards, coalescing with adjacent fans. Lithologically, these deposits are mostly composed of pale green, gray, and tan sandstones and siltstones, with lenses of crossbedded pebble and cobble conglomerates. In our study area, which is located approximately along the axis of the Valley, sediments are typically fine-grained and interfinger with deposits derived from the Sierra Nevada. Based on correlations with borehole logs from natural gas production wells, in this region of the Valley the Tehama Formation can be as thick as 2,500 ft (∼760 m; Olmsted & Davis, 1961). Similarly to the Quaternary deposits, permeable zones in the Tehama aquifer are also understood to be both discontinuous and heterogeneous (Water Resources Association of Yolo County, 2005).
Regionally, aquifers are recharged by runoff and groundwater from east-facing foothills, by percolation of precipitation, and by infiltration of surface water provided by creeks and streams. Some recharge is likely also derived from applied irrigation water and from the Sacramento River and the associated artificial canals and bypasses constructed by regional and local agencies. However, as a result of the heterogeneity of the alluvium deposits exposed on the surface, infiltration rates within the Valley are highly variable at short length-scales (i.e., within a few kilometers; Water Resources Association of Yolo County, 2005). Groundwater levels are also spatially variable, depending on whether water is stored in the shallow surficial or deep aquifer. Within our study area, groundwater table levels have remained at depths between ∼10 and 0 m below the surface for the last 4 years (California Department of Water Resources, 2019).
DAS Data Acquisition
The DAS data recorded for the FOSSA experiment was acquired along a section of the US Department of Energy (DOE) Energy Sciences Network's (ESnet) Dark Fiber Testbed. This network consists of more than 20,000 km of short- and long-haul single-mode fiber-optic cables designed for telecommunications, connecting DOE experimental sites, supercomputing facilities, and associated research networks. Between July 28, 2017 and March 4, 2018, ∼210 TB of continuous DAS data were acquired along a 23.29 km-long transect of this network running between the cities of West Sacramento and Woodland (Figure 1b). Installation notes from the telecommunication company indicate that the cable was largely deployed in conduit buried in soil at depths of 1-1.5 m. Some sections were also placed in shallow horizontal boreholes beneath roads and railway tracks, again in conduit but slightly deeper (3-4 m). Data sets were recorded at a sampling frequency of 500 Hz with a spatial sampling of 2 m, resulting in strain-rate measurements at over 11,000 receiver points. Data was acquired using a commercial DAS unit (Silixa iDAS v.
Data Selection
For this study, ambient seismic noise recorded between October 25, 2017 and March 4, 2018 was utilized. Acquisition parameters of the DAS unit were adjusted on October 25 to improve data quality at long distances. Inspection of the data revealed that waveforms acquired before and after the settings modification were slightly different. These differences would have introduced artificial time lags between waveforms in our time-lapse analysis. Thus, we chose to discard data acquired before October 25 in this study.
With the objective of exploring not only temporal, but also spatial variations in aquifer properties within the study area, three sections along the array were chosen for detailed analysis (subsections A-C, Figures 1b-1e). These profiles are 300 m long, and are located in different settings within the area covered by the array (urban vs. rural). We should note that the distance between the fiber-optic cable and the main channel of the Sacramento River is different for the three sections; while subsections A and B are less than 200 m away from the river, subsection C is at a distance of 2,500 m (Figures 1c-1e).
Ambient Noise Cross-Correlation of DAS Subarrays
Each 1-minute-long DAS ambient noise recording is processed sequentially. The three 300 m-long array sections of interest are extracted from each file and, subsequently, data for each section are analyzed following well-established ambient noise interferometry approaches (e.g., Bensen et al., 2007). Data are detrended, demeaned, and downsampled to a sampling frequency of 125 Hz (8 ms) after applying an anti-aliasing low-pass filter. Temporal normalization using a running absolute mean with a window of 0.5 s is applied to reduce the effect of earthquakes and other high-amplitude, undesired signals. Next, data are bandpassed between 0.002 and 15 Hz, followed by spectral whitening to balance frequencies in this band. Following these pre-processing steps, cross-correlations are calculated between the southernmost trace of each section, which acts as the virtual source, and all other traces in that section. The results are 1-minute-long virtual shot-gathers for each analyzed section for the duration of the experiment. Because coherent noise generated by the DAS instrument is present in each trace and survives cross-correlation, the resulting virtual shot-gathers are contaminated by zero-moveout noise that is particularly severe at zero lag time. This noise is reduced by subtracting the median of all traces at each time sample (Rodríguez Tribaldos et al., 2019). Lastly, all 1-minute-long virtual shot-gathers for each day of the analyzed recording period are stacked using the phase-weighted stacking method of Schimmel and Paulssen (1997) with an exponent of 0.3. For stacking, the causal (positive) and acausal (negative) parts of the cross-correlations are averaged to increase the signal-to-noise ratio. As a result of this process, we obtain a virtual shot-gather for each day between October 25, 2017 and March 4, 2018 for each selected section of the DAS array.
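For readers who wish to reproduce the essence of this chain, the numpy/scipy sketch below implements its core steps: running-absolute-mean temporal normalization, spectral whitening, cross-correlation against the virtual source, median removal of the zero-moveout instrument noise, and phase-weighted stacking. Window lengths mirror the text where stated; the array shapes, random input, and omitted detrend/resample/bandpass steps are illustrative simplifications.

```python
# Condensed sketch of the per-section ambient-noise processing chain.
import numpy as np
from scipy.signal import hilbert

def normalize(data, fs, win_s=0.5):
    """Running-absolute-mean temporal normalization (0.5 s window)."""
    n = max(int(win_s * fs), 1)
    kernel = np.ones(n) / n
    env = np.array([np.convolve(np.abs(tr), kernel, "same") for tr in data])
    return data / np.maximum(env, 1e-20)

def whiten(data):
    """Flatten the amplitude spectrum, keeping only the phase."""
    spec = np.fft.rfft(data, axis=1)
    return np.fft.irfft(spec / np.maximum(np.abs(spec), 1e-20), axis=1)

def correlate(section):
    """Cross-correlate the virtual source (row 0) with every channel, then
    subtract the per-sample median to suppress zero-moveout noise."""
    n = section.shape[1]
    src = np.fft.rfft(section[0], 2 * n)
    ccf = np.fft.irfft(np.conj(src) * np.fft.rfft(section, 2 * n, axis=1),
                       axis=1)
    return ccf - np.median(ccf, axis=0)

def phase_weighted_stack(ccfs, nu=0.3):
    """Schimmel & Paulssen (1997): stack weighted by phase coherency."""
    phases = np.exp(1j * np.angle(hilbert(ccfs, axis=1)))
    return ccfs.mean(axis=0) * np.abs(phases.mean(axis=0)) ** nu

fs = 125.0                                           # Hz, after downsampling
minutes = [np.random.randn(150, int(60 * fs)) for _ in range(3)]  # toy input
ccfs_at_200m = [correlate(whiten(normalize(m, fs)))[100]  # channel at 200 m
                for m in minutes]
daily_stack = phase_weighted_stack(np.asarray(ccfs_at_200m))
```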
The causal (positive) part of a daily cross-correlation stack at each array section is shown in Figures 3a, 3c, and 3e for illustration. At all three sections, the stacked virtual shot-gathers reveal a strong train of surface waves traveling across the sub-arrays. In most stacks, the first 0.5 s are dominated by waves propagating at velocities on the order of 300 m/s. Because of the single-component nature of DAS measurements, these arrivals are interpreted as ballistic Rayleigh waves propagating along the direction of the fiber-optic cable (Martin et al., 2018). Secondary arrivals with slightly different velocities immediately follow the main Rayleigh wave train. These arrivals are interpreted as a mixture of multiply-scattered waves (i.e. coda waves) and higher-order surface wave modes.
In order to investigate the temporal coherency of the wave arrivals identified in the stacks, a trace at a specific distance along the sub-array is extracted for each subsection. A large enough distance is needed between virtual source and virtual receiver to allow the different phases to separate and become distinguishable. A source-receiver separation of 200 m is chosen. Traces are inspected visually, and those stacks resulting in noisy traces are removed and not considered for further analysis. Figures 3b, 3d, and 3f show this trace for all daily stacks. In all sections, coherent phase arrivals are present in each daily stack, which confirms the repeatability of the stacked cross-correlation functions in the offset-time domain throughout the experiment.
This repeatability also reveals significant shifts in the arrival times of temporally coherent phases at specific times during the experiment. The most evident time delays are observed for array sections A and B in early January and early March 2018 (Figures 3b and 3d). A hint of these delays is observed for the main Rayleigh wave arrival (i.e. the prominent phase before 1 s lag time), but the largest time shifts affect the coda waves, with the delay slightly increasing with lag time. Consequently, we focus our seismic interferometry analysis on these later wave arrivals (Figure 3). In an attempt to better constrain the depth of the changing medium, this analysis was repeated for cross-correlated data filtered to different frequency bands between 0.002 and 15 Hz. Data filtered between 4 and 15 Hz were chosen, as this band reduces the amount of low-frequency arrivals that might not propagate through the array, and shows the largest time lags between consecutive daily stacks.
Coda Wave Interferometry Application
In order to quantify the observed time-shifts, coda wave interferometry techniques are applied to the 200 m offset cross-correlations from each daily stack, filtered between 4 and 15 Hz. We focus our analysis on wave arrivals within a window beginning at 0.8 s and ending at 1.3 s lag time, immediately after the main Rayleigh wave. The stretching technique introduced by Sens-Schönfelder and Wegler (2006) is applied to consecutive pairs of traces throughout the recording period, with the objective of estimating variations in relative seismic velocities (dv/v) between consecutive days. This method assumes that one trace is a stretched version of that same trace for the previous day according to the linear relation τ = ϵt, where τ is the time lag between the two traces, t is time and ϵ is the stretching factor. Assuming that velocity perturbations are homogeneous, ϵ = −dv/v. We choose to measure −dv/v between consecutive traces, rather than comparing the cross-correlation function of the day of interest against the stack of all traces, as is typically done in coda wave interferometry studies (e.g., Clements & Denolle, 2018; Sens-Schönfelder & Wegler, 2006). The main reason for using this approach is that the relatively large time delay observed between consecutive days at some times throughout the recording period made it impossible to constructively stack all daily cross-correlations without losing coherency. By stacking all traces, some of the coherent phases disappeared and the reference trace was too different from the daily cross-correlations to obtain any meaningful results using the stretching technique, which relies on the coherency between the two traces that are being compared. In addition, this approach enables us to evaluate the potential of using DAS for capturing short-term changes that could be masked by stacking all traces, and to apply this technique for real-time monitoring during which changes could be analyzed on a daily basis.
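A minimal implementation of this stretching measurement for a single coda window is sketched below; it grid-searches ϵ, resamples the later trace at t(1 + ϵ), and keeps the value that maximizes the correlation coefficient. The grid spacing and the linear interpolation are our assumptions.

```python
import numpy as np

def stretch_dvv(prev, cur, fs, t0, t1, eps_grid=np.linspace(-0.10, 0.10, 401)):
    """Stretching method on one window [t0, t1]; returns (dv/v, cc)."""
    t = np.arange(len(prev)) / fs
    i0, i1 = int(t0 * fs), int(t1 * fs)
    best_cc, best_eps = -1.0, 0.0
    for eps in eps_grid:
        # today's trace is modeled as yesterday's trace stretched by (1 + eps)
        stretched = np.interp(t[i0:i1] * (1.0 + eps), t, cur)
        cc = np.corrcoef(prev[i0:i1], stretched)[0, 1]
        if cc > best_cc:
            best_cc, best_eps = cc, eps
    return -best_eps, best_cc   # eps = -dv/v for homogeneous perturbations
```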
In an iterative approach, stretching factors ranging from −10% to +10% are applied to the later trace of a pair in a moving window with a width of 0.25 s and a step size of 0.02 s. After each stretching operation, the two traces are cross-correlated in these windows. The stretching factor yielding the maximum zero-lag cross-correlation coefficient between the two traces for each analysis window is taken as the optimal dv/v. In our analysis, all estimated dv/v yield cross-correlation coefficients >0.8. Next, dv/v estimates for all windows are considered, and values in the lowest and highest 10% are discarded to remove outliers. The median of all retained values is then calculated to yield a single value of dv/v for that day. We chose to use the median, as opposed to the mean, as the representative daily dv/v value because the median is less sensitive to outliers. Uncertainty in dv/v is calculated as the interquartile range of all kept values, as the variability in dv/v within each day does not follow a perfectly normal distribution, but is rather skewed to positive or negative values depending on the day of the recording. This procedure is repeated for each consecutive pair of traces throughout the recording period, resulting in a profile of daily variations in dv/v. Finally, a cumulative median of daily dv/v is calculated for direct comparison of our seismic observations with hydrological data.
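Reusing the stretch_dvv helper sketched above, the moving-window search and daily aggregation could then read as follows; our reading of the outlier trimming (discarding the lowest and highest 10% of window estimates) is an assumption.

```python
import numpy as np

def daily_dvv(prev, cur, fs, t_start=0.8, t_stop=1.3, width=0.25, step=0.02):
    """One dv/v value per day: trimmed median over moving coda windows."""
    estimates = []
    t0 = t_start
    while t0 + width <= t_stop + 1e-9:
        dvv, cc = stretch_dvv(prev, cur, fs, t0, t0 + width)
        if cc > 0.8:                          # retain well-correlated windows only
            estimates.append(dvv)
        t0 += step
    e = np.asarray(estimates)
    lo, hi = np.percentile(e, [10, 90])
    kept = e[(e >= lo) & (e <= hi)]           # trim tails to remove outliers
    q25, q75 = np.percentile(kept, [25, 75])
    return np.median(kept), q75 - q25         # daily dv/v and its IQR uncertainty
```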
dv/v Variations Through Time and Space and Comparison with Hydrological Data
Results from applying the described coda wave interferometry approach to stacks of the three sections under analysis are shown in Figure 4. Subsections A and B reveal a very similar pattern of seismic velocity variations (Figures 4b and 4c). Two significant negative perturbations, on the order of −2% to −3%, are seen in both sections in early January and early March, agreeing with the larger time lags observed in Figures 3b and 3d. In subsection C, however, no significant seismic velocity variations are observed throughout the recording period. These observations suggest spatial variability in seismic velocity perturbations within the study area, which most likely reflects variability in aquifer properties, hydraulic boundary conditions, and infiltration rates.
In order to explore this hypothesis, hydrological data contemporaneous with our seismic experiment are investigated for locations near our DAS array. Unfortunately, direct observations of groundwater table elevations at the FOSSA experiment site are sparse in both space and time. Only one groundwater well near the fiber-optic cable was sampled during our recording period (well Sac Bypass, Figure 1b), with only two data points available, from December 7, 2017 and February 14, 2018. Despite the lack of direct subsurface measurements, alternative hydrological data connected to groundwater are available for comparison. The top panel of Figure 4 shows measurements of daily precipitation and river stage contemporaneous with our seismic experiment, recorded at environmental stations SMF and BYL, respectively (Figure 1; California Department of Water Resources, 2019). A close comparison reveals that, for those sections of the array located only a few hundred meters away from the river (subsections A and B), both of these data sets are anti-correlated with seismic velocity variations; that is, large precipitation events immediately followed by an increase in river stage correspond with a considerable decrease in seismic velocities below these sections. A correlation also exists between the magnitude of change for all three data sets, with a larger dv/v for a larger increase in river stage and a larger precipitation event. The temporal correspondence between river stage and changes in seismic velocity perturbations suggests that variations in the river level are directly connected to changes in the subsurface immediately beneath the array. Figure 5 shows contemporaneous measurements of river stage at station BYL and groundwater table elevation at station Sac Bypass, located ∼1,200 m away from subsection A, for a period between January 2016 and January 2019. These observations show that perturbations in groundwater table and river level have agreed in both timing and magnitude for at least the last three years. Thus, we conclude that it is reasonable to use changes in river stage measurements as a proxy for variations in groundwater table elevation for locations near the river (Ha et al., 2008). Previous hydrogeophysical studies have also verified this approach in a near-river aquifer monitoring context (e.g., Baker et al., 2000). Consequently, we interpret seismic velocity perturbations observed below subsections A and B as an indication of fluctuations in the groundwater table at those locations. Moreover, we interpret these changes as variations in shear-wave velocity, as surface-wave coda waves are mostly sensitive to changes in Vs.
In subsection C, precipitation and changes in river stage level do not bear any correlation with dv/v, which does not show significant variation through time. These observations are interpreted as either a lack of, or a minimal variation in, groundwater table levels beneath this section of the array. As previously pointed out, one of the main differences between subsection C and the other two sections of the array is its larger distance to the Sacramento River (2,500 m for subsection C, Figure 1). As part of the development of the Yolo County Integrated Regional Water Management Plan, relative infiltration rates of Quaternary sediments across the county were mapped (Water Resources Association of Yolo County, 2005). According to this study, our fiber-optic array is located in a region of slow to very slow infiltration rates. This observation suggests that recharge of the shallow aquifer from surface water is likely to take longer than just a few days, even for large precipitation events. Hence the increase in groundwater table beneath subsections A and B is likely not a result of infiltration from the surface, but rather a consequence of the increase of the level of the river, which is hydraulically connected with the floodplain. Due to the heterogeneity of the Quaternary deposits and the consequent discontinuous permeability, aquifer connectivity over distances of thousands of meters is limited, and variations in the level of the Sacramento River on the order of ≤2 m do not affect the elevation of the groundwater table beneath subsection C. These observations suggest a strong connectivity between groundwater and the river bed in this basin. Understanding this interaction is extremely important, as significant variations in the groundwater table due to, for example, pumping can have important effects on the river ecosystem.
At even shorter spatial scales, a close look at dv/v variations for subsections A and B reveals a slight difference in the behavior of these velocity perturbations. The magnitude of the cumulative dv/v decrease following major river stage increase events is on the order of 2.5%-3% in early January and 2% in early March beneath both sections. However, the pattern of recovery of these seismic velocities to the levels shown prior to river stage increase events differs between the two sections. In subsection A, the short-term recovery of dv/v agrees with that of river stage levels after the event in early January. The river level slowly decreases over 10 days, to stabilize at a level of 2.7 m, ∼0.5 m above the stage previous to the increase event. Seismic velocity perturbations follow a similar behavior, slowly decreasing toward 0% and stabilizing at a value of ∼−1% after 10 days. In the long term, however, the river stage recovers to its mean value of 2 m toward the end of the experiment before the second large increase occurs in early March, whereas seismic velocities remain fairly constant until they decrease again following the river stage increase. In contrast, for subsection B, no short-term recovery is observed in seismic velocities, which remain at a value of −3% after the large perturbation in January, to decrease again following the March event. These observations agree with the geological nature of the Quaternary deposits making up the floodplain of the Sacramento River. First, the lack of recovery of the seismic velocities can be interpreted as due to the presence of clay horizons within these deposits. A geotechnical well shown in Ajo-Franklin et al. (2019), located right next to the river bed between subsections A and B, shows layers of clays and silty clays interbedded with fine sands and gravels. Clay horizons would retain part of the water, preventing the full recovery of average seismic velocities. In relation to this effect, the fact that subsection B shows no short-term recovery of dv/v might indicate higher proportions of clays beneath this region. It could also indicate strong hysteresis during drainage/imbibition cycles controlled by the same textural differences.
Depth Localization of Seismic Velocity Changes
Following quantification of seismic velocity variations, we investigate the depth range at which these changes occur. For that purpose, the depth sensitivity of the coda waves used in our analysis is explored by calculating Vs sensitivity kernels as a function of depth and frequency. In order to calculate these kernels, information on the frequency content of our data and the shear-wave velocity structure beneath the study area is required. Dispersion analysis followed by surface wave inversion is applied to one of the daily stacks retrieved beneath subsection A. We choose to analyze this section because we considered it to be better constrained, as it is closest to the river stage measurement station (Figure 1). Slant-stacking is applied to the virtual common-shot gather stack from January 8, 2018 to transform the data from the time-offset domain to the frequency-velocity (dispersion) domain. The resultant dispersion spectrum, shown in Figure 6a, is characterized by high amplitudes at 4-5 Hz and around 6 Hz. We also calculate the average frequency content of the stack, which also reveals distinct peaks around 4.5 Hz and at 6 Hz (Figure 6c).
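The slant-stack used to build the dispersion image can be written compactly as below, where gather is an (n_traces, n_samples) virtual shot-gather; normalizing each trace's spectrum to unit amplitude before stacking is our assumption, made to equalize channel contributions.

```python
import numpy as np

def dispersion_image(gather, dt, offsets, velocities):
    """Frequency-velocity (dispersion) spectrum via phase-shift slant stacking."""
    spec = np.fft.rfft(gather, axis=1)
    spec = spec / np.maximum(np.abs(spec), 1e-12)     # keep phase information only
    freqs = np.fft.rfftfreq(gather.shape[1], dt)
    image = np.zeros((len(velocities), len(freqs)))
    for i, v in enumerate(velocities):
        # back-propagate each trace by its travel time offset/v and stack coherently
        shifts = np.exp(2j * np.pi * np.outer(offsets / v, freqs))
        image[i] = np.abs((spec * shifts).sum(axis=0))
    return freqs, image   # amplitude maxima trace out the dispersion curve(s)
```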
The dispersion curve corresponding to the fundamental mode of the Rayleigh wave is manually extracted from the dispersion spectrum and subsequently used as input for the Monte Carlo surface wave inversion algorithm of Maraschini and Foti (2010), yielding the one-dimensional Vs structure shown in Figure 6b. Using the retrieved frequency content characteristics and one-dimensional Vs structure, Vs sensitivity kernels are calculated using the Computer Programs in Seismology (CPS) package (Herrmann & Ammon, 2002). We calculate the sensitivity kernel for the fundamental mode of the Rayleigh wave. Through a series of comprehensive numerical studies, Obermann et al. (2013) show that, at early times, coda waves are dominated by surface waves and are more sensitive to shallow depths. That study shows that, for velocity variations occurring at shallow depths, depth sensitivity is governed by the 1D sensitivity of the fundamental mode of the surface waves. In addition, as mentioned earlier, the axial sensitivity of DAS suggests that the waves recorded by the fiber-optic cable correspond to Rayleigh waves. Thus, the sensitivity kernel calculated here is representative of the coda waves that we have analyzed, and is a good indication of the most likely depth range of occurrence of our dv/v observations. Vs kernels as a function of depth are shown for the main frequency components of our stack in Figure 6d. These kernels suggest that the analyzed waves are most sensitive to velocity structure within the top 30 m of the subsurface beneath the array. Peak sensitivities are at 25 m for 4.5 Hz and 18 m for 6 Hz. Following our previous observations, these depths are far below the groundwater table interface, which is estimated to be located at depths of 4-5 m. Thus, we conclude that the dv/v variations observed using coda wave interferometry do not correspond to changes in the location of the groundwater interface itself, but rather to processes that are related to changes in the water level but occur at larger depths.
Beyond Correlation: Physical Links Between Aquifer State and Seismic Properties
A variety of hydrogeologic processes generate seismically observable signatures including direct fluid saturation effects, clay hydration, and pore-pressure perturbations. Before providing a rock physics modeling framework to quantitatively link surficial water table variations to dv/v, we will attempt to rule out several of these processes.
The first of these processes, replacement of air by water near the water table and the capillary fringe, can generate large seismic velocity perturbations for P-waves, and smaller S-wave effects due to density changes, as has been demonstrated in both the laboratory (Knight & Nolen-Hoeksema, 1990) and the field. However, as can be seen in Figure 6, our surface wave observations are dominated by ambient noise in the 4-6 Hz range and are predominantly sensitive to property perturbations between 10 m and 30 m below ground surface, far deeper than the 4.5 m mean groundwater level. Likewise, clay hydration and cohesion-controlled phenomena are mainly relevant to the vadose zone and the capillary fringe, near-surface effects that are likely difficult to resolve using our 4-6 Hz measurement band.
The impact of pore pressure perturbations seems a more likely causative mechanism for the observed dv/v changes; an increase in water table height should increase pore pressure in zones with direct fluid communication and decrease effective stress, thus decreasing shear wave velocity. This effect is large at shallow depths and decreases as depth increases, as lithostatic pressure prevails over hydrostatic pressure. Over the relatively shallow depths considered (40 m), pore pressure diffusion times should be small in comparison to the ambient noise integration times (minutes vs. days). Past field hydrogeophysical studies of aquifer state using conventional broadband seismic observations have measured similar effects, namely a decrease in dv/v correlated with higher water tables or seasonal precipitation cycles (Clements & Denolle, 2018; Lecocq et al., 2017; Tsai, 2011; Voisin et al., 2016).
These past studies have generally assumed a uniform linear stress dependence for Vs, which seems unlikely, particularly at shallow depths and low effective stress states where a nonlinear effective stress dependence has been firmly established in both laboratory (Prasad, 2002; Zimmer et al., 2007) and field studies. We utilize an established granular media model developed by Walton (1987) to estimate the dependence of Vs on effective stress state in the near-surface. The Walton "Smooth" model assumes an uncemented granular medium and has been extensively utilized in past studies to provide Vs estimates as a function of stress state. We refer the interested reader to Mavko et al. (2020) for the full formulation. Given that the contact with consolidated rocks is well below our zone of investigation, as evidenced by co-located drilling logs which did not encounter such materials, the adoption of a granular media model seems reasonable.
Due to the lack of near-surface property logs, we assumed that soil porosity was a constant value of 38%, typical for unconsolidated sediments, and the coordination number was estimated from the empirical relationship described in García and Medina (2006). Pore pressure was assumed to be hydrostatic and dependent only on depth to water table (hydraulic head), while lithostatic stress was assumed to have a 23 kPa/m gradient. Grain elastic properties were assumed to be those of quartz (Mavko et al., 2020): a grain bulk modulus of 36 GPa, a grain shear modulus of 45 GPa, and a grain density of 2,650 kg/m³. To calculate dv/v estimates using this rock physics model, we use the baseline Vs estimate provided previously in Figure 6 and ΔVs estimates based on the model. One small inconsistency is that the Walton model was only used for the ΔVs estimate and does not perfectly fit the model shown in Figure 6b.
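Under the stated assumptions, this model reduces to a few lines. The sketch below uses the handbook form of the Walton smooth result, K_eff = (1/6)·[3(1 − φ)²C²P/(π⁴B²)]^(1/3) with μ_eff = (3/5)K_eff and B = (1/4π)(1/μ + 1/(λ + μ)) (Mavko et al., 2020). A fixed coordination number C = 9 stands in for the García and Medina (2006) relation, which we do not reproduce here, and full saturation is assumed when computing bulk density.

```python
import numpy as np

def walton_smooth_vs(depth_m, wt_depth_m, phi=0.38, C=9.0,
                     K_g=36e9, G_g=45e9, rho_g=2650.0):
    """Vs (m/s) from the Walton smooth model at depth_m, water table at wt_depth_m."""
    lam = K_g - 2.0 * G_g / 3.0                         # grain Lame parameter
    B = (1.0 / (4.0 * np.pi)) * (1.0 / G_g + 1.0 / (G_g + lam))
    p_lith = 23e3 * depth_m                             # 23 kPa/m lithostatic gradient
    p_pore = 1000.0 * 9.81 * max(depth_m - wt_depth_m, 0.0)  # hydrostatic pore pressure
    p_eff = p_lith - p_pore                             # effective stress
    K_eff = (3.0 * (1.0 - phi) ** 2 * C ** 2 * p_eff /
             (np.pi ** 4 * B ** 2)) ** (1.0 / 3.0) / 6.0
    G_eff = 0.6 * K_eff                                 # mu_eff = (3/5) K_eff
    rho_b = (1.0 - phi) * rho_g + phi * 1000.0          # saturated bulk density
    return np.sqrt(G_eff / rho_b)

def model_dvv(depth_m, wt_before_m, wt_after_m):
    """Modeled dv/v for a water-table change between two depths below surface."""
    v0 = walton_smooth_vs(depth_m, wt_before_m)
    v1 = walton_smooth_vs(depth_m, wt_after_m)
    return (v1 - v0) / v0

# Example: the water table rising from 4.5 m to 3.0 m depth, evaluated at 18 m,
# yields a negative dv/v of roughly -1% under these assumptions.
```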
With the availability of a rock properties model relating pore pressure to dv/v, we first converted the river stage proxy data discussed previously to water table height. This height was used to calculate pore pressure perturbations, and the parameters listed above were then utilized to estimate dv/v as a function of depth for all times at which measured dv/v values were available. Figure 7 shows the measured river stage values from station BYL in panel (a). Figure 7b compares the measured dv/v from the DAS ambient noise processing to estimates of dv/v from the rock properties model for depths of 18 m (blue) and 25 m (red). These depths correspond to the 6 and 4.5 Hz peaks in surface wave sensitivity shown in Figure 6. As can be seen, the modeled dv/v values derived from water table variations capture the reduction in velocity seen after the major precipitation event which occurred in January 2018 in both sign and magnitude, as well as the late February 2018 event. This suggests that, for reasonable water table variations, the associated pore pressure perturbations should be detectable using DAS coda wave analysis and surface waves. However, there are several features in the estimate which are not replicated in the coda wave measurements. These features include the November 2017 rainfall event, which may be below our sensitivity levels, and the seasonal river stage reduction seen from late January 2018 to early February. Likewise, the small diurnal variations in river stage are aliased because of the ambient noise stack period. Moreover, they may not be large enough to drive water table variations at distance from the river. Even given these caveats, the absolute accuracy of the measurements is surprising, suggesting that a direct inversion for water table depth or hydrostatic head in confined aquifers is a possibility.
Discussion
Our combined analysis of seismic velocity variations and rock physics modeling indicates that dv/v changes as large as 3% are observed almost immediately after short-term river stage increase episodes, in connection with increasing pore pressures due to an elevation of the groundwater table on the order of ∼1.5 m. These observations are comparable to values reported by a limited number of previous studies monitoring short-term, local groundwater table fluctuations using ambient seismic noise in the infrastructure frequency band (i.e., 5-30 Hz; Voisin et al., 2016, 2017). We should note that these dv/v perturbations are considerably larger than those discussed in other hydrogeophysical studies of this type, which mostly focus on monitoring seasonal variations at a regional scale. For example, Sens-Schönfelder and Wegler (2006) observe dv/v variations of up to 1% over a period of 6 months using cross-correlations calculated between stations separated by ∼170 m. They attribute this observed dv/v to a groundwater level decrease of ∼25 m inferred from a hydrological model based on precipitation rates. Clements and Denolle (2018) report smaller changes in dv/v, on the order of 0.1%-0.15% over several months, as a result of seasonal groundwater table perturbations of about 20 m. In a long-term monitoring study focusing on changes occurring over several decades, Lecocq et al. (2017) observe dv/v variations of only ±0.01%. The explanation for this disparity in the magnitudes of the observed seismic velocity variations mostly resides in the frequency content of the ambient seismic noise used for analysis. Sens-Schönfelder and Wegler (2006) high-pass filter their one-day seismic records at 0.5 Hz. Clements and Denolle (2018) analyze ambient seismic noise in the 0.5-2.0 Hz band, whereas Lecocq et al. (2017) explore microseismic noise with frequencies <1 Hz. On the vertical scale, surface waves are sensitive to the average effect of elastic parameters between the surface and a maximum depth governed by their wavelength. Hence, the lower the frequency, the larger the wavelength and the averaging effect. As a result, small changes in subsurface properties will have a very small effect on the average velocities recorded by surface-wave dominated coda waves. Moreover, these investigations estimate seismic velocity changes occurring along the paths between seismic stations that are usually located several kilometers apart. This configuration implies that seismic waves sample a larger volume of the subsurface, which will have a similarly small effect on seismic velocities (Obermann et al., 2013). Lastly, contact theory models similar to the Walton model used in our prior analysis predict a considerably higher sensitivity to pore pressure variations at shallow depths (at low effective stress states). This would imply a lower predicted hydrogeophysical response at lower seismic frequencies.
In the case of the study by Sens-Schönfelder and Wegler (2006), the combination of shorter station spacing and higher frequency content (>0.5 Hz) with respect to those of Clements and Denolle (2018) is the most likely cause of the larger magnitudes of the observed dv/v values for a similar inferred groundwater table change. Here, we analyze infrastructure-generated noise in the 4-15 Hz band, with a station separation of 200 m. This enables the detection of smaller perturbations in groundwater table changes, which have a larger effect on our recorded seismic velocities. Our observations suggest that, in this type of geological environment, groundwater table variations on the order of 1 m or more are resolvable using coda wave analysis of infrastructure noise recorded using DAS arrays.
An additional concern relating to coda wave interferometry monitoring studies is the ability to recover changes in subsurface properties without the overprint of temporal variations in the properties of noise sources. Prior studies have shown that apparent seismic velocity perturbations as large as 0.05% can be caused by temporal variability in the frequency content of the seismic ambient noise (Zhan et al., 2013). The large magnitude of our recovered dv/v variations and the agreement with auxiliary data and rock physics modeling results suggest that our observations reflect real changes in subsurface properties. Nevertheless, we confirm the reliability of our results by analyzing the stability of the frequency content of our cross-correlated waveforms. As illustrated in Figure 8, day-to-day variability in frequency content is very small for the stacks used for coda wave interferometry, which confirms that our observed velocity variations are due to real changes in subsurface properties, and not spurious measurements introduced by changes in frequency content.
Our study implies that the analysis of seismic velocities in combination with rock physics modeling is a promising approach to infer changes in aquifer state. However, some disagreement remains between estimated dv/v variations and those recovered by coda wave analysis. The most obvious feature is the predicted seasonal recovery occurring in response to the lowering of the river level between January and March (Figure 7), which is not represented in our observations. During this time, a decrease in river stage of more than 1 m occurs, which is predicted to cause an increase of 1.5% in dv/v. These fluctuations are in the same range as those observed after the early January increase event, which are clearly captured by our seismic analysis. This observation suggests that the reason for this mismatch is not a lack of sensitivity of the coda wave observations. One possible explanation for this disagreement would be a limited ability to recover long-term dv/v variations due to comparing consecutive traces rather than using the stack of all daily traces as the reference trace. Alternatively, it is worth noting here that, due to a lack of ground-truth geological information at shallow depths at our analysis location, our rock physics model assumes the elastic properties of our model domain to be those of pure quartz, hence implying that the sedimentary deposits are composed of 100% sand. However, lithological logs for nearby areas confirm the presence of clay layers, as expected for flood plain deposits. As previously discussed, the retention of some water by these clay layers could delay the water table response and prolong the reduction in seismic velocities visible in our observations. In future studies, a more detailed knowledge of the lithological characteristics of the subsurface and the use of additional ground-truth water-level observations can help further investigate the relationship between seismic properties and changes in pore pressure distribution, as well as other physical processes linked to groundwater fluctuations.
Lastly, we should mention that even in cases where monitoring hydrogeophysical state is not useful for direct groundwater management, recovered Vs variations can play an important role in providing corrections for deeper monitoring targets. Examples include geothermal systems (e.g., Zeng et al., 2017) or geologic carbon storage (GCS) reservoir monitoring, where perturbations in deeper units might be masked by variability in surficial aquifer state. The capacity to correct for near-surface variations provides a path toward recovering true velocity variations at depth. Moreover, the broadband nature of DAS recordings and the ability to continuously acquire data at apertures of tens of kilometers open up the possibility of applying the interferometric analysis to recordings of a wide variety of frequencies and wave types (i.e., body waves as opposed to surface waves) in order to localize seismic velocity variations at a range of depths using a single data set.
In conclusion, our results suggest that seismic interferometry analysis of ambient noise acquired using DAS arrays has significant potential as a tool for monitoring groundwater fluctuations at local to regional scales. DAS technology provides a high density of measurements and a large aperture, which enable continuous monitoring of spatial variations at high resolution and allow for improved localization of velocity anomalies that can be related to changes in aquifer properties. The possibility of deploying DAS on existing, unused telecommunication fiber-optic cables offers new opportunities for monitoring subsurface processes with minimal effort in complex environments where the deployment of traditional monitoring equipment, such as water wells or conventional seismometers, could prove challenging and costly. Here, our approach has been applied to three sub-sections of the array that cover different regions within this portion of the Sacramento basin. However, our observations indicate that the DAS array can record ambient noise along 20 km with high signal-to-noise levels, sampling the wavefield at distances as small as 1-2 m. Thus, one could apply this approach to consecutive, overlapping array sub-sections along the entire profile to provide a continuous record of seismic velocity variation and, hence, aquifer property variations along several tens of kilometers with unprecedented spatial and temporal resolution. In this context, monitoring of seismic velocity variations using DAS can provide valuable information on aquifer dynamics at a scale relevant for water agencies to design and implement plans for sustainable management of groundwater resources.
Despite these undeniable advantages, it is worth mentioning that the routine application of dark fiber DAS for long-term monitoring experiments still presents some challenges. The most significant of these barriers is the large data volume generated by these arrays. In the present study, over 11,000 measuring channels at 2 m spacing with a sampling frequency of 500 Hz generated ∼210 TB of raw data in 7 months of recordings.
These unusually large data volumes require innovative approaches to data storage, handling and processing in order to fully exploit the potential of these exceptional data sets. Recent developments in fields such as high performance computing, array processing, and artificial intelligence applied to geophysical data (Bianco et al., 2019; Clements & Denolle, 2020; Dong et al., 2020; Hu et al., 2020; Zhong et al., 2020) promise to alleviate these issues and contribute toward advancing DAS as a transformative tool in subsurface monitoring investigations.
Data Availability Statement
MATLAB was used for data analysis and plotting. Due to the large size of the data set (hundreds of TB), processed data products are made available. Examples of raw DAS data recorded along the array subsections analyzed in the study (shown in Figure 2) and pre-processed data shown in Figure 3 and used to calculate dv/v are available at Rodríguez Tribaldos and Ajo-Franklin (2021). River stage and precipitation data from stations BYL and SMF, respectively, can be accessed at https://cdec.water.ca.gov/. Groundwater level data from station Sac Bypass Shallow can be accessed at: https://water.ca.gov/Programs/Groundwater-Management/Groundwater-Elevation-Monitoring--CASGEM
"year": 2021,
"sha1": "d9fe8fedb71af36473bdbc014af5c0e0ea6bbb74",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2020JB021004",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "a705162d401c1c5029c14f491c30703ced65eb84",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
Hopes, joys and fears: Meaning and perceptions of viral load testing and low-level viraemia among people on antiretroviral therapy in Uganda: A qualitative study
Uganda applies the World Health Organization threshold of 1,000 copies/ml to determine HIV viral non-suppression. While there is an emerging concern about low-level viraemia (≥50 to <1,000 copies/ml), there is limited understanding of how people on antiretroviral therapy perceive viral load testing and low-level viraemia in resource-limited settings. This qualitative study used the health belief model to explore the meaning that people living with HIV attach to viral load testing and low-level viraemia in Uganda. We used stratified purposive sampling to select people on antiretroviral therapy from eight high-volume health facilities from the Central, Eastern, Northern and Western regions of Uganda. We used an interview guide, based on the health belief model, to conduct 32 in-depth interviews, which were audio-recorded and transcribed verbatim. The thematic analysis technique was used to analyze the data with the help of ATLAS.ti 6. The descriptions of viral load testing used by the participants nearly matched the medical meaning, and many people living with HIV understood what viral load testing was. Perceived benefits of viral load testing were the ability to show the amount of HIV in the body, how people living with HIV take their drugs, and whether the drugs are working, and to guide the next treatment steps for the patients. Participants reported HIV stigma, lack of transport, lack of awareness of viral load testing, delayed and missing viral load results, and few health workers as the main barriers to viral load testing. On the contrary, most participants did not know what low-level viraemia meant, while several perceived it as having a reduced viral load that is suppressed. Many people living with HIV are unaware of low-level viraemia, and hence do not understand its associated risks. Likewise, some people living with HIV are still not aware of viral load testing. Lack of transport, HIV stigma and delayed viral load results are major barriers to viral load testing. Hence, there is an urgent need to institute more strategies to create awareness of both low-level viraemia and viral load testing, manage HIV-related stigma, and improve turnaround time for viral load results.
First and foremost, we greatly and humbly thank you all for taking the time to carefully read our manuscript and give us very insightful review comments. We strongly believe that addressing these comments has been key to improving our manuscript. Thank you very much once again.
We hereby humbly submit the responses to the different review comments raised, as shown below:
ACADEMIC EDITOR
Thank you for submitting your manuscript to PLOS Global Public Health. After careful consideration, we feel that it has merit but does not fully meet PLOS Global Public Health's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. a) Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.
Thank you very much for this very useful comment. We have read through the entire manuscript again, carefully looking at every reference following your guidance. We realized that some of the references had not been updated automatically by the referencing software, and hence we have updated them. A number of references, including 2, 11, 12, 13, 30, 31, 33, 34, 36 and 37, have been updated to reflect the exact articles that they reference, as shown in the manuscript with track changes. We are sorry for this oversight, and we greatly thank you once again for noticing it in the manuscript. Thank you very much.
b) Your manuscript is missing the following sections: Introduction. Please ensure these are present, and in the correct order, and that any references to subheadings in your main text are correct. An outline of the required sections can be consulted in our submission guidelines here: https://journals.plos.org/globalpublichealth/s/submission-guidelines#locparts-of-a-submission

d) In the online submission form, you indicated that "The codebook for the study has been availed. Any further datasets used and/or analysed during the current study are available from the corresponding author on reasonable request". All PLOS journals now require all data underlying the findings described in their manuscript to be freely available to other researchers, either 1. In a public repository, 2. Within the manuscript itself, or 3. Uploaded as supplementary information. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If your data cannot be made publicly available for ethical or legal reasons (e.g., public availability would compromise patient privacy), please explain your reasons by return email and your exemption request will be escalated to the editor for approval. Your exemption request will be handled independently and will not hold up the peer review process, but will need to be resolved should your manuscript be accepted for publication. One of the Editorial team will then be in touch if there are any issues.
Thank you very much for this. We have addressed this in the online submission.
REVIEWER #1 General Comment
HIV management is one aspect of the field of medicine that has faced a lot of challenges. Initial challenges revolved around acceptance of HIV status, stigma, lack of support systems, and availability of ARVs, especially in low- and middle-income countries. Emerging challenges are issues to do with adherence to medication and the management of HIV in general. The authors' findings on HIV patients' perceptions towards VL testing are a plus in the management of HIV. The ability of PWH to interpret and appreciate the role of LLV in HIV management is a good sign of the acceptability of VL testing by the patients in their management. The conclusions of the study identify areas that need improvement in the utilization of VL testing in the monitoring and management of HIV patients.
Major Points
a) To deal with sampling bias during participant selection, the authors should clarify if they include defaulters in their sample (pg 6).
Thank you very much for this comment. It also raises ethical concerns, which we have addressed in Lines 132 to 133 on page 6. Thank you for raising this comment. Furthermore, we humbly inform you that we used stratified purposive sampling (as shown in Line 127 on page 6) to enable us to select participants who were well placed to answer the study research questions in detail. Thank you very much.

b) One aspect of low-level viraemia is misinterpretation and misuse of the LLV results by the patients themselves or by other support systems around the patients. How did the authors capture this challenge? (pg 17-18)

Thanks for this comment; it is indeed an insightful challenge. We discussed this challenge further in the discussion section, from Lines 502 to 508 on page 21. Thank you very much.
c) Was culture a barrier to VL testing? (pg 14)

Thank you very much for this question. Of course, culture has always been a barrier, and several interventions like sensitization have been used to overcome it. In this study, we used an inductive approach, which involved drawing more from what the participants said, compared to what we already knew (a deductive approach). In the interviews, the participants talked about different myths, though some of them were not really related to culture as such. However, we greatly agree with you that culture is still a barrier in HIV care, which needs to be investigated thoroughly.

f) It would have been better if the authors included the healthcare givers in their study population. Generally, healthcare provision is a collaborative effort between the patients and the clinicians.
Minor Points
Thank you for noting this. The perceptions of health workers about LLV and VL testing would indeed be valuable to include here. However, the manuscript would become too large, and the study team agreed to address this later in a totally separate manuscript. Thank you very much.
REVIEWER #2
Nanyeenya et al. present a qualitative study based on the health belief model in which they assess the perceptions of HIV viral load testing and low-level viraemia in people living with HIV in Uganda. This study provides unique insights into patient perceptions and understanding of viral load testing and results interpretation. Such studies are a crucial piece needed for developing more effective interventions to improve the overall care for people with HIV in a manner that centers their needs and enhances their understanding of these interventions. The manuscript is very well written and clearly presented. There are some comments the authors may wish to consider to improve on their work.

a) In the discussion I would have liked to see the authors discuss how their results could be used to design interventions which can improve on how patients view low-level viraemia, and even propose plausible interventions that could be effective based on the data they have generated.
Thanks very much for raising this very critical comment. This has been addressed as shown in Lines 497 to 500, 529 to 530, and 532 to 534 in the discussion section. Once again, thank you for highlighting this comment.
b) I applaud the efforts made by the authors to methodically conduct detailed interviews adapted for language and cultural setting, which is so crucial to obtain information that accurately represents the views of the respondents. However, the sample size (n=32 detailed interviews) of the study is quite small, which makes one wonder whether such a small sample size is sufficient to fully capture the full spectrum of perceptions among people living with HIV. Could the authors comment on this choice of sample size and acknowledge this limitation in the manuscript?
Thank you very much for this comment. In order to make this study feasible, we estimated that a minimum of 32 in-depth interviews (IDIs) with PLHIV on ART would be conducted until information saturation was reached. This minimum sample size was estimated based on Saunders et al. (2018) and Guest et al. (2006).

c) The authors present their results very well. A summary figure or table of the key themes emerging from the detailed interviews, as detailed in the sub-headings in the results section, will break the monotony of lengthy text and provide a nice summary of the findings to accompany the detailed reporting of the results.
Thank you very much for this very insightful and critical comment. We have developed a summary figure as per your guidance, as shown by the caption on Line 188 on page 8. The figure has also been saved and uploaded as Fig 1, as required by the Journal guidelines.

d) Line 541: minor typo; please correct 'behaviours', currently spelled as 'bahaviors'.

Thank you very much for this. We have addressed it as shown in Line 546.
Once again, thank you very much for the comments. | 2023-05-12T05:07:15.261Z | 2023-05-10T00:00:00.000 | {
"year": 2023,
"sha1": "08b5d2b7322390d0f2379a3748c1401b3f937e9a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "08b5d2b7322390d0f2379a3748c1401b3f937e9a",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Collaborative development of predictive toxicology applications
OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way.
Introduction
In a study by the European Chemical Bureau (ECB), it was estimated that the new EU chemical legislation REACH would require 3.9 million additional test animals if no alternative methods were accepted [1]. The same study showed that it was possible to reduce the number of test animals significantly by utilizing existing experimental data in conjunction with (Quantitative) Structure Activity Relationship ((Q)SAR) models. Chronic and reproductive toxicity, in vivo mutagenicity and carcinogenicity are the endpoints that will require the largest number of test animals within REACH, because no alternative in vitro assays are available yet.
Recent developments allow a more accurate prediction of complex toxicological endpoints than a few years ago. This progress has been supported by (i) the development of improved (Q)SAR algorithms, (ii) the availability of larger and better curated public databases, (iii) progress in computational chemistry and biology, and (iv) the development of an array of in vitro assays probing targets, pathways and endpoints.
The routine application of these new generation models is, however, still rare, because: • Toxicity data has been collected in a variety of different databases; • These databases use different formats that are frequently not compatible with in silico programs; • Many toxicity databases lack important information for modelling (e.g. curated chemical structures; the ability to select and combine data from multiple sources); • It is hard to integrate confidential in-house data with public data for model building and validation; • Models have been published in a variety of different formats (ranging from simple regression-based equations to full-fledged computer applications); • There is no straightforward integration of predictions from various applications; • There is no commonly accepted framework for the validation of in silico predictions and many in silico tools provide limited support for reliable validation procedures; • The application, interpretation, and development of (Q)SAR models are still difficult for most toxicological experts. They require a considerable amount of statistical, cheminformatics and computer science expertise, and the procedures are labour-intensive and prone to human errors.
The EC-funded FP7 project "OpenTox" [2] aims to address these issues. The overall objective of OpenTox is to develop a framework that provides unified access to in vitro and in vivo toxicity data, in silico models, procedures supporting validation and additional information that helps with the interpretation of predictions. OpenTox is accessible at three levels: • A simplified user interface for toxicological experts that provides unified access to predictions, toxicological data, models and supporting information; • A modelling expert interface for the streamlined development and validation of new models; • Public OpenTox Application Programming Interfaces (APIs) for the development, integration and validation of new algorithms and models (a minimal client sketch for these APIs is given below).
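For illustration, a minimal client interaction with OpenTox-style REST services might look as follows. The service root is a placeholder, and the use of text/uri-list responses and a dataset_uri form parameter reflects the published OpenTox API conventions as we understand them; exact endpoints and parameters should be checked against the current API documentation.

```python
import requests

BASE = "https://opentox.example.org"   # placeholder service root

# Discover available models; OpenTox resources are addressed by plain URIs.
models = requests.get(f"{BASE}/model", headers={"Accept": "text/uri-list"})
model_uri = models.text.strip().splitlines()[0]

# Apply the model to a dataset; the service responds with the URI of a result
# dataset (or of an asynchronous task resource that can be polled).
prediction = requests.post(
    model_uri,
    data={"dataset_uri": f"{BASE}/dataset/42"},   # hypothetical dataset URI
    headers={"Accept": "text/uri-list"},
)
print("prediction available at:", prediction.text.strip())
```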
The core components of the OpenTox Framework are being developed or integrated with an open source licensing approach to optimize the dissemination and impact of the platform, to allow the inspection and review of algorithms, and to be open to potential contributions of value from the scientific community.
OpenTox Objectives
The overall long-term goal of OpenTox is the development of an interoperable, extensible predictive toxicology framework containing a collection of state-of-the art (Q)SAR, cheminformatics, bioinformatics, statistical and data mining tools, computational chemistry and biology algorithms and models, integratable in vitro and in vivo data resources, ontologies and user-friendly Graphical User Interfaces (GUIs). OpenTox supports toxicological experts without specialist in silico expertise as well as model and algorithm developers. It moves beyond existing attempts to create individual research resources and tools, by providing a flexible and extensible framework that integrates existing solutions and new developments.
OpenTox Design Principles
The design principles of interoperability, flexibility, transparency and extensibility are key ingredients of the OpenTox Framework design, which additionally guide its architecture and implementation.
Interoperability
Interoperability with respect to the OpenTox Framework refers to the principle that different OpenTox components or services may correctly exchange information with each other and subsequently make use of that information. Both syntactic interoperability for correct data exchange and semantic interoperability supporting the accurate communication of meaning and interpretation of data are supported principles for OpenTox resources. The principles are reflected design-wise in the use of open, standardised interfaces and ontologies. The principles are relevant in application development and deployment when a combination of distributed multiple services can provide value to a user in completing a use case satisfactorily.
Flexibility
As a significant variety of user scenarios, requirements and use cases in predictive toxicology exist, flexibility is a key principle incorporated into OpenTox. Through the use of a component-based approach and the incorporation of the interoperability principles, many different and customised applications can be assembled that are based on the underlying platform.
Transparency
To achieve the scientific objective of knowledge-based enquiry based on principles of reasoning, reproducibility, and reliability, OpenTox supports the principle of transparency in its design. Computational models should be available for scrutiny by other scientists in as complete a manner and detail as possible. Evaluators and regulators should be able to both understand the details and accurately reproduce the results of predictive toxicity models, and be able to reliably form judgements on their validity as evidence. The principle also supports achievement of the OECD validation principles such as an unambiguous algorithm and a mechanistic interpretation, if possible. Use of Open Source, Open Interfaces and Standards within OpenTox support implementation of the transparency principle applied to in silico-based predictive toxicology applications and their reported results.
Extensibility
The field of predictive toxicology is rapidly developing and broadening in many areas including the use of biomarkers, systems biology, epigenetics, toxicokinetics, in vitro assays, stem cell technology, and computational chemistry and biology. Hence, OpenTox needs to be extensible to a broad range of future predictive toxicology applications. In such applications, contributing and diverse experimental data and models need to be combined as evidence supporting integrated testing, safety and risk assessment and regulatory reporting as stipulated under REACH. In the initial design of the OpenTox Framework we have first attempted to create a general solution for (Q)SAR model development and application. We will also address and strengthen its extensibility in subsequent extensions of the OpenTox APIs, guided by suitable use cases, to additional areas of scientific enquiry in the predictive toxicology field as part of its evolutionary development.
Toxicity Data
Toxicity data has been traditionally dispersed over a variety of databases where only a small fraction was immediately suitable for in silico modelling and structure-based searches because they contained chemical structures and defined toxicological endpoints. Recent efforts (e.g. from Istituto Superiore di Sanità (ISS), Fraunhofer Institute for Toxicology & Experimental Medicine (FhG ITEM), US Environmental Protection Agency (US EPA), US Food & Drug Administration (US FDA)) have improved the situation, because they provide curated data that has been compiled from various sources (public testing programs, general literature, non-confidential in-house data). Public repositories of bioassay data like PubChem [3] provide additional information that can be used for toxicological risk assessment.
The aggregation of data from different sources is, however, still far from trivial and poses some interesting toxicological, computer science, technological and legal questions, e.g.:
• Reliable identification of database entries that point to identical primary experiments;
• Reliable mapping from various non-unique chemical identifiers (e.g. names, CAS numbers) to chemical structures;
• Development of ontologies that describe the relationships between the various toxicological effects and mechanisms and related chemical and biological entities;
• Utilization of high content and high throughput screening data for toxicity predictions;
• Integration of databases with different access policies (and legal status);
• Structure anonymisation to share toxicity data from sensitive in-house datasets (if possible [4]);
• Systematic data quality assessment.
As the size of toxicity databases prohibits a manual inspection of all data, it is necessary to apply advanced data- and text-mining techniques to solve most of these tasks automatically and to identify instances that need human inspection.
Some of the data integration issues have already been addressed by other computational toxicology and chemistry initiatives, e.g. the ECB QSAR Model Reporting Format [5], DSSTox [6], ToxML [7], CDK [8], and InChI [9]. However, although these approaches solve some technical aspects of data integration, none of them provides an architecture for the seamless merging and use of toxicity data from various sources. An OpenTox goal is to provide unified access to existing tools for data integration, develop new tools for this purpose, provide sound validation techniques, and help drive efforts to develop standards in this area.
Ontologies
The definition of ontologies and controlled vocabularies in OpenTox is required to standardize and organize high-level concepts, chemical information and toxicological data. Distributed OpenTox services exchanging communications need to have unambiguous interpretations of the meaning of any terminology and data that they exchange with each other.
Prioritisation of OpenTox toxicological endpoints focuses on those endpoints recognized internationally as critical for the testing of chemicals. Primary sources of information include the OECD guidelines for testing of chemicals [10,11] and the toxicological endpoints relevant to the assessment of chemicals in the EU [12].
A further more detailed definition of Ontology in this context is provided in Additional File 1.
Approach to Predictive Toxicology (Q)SARs
Initial OpenTox work has focused on creating a Framework for the support of (Q)SAR-based data driven approaches.
Toxicity (Q)SARs
Because of their relevance for the reduction of animal testing, we are focusing initially on the reproductive toxicity, chronic toxicity, mutagenicity and carcinogenicity endpoints. The OpenTox Framework, however, works independently of the underlying data, which makes it useful for any other toxicologically relevant endpoints as well.
The main problem for toxicological modellers is that they have to deal with endpoints with very complex and frequently unknown biological mechanisms, and with datasets of very diverse structures. In many cases this currently prohibits a systems biology approach as well as the application of simple regression-based techniques. For this reason, advanced data mining and cheminformatics techniques are gaining increasing acceptance within the toxicological community. Modern techniques like lazar [13], fminer [14] and iSAR [15] allow the automated determination of relevant chemical descriptors and the generation of prediction models that are understandable and interpretable by non-computer scientists.
Many (Q)SAR models for the prediction of mutagenic and carcinogenic properties have been developed in recent years. The prediction of bacterial mutagenicity is relatively successful (typical accuracies of 80%), but success with carcinogenicity predictions has been much more limited, and very few models are available for in vivo mutagenicity. With recent developments like lazar, it is however possible to predict rodent carcinogenicity with accuracies similar to bacterial mutagenicity and to achieve a reliable estimation of prediction confidences. It is likely that further improvements can be obtained with better algorithms for chemical and biological feature generation, feature selection and model generation, and the novel combination of existing techniques.
Aggregation of Predictions from various Models
It is known from machine learning that aggregating different prediction models leads to increased accuracies [16]. The aggregation of predictions from different in silico programs is however still a cumbersome task that requires a lot of human intervention and ad hoc solutions. A new plugin architecture is therefore needed that allows an easy integration of models and programs from different origins, independently of their programming language and legal status. Similar plugin facilities are needed for algorithms that perform a dedicated task during model generation (e.g. feature generation, feature selection, classification, regression). With such a modularized approach it will be easier to experiment with new algorithms and new combinations of algorithms, and to compare the results with benchmarked methods.
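As a rough illustration of the aggregation idea, the following Python sketch combines class predictions from several hypothetical models by weighted majority vote; the function name, the example labels, and the accuracy-based weights are illustrative assumptions, not part of any OpenTox interface.

```python
from collections import Counter

def aggregate_predictions(predictions, weights=None):
    """Combine class predictions from several models by (weighted) majority vote.

    predictions: list of class labels, one per model, e.g. ["active", "inactive", "active"]
    weights:     optional list of per-model weights (e.g. validation accuracies)
    """
    weights = weights or [1.0] * len(predictions)
    votes = Counter()
    for label, weight in zip(predictions, weights):
        votes[label] += weight
    label, score = votes.most_common(1)[0]
    confidence = score / sum(weights)  # fraction of weighted votes for the winner
    return label, confidence

# Example: three hypothetical mutagenicity models, weighted by their validation accuracy
label, conf = aggregate_predictions(["active", "inactive", "active"], weights=[0.8, 0.7, 0.9])
print(label, round(conf, 2))  # active 0.71
```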
Validation of Models
An objective validation framework is crucial for the acceptance and the development of in silico models. The risk assessor needs reliable validation results to assess the quality of predictions; model developers need this information to (i) avoid overfitting of models, (ii) compare new models with benchmarked techniques, and (iii) get ideas for the improvement of algorithms (e.g. from the inspection of misclassified instances). Validation results can also be useful for data providers, as misclassifications frequently point to flawed database entries. OpenTox is actively supporting the OECD Principles for (Q)SAR Validation so as to provide easy-to-use validation tools for algorithm and model developers.
Care must be taken that no information from test sets leaks into the training set, either by performing certain steps (frequently supervised feature generation or selection) on the complete dataset or by "optimizing" parameters until the resulting model fits a particular test set by chance. For this reason OpenTox provides standardized validation routines within the framework that can be applied to all prediction algorithms that are plugged into the system. These kinds of techniques are standard in the field of machine learning and data mining, but are not yet consistently employed within the field of (Q)SAR modelling.
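The leakage problem can be illustrated with a short sketch using scikit-learn (an assumed tool for illustration, not a component named by OpenTox): nesting the supervised feature selection inside a cross-validation pipeline ensures it is re-fitted on each training fold only, so no test-fold information influences the selected features.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 500))    # stand-in descriptor matrix (100 compounds, 500 features)
y = rng.integers(0, 2, size=100)   # stand-in binary endpoint (e.g. mutagenic / non-mutagenic)

# Wrong: selecting features on the full dataset lets test-set information leak
# into training and inflates the cross-validated accuracy. Right: nest the
# supervised selection inside the pipeline so it is re-fitted per training fold.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("classify", RandomForestClassifier(n_estimators=100, random_state=0)),
])
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())  # honest estimate; with random labels it should hover near 0.5
```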
Determination of Applicability Domains
For practical purposes it is important to know the proportion of compounds that fall within the Applicability Domain (AD) of a certain model. For this purpose OpenTox will provide automated facilities to identify the proportion of reliable predictions for the "chemical universe", e.g. structures of the database [17], particular subsets (e.g. certain classes of pharmaceuticals, food additives, REACH submission compounds) and in-house databases. This feature will also help with a more reliable estimation of the potential to reduce animal experiments.
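A minimal sketch of one common distance-based AD heuristic is shown below; OpenTox supports several AD algorithms via its API, and this particular centroid rule with a percentile threshold is an illustrative assumption only.

```python
import numpy as np

def fit_applicability_domain(X_train, percentile=95):
    """Fit a simple centroid-distance applicability domain on training descriptors."""
    centroid = X_train.mean(axis=0)
    scale = X_train.std(axis=0) + 1e-12            # avoid division by zero
    dists = np.linalg.norm((X_train - centroid) / scale, axis=1)
    threshold = np.percentile(dists, percentile)   # 95% of training compounds fall inside
    return centroid, scale, threshold

def in_domain(X_query, centroid, scale, threshold):
    dists = np.linalg.norm((X_query - centroid) / scale, axis=1)
    return dists <= threshold

# Fraction of an external screening library that falls inside the model's domain
centroid, scale, threshold = fit_applicability_domain(np.random.rand(200, 10))
coverage = in_domain(np.random.rand(5000, 10), centroid, scale, threshold).mean()
print(f"{coverage:.1%} of the screening set is inside the applicability domain")
```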
Retrieval of supporting Information
Linking (Q)SAR predictions to external data sources has received little attention in the (Q)SAR community. It is however essential for the critical evaluation of predictions and for the understanding of toxicological mechanisms. Again, the problem is less trivial than it seems at first glance and requires techniques similar to those for database aggregation. The development of new text mining techniques is crucial for the retrieval of factual information from publications.
Interfaces
Model developers will benefit from a set of APIs that allow easy integration, testing and validation of new algorithms. New techniques can easily be tested with relevant real-world toxicity data and compared against the performance of benchmark algorithms.
Toxicity databases
OpenTox database work aims to integrate and provide high-quality toxicity data for predictive toxicology model development and validation. OpenTox supports the creation of dictionaries and ontologies that describe the relations between chemical and toxicological data and experiments, as well as tools for the retrieval and quality assurance of toxicological information. This includes tools for chemical syntax checking, structure consolidation, and the identification of inconsistent data that requires manual inspection.
(Q)SAR algorithms
OpenTox provides access to (Q)SAR algorithms that derive data-based predictions and models. Predictions are visualized by an application GUI or serve as input for validation routines. The open architecture is designed to allow easy integration of external programs (open source and closed source) into any specific application.
OpenTox is starting with the integration of cheminformatics, statistical and data mining tools, including functionality from other open source projects (e.g. R, WEKA [18], Taverna [19], CDK, OpenBabel [20]). A flexible plug-in architecture for applying, testing and validating algorithms interactively and systematically is used. OpenTox algorithms offer support for common tasks, such as feature generation and selection, aggregation, and visualization. The open source plug-in architecture should encourage researchers from other areas (e.g., data mining or machine learning) to integrate their methods in a safe testing environment with relevant datasets. OpenTox currently implements:
1. Algorithms for the generation and selection of features for the representation of chemicals (structure-based features, chemical and biological properties);
2. Classification and regression algorithms for the creation of (Q)SAR models;
3. Services for the combination of predictions from multiple algorithms and endpoints; and
4. General purpose algorithms (e.g. for the determination of chemical similarities, estimation of applicability domains, categorization, read-across and sub-structure-based database queries).
User Requirements
User requirements indicate that we will need to provide great flexibility within the OpenTox Framework to meet individual needs in specific applications.
A summary of user requirements for several different kinds of OpenTox user are described in Additional File 2.
Use Cases
OpenTox pursues a use case driven development and testing approach. Use case development involves input from both users and developers, an internal and external peer review process, and a testing approach based on user evaluation of the applications developed for the use case. Once use cases are reviewed and accepted, they are published publicly on the OpenTox website.
OpenTox use cases are classified hierarchically into three classes:
Class 1: Collaboration/Project Level (e.g., a 3-month development project);
Class 2: Application Level (e.g., carry out a REACH-compliant risk assessment for a group of chemicals);
Class 3: Task Level (e.g., given an endpoint, and a dataset for a chemical structure category for that endpoint, develop and store a predictive model resource for a chemical space).
OpenTox Use Cases are documented by a standardised OpenTox Use Case Template describing the task, inputs, outputs, exceptions, triggers, and process resources required for the overall process and for each activity step in the process. Table 1 provides an example overall process template for predicting an endpoint for a chemical structure, on which the ToxPredict application described later is based. The user is typically a non-computational expert but knows the structure of a compound or has a chemical id or electronic structure (e.g. MOL) file. The user enters a structure in their web browser by one of three optional methods (file upload, paste, or sketch), selects the specific endpoints of interest, and starts the calculation. When the calculation is finished, a report is returned.
The workflow is described in Figure 1 as the following series of steps (a code sketch of the clean-up steps follows the list):
1) The OpenTox data infrastructure is searched for the chemical id or structure;
2) The structure is checked for chemical correctness and the number of molecules;
3) Clean-up: if 2D, the structure is converted to 3D, valences are saturated with hydrogen atoms, and the structure is partially optimized with molecular mechanics;
4) A check on the chemical correctness is made (bond distances, charges, valences, etc.);
5) An image of the molecule is displayed, with the results of the structure check and clean-up. If serious problems with the structure are found, the user is asked if they want to continue, or, if appropriate, the process is terminated automatically with an error message;
6) If experimental results for the molecule are found in the database, the following is printed: "Experimental data for this structure is available in the OpenTox database and is summarized here:";
7) All necessary descriptors are calculated, results of regression obtained, and chemical similarity to calibration molecules evaluated;
8) The prediction report is provided, including the details of the basis for the model prediction and statistical reporting on the reliability of the prediction.
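Steps 2-4 of this workflow can be approximated with a few lines of cheminformatics code. The sketch below uses RDKit as an assumed stand-in (the OpenTox platform itself integrates toolkits such as CDK and OpenBabel); the function and test structure are illustrative.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

def prepare_structure(smiles):
    """Parse, check, and clean up a structure roughly as in workflow steps 2-4."""
    mol = Chem.MolFromSmiles(smiles)            # returns None on chemically invalid input
    if mol is None:
        raise ValueError("structure failed the chemical correctness check")
    if len(Chem.GetMolFrags(mol)) > 1:
        raise ValueError("input contains more than one molecule")
    mol = Chem.AddHs(mol)                       # saturate valences with hydrogen atoms
    AllChem.EmbedMolecule(mol, randomSeed=42)   # 2D -> 3D conversion
    AllChem.MMFFOptimizeMolecule(mol)           # partial optimization with molecular mechanics
    return mol

mol = prepare_structure("c1ccccc1O")            # phenol as a toy example
print(mol.GetNumAtoms())
```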
The OpenTox Framework Design
OpenTox is a platform-independent collection of components that interact via well-defined interfaces. The preferred form of communication between components is through web services. A set of minimum required functionalities for OpenTox components of various categories (prediction, descriptor calculation, data access, validation, report generation) are available on the OpenTox website [21].
OpenTox tries to follow the best practices of open source project management for core framework components. This means that source code, technical discussions and documents are open to the general public and interested parties can participate in development if they have registered for access to the developers' area of the website [22].
OpenTox is committed to the support and further development of Open Standards and Ontologies. Appendix 1 summarises some of the most important standards of relevance to the Framework.
Architecture
OpenTox is a framework for the integration of algorithms for predicting chemical toxicity and provides:
• components for specialized tasks (e.g. database lookups, descriptor calculation, classification, regression, report generation) that communicate through well-defined language-independent interfaces;
• applications that implement the capabilities of OpenTox components for specific Use Cases.

Table 1 Overall Use Case process template for predicting an endpoint for a chemical structure
Activity Name: Overall Use Case - Given a chemical structure, predict endpoints.
Trigger Event: User needs a toxicity prediction for one compound and initiates a service request.
Knowledge Needed (Source): Assume the user has at least basic toxicity and chemistry knowledge but is not an expert QSAR user.
Resources Needed (including services): Computer interface for user entry of the structure, selection of endpoints and return of results. OpenTox Data Resources, Prediction Model Building and Report Generation.
Exception Events: Incorrect chemical structure. Endpoint unavailable. Unable to predict endpoint.
Knowledge Delivered (destination): In case of exception events, direct the user to further consulting and advice services.
Output Information Delivered (destination): Report on endpoint predictions.
The OpenTox Framework supports building multiple applications, as well as providing components for third-party applications. The Framework guarantees the portability of components by enforcing language-independent interfaces. Implementation of an integration component in a specific language/platform automatically ports the entire OpenTox Framework to that language/platform.
The OpenTox Framework is composed of:
• Components - every component encapsulates a set of functionalities and exposes them via well-defined language-independent interfaces (protocols);
• A Data Infrastructure adhering to interoperability principles and standards;
• Ontologies and associated services;
• Documentation and guidance for application development and use.
An OpenTox-based application implements a specific Use Case, with the appropriate user interfaces, and adhering to guidance on APIs and standards.
The interactions between components are determined by their intended use and can differ across different Use Cases, which consist of a series of steps, each applying component functionalities to input data. The interaction between components is itself implemented as a component. Interaction components such as workflows (e.g., Taverna) combine multiple services to offer the following functionalities:
• load the series of steps corresponding to the specific Use Case (from a configuration file on a file system or on a network);
• take care of loading the necessary components;
• execute the steps.
OpenTox Application Programming Interfaces
To assure reliable interoperability between the various OpenTox web services, a well-defined API is required. The OpenTox APIs specify how each OpenTox web service can be used and what the returned resources look like. They further specify the HTTP status codes returned in case of successful operations as well as error codes.
OpenTox interfaces have the minimum required functionalities shown in Appendix 2. The initial specifications for the OpenTox APIs have been defined and are available on the OpenTox website [23]. The initial objects already specified are Endpoint, Structure, Structure Identifiers, Feature Definition, Feature, Feature Service, Reference, Algorithm, Algorithm Type, Model, Dataset, Validation Result, Applicability Domain, Feature Selection, and Reporting. All current OpenTox web services adhere to the REpresentational State Transfer (REST) web service architecture [24] for sharing data and functionality among loosely-coupled, distributed heterogeneous systems.
Further information on interfaces and the REST approach is included in Additional File 3.
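In practice, a REST interaction with an OpenTox service reduces to plain HTTP calls with content negotiation. The following Python sketch is a minimal illustration; the host name and resource paths are placeholders, and the exact parameter names are defined in the published API documents [23].

```python
import requests

BASE = "http://example-opentox-service.org"   # placeholder host; real services are listed on opentox.org

# Retrieve a dataset; content negotiation selects the representation (RDF/XML is the master format)
response = requests.get(f"{BASE}/dataset/1", headers={"Accept": "application/rdf+xml"})
response.raise_for_status()
print(response.headers["Content-Type"])

# Apply an algorithm to the dataset by POSTing the dataset URI as a parameter;
# the service replies with the URI of the created resource (or of a task, if asynchronous)
response = requests.post(f"{BASE}/algorithm/descriptor-calc",
                         data={"dataset_uri": f"{BASE}/dataset/1"},
                         headers={"Accept": "text/uri-list"})
print(response.text.strip())   # e.g. the URI of the result dataset
```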
The choice of employing web services allows the complete framework to operate in different locations, independent of operating systems and underlying implementation details. Figure 2 shows the OpenTox resources modelled in the OpenTox Ontology. These resources are provided by the various OpenTox web services. The links between the components reflects interaction between the respective web services.
The model web service provides access to (prediction) models. Models are created via the algorithm web service, which supports different types of algorithms (e.g. supervised learning, feature selection, descriptor calculation, and data cleanup). Building a model will normally require various parameters, one or several datasets, as well as a set of features.
Datasets are stored in the dataset web service. A dataset contains data entries, which are chemical compounds, as well as their feature values. Features are defined as objects representing a property of a compound, including descriptors and calculated features, endpoints, and predictions. Different representations for chemical compounds can be accessed from the compound web service. The feature web service provides the available features (e.g. structural features, chemical descriptors, endpoints).
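The relations among datasets, data entries, compounds and features described above can be pictured with a few illustrative Python dataclasses; this is a mental model only, not the normative RDF schema, and all URIs shown are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    uri: str            # e.g. ".../feature/42"
    name: str           # e.g. "LogP" or an endpoint such as "Ames mutagenicity"
    has_source: str     # URI of the algorithm, model, or dataset the values came from

@dataclass
class DataEntry:
    compound_uri: str                            # representation served by the compound service
    values: dict = field(default_factory=dict)   # feature URI -> value

@dataclass
class Dataset:
    uri: str
    entries: list = field(default_factory=list)

logp = Feature(uri="http://host/feature/42", name="LogP", has_source="http://host/algorithm/xlogp")
entry = DataEntry(compound_uri="http://host/compound/7", values={logp.uri: 1.46})
dataset = Dataset(uri="http://host/dataset/1", entries=[entry])
```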
The validation web service evaluates and compares the performance of prediction models. Simple training/test set validation is supported as well as cross-validation. The validation result contains statistical quality figures and reports (available in HTML or PDF formats) that visualize the validation results. The task web service supports long-running, asynchronous processes. The ontology web service provides meta-information from relevant ontologies (which can be accessed using SPARQL queries [25]), as well as lists of available services.
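As a hedged example of querying the ontology service, the sketch below uses the SPARQLWrapper Python library to list registered models; the endpoint URL is a placeholder, and the property names follow the pattern of the OpenTox OWL ontology rather than quoting it verbatim.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example-opentox-service.org/ontology")  # placeholder endpoint
sparql.setQuery("""
    PREFIX ot: <http://www.opentox.org/api/1.1#>
    SELECT ?model ?algorithm WHERE {
        ?model a ot:Model ;
               ot:algorithm ?algorithm .
    }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["model"]["value"], "built by", row["algorithm"]["value"])
```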
Approaches to Authentication and Authorization will be specified in the next version 1.2 of the API.
All OpenTox resources have representations providing information about the type of resource and what the service accepts as input, such as tuning parameters. Most algorithm and model resources in OpenTox are available in multiple representations. The Resource Description Framework (RDF) representation [26], and in particular its XML-formatted variant, was chosen as the master data exchange format for the following reasons:
• RDF is a W3C recommendation: RDF-related representations such as rdf/xml and rdf/turtle are W3C recommendations, so they constitute a standard model for data exchange;
• RDF is part of the Semantic Web policy: RDF as a representation for a self-contained description of web resources contributes to the evolution of the Semantic Web, a web where all machines can "understand" each other;
• RDF is designed to be machine-readable.
Some services support additional representations like JavaScript Object Notation (JSON) [27], YAML [28] or Application/X-Turtle [29]. Some prediction model services provide Predictive Model Markup Language (PMML) representations [30] to improve their portability, since many machine learning applications like Weka provide support for PMML. The second version of the API, OpenTox API version 1.1, was completed and published on the OpenTox website in November 2009. Version 1.2 is scheduled for completion in September 2010 and is open to community-based input and comments on the OpenTox API pages, which contain more detailed information on the interfaces [23].
Ontologies and Controlled Vocabulary
The definition of ontology and controlled vocabulary is extremely important to the construction of the OpenTox data infrastructure. It contributes to the necessary standardization and rational organization of the data, thus facilitating both vertical (e.g., within one toxicological endpoint) and horizontal (e.g., across different endpoints) retrievals. The definition consists of two main steps: first, the selection of the toxicological endpoints to be included; second, the definition of the type and extent of information for each endpoint, and of their internal relationships and hierarchies.
Schema
Two publicly available schemas for describing toxicology data are the OECD harmonised templates (OECD-HTs) [31] and the ToxML (Toxicology XML standard) schema [7]. It appears that the OECD-HTs have the advantage of being closer to the schemas established by the regulators for industry to submit their data. However, this schema is quite generic and does not lend itself easily to the needs of the OpenTox project in terms of scientific databases and scientific computing. On the other hand, the ToxML schema has many features necessary for accommodating large amounts of data at different levels of complexity, and for creating hierarchies within ontology constructs.
REACH endpoints and OECD Guidelines
The OpenTox data infrastructure prioritises support of the toxicological endpoints for which data are required under the REACH regulation. In current toxicological testing, these endpoints are addressed by both in vitro and animal experiments carried out according to OECD guidelines.
The OECD guidelines for testing of chemicals [11] are published on the Internet. While there is no official list of OECD endpoints (test guidelines are developed according to the needs of member countries) and no official OECD approach to toxicity testing, interesting background information on criteria for toxicity testing has been developed as the SIDS (Screening Information Data Set) [12,33,34].
Data sources for the OpenTox data infrastructure
The main source of data for the public OpenTox data infrastructure is the public domain, where data is spread over many and varied sources and databases. These can be categorized into:
- Textual databases (e.g., IARC [35], NTP [36]);
- Machine-readable files (e.g., .sdf) that include both structures and data, and that can be immediately used by modellers for (Q)SAR analyses in the OpenTox platform (e.g., DSSTox [6], ISSCAN [37], AMBIT [38], RepDose [39]);
- Large and quite complex databases on the Internet (e.g., PubChem [3], ACToR [40]).
The above differences in the types of data sources are entwined with differences in the quality of data (some databases may contain contradictory results, with no critical selection), and with changes over time (updates). Because of the varying quality of the various data sources, higher priority is given to databases subject to curation and quality evaluation. Databases being integrated in the first phase of OpenTox development include ISSCAN, DSSTox, CPDBAS, DBPCAN, EPAFHM, KIERBL, IRISTR, FDAMDD, ECETOC skin irritation, LLNA skin sensitisation and the Bioconcentration Factor (BCF) Gold Standard Database [38,41]. Enabling access arrangements to clinical data such as that from the FDA, data from the US EPA's ToxCast [42] program, and commercial sources are also current OpenTox activities.
OpenTox Controlled Vocabulary and Hierarchy
The OpenTox data infrastructure on toxicological data is used to support the development of (Q)SAR models within the OpenTox platform. Thus, its design takes into account the requirements of (Q)SAR modelling. A wide spectrum of (Q)SAR approaches, as applied to toxicity, exists today, ranging from coarse-grained to fine-tuned ones. Broad classes are [43]:
- Structural alerts, which are substructures and reactive groups linked to the induction of chemical toxicity (e.g., carcinogenicity). They are used for preliminary hazard characterization, are quite popular with regulators and industry, and most often are based on, and provide to the users, mechanistic information;
- QSARs for noncongeneric sets of chemicals (e.g., lazar, PASS [44]), which generate probabilities of being active/inactive (and to what extent) for compounds with very different structures;
- QSARs for congeneric sets of chemicals (e.g., the Hansch approach), which use mechanistically based descriptors and describe how relatively small changes in structure can provoke variations in activity. Series of very similar (highly congeneric) chemicals are usually developed by industry.
Despite their differences, all the various (Q)SAR modelling approaches share the need for highly structured information as a starting point. This includes the selection of ontologies, with controlled vocabularies and hierarchies.
We believe that such ontology work should be part of a public global community resource, subject to review and curation. We have created OpenToxipedia as a collaborative resource for the entry and editing of toxicology terms, supported by a Semantic Media Wiki [45]. An OpenTox Ontology Working Group is dedicated to the development and incorporation of ontologies which are relevant to OpenTox Use Cases; collaborative work on projects is supported by a Collaborative Protégé Editor. The approach is also to work with other groups with existing ontology developments so as to maximise reuse and interoperability between public ontologies.
The OECD-HT and ToxML schema and data resource mapping experiments for the OpenTox context are described in Additional File 4.
Based on our evaluation, we decided to adopt ToxML as the schema for data management and integration within OpenTox, and to support conversion and export to the OECD-HTs for reporting purposes.
Algorithms
The first tasks related to algorithms in OpenTox were to document, evaluate and discuss available and potentially useful algorithms. Ongoing scientific efforts in various complementary fields have produced a large number of algorithms that are potentially useful for (Q)SAR and related tasks. To make the selection for inclusion in the initial OpenTox Framework development more objective, and to meet the specific user requirements and long-term goals of OpenTox, it was crucial to establish a set of selection criteria.
Algorithm Templates
To make a reasonable comparison of the available (Q)SAR algorithms possible, they were grouped into three categories: (i) descriptor calculation algorithms, (ii) classification and regression algorithms, and (iii) feature selection algorithms (two additional categories for clustering and consensus modelling are currently being added). For each algorithm, a short text description and a uniform (for each of the three categories) table was generated to facilitate comparison with respect to the selection criteria. The text description gives a brief overview of the algorithm's background, its capabilities, dependencies and technical features. The uniform tables have three logical parts. The first enables a black-box view of the algorithm and has the same fields for every algorithm category: the name, the input and output (semantically), the input and output format, user-specified parameters, and reporting information. The second logical part varies across the three algorithm categories and describes intrinsic properties of the algorithms, comprising fields for the algorithm's background and its performance. The descriptor calculation algorithms have a special field for the type of descriptor that is generated. The classification and regression algorithms have additional fields for the applicability domain and the confidence in the prediction, the bias, the type of learning (lazy or eager learning), and the interpretability of the generated model. The feature selection algorithms have special fields for the type of feature selection (class-blind or class-sensitive), for the distinction of optimal, greedy or randomized methods, and for the distinction of filter and wrapper approaches. The third part of the description table is again identical for the different algorithm categories: it gives information about the algorithm's availability within OpenTox, the license and dependencies, the convenience and priority of integration, and the authors of the algorithm and of the description, plus fields for a contact address (email) and for comments. Algorithm descriptions according to the template format are located on the OpenTox website [46].
The fields of the OpenTox description table for the Algorithm Template are described in Additional File 5.
The initial implemented OpenTox algorithms are described in Additional File 6.
Algorithm Ontology
A graphical overview of the current OpenTox Algorithm ontology is shown in Figure 3.
A formal OWL [47] representation of the algorithm ontology is available on the OpenTox website [48]. The plan is to extend this ontology in the future to a full description of every algorithm, including references, parameters and default values. This will be achieved by adopting the Blue Obelisk ontology [49] and is currently work-in-progress. The RDF representation of an Algorithm contains metadata described by the Dublin Core Specifications [50] for modelling metadata (DC Namespace) and the OpenTox namespace. The establishment of an ontological base for the services facilitates the extension of the services and the introduction of new algorithms and new algorithm classes.
Validation
OpenTox provides unified and objective validation routines for model and algorithm developers and for external (Q)SAR programs. It implements state-of-the-art procedures for validation with artificial test sets (e.g. n-fold cross-validation, leave-one-out, simple training/test set splits) and external test sets. These validation techniques are available for all (Q)SAR models (OpenTox and external programs) that are plugged into the Framework. This will help to compare algorithms and (Q)SAR models objectively and to speed up the development cycle.
OECD Guidelines for (Q)SAR Validation
The OECD Guidelines for (Q)SAR Validation [10] are addressed as follows:

PRINCIPLE 1: "DEFINED ENDPOINT" OpenTox addresses this principle by providing a unified source of well-defined and documented toxicity data. (Q)SAR model quality crucially depends on the clarity of the endpoints and experimental protocols used and on the ability to communicate this information in an unambiguous way, both in model development and model application. Current practice usually includes a textual description of the materials and methods used for acquiring experimental data as well as literature references, while the model description is a separate entity. The challenge for the distributed web services framework was to provide an automatic and unique way of describing and linking the endpoint information in a formal way, able to be processed automatically by the software, with minimal human interaction. This is currently solved by making use of a simple ontology of endpoints. We have defined an ontology based on the OWL (Web Ontology Language) [47] for toxicological endpoints which is in line with current ECHA REACH guidance [51]. Using this ontology, each attribute in a toxicological dataset can be associated with an entry in the ontology, therefore allowing a unique mapping between endpoints in various and heterogeneous datasets. This ontology possesses 5 subclasses: ecotoxic effects, environmental fate parameters, human health effects, physico-chemical effects, and toxicokinetics. Each of these subclasses has one or two further layers of subclasses.

PRINCIPLE 2: "AN UNAMBIGUOUS ALGORITHM" OpenTox provides unified access to documented models and algorithms as well as to the source code of their implementation. Currently OpenTox is deploying Algorithm Template descriptions and an algorithm type ontology which allow a clear definition of what type of algorithm(s) is used to construct a model.

PRINCIPLE 3: "DEFINED APPLICABILITY DOMAIN" OpenTox integrates tools for the determination of applicability domains (ADs) and for the consideration of ADs during the validation of (Q)SAR models. Evaluation of ADs is supported by an OpenTox algorithm API covering both situations where the AD calculation is included as part of the model-building application and those where it is carried out separately [52]. A specific AD algorithm is applied to a dataset, and the result is an AD model. This model can then be used to reason about the applicability of a model when applied to a new compound query.

PRINCIPLE 4: "APPROPRIATE MEASURES OF GOODNESS-OF-FIT, ROBUSTNESS AND PREDICTIVITY" OpenTox provides scientifically sound validation routines for the determination of these measures. Within the validation part of the prototype framework, we have concentrated so far on including validation and cross-validation objects. These include a set of measures for evaluating the quality of models generated by algorithms on the datasets, as summarised in Table 2.

PRINCIPLE 5: "A MECHANISTIC INTERPRETATION, IF POSSIBLE" As mechanistic interpretation often relies on human knowledge, this usually cannot be done automatically. However, the current API foresees generating skeletons for reporting using the validation results created by extensive testing during model construction, allowing subsequent user-entered explanations about mechanisms. Other potential future extensions of OpenTox services could include resources providing insight on mechanisms, e.g. from pathways and systems biology models, selection and inclusion of in vitro assays relevant to the mechanism in the model, or from data mining of human adverse events data. QMRF reporting is being facilitated by the current integration of the existing QMRF editor [53] into OpenTox, allowing end-users to annotate models with the information required by the QMRF format.
OpenTox Approach to Validation
To guarantee a fair comparison to other algorithms, the following principles are followed (a sketch of the stored-seed idea follows this list):
• Separation of validation as an independent service from algorithm and model building services;
• Ability to reproduce the computational experiment (even for non-deterministic models, e.g. by storing initial random values/random seeds);
• Retrieval of the exact same training and test data that was used, so that all algorithms have to work with the same data (storing the random seed for cross-validation);
• Use of an external validation comparison and test set that performs the same operations for all algorithms (and prevents unintended cheating).
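Persisting the seed together with the validation record makes the exact training/test partition reproducible later, as the following minimal Python sketch illustrates; the helper function and file name are illustrative assumptions, not OpenTox components.

```python
import json
import numpy as np

def split_dataset(n_compounds, test_fraction=0.2, seed=None):
    """Split compound indices reproducibly; the seed is stored with the validation result."""
    seed = seed if seed is not None else np.random.SeedSequence().entropy
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_compounds)
    n_test = int(n_compounds * test_fraction)
    return {"seed": int(seed), "test": order[:n_test].tolist(), "train": order[n_test:].tolist()}

result = split_dataset(100)
json.dump({"seed": result["seed"]}, open("validation_record.json", "w"))  # persist with the validation

# Later, any service can rebuild exactly the same split from the stored seed
replay = split_dataset(100, seed=result["seed"])
assert replay["test"] == result["test"]
```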
Validation testing results are stored for subsequent retrieval, because this allows information about the performance of various algorithms/models (on particular datasets) to be obtained without repeating time-consuming experiments. This is especially useful when developing new algorithms or new versions of algorithms, allowing a quick comparison to other methods.
Three example Validation Use Cases are described in Additional File 7.
Validation Interfaces and Services
A Validation API is included in the OpenTox APIs, ensuring the seamless interaction between all OpenTox components with regards to validation needs. Each validation resource, for example, contains information about the dataset and the model, so the underlying procedures can be invoked.
The REST service implementation for validation is described in Additional File 8.
Further detailed information about the validation API, including the approach for cross-validation, can be found at http://www.opentox.org/dev/apis/api-1.1/Validation.
Validation Application Example: Building and Validating a Model
The application example of building and validating a model is executed using the Validation web service prototype [54] (developed at Albert Ludwigs University Freiburg (ALU-FR)) along with the lazar and fminer algorithms [13,14] (provided by In Silico Toxicology (IST)). The application is compliant with the OpenTox API and based on interoperability between two OpenTox web services, located at two different sites: ALU-FR's services [55] and the web services of IST [56].
The goal of this Use Case is to evaluate a prediction algorithm: the algorithm trains a model on a training dataset, and then predicts the compounds of a test dataset for a certain toxicological endpoint. The validation result reflects how well the model performed. The workflow for the training/test set validation is illustrated in Figure 4. Web services are displayed as rectangles; the three key POST REST operations are symbolized as dashed lines, while solid lines visualize data flow operations.
A description of the step by step execution of the Model Validation Use Case by the OpenTox web services is provided in Additional File 9.
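The three POST operations of Figure 4 can be pictured with the following hedged Python sketch; both host names are placeholders, the parameter names are illustrative of the API pattern rather than quoted from it, and HTTP 202 is assumed as the still-running status for tasks.

```python
import time
import requests

IST = "http://example-ist-host.org"        # placeholder for the model-building services
ALU = "http://example-alu-fr-host.org"     # placeholder for the validation service

# Ask the validation service to train on one dataset and test on another
response = requests.post(f"{ALU}/validation/training_test_split", data={
    "algorithm_uri": f"{IST}/algorithm/lazar",
    "training_dataset_uri": f"{IST}/dataset/train",
    "test_dataset_uri": f"{IST}/dataset/test",
}, headers={"Accept": "text/uri-list"})
task_uri = response.text.strip()

# Validation is long-running, so the service answers with a task to poll
while True:
    task = requests.get(task_uri, headers={"Accept": "text/uri-list"})
    if task.status_code != 202:            # 202 Accepted = still running (assumed convention)
        break
    time.sleep(5)
print("validation result:", task.text.strip())
```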
Table 2 Validation measures for classification and regression tasks

Measures for Classification Tasks
• Confusion matrix: A matrix where each row represents the instances in a predicted class, while each column represents the instances in an actual class. One benefit of a confusion matrix is that it is easy to see whether the system is confusing two or more classes.
• Absolute number and percentage of unpredicted compounds: Some compounds might fall outside the applicability domain of the algorithm or model. These numbers provide an overview of the applicability domain fit for the compound set requiring prediction.
• Precision, recall, and F2-measure: These three measures give an overview of how pure and how sensitive the model is. The F2-measure combines the other two measures.
• ROC curve plot and AUC: A receiver operating characteristic (ROC) curve is a graphical plot of the true-positive rate against the false-positive rate as the discrimination threshold is varied. This gives a good understanding of how well a model is performing. As a summarising scalar performance metric, the area under the curve (AUC) is calculated from the ROC curve. A perfect model would have an area of 1.0, while a random one would have an area of 0.5.

Measures for Regression Tasks
• MSE and RMSE: The mean square error (MSE) and root mean squared error (RMSE) of a regression model are popular ways to quantify the difference between the predictor and the true value.
• R²: The explained variance (R²) provides a measure of how well future outcomes are likely to be predicted by the model. It compares the explained variance (variance of the model's predictions) with the total variance (of the data).
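For concreteness, the classification and regression measures above can be computed with scikit-learn (an assumed tool for illustration) as in the sketch below; note that scikit-learn's confusion matrix uses rows for actual classes and columns for predicted classes, the transpose of the convention described above.

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             fbeta_score, roc_auc_score, mean_squared_error, r2_score)

# Classification measures on toy predictions (1 = active, 0 = inactive)
y_true, y_pred = [1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1]
scores = [0.9, 0.2, 0.4, 0.8, 0.3, 0.7]           # model confidence for the active class
print(confusion_matrix(y_true, y_pred))            # rows: actual, columns: predicted (sklearn convention)
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))
print(fbeta_score(y_true, y_pred, beta=2))         # F2 weights recall higher than precision
print(roc_auc_score(y_true, scores))               # area under the ROC curve

# Regression measures
y_true_r, y_pred_r = np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2])
mse = mean_squared_error(y_true_r, y_pred_r)
print(mse, np.sqrt(mse), r2_score(y_true_r, y_pred_r))   # MSE, RMSE, R²
```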
Reporting
The OpenTox report-generating component generates reports that present the results (of predictions/model validation) to the user in a structured reporting format. Reporting formats are guided by standards and templates such as QMRF, the REACH CSR and the OECD validation principles [10], which specify that, to facilitate the consideration of a (Q)SAR model for regulatory purposes, it needs to be associated with the OECD Guidelines for (Q)SAR Validation.
A description of information to be included in OpenTox reports is provided in Additional File 10.
The different types of OpenTox reports are summarized in Table 3.
Reporting types supported by OpenTox and the corresponding API are described in Additional File 11.
OpenTox Data Infrastructure
A major pre-requisite for the successful implementation of the main principles of the Three Rs Declaration of Bologna [57] is universal access to high-quality experimental data on various chemical properties. In particular, the range of replacement alternative methods includes the following OpenTox-relevant approaches:
• The improved storage, exchange and use of information from animal experiments already carried out, so that unnecessary repetition can be avoided;
• The use of physical and chemical techniques, and of predictions based on the physical and chemical properties of molecules;
• The use of mathematical and computer modelling, including modelling of structure-activity relationships, molecular modelling and the use of computer graphics, and modelling of biochemical, pharmacological, physiological, toxicological and behavioural processes.
Since it is likely that, in many circumstances, an animal test cannot currently be replaced by a single replacement alternative method, the development, evaluation and optimisation of stepwise testing strategies and integrated testing schemes should be encouraged. The OpenTox data facilities, made publicly accessible through a web services framework, provide a solid basis for addressing the above-mentioned replacement alternative goals in a more efficient, technically sound and integrated way compared to current uncoordinated practices and fragmented resources. Unfortunately, even today, more than half a century after Russell and Burch's original publication [58] and more than 10 years after the adoption of the Three Rs Declaration of Bologna, the state of the art is characterised by highly fragmented and unconnected life sciences data (both from a physical and an ontological perspective), which is furthermore frequently inaccurate and/or difficult or even impossible to find or access. The OpenTox approach to data resource management and integration has the following major features, which address the replacement alternatives challenge and associated user, industry and regulatory needs, including REACH:
• A universal database structure design, allowing for storage of multi-faceted life sciences data;
• An ontology allowing for efficient mapping of similar and/or complementary data coming from different datasets into a unifying structure having a shared terminology and meaning;
• Integration of multiple datasets with proven high-quality physico-chemical and/or experimental toxicity data;
• Built-in heuristics for automatic discovery of 2D chemical structure inconsistencies;
• Extensive support for structure-, substructure- and similarity-based searching of chemical structures;
• An OpenTox standards-compliant dataset interface that allows query submission and results retrieval from any OpenTox standards-compliant web service;
• Transparent access to and use of life sciences data, hosted at various physical locations and incorporating a variety of distributed software resources, through the OpenTox Framework.
The OpenTox initial data infrastructure includes ECHA's list of pre-registered substances [59] along with high-quality data from consortium members (e.g. ISS ISSCAN [37], IDEA AMBIT [38]), JRC PRS [60], EPA DSSTox [6], ECETOC skin irritation [61], LLNA skin sensitization [62], and the Bioconcentration Factor (BCF) Gold Standard Database [41]. Additional data on chemical structures has been collected from various public sources (e.g. Chemical Identifier Resolver [63], ChemIDplus [64], PubChem [3]) and further checked manually by experts. The database provides means to identify the origin of the data, i.e., the specific inventory a compound originated from. The data is currently publicly available and accessible via an initial implementation of the OpenTox REST data services [65], as defined in the OpenTox Framework design and its implementations.
Additional File 12 on the OpenTox Data Infrastructure describes the current OpenTox data facilities and resources in more detail.
OpenTox Applications
We describe here the implementation of two Use Cases as applications based on the OpenTox Framework. The first case, ToxPredict, is aimed at the user having no or little experience in QSAR predictions. This Use Case should offer an easy-to-use user interface, allowing the user to enter a chemical structure and to obtain in return a toxicity prediction for one or more endpoints. The second case, ToxCreate, is aimed at the experienced user, allowing them to construct and to validate models using a number of datasets and algorithms.
Both Use Cases also demonstrate inter-connectivity between multiple OpenTox services. Within ToxPredict, web services from three different service providers (TUM, IDEA, and NTUA) are operating together. In ToxCreate the model construction is performed using IST web services, while the validation and reporting is executed using ALU-FR services.
ToxPredict Application
As the ToxPredict Use Case should offer easy access to estimates of the toxicological hazard of a chemical structure for non-QSAR specialists, one main aim was to design a simple and easy-to-use interface. One of the goals was also to reduce the number of parameters the user has to enter when querying the service. The Use Case can be divided into the following five steps:
1. Enter/select a chemical compound
2. Display selected/found structures
3. Select models
4. Perform the estimation
5. Display the results
The ToxPredict graphical user interface is shown in Figure 5; the interaction and sequence of OpenTox services interoperating during the different steps of the ToxPredict application execution are detailed in Figures 6, 7, 8, 9, 10, 11 and 12. A detailed step-by-step graphical interface description of the ToxPredict workflow steps is provided in Additional File 13.
The following sequence of descriptions explains the workflow and operations of the example ToxPredict user session.
ToxPredict Step 1 - Enter/select a chemical compound
The first step in the ToxPredict workflow provides the means to specify the chemical structure(s) for further estimation of toxicological properties. Free-text searching allows the user to find chemical compounds by chemical names and identifiers, SMILES [66] and InChI strings, and any keywords available in the OpenTox data infrastructure. The data infrastructure contains information from multiple sources, including the ECHA pre-registration list.

ToxPredict Step 2 - Display selected/found structures
The second step displays the chemical compounds selected in the previous step. The user interface supports the selection/de-selection of structures, and editing of the structures and associated relevant information. The OpenTox REST Dataset services are used in this step of the application in order to retrieve the requested information.

ToxPredict Step 3 - Select models
In the third step, a list of available models is displayed. Links to training datasets, algorithms and descriptor calculation REST services are provided. The models provide information about the independent variables used, the target variables (experimental toxicity data) and predicted values. All these variables are accessible via the OpenTox Feature web service, where each feature can be associated with a specific entry from the existing endpoint ontology. The association is usually done during the upload of the training data into the database. This step involves an interplay between multiple OpenTox web services. Algorithm, Model, and Feature services are registered in the Ontology service, which provides RDF triple storage with SPARQL support, allowing various queries. The ToxPredict application queries the Ontology service for all available models, along with the associated information about the algorithms used in the models, descriptors, and endpoints. The list of models may include models provided by different partners and running on several remote sites (TUM and IDEA models are shown in this example). The Ontology service serves as a hub for gathering the list of available models and algorithms from remote sites. There could be multiple instances of the ToxPredict application, configured to use different Ontology services and therefore exposing a different subset of models to end users.
ToxPredict Step 4 - Perform the estimation
The models selected in Step 3 are launched in Step 4, where the user can monitor the status of the processing. The processing status is retrieved via the OpenTox Task services. Different Model, Algorithm, Dataset, and Ontology services, running at different remote locations, can be involved at this stage. If a model relies on a set of descriptors, an automatic calculation procedure is performed, which involves launching descriptor calculations on remote Algorithm services. The procedure is as follows: the Ontology service is queried to retrieve information about the independent variables used in the model. If no such variables are involved (e.g., in the case of ToxTree models, which rely on chemical structure only), the workflow proceeds towards model estimation. In the case of a model based on descriptors (e.g., a regression model), the procedure is slightly more complex, as explained below.
Each independent variable is represented as a Feature and managed via the Feature service. Each feature has an associated web address (the OWL property opentox:hasSource from the OpenTox OWL ontology), which specifies its origin. The tag can point to an OpenTox Algorithm or Model service, in case it holds a calculated value, or to a Dataset service, in case it contains information uploaded as a dataset (for example, experimental endpoint data). If the feature originates from a descriptor calculation algorithm, the web address points to the Algorithm service used to calculate the descriptor values, and the same web address can be used again via the OpenTox Algorithm API in order to calculate descriptors for user-specified structures. The Algorithm services perform the calculation and store the results in a Dataset service, possibly at a remote location. Finally, a dataset with all calculated descriptor values is submitted to the Model service. Upon estimation, model results are submitted to a Dataset service, which could be at a remote location, the same as or different from that of the model services.
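The descriptor-recalculation logic just described can be sketched as follows; the feature descriptions, URIs and parameter names are placeholders, and the sketch assumes each feature's opentox:hasSource value has already been read from its RDF representation.

```python
import requests

def calculate_missing_descriptors(model_features, compound_dataset_uri):
    """For each algorithm-derived feature of a model, re-run its source algorithm
    on the user's structures and collect the resulting dataset URIs."""
    # Group the independent variables by the algorithm service that produced them
    algorithms = {f["hasSource"] for f in model_features if "/algorithm/" in f["hasSource"]}
    result_datasets = []
    for algorithm_uri in algorithms:
        # The same address that once produced the descriptor values is reused
        # via the Algorithm API to compute them for the new structures
        response = requests.post(algorithm_uri,
                                 data={"dataset_uri": compound_dataset_uri},
                                 headers={"Accept": "text/uri-list"})
        result_datasets.append(response.text.strip())
    return result_datasets

features = [  # placeholder feature descriptions as they might be read from the Feature service
    {"uri": "http://host/feature/1", "hasSource": "http://host/algorithm/xlogp"},
    {"uri": "http://host/feature/2", "hasSource": "http://host/dataset/5"},  # experimental, skipped
]
print(calculate_missing_descriptors(features, "http://host/dataset/user-structures"))
```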
The interplay of multiple services running on remote sites provides a flexible means for the integration of models and descriptors developed by different organisations and running in different environments. Identification of algorithms and models via web URIs ensures compliance with OECD validation principle 2 of "an unambiguous algorithm", as well as repeatability of the results of the model building. Extensive meta-information about the algorithms and models themselves is accessible via web URIs and the OpenTox API.

ToxPredict Step 5 - Display the results
The final step displays the estimation results (see Figure 5), as well as compound identification and other related data. Initial demonstration reports in several formats can be accessed via icons on the right-hand side of the browser display.
ToxPredict is a demonstration web application providing a user-friendly interface for estimating toxicological hazards. It provides a front end to multiple OpenTox services, currently integrating IDEA ontology, dataset, feature and model services with TUM descriptor calculation and model services and NTUA algorithm services. Future work will include the integration of other third-party model services, as well as Validation and Reporting services. While the current functionality may not appear to an end-user much different from a standalone prediction application like ToxTree, the back-end technology provides a very flexible means for integrating datasets, models and algorithms developed with different software technologies by different organisations and running at remote locations.
ToxCreate Application
The ToxCreate Use Case, in contrast to ToxPredict, is aimed at researchers working in the life sciences and toxicology, QSAR experts, and industry and government groups supporting risk assessment who are interested in building predictive toxicology models. It allows the creation of a number of models using one or more algorithms. It is therefore not as easy to use as the ToxPredict application: not only does the algorithm have to be selected, but the right parameter settings also need to be explored, and these parameters are algorithm-dependent. For this decision-making, the expert has to have sound knowledge of the algorithm they are using.
The following sequence of steps explains the execution of a sample session of the ToxCreate application; a graphical interface description of the ToxCreate workflow steps is provided in Additional File 14.

ToxCreate Step 1 - Upload Dataset
The first step of the ToxCreate workflow enables the user to specify a model training dataset in CSV format, consisting of chemical structures (SMILES) with binary class labels (e.g. active/inactive). The file is uploaded to the server and labelled with a user-defined name. In contrast to ToxPredict, users can specify their own training data/endpoint. This is done in batch mode, i.e. without interactive screens to select chemicals based on different criteria, which is convenient for expert users. By hitting "Create model", a QSAR model is derived. The current prototype demonstrates lazar models only. No model parameters can be set at this time, but future versions will enable arbitrary OpenTox API-compliant models.

ToxCreate Step 2 - Create and Display Model
This next step in ToxCreate displays information about the created model (note that in this way, arbitrary combinations of model algorithms and datasets/endpoints are available to test a structure).
ToxCreate Step 4 - Display Prediction Results
Step 4 displays the predictions made by the models selected in the previous step, along with an image of the predicted structure. Based on the selections made in the previous step, the expert user may predict the same structure with a variety of algorithms for the same dataset/endpoint and compare the predictions. Together with model validation, users are able to use ToxCreate to select appropriate models with adjusted parameters beforehand. By predicting a variety of related endpoints, instead of just one, combined with arbitrary models at the same time, ToxCreate enables free predictive toxicology modelling exploration along different dimensions.
Discussion
The OpenTox Framework supports the development of in silico predictive toxicology applications based on OpenTox components for data management, algorithms and validation. Initial applications are being provided openly to users and developers through the OpenTox website and linked services, including partner resources. Such applications support users in the development and training of QSAR models against their own toxicological datasets; e.g., they may upload a dataset for a given endpoint to an OpenTox service, define a variety of parameters, and build and download a model. Subsequent releases in 2010 and 2011 will extend the Framework to support a broader range of computational chemistry and biology modelling approaches and the integration of data from new in vitro assays, and will refine the API designs based on development experience with the effectiveness of applications in supporting integrated testing strategies as required by REACH.
OpenTox provides a platform technology with:
1. a unified interface to access toxicity data and in silico models;
2. a framework for the development and validation of new (Q)SAR models;
3. a framework for the development, validation and implementation of new in silico algorithms; and
4. well-defined standards for the exchange of data, knowledge, models and algorithms.
OpenTox currently provides high-quality data and robust (Q)SAR models to explore the chronic, reproductive, carcinogenic and genotoxic toxicity of chemicals. The integration of further toxicological endpoints should be straightforward with OpenTox tools and standards.
OpenTox is tailored especially to meet the requirements of the REACH legislation and to contribute to the reduction of animal experiments for toxicity testing. It adheres to and supports the OECD Guidelines for (Q)SAR Validation and incorporates the QSAR Model Reporting Format (QMRF) of the EC Joint Research Centre (EC JRC). Relevant international authorities (e.g., EC JRC, ECVAM, EPA, FDA) and industry organisations participate actively in the advisory board of the OpenTox project and provide input for the continuing development of requirement definitions and standards for data, knowledge and model exchange.
OpenTox will actively support the further development and validation of in silico models and algorithms by improving the interoperability between individual systems (common standards for data and model exchange), increasing the reproducibility of in silico models (by providing a common source of structures, toxicity data and algorithms), and providing scientifically sound and easy-to-use validation routines. For this reason it is likely that the predictive toxicology application development cycle will speed up, leading to improved and more reliable results. As OpenTox offers all of these features openly to developers and researchers, we expect an international impact that goes beyond a single research project. For organisations that cannot afford a dedicated computational toxicology department, the OpenTox community provides an affordable alternative source of solutions and expertise.
Biotech and pharmaceutical industry SMEs will benefit from the OpenTox project because it provides access to toxicological information and in silico models from a single, easy-to-use, publicly available interface. OpenTox should reduce the costs of product candidate development by providing new resources for toxicity screening at a very early stage of product development, thus eliminating toxic liabilities early and reducing the number of expensive (and sometimes animal-consuming) efficacy and toxicity experiments. With the OpenTox Framework it will also be possible to identify substructures that are responsible for toxicity (or detoxification), information that can be used for the design of safer and more efficient products.
The ECB estimated that 3.9 million additional animals could potentially be used for the initial implementation of the REACH program (a more recent evaluation based on REACH chemical pre-registrations at ECHA indicates an even larger testing requirement [67]). Chronic effects such as reproductive and developmental toxicity, in vivo mutagenicity and carcinogenicity will require ~72% of the test animals (~2.8 million animals). In the same study, a 1/3-1/2 reduction potential was estimated for the (Q)SAR techniques available at that time (2003). As OpenTox focuses initially on the development of improved (Q)SAR techniques for reproductive, developmental and repeated dose toxicity, and for in vivo mutagenicity and carcinogenicity endpoints, it could contribute substantially to an estimated reduction potential of 1.4 million animals for REACH alone. A more detailed analysis of replacement possibilities, taking applicability domains into consideration, is currently being pursued.
The OpenTox Framework works independently of the toxicity endpoint. As it will be easy to plug in databases for other endpoints, it is likely that significant savings will occur also for other endpoints (e.g. ecotoxicity endpoints from the FP7 Environment Theme ENV. 2007.3.3.1.1). An exciting opportunity in this respect is the inclusion of human data from epidemiological and clinical studies and the utilization of data from adverse effect reporting systems, because in this case no data from animal experiments will be needed.
Conclusions
This work provides a perspective on the growing significance of collaborative approaches in predictive toxicology to create the OpenTox Framework as a public standards-based interoperable platform. Key challenges to be overcome are both technical and cultural and involve progressing issues related to cross-organisational, enterprise and application interoperability, knowledge management and developing a culture and framework supporting a community-based platform and collaborative projects emerging from the community foundation [68][69][70]. The OpenTox Framework offers a standardized interface to state-of-the art predictive toxicology algorithms, models, datasets, validation and reporting facilities on the basis of RESTful web services and guided by the OECD Principles, REACH legislation and user requirements.
Initial OpenTox research has provided tools for the integration of data, for the generation and validation of (Q)SAR models for toxic effects, libraries for the development and integration of (Q)SAR algorithms, and scientifically-sound validation routines. OpenTox supports the development of applications for non-computational specialists in addition to interfaces for risk assessors, toxicological experts and model and algorithm developers.
The OpenTox prototype established a distributed state-of-the-art data warehousing for predictive toxicology. It enables improved storage, exchange, aggregation, quality labelling, curation and integrated use of high quality life sciences information, and allows for consistent and scientifically sound mathematical and computer modelling, including modelling of structure-activity relationships for REACH-relevant endpoints.
A key decision towards algorithm implementation was the adoption of the REST architectural style, because it is suitable for achieving three important goals: independent deployment of components, ease of standardised communication between components and generality of interfaces. These advantages will enable the development and integration of additional algorithms in the future, which may be offered by a variety of third-party developers in the community. Ongoing maintenance and addition of novel predictive algorithms relevant to predictive toxicology will contribute to the long-term sustainability of OpenTox in generating valuable resources for the user scientific community.
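As an illustration of the interface generality just described, the following sketch exercises the REST pattern from a client's point of view. The service root, resource paths and form fields are hypothetical stand-ins written in an OpenTox-like style, not the project's documented API.

import requests

base = "http://example.org/opentox"   # hypothetical service root

# components are addressed uniformly as resources, so discovering the
# available algorithms is an ordinary GET returning a list of URIs
algorithms = requests.get(base + "/algorithm", headers={"Accept": "text/uri-list"})
print(algorithms.text)

# a model is created by POSTing a dataset URI to an algorithm resource;
# a third-party algorithm deployed elsewhere would be used in the same way
model = requests.post(base + "/algorithm/lazar",
                      data={"dataset_uri": base + "/dataset/42"})
print(model.text)   # URI of the newly created model resource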
Many descriptor calculation algorithms and QSAR modelling methods have already been implemented and incorporated within OpenTox. These include methods provided by OpenTox partners and algorithms contained in other state-of-the-art projects such as WEKA and the CDK. Descriptor calculation algorithms are able to generate both physico-chemical and sub-structural descriptors. QSAR modelling methods cover a wide range of approaches and address many user model-building requirements, since they include regression and classification algorithms, eager and lazy approaches, and algorithms producing more easily interpretable and understandable models. The initial prototype also includes implementations of clustering algorithms and feature selection tools. Within OpenTox we have also implemented basic validation routines: simple validation (with a supplied test set or a training/test split) and cross-validation routines (including leave-one-out), as well as initial reporting routines.
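To make the validation schemes listed above concrete, here is a minimal standalone sketch of each of them. It uses scikit-learn on synthetic data purely for illustration, since OpenTox performs these validations through its own web services rather than through code like this.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split

X, y = make_classification(n_samples=60, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0)

# simple validation with a training/test split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print("hold-out accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

# 5-fold cross-validation
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

# leave-one-out cross-validation
print("LOO accuracy:", cross_val_score(model, X, y, cv=LeaveOneOut()).mean())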
The OpenTox Framework supports rapid application development and extensibility by using well-defined ontologies, allowing simplified communication between individual components. Two user-centered prototype applications, ToxCreate and ToxPredict, show the potential impact of the framework regarding high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints. The applications have been made publicly available on the Web [71], providing immediate access to the applications as they have been developed. Considerable additional materials and references have been provided with this paper to support as complete a description of OpenTox as possible for users and developers.
ToxPredict addresses a common and important situation for a user wishing to evaluate the toxicity of a chemical structure. The user does not have to cope with many current challenges, such as the difficulty of finding or using existing data or the complications of creating and using complex computer models. Because of the extensible nature of the standardised design of the OpenTox Framework, many new datasets and models from other researchers may easily be incorporated in the future, both strengthening the value offered to the user and ensuring that research results are not left languishing unused in some isolated resource inaccessible to the user. The approach offers the potential to be extended to the complete and easy-to-use generation of reporting information on all REACH-relevant endpoints based on existing available scientific research results, and to indications of when additional experimental work is required, thus satisfying currently unmet industry and regulatory needs.
ToxCreate provides a resource for modellers to build soundly based predictive toxicology models, based solely on a user-provided input toxicology dataset that can be uploaded through a web browser. The models can be built and validated in an automated and scientifically sound manner, so as to ensure that the predictive capabilities and limitations of the models can be examined and understood clearly. Models can subsequently be made available to other researchers easily and combined seamlessly into other applications through the OpenTox Framework.
Continuing effort will be carried out by OpenTox developers to meet current academic and industry challenges regarding interoperability of software components and integration of algorithm and model services within the context of tested Use Cases. The approach to interoperability and standards lays a solid foundation to extend application development within the broader developer community to establish computing capabilities that are sorely missing in the field of predictive toxicology today, and which are holding back advances in both R&D and the application of R&D project outcomes to meet industry and regulatory needs.
Competing interests
The authors declare that they have received research funding for this work from the European Commission under its Seventh Framework Program. Otherwise the authors declare that they have no competing interests.
Authors' contributions
BH fulfilled the principal investigator role coordinating the activities of requirements analysis, research and development, and drafted the manuscript. ND created design components for OpenTox templates and interfaces. CH led the OpenTox Framework and API design activities and the development of the OpenTox ToxCreate application. MR carried out technical implementation of OpenTox web resources. NJ played a leadership role in OpenTox Framework and API design activities, implementation of the OpenTox data services and the development of the OpenTox ToxPredict application. VJ performed chemical data collection, analysis and curation, led OpenTox testing activities and helped to draft the manuscript. IN helped in the design of RDF representations of OpenTox API objects and provided guidance for ontology development related issues. RB participated in high quality toxicity database preparation and in the discussion of the results. OT participated in the development of ontology for toxicological endpoints. OT and RB participated in validation of available schemas for describing toxicology data. OT mapped a number of databases to the ToxML and OECD-HT schemas. SK played a leadership role in OpenTox Framework and API design activities and led the work activities on OpenTox algorithms. TG, FB and JW worked on the OpenTox API and algorithm implementation. AK worked on the OpenTox API and validation and reporting service design. MG worked on the OpenTox API and validation and reporting service implementation. AM worked on the OpenTox API and fminer descriptor calculation service implementation. HS worked on the OpenTox API and the algorithms prototype implementation. GM worked on use case development and documentation. AA worked on the application of QSAR algorithms to publicly available datasets. PS worked on the OpenTox API, the algorithms prototype implementation and use case development. PS worked on the OpenTox API and the algorithms prototype implementation. DG led the activities on graphical user interface design and specifications. VP participated in the development of controlled vocabulary and in the discussion of the results. DF worked on the OpenTox API and the algorithms prototype implementation for MakeMNA, MakeQNA, and MakeSCR. AZ worked on the MakeMNA and MakeQNA descriptor calculation service implementation. AL participated in the development of ontology for toxicological endpoints and OpenToxipedia. TG participated in the development of OpenToxipedia. SN participated in the development of the controlled vocabulary and in high quality toxicity database preparation. NS participated in the development of the controlled vocabulary. DD worked on the OpenTox API, and MakeMNA and MakeQNA descriptor calculation service implementation. SC provided customer inputs for use case development from pharma and R&D labs. IG provided the initial concept for the MaxTox algorithm and prediction logic. SR developed the application and its API compliance for the model generation of MaxTox. HP developed the MaxTox Random Forest models in R. SE developed ontologies and use cases for repeated dose toxicity. All authors read and approved the final manuscript.
Authors' information
Barry Hardy (BH) manages the eCheminfo and Innova-tionWell community of practice and research activities of Douglas Connect, Switzerland. He obtained his Ph.D. in 1990 from Syracuse University working in the area of computational chemistry, biophysics and computeraided molecular modelling and drug design. Over the past 20 years BH has led numerous international projects in the area of the chemical, life and medical sciences. He has developed technology solutions for internet-based conferencing, tutor-supported e-learning, laboratory automation systems and computational chemistry and informatics. BH was a National Research Fellow at the FDA Center for Biologics and Evaluation, a Hitchings-Elion Fellow at Oxford University and CEO of Virtual Environments International. He is currently coordinating the OpenTox FP7 project.
The owner of in silico toxicology, Christoph Helma (CH), received his Ph.D. in chemistry and a Masters in toxicology. His main research interest is the application of data mining techniques to solve real-world toxicological problems. He has more than 10 years' experience in predictive toxicology research and has published more than 40 peer-reviewed research papers. He was editor of the "Predictive Toxicology" textbook and editor of special sections in "Bioinformatics" and "Combinatorial Chemistry and High Throughput Screening", an invited speaker at major (Q)SAR conferences and the main organizer of the "Predictive Toxicology Challenge".

Romualdo Benigni (RB) is the leading expert of the ISS for (Q)SAR. He has participated in several EU-funded projects aimed at evaluating experimental mutagenicity systems from a toxicological point of view, and in projects on the evaluation of (Q)SAR models for the prediction of mutagenicity and carcinogenicity. He is the Italian representative in the EU ad hoc Group on (Q)SAR, and in the OECD ad hoc Group and Steering Committee on (Q)SAR. His research activities include: molecular biology; environmental chemical mutagenesis; statistics and mathematical modelling; structure-activity relationships; chemical relational databases. He organized and co-organized workshops/seminars/schools on (Q)SAR and modelling, including: • "Quantitative modelling approaches for understanding and predicting mutagenicity and carcinogenicity", Rome, 3-5 September 1997.
• "Complexity in the Living: a problem-oriented approach" Rome, 28 München. After receiving his doctoral degree from the Vienna University of Technology, he spent a few years as an assistant professor in the Machine Learning lab of the University of Freiburg. He was the co-organizer of the Predictive Toxicology Challenge 2000-2001, an international competition in toxicity prediction. He has organized several conferences and workshops, edited special issues of journals, given invited talks and tutorials, and serves on the program committees of major data mining and machine learning conferences and on the editorial board of the Machine Learning journal. His current research interests include data mining, machine learning, and applications in chemistry, biology and medicine.
Andreas Karwath (AK) has recently become interested in the field of cheminformatics after receiving his PhD in the fields of computational biology and data-mining in 2002 from the University of Wales, Aberystwyth. His main research topics are the application of data-mining and machine learning for structured data. He has been involved in a number of applications in bio-and cheminformatics, including remote homology detection, functional class prediction of unknown genes, and the alignment of relational sequences with the REAL system. AK is the main developer of the SMIREP prediction system that is available on the Internet http://www.karwath.org/ systems/smirep. The SMIREP system allows the reliable prediction of various (Q)SAR endpoints, mainly employing the SMILES code of the compounds under consideration. AK is also on the editorial board of the The Open Applied Informatics Journal, served as member of the program committee for a number of well-known international conferences as well as being a reviewer for journals like JMLR, Bioinformatics, Machine Learning, and JAIR.
Haralambos Sarimveis (HS) received his Diploma in Chemical Engineering from the National Technical University of Athens (NTUA) in 1990 and the M.Sc. and Ph.D. degrees in Chemical Engineering from Texas A&M University, in 1992 and 1995 respectively. Currently, he is the director of the "Unit of Process Control and Informatics" in the School of Chemical Engineering at NTUA. His main research directions are in process control and computational intelligence (neural networks, fuzzy logic methodologies, evolutionary algorithms). His research work has resulted in more than 100 publications in QSAR, modelling algorithms, process control, artificial intelligence and related fields.
Georgia Melagraki (GM) received her Diploma and Ph.D. degrees in Chemical Engineering from NTUA. She has also received the M.Sc. degree in Computational Mechanics and pursued management studies towards an MBA in the same institution. She has a strong scientific background in the field of cheminformatics, QSAR and related fields. Her scientific work has been published in more than 20 original research articles in international peer-reviewed journals.
Andreas Afantitis (AA) received his Diploma and Ph.D. degrees in Chemical Engineering from NTUA. He has also received the M.Sc. degree in Computational Mechanics and pursued management studies towards an MBA in the same institution. Currently he is the director of NovaMechanics Ltd, being responsible for the overall management, strategic direction, growth and financial control. His main research directions are in cheminformatics, bioinformatics and medicinal chemistry. He is a co-author of more than 20 papers in international peer-reviewed journals.

Pantelis Sopasakis (PS) received his Diploma in Chemical Engineering from NTUA and currently he is a Ph.D. student. His research interests are in dynamic modelling, optimal control and stochastic optimization with emphasis on physiological and biological systems.
David Gallagher (DG) has 18 years of human graphical user interface design (GUI) as part of product marketing for computational chemistry SW programs and QSAR tools, with emphasis on the non-expert user. Products include "CAChe WorkSystem" and "ProjectLeader", currently marketed by Fujitsu Ltd. He has published peerreviewed research papers on QSAR, given oral research presentations on QSAR at ACS and other scientific meetings, led numerous training workshops on QSAR, and created and published tutorials for QSAR training.
Vladimir Poroikov (VP), Prof. Dr., Head of Department for Bioinformatics and Laboratory for Structure-Function Based Drug Design. Member of Editorial Board of several International scientific journals, Chairman of Russian Section of The QSAR and Modelling Society, Member of American Chemical Society and International Society on Computational Biology. Coauthor of more than 120 published works and 12 nonopen published reports in R&D of new pharmaceuticals, member of the organizing committees and/or invited speaker of many international conferences. VP is a coinvestigator of several international projects supported by FP6, FP7, ISTC, INTAS, IFTI, and RFBR.
The Principal Investigator of the MaxTox project, Dr. Indira Ghosh (IG) -Dean and Professor in School of Information Technology, JNU (New Delhi), and Scientific Advisor of SL -has more than a decade of experience working in the pharmaceutical industry (AstraZeneca R&D, Bangalore, India). Before joining AstraZeneca, she obtained her Ph.D. from the prestigious Indian Institute of Science, Bangalore in the field of molecular biophysics. After completing her Ph.D. | 2014-10-01T00:00:00.000Z | 2010-08-31T00:00:00.000 | {
"year": 2010,
"sha1": "b35eddbbba1ae75a4db9ab9b681bf7988f231450",
"oa_license": "CCBY",
"oa_url": "https://jcheminf.biomedcentral.com/track/pdf/10.1186/1758-2946-2-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b35eddbbba1ae75a4db9ab9b681bf7988f231450",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
17192641 | pes2o/s2orc | v3-fos-license | Accepted for publication in ApJ THE NATURE OF DARK MATTER AND THE DENSITY PROFILE AND CENTRAL BEHAVIOR OF RELAXED HALOS
We show that the two basic assumptions of the model recently proposed by Manrique and coworkers for the universal density profile of cold dark matter (CDM) halos, namely that these objects grow inside out in periods of smooth accretion and that their mass profile and its radial derivatives are all continuous functions, are both well understood in terms of the very nature of CDM. Those two assumptions allow one to derive the typical density profile of halos of a given mass from the accretion rate characteristic of the particular cosmology. This profile was shown by Manrique and coworkers to recover the results of numerical simulations. In the present paper, we investigate its behavior beyond the ranges covered by present-day N-body simulations. We find that the central asymptotic logarithmic slope depends crucially on the shape of the power spectrum of density perturbations: it is equal to a constant negative value for power-law spectra and has central cores for the standard CDM power spectrum. The predicted density profile in the CDM case is well fitted by the 3D S\'ersic profile over at least 10 decades in halo mass. The values of the S\'ersic parameters depend on the mass of the structure considered. A practical procedure is provided that allows one to infer the typical values of the best NFW or S\'ersic fitting law parameters for halos of any mass and redshift in any given standard CDM cosmology.
INTRODUCTION
The universal shape of the spherically averaged density profile of relaxed dark halos in high-resolution N-body simulations is considered one of the major predictions of standard cold dark matter (CDM) cosmologies. Down to 1% of the virial radius R, it is well fitted by the so-called NFW profile (Navarro et al. 1996, 1997), specified by only one mass-dependent parameter, the scale radius r_s or, equivalently, the concentration c ≡ R/r_s. At radii smaller than 1% of the virial radius, however, the behavior of the density profile is unknown. On the basis of recent numerical simulations, some authors advocate a central asymptotic slope significantly steeper (Moore et al. 1998; Jing & Suto 2000) or shallower (Taylor & Navarro 2001; Ricotti 2003; Hansen & Stadel 2006) than that of the NFW law. Others suggest an ever-decreasing absolute value of the logarithmic slope (Power et al. 2003; Navarro 2004; Reed et al. 2005), which might tend to zero as a power of the radius, as in the three-dimensional (3D) Sérsic (1968) or Einasto (Einasto & Haud 1989) laws (Merritt et al. 2005, 2006). This uncertainty is the consequence of the fact that the origin of such a universal profile is poorly understood. Two extreme points of view have been envisaged. In one of these, it would be caused by repeated significant mergers (Syer & White 1998; Raig et al. 1998; Subramanian et al. 2000; Dekel et al. 2003), while in the other it would be essentially the result of smooth accretion or secondary infall (Avila-Reese et al. 1998; Nusser & Sheth 1999; Del Popolo et al. 2000; Kull 1999; Manrique et al. 2003; Williams et al. 2004; Ascasibar et al. 2004).
The fact that the M−c relation at z = 0 is consistent (Wechsler et al. 2002; Zhao et al. 2003b) with the idea that all halos emerge from major mergers with similar values of c, which then decrease according to the inside-out growth of halos during the subsequent accretion phase, seems to favor an important role for mergers. However, the purely accretion-driven scenario is at least as attractive, as the inside-out growth during accretion leads to a typical density profile that appears roughly to have the NFW shape, with the correct M−c relation, in every epoch and cosmology analyzed (Manrique et al. 2003, hereafter M03).
Certainly the effects of major mergers cannot be neglected in hierarchical cosmologies, so both major mergers and accretion should contribute in shaping relaxed halos. However, as noted by M03, if the density profile arising from a major merger were set by the boundary conditions imposed by current accretion, then the density profile of halos would appear to be independent of their past aggregation history, so halos could be assumed to grow by pure accretion without any loss of generality. All the correlations shown by relaxed halos in numerical simulations can be recovered under this point of view (Salvador-Solé et al. 2005). Simultaneously, this would explain why halos with very different initial conditions and aggregation histories have similar density profiles (Romano-Diaz et al. 2006).
In the present paper, we show that the M03 model relies on two basic assumptions, namely, (1) that halos grow inside out in periods of smooth accretion, and (2) that the mass profile and all its derivatives are continuous functions. The former assumption is supported by the results of numerical simulations (Salvador-Solé et al. 2005; Lu et al. 2006; Romano-Diaz et al. 2006), while the second one is at least not in contradiction with them. In the present paper we show that both assumptions are in fact sound from a theoretical point of view, as they can be related to the very nature of CDM. This renders the predictions of the model beyond the range of current simulations worth examining in detail, as is done below. For simplicity, we consider spherical structures, which at best is an approximation to the triaxial structures observed in numerical simulations. Secondary infall is known to be influenced by deviations from spherical symmetry (Bond & Myers 1996; Zaroubi et al. 1996). Yet, in an accompanying paper (González-Casado et al. 2007), we show that this does not seem to affect the fundamental role of the two assumptions given above. Another simplifying assumption used here is the neglect of substructure. It has recently been shown in high-resolution simulations that about 57% of the halo mass is collected in the previous major merger (Faltenbacher et al. 2005). However, substructures (and sub-substructures) contribute only about 5% of the total mass in halos (Diemand et al. 2007), whose density profile is well described by equation (1). It therefore seems a good first approximation to ignore substructure.
The paper is organized as follows. In §2, we show how the M03 model emerges from the properties of standard CDM. The behavior of the predicted density profile at extremely small radii and for a wide range of halo masses is investigated in §3. Our results are summarized in §4.
CDM PROPERTIES AND HALO DENSITY PROFILE
2.1. Inside-out growth during accretion

We now argue that halos grow inside out during accretion, as is indeed found in numerical simulations, because of some properties of CDM; in particular, its characteristic power spectrum, leading to a slow halo accretion rate. As shown in §2.2, this has important consequences for the inner structure of these objects.
Schematically, one can distinguish between minor and major mergers. In minor mergers, the relative mass increase produced, ∆ ≡ ∆M/M , is so small that the system is left essentially unaltered, whereas in major mergers ∆ is large enough to cause rearrangements.
The smaller the value of ∆, the more frequent the mergers (Lacey & Cole 1993). For this reason, although individual minor mergers do not affect the structure of halos, their added contribution yields a smooth secular mass increase, the so-called accretion, with apparent effects on the aggregation track M(t) of the halo. The accretion-scaled rate, Ṁ/M(t), is given by

R_a(M, t) = ∫_0^{∆_m} ∆ R_m(M, t, ∆) d∆,   (2)

(Raig et al. 2001, hereafter RGS01), where R_m(M, t, ∆) is the usual Lacey-Cole (Lacey & Cole 1993) instantaneous merger rate (see eq. [A1]) and ∆_m is the maximum value of ∆ for mergers contributing to accretion. In contrast, less frequent major mergers yield notable sudden mass increases, or discontinuities, in M(t).
After undergoing a major merger (and virializing), halos evolve as relaxed systems until the next major merger. The fact that standard CDM is non-decaying and non-self-annihilating guarantees that the mass collected during such periods is conserved. This does not yet imply that halos grow inside out during accretion, because their mass distribution might still vary due to energy gains or losses or to the action of accretion itself. Even if each individual minor merger leaves the halo unchanged, their collective action might alter these systems. However, standard CDM is also dissipationless, and therefore halos cannot lose energy. Furthermore, under the assumption of spherical symmetry halos cannot suffer tidal torques from surrounding matter, and hence the surrounding matter is unable to alter the kinetic energy of the halo. Therefore, the only possibility for a time-varying inner mass distribution of accreting halos is that the accretion process causes it itself.
This possibility, that the process of accretion itself could alter the internal mass distribution, would be realized only if the accretion time 1/R_a were smaller than the dynamical time τ_cr. On the contrary, if 1/R_a is substantially larger than τ_cr, the adiabatic invariance of the inner halo structure is guaranteed, and the halo evolves inside out. Thus, by requiring 1/R_a to be C times larger than τ_cr, we are led to equation (3), which gives the upper mass M_a for inside-out growth at t. In equation (3), ρ̄ is the mean cosmic density and ∆_vir(t) is the virialization density contrast given, e.g., by Bryan & Norman (1998). Here M_a is indeed an upper limit because R_a(M, t) is an increasing function of t; see RGS01. In every CDM cosmology analyzed, equation (3) appears to have no solution for values of C significantly larger than unity in the relevant redshift range. This means that accretion is always slow enough for halos to grow inside out, as required by the M03 model.
As previously mentioned, the inside-out growth of halos in accreting periods is unambiguously confirmed by the results of N-body simulations (Salvador-Solé et al. 2005; Lu et al. 2006; Romano-Diaz et al. 2006). It is also consistent with the fact that dark matter structures preserve the memory of initial conditions, in the sense that the most initially overdense regions end up being the central regions of the final structures (Diemand et al. 2005), implying that the spatial positions of particles are not significantly perturbed by merging/accretion during the assembly of the structures. Likewise, the energy of the individual particles in the final structure (at z = 0) is very strongly correlated with their energies at much earlier times (z = 10; Dantas & Ramos 2006). This shows that particles even preserve the memory of their initial energies.
Smoothness of the mass profile
Contrary to an ordinary fluid, CDM is collisionless and free-streaming and, hence, cannot support discontinuities (shock fronts) in the spatial distribution of any of its macroscopic properties. As a consequence, all radial profiles in relaxed halos are necessarily smooth. This holds in particular for the mass profile, M(r), and its radial derivatives, which has the following consequence.
The inside-out growth of a halo during accretion (§2.1) implies that the mass profile M(r) built during that interval is the simple conversion, through the definition of the instantaneous virial radius,

M = (4π/3) ∆_vir(t) ρ̄(t) R³(t),   (4)

of the associated mass aggregation track M(t).
The smoothness condition implies that the old M(r) profile must match perfectly with the new part of the profile built during that time. Since minor mergers only cause tiny discontinuities, the new piece of the M(t) track that they produce is well approximated by a smooth function, and, as the functions ρ̄(t) and ∆_vir(t) in equation (4) are also smooth functions, the corresponding piece of the M(r) profile automatically fulfills the right smoothness condition. Thus, the system can grow, during accretion, without the need to essentially rearrange its structure.
Only when a halo undergoes a major merger and its M (t) track suffers a marked discontinuity will the mass profile prior to the major merger no longer match the piece that begins to develop after it. Since the M (r) profile cannot have any discontinuities, the halo is then forced to rearrange its mass distribution (through violent relaxation) to fulfill the required smooth condition.
In other words, the fundamental assumption of the M03 model, that the mass distribution of relaxed halos is determined by their current accretion rate (through dramatic rearrangements of the structure on the occasion of major mergers, and very tiny, negligible ones during accretion periods), is simply the natural consequence of the slowly accreting, non-decaying, non-self-annihilating, dissipationless and collisionless nature of standard CDM.
The M03 model
The mass profile of a specific halo with mass M_i at time t_i, accreting at a given rate during any arbitrarily small time interval ∆t around t_i, is therefore simply the smooth extension inward of the small piece of profile being built during that interval. Unfortunately, the smooth extension of a small piece of a function is hard to find in practice, so the mass profiles of real individual halos can hardly be obtained in this way.
There is one case, however, in which such a smooth extension can readily be achieved: that of halos with M_i at t_i accreting at the typical cosmological rate R_a(M(t), t) during any arbitrarily small interval of time around t_i. In this case, the (unique) smooth extension we are looking for necessarily coincides with the smooth function M(t), the solution of the differential equation

Ṁ = M R_a(M, t)   (5)

for the boundary condition M(t_i) = M_i, properly converted from t to r by means of equation (4). Once the typical M(r) profile is known, by differentiating it and taking into account equations (4) and (5), one is led to the typical density profile, equation (6), for halos with M_i at t_i proposed by M03. From equations (6) and (7) we see that the shape of this profile is ultimately set by the CDM power spectrum of density perturbations in the cosmology considered, through the merger rate R_m(M, t, ∆) used to calculate the accretion rate R_a(M, t) (see eqs. [A1] and [2]). This dependence is, however, so convoluted that the density profile (eq. [6]) must be inferred numerically. Only its central asymptotic behavior can be derived analytically, as will be shown in the next section.
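Written out, the differentiation step just described is the standard shell relation between mass and density; we state it here for convenience (the full closed form of equation [6] additionally requires the explicit accretion rate of the chosen cosmology):

\rho(r) \;=\; \frac{1}{4\pi r^{2}}\,\frac{dM}{dr}
\;=\; \frac{1}{4\pi r^{2}}\,\left.\frac{\dot M(t)}{dr/dt}\right|_{t=t(r)},

with t(r) obtained by inverting the virial relation (4) along the accretion track.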
SOME CONSEQUENCES OF THE MODEL
From equation (2) we see that the exact shape of the density profile in equation (6) depends on ∆_m. This parameter marks the effective transition between minor and major mergers, and it can be determined from the empirical M−c relation at some given redshift.
For each given ∆_m value, the density profiles, down to R/100, predicted at z = 0 in the concordance model characterized by (Ω_m, Ω_Λ, h, σ_8) = (0.3, 0.7, 0.7, 0.9) for halos with different masses have been fitted to the NFW profile to find the best-fit values of c. Then we searched for the value of ∆_m that minimizes the departure of the theoretical M−c relations from the empirical one drawn from high-resolution simulations by Zhao et al. (2003a). As shown in Figure 1, ∆_m = 0.26 gives an excellent fit over almost 4 decades in mass (8 × 10^10 h^-1 M_⊙ < M < 4 × 10^14 h^-1 M_⊙).
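The fitting step itself is routine least squares. The following standalone sketch (our illustration, not the authors' pipeline) shows how a best-fit concentration is read off a tabulated profile, here a synthetic NFW halo with c = 5:

import numpy as np
from scipy.optimize import curve_fit

def log_nfw(r, log_rho_c, r_s):
    # NFW law: rho(r) = rho_c / [(r/r_s)(1 + r/r_s)^2], fitted in log space
    x = r / r_s
    return log_rho_c - np.log10(x * (1.0 + x) ** 2)

R = 1.0                                    # virial radius, arbitrary units
r = np.logspace(-2, 0, 50) * R             # sample down to R/100, as in the text
rho = 1.0 / ((r / 0.2) * (1.0 + r / 0.2) ** 2)   # synthetic halo with r_s = 0.2
popt, _ = curve_fit(log_nfw, r, np.log10(rho), p0=[0.0, 0.1])
print("best-fit concentration c =", R / popt[1])   # ~5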
In Figure 2, we plot, down to a radius equal to the current resolution radius of most numerical simulations, the density profiles predicted in the concordance model for halo masses at z = 0 ranging from 10^11 to 10^16 M_⊙. They are all well fitted by an NFW profile, although there is a tendency for the theoretical profiles with M ≳ 10^14 M_⊙ to deviate from that shape and approach a power law with logarithmic slope intermediate between the NFW asymptotic values of -1 and -3. This tendency also makes c increase very rapidly at large masses, where the fit by the NFW law is no longer acceptable. This causes the M−c relation to deviate from its regular trend at smaller values of M (see Fig. 2). Both effects, already reported in M03, were later observed in simulated halos (Zhao et al. 2003a; Tasitsiomi et al. 2004). This is a clear indication that the NFW profile does not provide an optimal fit for very massive structures. Of course, above 10^14 M_⊙ halos are hardly in virial equilibrium, so such a deviation has essentially no practical effects. As explained in Salvador-Solé et al. (2005), another interesting consequence of the M03 model is that the M(t) tracks traced by accreting halos (hence, growing inside out) coincide with curves of constant r_s- and M_s-values, with M_s defined as the mass interior to r_s. Thus, the intersection of those accretion tracks at any arbitrary redshift sets the relation M_s(r_s) between such a couple of parameters, implying that the M_s(r_s) relation satisfied by halos is time-invariant. In Figure 3, we show how the different M_s(r_s) curves obtained by fitting the density profiles predicted at different redshifts to an NFW law overlap. There is only some deviation at large masses, where the density profiles are not correctly described by the NFW profile. Such a time-invariant M_s(r_s) relation is well fitted, for r_s in the range 10^-4 Mpc < r_s < …, by a simple analytic expression in which g(c) stands for ln(1 + c) - c/(1 + c) and ρ̄_0 is the current mean cosmic density. Equation (10) is an implicit equation for the concentration of halos with any given mass and redshift.

What about the central behavior of the predicted density profile? According to equation (4), small radii correspond to small cosmic times. In this asymptotic regime, all Friedmann cosmologies approach the Einstein-de Sitter model, in which ∆_vir(t) is constant and ρ̄(t) is (in the matter-dominated era, when halos form) proportional to t^-2. If the power spectrum of density perturbations were of the power-law form P(k) ∝ k^j, with the index j satisfying 1 > j > -3 to guarantee hierarchical clustering, then the universe would be self-similar. The mass accretion in equation (5) would take the asymptotic form M(t) ∝ t^{2/(j+3)} (see the Appendix). The fact that both ∆_vir(t)ρ̄(t) and M(t) would then be power laws has two consequences. First, the time dependences of their respective logarithmic derivatives on the right-hand side of equation (7) cancel, which implies that ρ(t) is proportional to ∆_vir(t)ρ̄(t), and hence to ρ̄(t). Second, the virial radius given by equation (4) is also a power law, r ∝ t^{2(j+4)/[3(j+3)]}. From this we get t(r), and thereby one finds

ρ(r) ∝ r^{-3(3+j)/(4+j)}.   (11)

This central behavior, fully in agreement with the numerical profiles obtained from power-law power spectra, is particularly robust, as it does not depend on ∆_m. Note that it coincides with the solution derived in self-similar cosmologies by Hoffman & Shaham (1985) assuming spherical collapse.
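For reference, the exponent in equation (11) follows in a few lines from the Einstein-de Sitter scalings quoted above together with the virial definition (4); the following is our sketch of the scaling argument, not a quotation from M03:

\bar\rho(t)\propto t^{-2},\qquad M(t)\propto t^{2/(j+3)},\qquad
r^{3}\propto \frac{M(t)}{\Delta_{\rm vir}\,\bar\rho(t)}\propto t^{2/(j+3)+2}=t^{2(j+4)/(j+3)},

so that t \propto r^{3(j+3)/[2(j+4)]} and, since \rho(r)\propto\bar\rho(t(r))\propto t^{-2},

\rho(r)\propto r^{-3(3+j)/(4+j)}.

For j = -3 the exponent vanishes (the flat-core limit relevant to CDM below), while for j = 1 it reaches -12/5.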
It is worth noting, however, that in the Hoffman-Shaham derivation such an asymptotic behavior is restricted to j > -1 so as to warrant the required adiabatic invariance (Fillmore & Goldreich 1984), while in the present derivation there is no such restriction, as the central density profile is not assumed to be built by spherical infall but results instead from smooth adaptation to the boundary condition imposed by current accretion.
The CDM power spectrum is, of course, not a power law. However, in the limit of small masses involved in that asymptotic regime, it tends to a power law of index j = −3. Thus, according to equation (11), we expect a vanishing central logarithmic slope of ρ(r) for the standard CDM case. This is confirmed by the numerical profiles obtained in this case: as one goes deeper and deeper into the halo center, they become increasingly shallower.
What is still more remarkable is that, down to a radius as small as 1 pc, the density profiles appear to be well fitted by the 3D Sérsic or Einasto law over at least 10 decades in halo mass (see Fig. 4), from 10^6 M_⊙ to 10^16 M_⊙. We remind the reader that for power-law spectra, equation (6) leads to density profiles with central cusps, so the Sérsic shape is not a general consequence of the M03 model but is specific to the standard CDM power spectrum. In fact, from the reasoning above we see that what causes the zero central logarithmic slope in the CDM case is the fact that the logarithmic accretion rate d ln M(t)/d ln t = t R_a(M(t), t) diverges in the limit of small values of t. This is in contrast to the general power-law case, where the accretion rate remains finite. Similarly to the characteristic density ρ_c in the NFW profile (eq. [1]), the central density ρ_0 entering the 3D Sérsic law (eq. [12]) can be written, as in equation (13), in terms of the mass M and the values of the two (instead of one) remaining parameters, n and either r_n or c_n ≡ R/r_n, where Γ is the usual gamma function and γ is the incomplete or regularized one. At z = 0, the two free parameters n and c_n depend on M according to the relations plotted in Figure 5, which are well approximated by simple fitting formulas.
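Because the closed form behind equation (13) is easy to mistranscribe, here is a minimal numerical sketch of its structure. It assumes the 3D Sérsic form ρ(r) = ρ_0 exp[-(r/r_n)^{1/n}], and the normalization below is our reconstruction obtained by integrating 4πr²ρ(r) out to R, not a quotation from the paper:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc   # gammainc = regularized lower gamma

def rho0_from_mass(M, n, c_n, R):
    # M = 4*pi*n*Gamma(3n)*P(3n, c_n**(1/n)) * rho_0 * r_n**3, solved for rho_0
    r_n = R / c_n
    norm = 4 * np.pi * n * gamma(3 * n) * gammainc(3 * n, c_n ** (1.0 / n))
    return M / (norm * r_n ** 3)

M, n, c_n, R = 1.0e12, 3.0, 500.0, 0.3      # illustrative numbers only
rho0 = rho0_from_mass(M, n, c_n, R)

# sanity check against direct numerical integration of the profile
f = lambda r: 4 * np.pi * r**2 * rho0 * np.exp(-(r / (R / c_n)) ** (1.0 / n))
M_num, _ = quad(f, 0.0, R)
print(M_num / M)   # ~1, confirming the closed form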
These fitting formulas can be used to infer the typical values of the Sérsic parameters for present-day halos of any mass. It is worth mentioning that the predicted values of n are of the same order of magnitude as the ones obtained by Merritt et al. (2005) from simulated halos with masses ranging from dwarf galaxies to galaxy clusters (the values of c_n were not presented in that work). To obtain more general values of these parameters for halos of any mass and redshift, we can proceed as in the NFW case above. For reasons identical to the ones leading to the time-invariant relation M_s(r_s), the relations ρ_0(r_n) and n(r_n) must be time-invariant. In Figure 6, we see how the corresponding curves obtained from the fit to the Sérsic profile of the same density profiles as used in Figure 3 overlap; indeed, they do so even better than the M_s(r_s) curves, since there are no large deviations at large masses. These invariant relations are well fitted, for x ≡ log(r_n/Mpc) in the range -25 < x < -9, by simple fitting formulas. Replacing these expressions into equation (13), we can solve for r_n and then use equation (19) to find the value of n. This provides a very concrete prediction that can be tested with numerical simulations. One can take the very strong correlation shown in Figure 6, which allows one to fit the density profile of any dark matter structure with only two free parameters, e.g., n and ρ_0. With this value of n (obtained purely from the shape of the density profile), one then gets a value for the mass (from Fig. 5). This mass can trivially be compared to the true virial mass (which is naturally known in the simulation), and hence one can confirm or reject the prediction of the accretion-driven model.
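The practical procedure just described reduces to a one-dimensional root find. In the sketch below, the invariant relations ρ_0(r_n) and n(r_n) are hypothetical placeholders (the paper's actual fitting formulas are not reproduced here), so the numbers are illustrative only; what the sketch shows is the structure of the solve:

import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma, gammainc

def rho0_of_rn(rn):                  # placeholder invariant relation
    return 1.0e5 * rn ** -3.5

def n_of_rn(rn):                     # placeholder invariant relation
    return 3.0 + 0.1 * np.log10(rn)

def mass(rn, R):                     # enclosed mass for the assumed Sersic form
    n = n_of_rn(rn)
    s = (R / rn) ** (1.0 / n)
    return 4 * np.pi * n * gamma(3 * n) * gammainc(3 * n, s) * rho0_of_rn(rn) * rn ** 3

R = 0.2
M_target = mass(1.0e-3, R)           # pretend this halo mass is the given input
log_rn = brentq(lambda t: mass(10.0 ** t, R) - M_target, -8.0, -2.0)
print(10.0 ** log_rn, n_of_rn(10.0 ** log_rn))   # recovers r_n = 1e-3 and its n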
SUMMARY
We have shown how the basic properties of standard CDM can justify the M03 model, which was previously shown to be in overall agreement with the results of numerical simulations. In this model, the density profile of relaxed halos permanently adapts to the profile currently building up through accretion and does not depend on their past aggregation history. As a consequence, the typical density profile of halos of a given mass at a given epoch is set by their time-evolving cosmology-dependent typical accretion rates.
Although halos have been assumed to be spherically symmetric throughout the present paper, this is not crucial for the M03 model. As will be shown in a following paper (González-Casado et al. 2007), the results presented here also hold for more realistic triaxial rotating halos. Furthermore, an approach similar to the one followed here allows one to explain not only their mass distribution but also other structural and kinematic properties, such as the radial dependence of the angular momentum.
According to the M03 model, the central asymptotic behavior of the halo density profile depends, through the typical accretion rate, on the power spectrum of density perturbations. The prediction made in the case of power-law spectra should be possible to check by means of numerical simulations, provided one concentrates on massive halos, as these reach the asymptotic regime at larger radii. In the case of the standard CDM power spectrum, the model predicts a vanishing central logarithmic slope. The way this asymptotic behavior is reached is surprisingly simple: down to a radius as small as 1 pc, the density profile is well fitted by the 3D Sérsic or Einasto profile over at least 10 decades in halo mass.
Another consequence of the M03 model with useful practical applications is the existence of time-invariant relations among the NFW or 3D Sérsic law parameters (M s and r s in the former case and ρ 0 , r n , and n in the latter) fitting the halo density profiles. A code is publicly available 4 that computes such invariant relations for any desired standard CDM cosmology.
Some of these consequences can be readily tested by numerical simulations or by (X-ray or strong-lensing) observations, which should allow one to confirm or reject the prediction made by the M03 model for the central behavior of the density profile of halos.

This work was supported by the Spanish DGES grant AYA2006-15492-C03-03. We thank Donghai Zhao and co-workers for kindly providing their data.
In the differential equation (eq. [5]), the variable M in R_a is replaced by the mass accretion track M(t). As halos grow through both accretion and major mergers, the accretion tracks M(t) increase with increasing time less rapidly than does M_⋆(t), tracing the typical mass evolution of halos in any self-similar universe. Thus, ν(t) ≡ ν(M(t), t) is a decreasing function of t (see eq. [A4]) and, in the small-t asymptotic regime, ν(t)^-2 tends to zero. Taking the Taylor series expansion of [1 + ν^-2(t)x]^{3/(j+3)} inside the integral on the right-hand side of equation (A5), at leading order in ν^-1(t), we obtain | 2014-10-01T00:00:00.000Z | 2007-01-01T00:00:00.000 | {
"year": 2007,
"sha1": "4bb847d6d718936fb3f3eb6310e90ee3680e72e1",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "659bb4b90c55abfc2802404ba524daa65db8c696",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
119270273 | pes2o/s2orc | v3-fos-license | Positivity and Kodaira embedding theorem
Kodaira embedding theorem provides an effective characterization of projectivity of a K\"ahler manifold in terms of the second cohomology. Recently X. Yang [21] proved that any compact K\"ahler manifold with positive holomorphic sectional curvature must be projective. This gives a metric criterion of the projectivity in terms of its curvature. In this note, we prove that any compact K\"ahler manifold with positive 2nd scalar curvature (which is the average of the holomorphic sectional curvature over 2-dimensional subspaces of the tangent space) must be projective. In view of generic 2-tori being non-abelian, this new curvature characterization is sharp in a certain sense.
Introduction
Let (M^m, g) be a Kähler manifold of complex dimension m. For x ∈ M, denote by T'_x M the holomorphic tangent space at x. Let R denote the curvature tensor. For X ∈ T'_x M, let H(X) = R(X, X̄, X, X̄)/|X|^4 be the holomorphic sectional curvature. Here |X|² = ⟨X, X̄⟩, and we extend the Riemannian product ⟨·, ·⟩ and the curvature tensor R linearly over C, following the convention of [15]. We say that (M, g) has positive holomorphic sectional curvature if H(X) > 0 for any x ∈ M and any 0 ≠ X ∈ T'_x M. It is known that compact manifolds with positive holomorphic sectional curvature must be simply connected [17]. A three-circle property was established for noncompact complete Kähler manifolds with nonnegative holomorphic sectional curvature [9]. On the other hand, it is known that such a metric may not even have positive Ricci curvature [4].
The following result was proved by X. Yang in [21] recently, which answers affirmatively a question in [23].
If the compact Kähler manifold M has positive holomorphic sectional curvature, then M is projective. Namely M can be embedded into a complex projective space via a holomorphic map.
The key step is to show that the Hodge number h^{2,0} = 0. Then a well-known result of Kodaira (cf. Chapter 3, Theorem 8.3 of [10]) implies the projectivity.
The purpose of this paper is to prove a generalization of the above result of Yang. First of all, we introduce some notation after recalling a lemma of Berger.

Proof. A direct calculation establishes the displayed identities for each i and each i ≠ j. Equation (1.1) then follows by expanding H(Z) in terms of Z = Σ_i z_i E_i and the formulae above.
For any integer k with 1 ≤ k ≤ m and any k-dimensional subspace Σ ⊂ T'_x M, one can define the k-scalar curvature S_k(x, Σ). By the above lemma of Berger, the {S_k(x, Σ)} interpolate between the holomorphic sectional curvature, which is S_1(x, {X}), and the scalar curvature, which is S_m(x, T'_x M).
We say that (M, g) has positive 2nd scalar curvature if S_2(x, Σ) > 0 for any x and any complex 2-plane Σ.
Clearly, the positivity of the holomorphic sectional curvature implies the positivity of the 2nd scalar curvature, and the positivity of S_k implies the positivity of S_l if k ≤ l. We shall prove the following generalization of the above-mentioned result of Yang. Recall that a projective manifold M is said to be rationally connected if any two generic points in it can be connected by a chain of rational curves. By the work of [7], any projective manifold M admits a rational map f : M ⇢ Z onto a projective manifold Z such that any generic fiber is rationally connected and, for any very general point (meaning away from a countable union of proper subvarieties) z ∈ Z, any rational curve in M which intersects the fiber f^{-1}(z) must be contained in that fiber. Such a map is called a maximal rationally connected fibration for M, or MRC fibration for short. It is unique up to birational equivalence. The dimension of the fiber of an MRC fibration of M is called the rational dimension of M, denoted by rd(M).
Heier and Wong (Theorem 1.7 of [3]) proved that any projective manifold M^m with S_k > 0 satisfies rd(M) ≥ m - (k - 1). So, as a corollary of their result and Theorem 1.1 above, we have the following consequence: M is either rationally connected, or admits a surjective map f : M → C from M onto a curve C of positive genus such that, over the complement of a finite subset of C, f is a holomorphic submersion with compact, smooth fibers, each fiber being a rationally connected manifold.
Note that the intrinsic criterion via the 2nd scalar curvature can be used to imply that all compact Riemann surfaces are projective (by taking a product with a very positive P^1), while Yang's result (under the positivity of the holomorphic sectional curvature) can only be applied to P^1. In the meantime, a generic complex torus of dimension two is not algebraic. Hence the projectivity cannot possibly be implied by the positivity of S_k with k ≥ 3 (taking the product of a non-algebraic torus of complex dimension 2 with a very positive P^1, one can endow such a non-algebraic manifold with a Kähler metric satisfying S_k > 0 for k ≥ 3). In view of these examples, our result is sharp in some sense. Moreover, the positivity of S_2 is stable (namely, an open condition) under holomorphic deformations of the complex manifold (along with smooth deformations of the Kähler metrics, as specified by Kodaira-Spencer [10]). Hence our result provides a condition invariant under small deformations of the holomorphic structure. On the other hand, there are celebrated examples of Voisin [18] of Kähler manifolds of complex dimension four and above which cannot be deformed into an algebraic one via a holomorphic deformation, and there is the wide-open Kodaira problem in complex dimension three, asking whether or not a Kähler threefold can be deformed into a projective manifold.
It is well known that h^{m,0} = 0 if (M^m, g) has positive scalar curvature. The traditional Bochner formula also implies the vanishing h^{p,0} = 0 for k ≤ p ≤ m if the Ricci curvature of (M^m, g) is k-positive, namely if the sum of the smallest k eigenvalues of the Ricci tensor is positive (cf. [8]). The following result also holds.
It turns out that the original argument proving the above result contains an error. It can, however, be proved via a maximum principle argument using the co-mass (an L^∞-norm) of differential forms. Please see [14], Proposition 4.2 and Corollary 4.3, for details.
As a counterpart to Theorem 1.7 of [3], one can ask the question: for a given projective Kähler manifold M^m with S_k < 0, what is the maximal possible rational dimension? A naive conjecture which mimics Heier-Wong's theorem would be: S_k < 0 ⟹ rd(M) < k. For k = m, the conjecture says that having negative scalar curvature would imply that the manifold cannot be rationally connected. This is still unknown even for m = 2, as far as we know. (Masataka Iwai [6] shared an example of a complex surface with a Hermitian metric of negative scalar curvature which is rationally connected.) On the other hand, S_m < 0 (or just the integral of the scalar curvature being negative) does imply that H^0(M, K_M^{-⊗ℓ}) = 0 for any ℓ > 0, where K_M^{-1} is the anti-canonical line bundle, so M cannot be a Fano manifold when S_k < 0 for any k. Note also that a recent result in [13] (cf. Theorem 5.1) implies that any holomorphic map from P^2 or a two-dimensional torus into a Kähler manifold M^m (not necessarily compact) with S_2 < 0 is either constant or of rank one.
We should mention that there is also recent work of Wu and Yau [20] on the ampleness of the canonical line bundle under the assumption of negative holomorphic sectional curvature, which is another perfect example of obtaining an algebraic-geometric consequence from a metric property via the holomorphic sectional curvature.
Generally speaking, we think it is interesting to obtain algebraic-geometric characterizations of the condition S_k > 0 or S_k < 0, as well as of the conditions Ric^⊥ > 0 and Ric^⊥ < 0 studied recently by the authors in [15], where a complementary metric criterion for projectivity was given in terms of Ric^⊥_2 > 0. A complete classification result for threefolds and a partial classification of fourfolds have been obtained (cf. [16]) for Kähler manifolds with Ric^⊥ > 0. The estimates developed in the proof of this paper have also been useful [14] in proving the rational connectedness of Kähler manifolds with Ric_k > 0. We refer the interested readers to [14] for these and other notions of curvature positivity, as well as for many related results and questions.
2. The projectivity of M with positive S_2

Here we adopt the argument of [15] to show that h^{2,0}(M), the dimension of the space H^{2,0}(M) of harmonic (2,0)-forms, vanishes. Then Theorem 8.3 of [10] implies that M is projective.
Given any x_0 and v ∈ T'_{x_0}M, there exists a unitary coframe {dz^i} at x_0, which may depend on v, such that the following normal form holds (cf. [5]). More precisely, given any skew-symmetric matrix A, there exists a unitary matrix U such that ᵗUAU is in the block-diagonal form diag(λ_1 F, ..., λ_k F, 0, ..., 0), where each nonzero diagonal block is a constant multiple of

F = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.

In other words, we can choose a unitary coframe ϕ at x_0 such that s = Σ_{i=1}^{k} λ_i ϕ^{2i-1} ∧ ϕ^{2i}, where k is a positive integer and each λ_i ≠ 0 for 1 ≤ i ≤ k.
Suppose k is the unique positive integer such that s^{k+1} = 0 while s^k is not identically zero, and consider the holomorphic 2k-form σ = s^k. By the argument on p. 151 of [15], we know that σ = λ ϕ^1 ∧ ··· ∧ ϕ^{2k} ≠ 0. Now we apply Lemma 2.1 to σ at the point x_0 where |σ|^2 attains its maximum, obtaining the corresponding inequality for any v. Taking v = ∂/∂z^i, the dual of ϕ^i at x_0, and summing over i, we obtain at x_0 the inequality (2.3) for the double sum over 1 ≤ i, j ≤ 2k of the curvature terms. On the other hand, it is easy to see that S_2 > 0 implies that S_{2k} > 0. This is a contradiction to (2.3). Hence there is no nonzero s ∈ H^{2,0}(M).
In [14], via a different technique, the result has been extended to Kähler manifolds with so-called RC-2 positivity, namely a positivity condition on the curvature evaluated against any pair of unitary vectors.
3. Some related estimates
A possible alternative approach to Theorem 1.1 is to apply the maximum principle at x_0, where |s|^2 attains its maximum. In view of the compactness of the Grassmannians, we can also find a complex two-plane Σ in T'_{x_0}M such that S_2(x_0, Σ) = inf_{Σ'} S_2(x_0, Σ') > 0. We prove the following estimates, some of which were used in establishing the rational connectedness of algebraic manifolds under the condition Ric_k > 0 in [14].
Let Σ be such a 2-plane, with S_2(x_0, Σ) = inf_{Σ'} S_2(x_0, Σ'). Integrating the Bochner formula of Lemma 2.1 over v ∈ S^3 ⊂ Σ yields an averaged inequality; here f(Z) denotes the average of the integral of the function f over S^3 ⊂ Σ. We also choose a unitary frame of T_{x_0} such that R(v, v̄, ·, ·) is diagonalized, and s is a holomorphic 2-form given by s = Σ_{i≠j} a_{ij} dz^i ∧ dz^j.
Here µ_1, µ_2 are the singular values of the projection P from Σ' to Σ. The relevance to Theorem 1.1 is that at x_0, where |s|^2 attains its maximum, the estimate (3.3) applies. The integral is clearly independent of the choice of a unitary frame of the two-dimensional space spanned by {∂/∂z^i, ∂/∂z^j}, and of the choice of a unitary frame {E_1, E_2} of Σ. If the right-hand side of (3.3) had a positive lower bound, the maximum principle would show that |s|^2 = 0 at x_0, and thus |s|^2 = 0 everywhere, which would give another proof of Theorem 1.1.
Since the estimates of Proposition 3.1 have other applications, we include a proof here. The proof requires only some basic algebra and computation. Let a ∈ u(m) be an element of the Lie algebra of U(m), and consider the associated variation f(t). By the choice of Σ, f(t) attains its minimum at t = 0. This implies that f'(0) = 0 and f''(0) ≥ 0. Hence

∫ [ R(a(X), X, X, X) + R(X, ā(X), X, X) ] dθ(X) = 0;   (3.5)

∫ [ R(a²(X), X, X, X) + R(X, ā²(X), X, X) + 4 R(a(X), ā(X), X, X) ] dθ(X) ≥ 0.   (3.6)

We exploit these by looking into some special cases of a. Let W ⊥ Σ and Z ∈ Σ be two fixed vectors.
To show (3.2), let us apply (3.5) to the above a, and also to the one with W replaced by √−1 W; adding the two resulting identities, we get

∫ ⟨X, Z⟩ R(W, X, X, X) dθ(X) = 0.
To prove (3.3) we need to consider a general W, which may not be perpendicular to Σ. In other words, we consider the case |Z| = |W| = 1 with Z ∈ Σ.
Applying this to (3.6), and also to a with W replaced by √−1 W, and adding up the results, we get the estimate

∫ [ ⟨X, Z⟩ R(Z, X, X, X) + ⟨Z, X⟩ R(X, Z, X, X) ] dθ(X)   (3.8)
 + ∫ [ ⟨X, W⟩ R(W, X, X, X) + ⟨W, X⟩ R(X, W̄, X, X) ] dθ(X)
 + 2 ∫ [ ⟨X, Z⟩⟨X, W⟩ R(W, X, Z, X) + ⟨Z, X⟩⟨W, X⟩ R(X, W̄, X, Z) ] dθ(X) ≥ 0.
Applying the above to Z = E_i (i = 1, 2) and summing the results, we have

∫ [ 4 R(W, W̄, X, X) + |⟨X, W⟩|² (R_{11XX} + R_{22XX}) ] dθ(X) ≥ (2/3) S_2(x_0, Σ) + 4 ∫ [ ⟨X, W⟩ R(W, X, X, X) + ⟨W, X⟩ R(X, W̄, X, X) ] dθ(X).   (3.9)

Now we want to apply the above to all unit vectors W ∈ Σ' and take the average. Denote by P the orthogonal projection to Σ. Let {v_1, v_2} be a unitary basis of Σ'. Replacing {v_1, v_2} by a new unitary basis {a v_1 + b v_2, −b̄ v_1 + ā v_2} (where |a|² + |b|² = 1) if necessary, we may assume that P v_1 ⊥ P v_2. So we can choose a unitary basis {E_1, E_2} of Σ such that v_1 = µ_1 E_1 + α E' and v_2 = µ_2 E_2 + β E'', with the µ_i being the singular values of the projection to Σ restricted to Σ', and with E', E'' ∈ Σ^⊥. Now we apply (3.9) to W ∈ S^3 ⊂ Σ' and average. The second term on the left-hand side of (3.9) and the second term on the right-hand side of (3.9) can each be computed as averages over S^3; adding each result to its conjugate, and similarly for the remaining terms, then putting them all together while noting that S_2(x_0, Σ) = R_{1111} + 2 R_{1122} + R_{2222}, we get (3.3). This proves (3.3), which completes the proof of the proposition.
4. The high-dimensional case
Now, for a k-dimensional subspace Σ ⊂ T'_{x_0}M with S_k(x_0, Σ) = inf_{Σ'} S_k(x_0, Σ'), we derive estimates similar to those of Proposition 3.1.
Let {v_1, ..., v_k} and {E_1, ..., E_k} be unitary frames at x_0 of Σ' and Σ, respectively, and let {µ_i} be the singular values of the projection of Σ' onto Σ. Then for any E ∈ Σ and E' ⊥ Σ, we have the estimates (4.1)-(4.4). Proof. Let f(t) be the function constructed by the variation under the 1-parameter family of unitary transformations. Equations (3.5) and (3.6), as well as their proofs, remain the same. The proofs of (4.1) and (4.3) are exactly analogous to those of (3.2) and (3.4), so we omit them here.
For the given k-planes Σ and Σ', we may always take a unitary basis {v_1, ..., v_k} of Σ' and a unitary basis {E_1, ..., E_k} of Σ so that the restriction to Σ' of the projection map onto Σ is given by a diagonal matrix with respect to these bases. That is, v_i = µ_i E_i + α_i E'_i for each i, with E'_i ⊥ Σ and {µ_i} the singular values of the projection from Σ' to Σ. Now we apply (4.4) to W ∈ S^{2k−1} ⊂ Σ' and take the average of the result; the averages can be computed term by term. Similarly, we can calculate that

(k + 2) ∫_{S^{2k−1} ⊂ Σ'} [ ⟨X, W⟩ R(W, X, X, X) + ⟨W, X⟩ R(X, W̄, X, X) ] dθ(X) dθ(W) = Σ_i ∫ [ ⟨X, v_i⟩ R(v_i, X, X, X) + ⟨v_i, X⟩ R(X, v̄_i, X, X) ] dθ(X).
Using (4.1), the first half of the above can be further simplified. Putting the above together, we obtain (4.3). | 2018-04-29T18:12:22.000Z | 2018-04-25T00:00:00.000 | {
"year": 2018,
"sha1": "935ecebcdd6d416209469a6e69ae9a0843ac1cc5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1804.09696",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1d237b45c93e67b654ad0cb438805f4a239ee51a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
237751841 | pes2o/s2orc | v3-fos-license | PATH ANALYSIS OF SOCIAL SUPPORT AS DETERMINANT OF ANXIETY IN PEOPLE AT RISK OF COVID-19 DURING THE PANDEMIC
Method: The design was an explanatory study; the population comprised all people at risk of Covid-19 in Kemlagi District, Mojokerto Regency, with a sample of 150 respondents selected by random sampling. The exogenous variable was social support, measured using the Social Support Questionnaire (SSQ). The endogenous variable was anxiety, measured with the GAD-7 (Generalized Anxiety Disorder-7) questionnaire, which consists of 7 questions about affective, cognitive, physical, and behavioral signs and symptoms. The analysis used a Structural Equation Model (SEM) with the Partial Least Squares (PLS) approach, with theory-based model development and path diagram development.
INTRODUCTION
Coronavirus disease 2019, known as Covid-19, is a group of acute respiratory diseases that first occurred in Wuhan, Hubei, China, in December 2019 (Huang and Zhao, 2020a). Covid-19 is caused by a novel type of coronavirus that produces an acute respiratory syndrome and has spread rapidly around the world (Shuja, Aqeel, Jaffar, & Ahmed, 2020). The mortality rate is 2.3%, greater than that of ordinary influenza, and in contrast to severe acute respiratory syndrome (SARS), it carries a higher risk of transmission to others (Bouey, 2020). On March 11, WHO declared the Covid-19 outbreak a "pandemic" because the virus was spreading around the world. The coronavirus pandemic in Indonesia began with the identification of the first 2019 coronavirus patients on March 2, 2020.
As of May 12, 2020, 14,749 positive cases had been confirmed, with 10,679 active cases, 3,063 recovered cases, and 1,007 deaths. There were 1,766 confirmed cases of Covid-19 in East Java (Kemenkes, 2020). Mojokerto Regency is also a red zone for the spread of Covid-19, with 612 cases. The highest number of cases in Mojokerto Regency is in the Kemlagi District area (https://covid19.mojokertokab.go.id).
Covid-19 has a real impact on people's psychology, especially individuals who live in a Covid-19 red zone, known as people at risk. A person at risk is someone who, within the previous 14 days, has come from an infected country/region and has no symptoms (is healthy). Those who experience anxiety because of uncertainty about their health status may develop obsessive-compulsive behavior, such as repeated temperature checks, repeated hand washing, and sterilization. Furthermore, they must undergo independent, strict quarantine if signs and symptoms resembling Covid-19 appear. This can lead to rejection, discrimination, and stigmatization from society (Brooks et al., 2020).
The psychological effects often experienced are loneliness, rejection, stress or anxiety, depression, insomnia, and hopelessness. Some of these cases may even carry an increased risk of aggression and suicide. In research conducted by Huang and Zhao (2020b), the prevalence of generalized anxiety disorder (GAD), depressive symptoms, and poor overall sleep quality in the community was 35.1%, 20.1%, and 18.2%, respectively. Another study (Özdin & Bayrak Özdin, 2020) also stated that 23.6% of the total population experienced depression and 45.1% experienced anxiety during the Covid-19 pandemic in Turkey.
When the Covid-19 outbreak began, some people with mild symptoms of suspected infection limited contact with other people and the environment by self-isolating at home. Even though these individuals were not infected and remained physically healthy, they often experienced negative psychological effects and impaired sleep quality (Xiao, Zhang, Kong, Li, & Yang, 2020a). Mental health is an important consideration for those undergoing self-isolation because of the increased risk of COVID-19 infection. Psychological well-being and good sleep quality are influenced by many local socio-cultural factors (Yao, Yu, Cheng, & Chen, 2008). Social support is a significant social factor that refers to the care and support of others. Adequate social support has previously been reported to have positive effects on psychological health and sleep function (Xiao, Zhang, Kong, Li, & Yang, 2020b).
The purpose of this study was to determine the effect of social support on anxiety for people at risk of Covid-19 during the pandemic.
METHOD
This research was an explanatory study with a cross-sectional design. The population of this study was all people at risk of Covid-19 in Kemlagi District, Mojokerto Regency; the sample comprised 150 respondents who were literate and cooperative. Sampling was probability-based, using random sampling.
The endogenous variable was anxiety, measured using a modified Chinese version of the GAD-7 questionnaire, consisting of 7 questions about affective, cognitive, physical, and behavioral signs and symptoms. Participants were asked how often they had been bothered by each symptom during the past 2 weeks; the response options "not at all", "several days", "more than half the days", and "nearly every day" were scored 0, 1, 2, and 3, respectively. A score of 15 or more represents the cut point for identifying cases of anxiety (Gao et al., 2020). The exogenous variable was social support, measured using the Social Support Questionnaire (SSQ) to assess the amount of support received by respondents during the Covid-19 pandemic; it consists of 20 items with indicators of emotional, informational, and instrumental support (Huang & Zhao, 2020b). The score on each item was determined by the frequency indicated, with a higher score indicating a higher frequency of support. For the reliability of the questionnaire, the Cronbach's alpha value was 0.936, i.e., more than 0.7, so the social support questionnaire was considered reliable.
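As an illustration of the scoring rule just described, here is a minimal sketch in Python (the function name and input checking are our own; the 0-3 item coding and the ≥ 15 anxiety cut point follow the description above):

```python
def score_gad7(responses):
    """Score a GAD-7 form: 7 items, each coded 0-3.

    Returns the total score (0-21) and whether it meets the >= 15
    cut point used in this study to identify cases of anxiety.
    """
    if len(responses) != 7 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("GAD-7 requires exactly 7 item scores in {0, 1, 2, 3}")
    total = sum(responses)
    return total, total >= 15

# Example: a respondent bothered by most symptoms on most days.
total, is_case = score_gad7([3, 3, 2, 2, 3, 2, 1])  # -> (16, True)
```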
This study was conducted in Kemlagi Subdistrict, Mojokerto Regency, from August to September 2020. All study subjects reported their demographic data and completed two standard questionnaires assessing their social support and generalized anxiety during the COVID-19 pandemic. To ensure the accuracy of the survey, we set boundaries for some items (the age range was limited to 20-50 years, and some items were reverse-scored) and encouraged participants to answer carefully through the researcher's explanation of the questionnaire. Completing the research questionnaire took 15-30 minutes.
Statistical analysis used a Structural Equation Model (SEM) with the Partial Least Squares (PLS) approach, with theory-based model development and path diagram development, assisted by SmartPLS version 3 for Windows. The structural equations were evaluated in the following steps: 1) evaluation of the measurement (outer) model to determine the validity and reliability of the indicators measuring the latent variables; 2) evaluation of the structural (inner) model, taking into account the Q-Square goodness of fit to assess the proportion of the relationship between variables; 3) hypothesis testing by examining the significance of the structural path coefficients. As an ethical consideration, all participants agreed to and signed an informed consent form provided by the researcher before the study began. This study obtained permission from local authorities and ethical approval from the ethics committee of the Husada School of Health Science of Maluku, number RK.020/KEPK/STIK/III/2021.
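For step 3, significance of a path coefficient in PLS is typically judged by a bootstrap t-statistic compared against the 1.96 threshold cited in the Results below. The following is a simplified sketch of that idea for a single standardized path (our own illustration, not the SmartPLS implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_path_t(x, y, n_boot=5000):
    """Standardized path coefficient and its bootstrap t-statistic."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)

    def coef(xs, ys):
        xs = (xs - xs.mean()) / xs.std()
        ys = (ys - ys.mean()) / ys.std()
        return (xs * ys).mean()  # standardized slope (Pearson r)

    estimate = coef(x, y)
    boots = [coef(x[idx], y[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    t_stat = estimate / np.std(boots, ddof=1)
    return estimate, t_stat  # |t| > 1.96 -> significant at the 5% level
```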
RESULTS
Characteristics of the 150 participants were analyzed as the first step; the demographic variables are shown in Table 1. Table 1 shows that the mean age (standard deviation) of the participants was 32.7 ± 41.3 years. Most of the participants were female (69.3%) and had a high school education background (103; 68.7%). Among this sample, 111 (74%) participants were employed, and 135 (90%) were married.
The prevalence of social support and anxiety among respondents in Kemlagi during the COVID-19 outbreak is shown in Table 2. Overall, the prevalence data show that the core aspect of informative support was mostly poor; respondents' average social support was not good (more than the median). The prevalence of anxiety shows that people at risk facing the Covid-19 outbreak experienced mild anxiety symptoms. The symptoms experienced included cognitive, affective, physical, and behavioral symptoms. Some perceived the health problem as more negative or worse, although many also had positive perceptions about the stressors, so the anxiety they experienced was reduced (as shown in Table 2 above). It can be concluded that social support for people at risk in dealing with the Covid-19 outbreak was not good; the community stayed at home more and avoided social activities, so people felt less support from one another.
The results of the outer model test of all indicators, shown in Table 3, have loading factor values > 0.5 with statistical T values ≥ 1.96 and P ≤ 5%, indicating that each indicator is a significant dimension of its latent variable. The goodness of fit of the outer weights was tested to determine the validity and reliability of the variables by examining the Composite Reliability (CR) value; a value > 0.6 is satisfactory. The Average Variance Extracted (AVE), the mean of the variance extracted, was used to test the reliability of the construct variables; the minimum AVE value to establish instrument reliability is 0.5. The measurements of reliability and validity of the latent variable constructs show that all indicator blocks measuring the constructs of emotional support (X1), informative support (X2), instrumental support (X3), and anxiety (Y1) are valid and reliable, with composite reliability values > 0.6, Cronbach's alpha above 0.5, and mean extracted variance > 0.5. This means that the indicators developed in the model are valid and reliable measures of the latent variables used in this study (as shown in Table 3).
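As a sketch of the two reliability quantities just applied, computed from standardized outer loadings via the standard PLS-SEM formulas (the loading values in the example are illustrative, not the study's):

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

block = [0.82, 0.77, 0.69, 0.74]                  # hypothetical indicator block
print(composite_reliability(block) > 0.6)         # CR threshold used here
print(average_variance_extracted(block) > 0.5)    # AVE threshold used here
```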
After the outer weight test was carried out on all latent variables and goodness of fit was obtained, the results were valid and reliable, so the latent variables could be carried forward into the structural (inner weight) model analysis. The structural path coefficients in the diagram show that all indicators are valid for the latent variables, because the convergent validity test values are > 0.6. The relationships between latent variables can be seen in the structural path coefficients that were built. The results of testing the complete structural path model (inner weights), along with the loading factor values from the SmartPLS Partial Least Squares program, can be seen in Table 3.
From the calculated Q-Square value, it can be interpreted that the model explains 93.7% of the anxiety of people at risk, while the remaining 6.3% is explained by variables other than emotional support (X1), informative support (X2), and instrumental support (X3). The model can also be considered fit, and the latent-variable predictors have a strong influence on anxiety in people at risk of Covid-19 during the pandemic, as seen from the Q² value above 0.00. This is interpreted as meaning that the model is fit and that the latent predictors emotional support, informative support, and instrumental support have a strong influence on anxiety, with an NFI value > 0.
Based on the equations, the relationships between the variables (Figure 1) can be interpreted as follows. Emotional support has a direct effect on informative support of 51.6%, while the rest is the influence of other factors. Instrumental support is influenced directly by emotional support (2.6%) and directly by informative support (55.3%), while 0.42 is the indirect effect. Anxiety is directly affected by emotional support (only 5%) and directly by informative support (71.5%), while the indirect effect runs through emotional support. Hypothesis testing was done by analyzing the fitted structural model so that each path coefficient could be interpreted, as shown in Figure 1 above, establishing that: (1) Emotional support has a significant positive effect on informative support; the test results yielded a path coefficient of 0.516 with a T statistic of 8.289. (2) Emotional support has a positive and significant effect on increasing instrumental support; the loading factor path coefficient is 0.286 at the 5% significance level, with a p-value of 0.00. (3) Emotional support is not significant in increasing anxiety; the test results yielded a path coefficient of 0.05, a T statistic of 0.869 < the T-table value of 1.97, and a significance probability (P) of 0.385 > 0.05. This means that each increase in emotional support increases anxiety by 5%, but the increase is not significant. Thus, it can be concluded that the third hypothesis is not proven, namely that the dimension of emotional support does not have a significant positive effect on the anxiety of people at risk during the Covid-19 pandemic.
DISCUSSION
Regarding the effect of social support on the anxiety level of people at risk facing the Covid-19 outbreak, the research findings show that social support changes the anxiety level of people at risk by 5%, although this effect did not reach statistical significance. Coronavirus disease 2019 is a group of acute respiratory diseases of initially unknown cause. Those experiencing anxiety because of uncertainty about their health status may also develop obsessive-compulsive symptoms, such as repeated temperature checks and sterilization. Furthermore, they must undergo independent, strict quarantine, monitored by local health authorities, if signs and symptoms resembling Covid-19 appear. This can lead to rejection, discrimination, and stigmatization from society (Brooks et al., 2020).
When people at risk face the Covid-19 outbreak, emotional support helps increase their ability to interpret stressors properly and to utilize their resources to solve problems, so that anxiety levels remain mild and sleep quality good (Asmundson & Taylor, 2020). Emotional support affects the ability of people at risk to solve problems (coping effort), whether through problem-management coping strategies or emotional regulation, but contributes only weakly to the choice of effective coping strategies, so that anxiety levels are mild. Emotional support also improves the sleep quality of people at risk during the Covid-19 pandemic by increasing psychological well-being, independence in daily functional activities, perceptions of positive health, and the ability of people at risk to take advantage of the surrounding environment in facing Covid-19.
CONCLUSION
The anxiety level of people at risk facing the Covid-19 outbreak is directly affected by emotional support (path coefficient of 0.516, T statistic of 8.289) and by informational support about Covid-19 (path coefficient of 0.286, T statistic of 3.868). Instrumental support has an indirect effect on anxiety (significance probability (P) of 0.385 > 0.05). The finding of this study is that social support (emotional and informational) can reduce the anxiety level of people at risk of Covid-19 during the pandemic.
When people at risk face the Covid-19 outbreak, social support helps improve their ability to interpret stressors during the pandemic properly, to utilize the resources used in solving problems, and to mount good coping efforts, so that anxiety levels during the pandemic decrease. | 2021-09-28T01:09:59.285Z | 2021-07-04T00:00:00.000 | {
"year": 2021,
"sha1": "eba295947a94f64d525bade6c8d21295f4abfc07",
"oa_license": "CCBY",
"oa_url": "https://e-journal.unair.ac.id/IJCHN/article/download/27308/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "57fdffe964b25aa21abe3fe5fa8475287c8ec63c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
20001912 | pes2o/s2orc | v3-fos-license | Evaluation of the whitening and remineralization effects of a mixture of amorphous calcium phosphate, hydroxyapatite and tetrasodium pyrophosphate on bovine enamel
Young-Eun Lee, Dong-Ok Park, Yun-Sook Jung, Keun-Bae Song. Department of Dental Hygiene, Daegu Health College, Daegu; Department of Dental Hygiene, College of Science & Technology, Kyungpook National University, Sangju; Department of Preventive Dentistry, School of Dentistry, Kyungpook National University, Daegu, Korea
Introduction
The increase in novel disease treatments in an effort to improve quality of life has been accompanied by an increase in aesthetic treatments. In clinical dentistry, tooth whitening is a highly desirable aesthetic treatment, as tooth color is one of the most important factors related to patients' satisfaction with their appearance 1,2). The growing demand for bleaching as an aesthetic improvement has led to considerable development in bleaching products 3). Studies using scanning electron microscopy (SEM) have demonstrated that bleaching causes demineralization and degradation, and alters the microhardness and roughness of the enamel surface 4). Roughness is considered a predisposing factor for bacterial adhesion and stain absorption 5).
Remineralization occurs when calcium and phosphate in the environment recrystallize on surface crystal remnants along the enamel or dentin crystals. Saliva is the primary source of calcium and phosphate 6). Saliva does not exert a uniform effect in the mouth, and it therefore has distinct localized effects 7).
Several attempts have been made to enhance remineralization. Amorphous calcium phosphate (ACP) has been shown to be effective in remineralization and is reported to have an additional whitening effect 8,9). Hydroxyapatite (HA) is a precipitate generated by a reaction of calcium chloride and disodium phosphate, and it increases the concentration of calcium in the oral cavity and remineralization 10). According to a recent study 11), a bleaching agent containing ACP showed remineralizing and whitening effects. HA, the basic element that makes up the hard tissues of the human body, is an effective remineralizing element 12). By applying HA to the tooth surface, remineralization can occur, as HA acts as a mineral supplement of calcium and phosphate to the enamel. According to recent studies 13), the combination of HA and hydrogen peroxide (HP) is effective in tooth whitening; HA thus provides an alternative to oxidizing bleaching agents through its whitening effect. Recently, several studies reported that nano-HA repairs early carious lesions as a remineralizing agent 14,15). Huang S et al. 16) also reported that nano-HA has remineralization potential on initial enamel lesions. Lobene 17) and Scemehorn et al. 18) reported that a toothpaste containing tetrasodium pyrophosphate (TSP) had not only an anticalculus effect but also an extrinsic stain-removal effect. Farrel et al. 19) reported that bleaching bands were more effective when TSP was added. Attempts to minimize the adverse effects of bleaching treatments by increasing enamel remineralization have been conducted; however, the results are contradictory. In addition, few studies have shown a whitening or remineralization effect of a mixture containing ACP, HA, and TSP.
Therefore, the objective of this in vitro study was to evaluate the applicability of a new mixture containing ACP, HA, and TSP by examining its whitening effect and analyzing the remineralization ability of each compound on early carious enamel. The null hypotheses tested were that (1) the inorganic ions of the mixtures increase enamel remineralization, and (2) a new mixture containing ACP, HA, and TSP has a good whitening effect.
Distilled water was used as the negative control, and Opalescence F (Ultradent Products, Inc., South Jordan, CA, USA) containing 10% carbamide peroxide was used as the positive control.
Specimen preparation
The present study was approved by the Institutional Care and Use Committee of the School of Dentistry, K-University, in Daegu. For this study, we selected 300 sound bovine teeth that were cleaned and stored in 0.1% thymol solution under refrigeration for no more than a week. Three hundred cylindrical enamel specimens, 3 mm long and 8 mm in diameter, were prepared from the bovine central teeth. The specimens were embedded in self-cured acrylic resin at right angles to the long axis of the acrylic cylinder. The enamel surface was ground with an automatic polisher (Metaserv® 250 Grinder-Polisher, Buehler, USA) under cool water sequentially, using 240-, 400-, 600-, and 800-grit polishing sandpaper.
Specimen discoloration
A commercially available cola beverage (Coca-Cola, pH 2.26 ± 0.20) was stirred for over 6 hours to remove the carbon dioxide gas, and the specimens were then immersed in the cola for 24 hours. Specimens were subsequently rinsed with triple-distilled water. Discoloration was confirmed with the naked eye after the rinsing step. After measuring the initial color of the 150 discolored specimens, 15 specimens were distributed into each group according to their brightness (L*).
Handling of mixture
Of the 150 specimens, the 90 specimens nearest the average brightness value were divided into 6 groups of 15 specimens each. All test mixtures used in this study were prepared on the morning of the day of use. One test cycle was defined as follows: the specimen was treated with 20 ml of the test mixture for 10 minutes (except for Group 2, the positive control) and was then dipped into artificial saliva for 170 minutes. The artificial saliva consisted of 2.2 g/L gastric mucin, 0.381 g/L NaCl, 0.231 g/L CaCl2, 0.738 g/L KH2PO4, 1.114 g/L KCl, and 0.02% sodium azide, with a trace of NaOH to pH 7.0 20). Five liters of artificial saliva were freshly prepared on a daily basis.
In this way, 2 cycles were performed daily on every group except Group 2. The specimens were stored in distilled water at 4°C under 100% relative humidity, except during the dipping procedures. These procedures were repeated for 6 days.
The Group 2 specimens were coated with Opalescence F, a bleaching solution, for 3 hours, and the same procedure was otherwise applied except for the treatment time. After each treatment procedure, all specimens were rinsed twice with running water for 30 seconds each and then kept under 100% relative humidity, except during the dipping procedure.
Measurement of the shade change
To measure the shade change according to the mixture treatment, the shade of the enamel was measured before and after dipping the specimen in the mixture solution using a Shade Eye-NCC spectrophotometer (150131, SHOFU Co., Japan). Repeated measurements were performed three times under dry conditions. The tip of the shade measurer was consistently applied 0.5-1.0 mm above the sample's surface at right angles to the surface. Three sites were chosen for the measurements, and the L*, a*, and b* values were obtained. The magnitude of the total color difference is represented by ΔE*, calculated by the following equation: ΔE* = {(ΔL*)² + (Δa*)² + (Δb*)²}^½.
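As a quick numeric illustration of this formula (the CIE76 color difference; the L*a*b* readings below are invented for the example):

```python
import math

def delta_e(lab_before, lab_after):
    """CIE76 color difference between two (L*, a*, b*) readings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab_after, lab_before)))

# Hypothetical readings for one specimen before and after whitening.
print(delta_e((62.1, 4.3, 18.2), (70.4, 2.9, 14.6)))  # ~9.16
```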
Morphological examination
Specimens (n=7) were sputter-coated with gold to a thickness of 180-200 Å using gold ion deposition equipment (IB-3, Eiko Co., Japan) in a vacuum. Scanning electron microscopy (SEM, S-4200, Hitachi Co., Japan) with an energy-dispersive X-ray spectrometer (EDS) was then used to examine the surface morphological changes at 2000× magnification at an acceleration voltage of 15 kV.
Analysis of remineralization
The remaining specimens (n=8) were cross-sectioned longitudinally using a low-speed diamond saw (Isomet; Buehler Ltd., Lake Bluff, IL, USA), and the sectioned enamel surface was polished with 100-grit silicon carbide paper and descending grades of 6-, 3-, and 1-µm diamond paste (Buehler Ltd., Lake Bluff, IL, USA), as described by Lee et al. 21). The fluorescence images were examined using a confocal laser scanning microscope (CLSM, LSM 510, Carl Zeiss, Germany) under a fluorescein-5-isothiocyanate (FITC) field at an excitation wavelength of 488 nm and an emission wavelength of 515 nm. According to the method reported by Paris et al. 22), 3 points were selected from the enamel surface to the bottom of the decalcification area. The decalcified depth was measured and the mean was calculated using the laser scanning microscope (LSM) image browser (Carl Zeiss, Germany).
Statistical analysis
One-way ANOVA and repeated-measures ANOVA were used to examine the color difference according to the number of treatments. All statistical analyses were carried out using SPSS 18.0 for Windows (SPSS Inc., Chicago, IL, USA). A P-value of 0.05 was considered significant. Comparison of the degree of tooth whitening according to the number of treatments with the mixed solutions revealed a statistically significant difference in shade change (ΔE*) in all groups (P < 0.001) (Table 2). To quantitatively analyze the enamel shade change, the positive control (Group 2) was designated as 100%; whitening effects of 67%, 77%, 83%, and 85% were then observed in Groups 3, 4, 5, and 6, respectively.
SEM observations
Fig. 1 shows SEM images after 6 days of the mixture treatments. Compared to the untreated Group 1, the positive control treated with Opalescence® 10% (Group 2) showed a prominent, typical pattern of demineralization through dissolution of the organic/inorganic enamel elements. In Groups 3, 4, 5, and 6, treated with ACP, HA, and TSP, enamel remineralization was confirmed by the recovery of the enamel rods. In particular, the relatively high mixture concentrations in Groups 5 and 6 produced a higher level of enamel remineralization than in Groups 3 and 4, but there was no significant difference between Groups 5 and 6 (Fig. 1). Fig. 2 shows the loss of inorganic substance and remineralization in the sectioned specimens after 6 days of treatment; the right side of the central band is the air space and the left is the enamel. In the 10% CP-treated positive control (Group 2), enamel mineral loss was observed to a depth of 55-60 µm. Groups 3, 4, 5, and 6, which had been treated with the mixed solution, showed a pattern of mineral loss similar to that of the untreated group (Group 1). In our study, weak or missing fluorescence indicated an area of remineralization. The mean fluorescence depth was measured to evaluate remineralization quantitatively (Table 3). Less mineral loss was observed in the mixed-solution-treated groups than in the 10% CP-treated group (Group 2).
Discussion
This study provides the first empirical evidence that a mixture of ACP, HA, and TSP produces whitening and remineralization effects while retaining the enamel's integrity. The mixture solution was applied to the tooth surface, and the enamel remineralization effect was revealed by mineral supplementation of enamel deficient in calcium and phosphate. Remineralization promotes the whitening effect by affecting light transmission at the tooth surface and increasing surface reflection.
Bleaching agents, mainly oxidizers, act on the organic structure of the dental hard tissues, slowly degrading them to chemical by-products, such as carbonates, which are lighter in color, and carbon dioxide 23). Generally, unstable peroxides convert to unstable free radicals. These free radicals can oxidize or reduce other molecules 24). According to Mor et al. 25), at-home bleaching agents can roughen the tooth surface and composite restorations, increasing bacterial adhesion. A rough restoration surface can also cause patient discomfort and accelerate plaque and food debris buildup by increasing surface energy, which can cause secondary caries and gingivitis. Therefore, ongoing work focuses on reducing these side effects alongside the increase in aesthetic treatment.
First, we selected the soft drink (Coca-Cola) for the discoloration and demineralization process to confirm the whitening and remineralization effects of the mixtures. Several studies 26) have used cola, wine, coffee, and tea as staining solutions for enamel and composites for intrinsic and extrinsic discoloration. Bayindir et al. 27) reported the highest delta E values of prosthodontic materials at the 24-hour immersion period, similar to the results of Ergun et al. 28). Cola quantitatively establishes the stain development process within 24 h, so it has advantages over other staining solutions. We selected cola for the staining model to assess the effect of whitening on extrinsically discolored teeth with enhanced remineralization activity.
To quantitatively analyze the enamel shade change, the positive control (Group 2) was designated as 100%, and whitening effects of 67% to 85% were observed in Groups 3 to 6. Naked-eye observation indicated that the mixed-solution-treated groups showed an effect similar to that of Group 2 (the positive control).
Based on the results, we concluded that the mixtures of ACP/HA/TSP have concentration-dependent synergistic effects.
In the case of discolored dentin, bleaching agents placed on the enamel do not readily reach the stain. Better results are expected if the stain is on the enamel surface or if the enamel is defective and porous. Changes in tooth structure due to extrinsic factors have been widely investigated through SEM.
The method requires proper specimen preparation and examination conditions; these procedures alter the natural condition and/or part of the specimen structure. We previously confirmed the validity of SEM evaluation in an earlier study 21). Microhardness is usually used for quantitative measurement, whereas SEM or TEM is usually used in imaging analysis. SEM is usually used to measure histomorphologic changes of the enamel surface and to confirm demineralization or remineralization 29,30). In this study, SEM revealed the typical demineralization effect in Group 2. On the other hand, remineralization could be easily observed in Groups 3, 4, 5, and 6. To confirm this result, observation under confocal scanning microscopy showed that the depth of mineral loss in Group 2 was 60 µm. In contrast, no mineral loss was observed in the other groups, which were treated with the mixed solution, i.e., similar to Group 1 (distilled water). These findings suggest that the inorganic ions in the ACP/HA/TSP mixtures restore enamel surface morphology by inducing remineralization of the enamel, and are thus effective in bovine enamel whitening.
We used a ratio of 3%, 2%, and 1% for ACP/HA/TSP. ACP is used in paste form as 10% CPP-ACP and is incorporated in various commercial products such as GC Tooth Mousse or MI Paste (GC, IL). Several reports 16) have shown that 1%-10% HA has effects similar to those of 10% CPP-ACP, and commercial dentifrice products commonly contain 3.3% TSP (e.g., Crest Tartar Control). For synergistic effects of the mixtures, we combined the 3 components and then used one-third of the total gram amount. This study could not determine which ingredient was actually the most effective, because we did not test each ingredient separately for its remineralization effect. Nevertheless, this study has important clinical implications for the choice of a new bleaching system containing ACP, HA, and TSP with a good whitening effect. Based on the present and earlier results, we confirmed the hypotheses that the inorganic ions of a new mixture containing ACP, HA, and TSP increase enamel remineralization and that the mixture has a good whitening effect.
Our study had some limitations. First, a commercially available cola beverage was used to discolor the teeth before the mixed solution was applied. This procedure differs from the mechanisms of intrinsic and extrinsic discoloration, so a range of whitening effects could be observed naturally. Second, the qualitative and quantitative chemical composition of the specimens and the degree of inorganic ion deposition could not be compared after treatment with the mixed solution. Third, we did not test the whitening effect of the mixtures added to representative bleaching systems. Further studies are necessary to address these limitations: to simulate the oral cavity environment by inducing possible exogenous pigmentation; to measure the concentration of inorganic ions in the teeth after treatment with the mixtures; to conduct in vitro studies discoloring the teeth using a stable method such as the Stookey method; and to perform in vivo safety assessments of the mixture solution.
Conclusions
We examined the tooth whitening effect using in vitro models prior to performing clinical trials. The results revealed that the inorganic ions of the mixture of ACP, HA, and TSP promoted remineralization of bovine enamel without adversely affecting the whitening efficacy, as follows.
1. The degree of tooth whitening according to the number of treatments with the mixed solutions showed a statistically significant difference in shade change (ΔE*) in all groups (P < 0.001).
2. SEM revealed the typical demineralization effect in Group 2. On the other hand, remineralization could be easily observed in Groups 3, 4, 5, and 6.
3. Under confocal scanning microscopy, the depth of mineral loss in Group 2 was 60 µm. In contrast, no mineral loss was observed in the other groups, which were treated with the mixed solution, i.e., similar to Group 1 (distilled water).
Future clinical study is required to examine the optimal concentration of the mixture solution and confirm its whitening ability.
Fig. 2. Loss of inorganic substance and remineralization in the sectioned specimens after 6 days of treatment, observed by CLSM.
Table 1. Study group and characteristics of the treatment materials.
Table 2. Shade change (ΔE*) of the specimens according to whitening time. Values are reported as mean ± SD. *Significantly different among the groups at each time point by one-way ANOVA. †Significantly different among the experimental times by repeated-measures ANOVA. a,b,c The same letters indicate no significant difference among the groups by Tukey's multiple comparison at each whitening time.
Table 3. Fluorescence lesion depth at 6 days by CLSM. The same letters indicate no significant difference between the groups according to Tukey's multiple comparison. | 2018-01-01T00:15:09.718Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "5430528e89be40a90f7940b3b808432b6bc1a311",
"oa_license": "CCBYNC",
"oa_url": "https://synapse.koreamed.org/upload/SynapseData/PDFData/0197jkaoh/jkaoh-40-92.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "5430528e89be40a90f7940b3b808432b6bc1a311",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": []
} |
248526619 | pes2o/s2orc | v3-fos-license | Transition from predictable to variable motor cortex and striatal ensemble patterning during behavioral exploration
Animals can capitalize on invariance in the environment by learning and automating highly consistent actions; however, they must also remain flexible and adapt to environmental changes. It remains unclear how primary motor cortex (M1) can drive precise movements, yet also support behavioral exploration when faced with consistent errors. Using a reach-to-grasp task in rats, along with simultaneous electrophysiological monitoring in M1 and dorsolateral striatum (DLS), we find that behavioral exploration to overcome consistent task errors is closely associated with tandem increases in M1 and DLS neural variability; subsequently, consistent ensemble patterning returns with convergence to a new successful strategy. We also show that compared to reliably patterned intracranial microstimulation in M1, variable stimulation patterns result in significantly greater movement variability. Our results thus indicate that motor and striatal areas can flexibly transition between two modes, reliable neural pattern generation for automatic and precise movements versus variable neural patterning for behavioral exploration.
Kondapavulur et al. provide a description of how activity in primary motor cortex (M1) and dorsolateral striatum (DLS) changes as rats learn to reach to a different target in a single-pellet reach-to-grasp task.
The key result is that there is a transition period as rats begin to reach to the new target during which reach-related modulation in M1 and DLS is lost, and variability in neural activity increases, before returning to a state similar to "baseline" learning. This is an important addition to the literature on how acquired skills are adapted to changing conditions. I think the main interpretation that there are discrete "exploratory" and "automatic" modes for M1-DLS coordination during skilled reaching is probably correct. Nonetheless, I have several significant questions regarding the methodology and interpretation of results that could confound that interpretation.
MAJOR COMMENTS: 1. I think a more detailed characterization of the behavior would improve the interpretation of the physiology. a. Before the switch, rats generally reach directly to position "A". After the switch, what is classified as a reach to "A" is really a "non-B, non-low amplitude" reach. Thus, the post-switch variability in "A" reach endpoint may be much higher than for "B" reaches, which is constrained by the definition that more than half the paw must cover the pellet. This is partially quantified in Figure 1C for a single rat, but I think should be quantified across rats as a function of session type (auto, var, rel). Is it possible that the changes in M1/DLS activity are related to variability in reach endpoints (as opposed to variability in whether reaches were directed to the "A" or "B" target). b. The authors assert that the rats are making "smooth, fast reaches" across all session types, and this is key to the argument that the changes in M1/DLS activity are not linked to reach kinematics, but rather to "exploration" vs "automatic" modes. The speed of reaches is preserved based on supplemental fig 1d, e. However, it's not clear to me that the reaches are necessarily "smooth" based on these data. For example, rats could make "wobbly" reaches with similar durations, or there could be increased variability in reach trajectories despite preserved speed. There are many ways "smoothness" and trajectory variability could be quantified across rats separately from reaction time and reach duration (for example, Azim E, Jiang J, Alstermark B, Jessell TM. Skilled reaching relies on a V2a propriospinal internal copy circuit. Nature 2014; 508: 357-363.; Bova A et al. Precisely timed dopamine signals establish distinct kinematic representations of skilled movements. Elife 2020; 9:). Only a single example of reach trajectories is shown in supplementary figure 1c. If the claim is that reach kinematics are similar in "var" sessions despite changes in neural activity, there should be more detailed characterization of reach trajectories and endpoints. Specifically, variability in reach trajectory and reach endpoints should be examined as a possible correlate of changes in M1/DLS activity. c. Another possibility is that the modulation of M1 activity is related to the act of grasping. For example, Hyland et al (Hyland BI, Seeger-Armbruster S, Smither RA, Parr-Brownlie LC. Altered Recruitment of Motor Cortex Neuronal Activity During the Grasping Phase of Skilled Reaching in a Chronic Rat Model of Unilateral Parkinsonism. J Neurosci 2019; 39: 9660-9672.) found strong M1 modulation at the time of grasping. Is it possible that after several days of a pellet not being present at the "A" position, the rat stopped grasping and began feeling for the pellet with an open paw? This could be quantified by tracking the digits, or at least reviewing the videos to determine if the grasping movement changed qualitatively during "var" sessions. d. Learning curves (% success vs number of trials/sessions) for initial training to target A and relearning after the switch to target B should be shown. During initial training, rats learn to transport their paw to the pellet and grasp it sequentially (e.g., Lemke et al, 2019). Once the rats have learned to reach and grasp at target A, do they only have to learn to reach to target B, or do they need to relearn the grasp as well? 
My prediction would be that the learning curve is steeper for target B once A has already been acquired because the rats already know how to grasp once they figure out where the pellet is. 2. Related to point 1c, when does the rat make a grasping attempt with respect to changes in single unit activity? This could be checked by locking PETHs to the end of the reach rather than reach onset. If single unit modulation is more closely locked to grasping than reach onset, this would suggest that the changes in neural activity may be more linked to changes in grasping than "exploration" vs "automatic" modes.
3. Line 144: "…there was no change in 3-6 Hz M1-DLS LFP coherence between session types (data not shown)". This seems important enough to show and have statistics, since the claim is that there are specific physiologic changes related to variability in reach targets. 4. I don't think the ICMS studies contribute to the interpretation of the recording studies. Suppose reach trajectories had remained consistent during variably timed ICMS. How would that change the interpretation of the recording results? In fact, it's hard for me to conceive of a model in which variably timed stimulation doesn't lead to variability in movement endpoints. I'm willing to be convinced that the outcomes of variable stimulation timing add new or confirmatory information to the neural recordings, but right now I don't see it. 5. I think the terms "transfer learning" and "motor adaptation" are ambiguous (line 619). At least, I cannot find clear, consistent definitions for them. I think of "transfer learning" as the acquisition of one task making it easier (or harder) to learn another, while I think of adaptation as learning new parameters for a modified version of a previously acquired task. This task seems more like an "adaptation" than "transfer learning" since this is basically the same task (maybe transfer learning would be learning a pasta-grasping task after a single pellet task?). I think that the authors are arguing that because rats were trained only on target A, the introduction of a second target makes this more like learning a new task than adapting the parameters of an old one (if they had been trained on multiple targets, then a novel target was introduced, would that be adaptation?). The evidence that this is "transfer learning" is that the neural dynamics associated with reaches to target A disappear in "var" sessions. I'm not sure if that follows without having shown that dynamics are preserved in tasks that would more clearly be classified as "adaptation" as opposed to "transfer learning". Maybe this is just semantics. In any case, if the authors wish to make a distinction between transfer learning and adaptation, they should define what they mean by each term and clarify why they believe their observations are more consistent with "transfer learning" than "adaptation".
Minor points/clarifications: 1. Line 99 "…after reaching to location A was > 50%." Does this mean the success rate for reaches to A was > 50%, or that > 50% of all reaches were targeted to location A? 2. Were the central and lateral locations the same across rats (corrected for paw preference)? How were central and lateral locations defined? In Figure 1, it looks like the A and B locations are both "lateral" -is this an artifact of the illustration or were the A and B sites symmetric around the midline of the reaching slot? 3. Did success rate systematically differ between central and lateral locations, regardless of which location was assigned as "A" or "B"? A switch from the "hard" to the "easy" location may have different characteristics electrophysiologically and behaviorally than a switch from the "easy" to the "hard" location. Also, it would just be interesting to know if one location is more difficult than the other. 4. How far apart are the "A" and "B" locations? 5. Are there ever trials where a pellet isn't delivered (intentionally or unintentionally)? Variable reward schedules cause more habit-like behavior. If there is a significant proportion of trials during training in which a pellet is not present (and especially if this fraction differs across rats), this could explain why rats take different amounts of time to switch to reaching to target B. That is, I would expect rats that took longer to switch to the B target to have had a larger fraction of absent pellet trials during training. 6. Lines 237-239. I had to read this sentence several times to understand it -"In M1, when comparing the first reach to Location A, spiking to the average neural template for first reaches to Location A, there was a drop in neural pattern consistency during the variable session as compared to the other sessions." I think it means that the pattern of spiking activity in M1 was less consistent during "var" sessions than BL, auto, or rel sessions. 7. Lines 258-259: "but that there's a consistent temporal ordering of unit modulation soon after,…". I could not follow what the authors were trying to convey here. It sounds to me like they're suggesting that individual units fire in the same order before and after the "var" session, but I don't think that's what they mean. I think they mean that there is a similar pattern of consistent modulation (BL, auto), then variability (var), then consistent modulation (rel) after the target switch, but I'm not sure. 8. Line 310 -"Across all animals is shown in Fig 4f." Not clear what is shown in figure 4f from this text. Presumably how the timing of peaks changed between session types? 9. Lines 567-570 -"Thus, this "unlearning" process of switching from previously rewarded ensemble activity to variable firing patterns likely involves a global network state shift, within both cognitive and motor circuits, towards generation of newer ensemble activity for reward." I wouldn't use the term "unlearning", which to me suggests that the task has disappeared from memory (maybe it has, but my guess is rats would quickly be successful if the pellet were moved back to target A). Perhaps "adapting"? 10. Figure 2d -would make the vertical scale the same for both plots to facilitate comparison. 11. In the Figure 2a caption, it is implied that the same unit was recorded across sessions. This may be true, but no analyses are presented to support this assertion, other than that the units were recorded on the same electrode. 
Of course, it is impossible to be certain that the same unit was recorded, but there should be some indication that they are in fact the same unit if that is the claim. That said, I don't think it's critical to know whether the same units were recorded across days, so I think it would be fine to just change the caption so it's clear that there was not an attempt to verify the unit's identity across days. Fraser and Schwartz (Fraser GW, Schwartz AB. Recording from the same neurons chronically in motor cortex. J Neurophysiol 2012;107: 1970-1978) described a fairly straightforward algorithm to test single unit identity across recording sessions, but again I don't think this is necessary. 12. It may be worth commenting on how rats identify pellet location. There is evidence that they primarily use olfaction, and may not rely on vision at all (Whishaw IQ, Tomie JA. Olfaction directs skilled forelimb reaching in the rat. Behav Brain Res 1989; 32: 11-21.). I don't think this significantly changes the authors' interpretation that M1/DLS variability reflects an "exploratory" state (in fact, probably enhances it). However, it may be helpful for readers not familiar with rodent skilled reaching.
Reviewer #4 (Remarks to the Author):
This reviewer has been asked to comment specifically about the CCA analyses. The authors use CCA to find that activity shared between M1 and DLS appears to be more task-related in the automatic and relearned sessions, compared to variable sessions. This finding fits in well with the overall storyline of figuring out what is happening during the variable sessions. This is a reasonable use of CCA, although there are several points of interpretation that are currently unclear: 1) The authors consider only the top canonical variable (CV). To find the top CV, CCA was fit to the trial window (-2s to +0.5s, line 883), which includes the pre-reach period (-1 to -0.5s), the reach period (-0.1 to +0.4s), and other trial epochs. How does this choice affect interpretation? One could imagine, for example, that the M1-DLS "communication subspace" varies (e.g., rotates) considerably across these trial epochs, spanning multiple dimensions of population activity space in either M1 or DLS. How then, can we be confident that, as a 1D summary over multiple trial epochs, the top CV is indeed a good representation of M1-DLS interactions during both the pre-reach and reach trial epochs?
2) Related to the point 1), is it possible that task-related activity is present during the variable state, but it shows up in CV2 or CV3 rather than CV1?
3) Line 363 and 891: The authors state that "most" sessions included significant canonical variables between M1 and DLS, i.e., significant cross-area correlation. The authors should state how many sessions in which they could not identify any significant cross-area correlation (if it's 49% of sessions, for example, that's actually a lot of non-interactions).
4) In Fig 5d, the scatter of neural activity looks similar for Automatic (A reaches) and Relearned (A reaches). One might postulate that the neural activity should be quite different for Automatic (which is pre-learning) and Relearned (which is post-learning). Why is it reasonable that the activity looks similar in the two cases? What might that indicate about how the interactions between the two areas support (or do not support) learning? (Same question for B reaches.)
-Line 76 and 347: If the authors are going to use the phrase "communication subspace," they should probably cite Semedo et al., Neuron 2019. They should also be careful about the use of this phrase (as in the Discussion, starting at Line 577), since a communication subspace refers to more than just the canonical variables returned by CCA, but also their relationship to the other activity patterns (if any) in M1 and DLS (i.e., are there activity patterns within each area that are not captured by the canonical variables?).
-Line 882: References 10-12 are cited for CCA. This might be a typo.
-Line 885: "mean activity in each group was subtracted." The authors should justify why this is needed and how it affects interpretation.
-Supp Fig 3B: Explain what the reader is supposed to take away from these CCA R^2 values. Also, it's unclear how they support the title of this figure: "Majority of M1 and DLS spiking not related to reach direction."
INTRODUCTION TO REVISED MANUSCRIPT
We thank the reviewers (R1/R2/R3/R4) for their thorough review and comments, which have significantly strengthened our approach. We also greatly appreciate the general enthusiasm for our study! We outline specific changes in response to the comments below:
Detailed response to Reviewer #1:
"The manuscript by Kondapavulur et al. examined the neural firings and the variability in the M1 and the DLS, and analyzed the temporal changes in the firings and variability in the face of a change in target location. Their results indicate that M1 and DLS flexibly transited between two modes, reliable neural pattern generation for automatic/precise movements and variable neural patterning for behavioral exploration. Although the temporal change of the firing pattern to re-learn new target location is very interesting, several major points of the experiment should be clarified." We thank the reviewer for their positive comments and appreciate the feedback.
"Materials and Methods 1. Individual rats have dominant arms to take the pellet. The authors need to describe the L/R side of electrode together with the side of arm to take the pellet." We appreciate that this was unclear in the methods, and we have clarified that the dominant reaching arm was identified prior to electrode implantation to determine implantation hemisphere, particularly in Methods, Surgery (lines 715-716) and Methods, Behavioral training (lines 750-756).
"2. The authors should add schema or picture showing the location of the electrodes in the M1 and the DLS."
We thank the reviewer for providing this clarifying suggestion and have added a schema of M1 and DLS locations in the Figure 1a inset. The figure legend has been updated accordingly (line 1166).
"3. Reach onset (RO). The authors should describe the timing and the definition of the reach onset in detail. RO is the timing of taking behavior, first touch, or taking the pellet? Since time-locked analysis is critically important, unclear onset timing may lose the typical pattern of the firings in Variable. "
We have included the definition of reach onset (RO) in the Methods section, i.e., initiation of forward displacement of the paw after the paw has completely rotated from flexion to extension, in Methods, Behavioral analysis (lines 779-781).
"4. Figure 6. Although the results may be interesting, it is difficult to find the relation or the significance to support the authors current findings." We thank Reviewer 1 (and Reviewer 3) for pointing out that the findings in Figure 6 are difficult to connect back to the main findings. We have added additional text to better frame our approach and to motivate these experiments (lines 440-450). In brief, there is now clear evidence that reliable and predictable patterns of neural activity are associated with robust movement control (e.g., Churchland and Shenoy, Nature, 2012 and JA Gallego, et al, Neural Manifolds for the Control of Movement, Neuron 2017). Notably, it is quite likely that our observed predictable neural sequences of firing in M1/DLS are analogous to these findings (indeed we also quantify the predictable dynamics).
The main question, then, is whether a loss of reliable sequencing and a shift to variable firing are still capable of driving movements. While the ICMS results are certainly not in awake animals, they do suggest that variable firing patterns in M1 are still capable of driving movements, albeit with end-point variability. While we are certainly open to removing these results, we respectfully suggest that they provide support for the notion that reliable and variable patterning in M1 are still potentially movement potent. These also motivate future, and more challenging work, in which more naturalistic patterning of M1 (e.g., with emerging methods such as holographic stimulation) might also be testable.
Minor points
Line 904 "0.5-1 cc" should be "0.5-1 ml." We have updated this line to "0.5-1mL." Line 1100 Figure 1f. Please explain what is r1 and r2. We have updated the legend for Figure 1f to reflect that r1 and r2 are the first and second reach onsets, respectively, within the demonstrated trial (lines 1172-1174).
Line 1104, Figure 1d: Please explain what the single black circle indicates. We believe that this question is regarding Figure 1c. The legend for Figure 1c explains that the single black circle indicates pellet location.
Line 1104, Figure 2f, g, h, and i: It is difficult to understand the mean or the definition of Y-axes. We have updated the figure legend to clarify the metrics in Figure 2f-i (lines 1188-1193).
Line 1177: Legend says "tip of digit 5 (D5, blue), tip of digit 4 (D4, purple)" but Figure 6 shows D4 and D3. We have corrected the legend to accurately reflect D4 and D3 as depicted in the figure (line 1252).
Line 1187 Supp. Fig 1d and 1e. Please explain "Normalized reaction time" and "Normalized R1 time" in the legend or main text. We have updated the legend for Supplemental Figure 1 to reflect the process of normalization, which involved subtracting the minimum value across sessions (1 minimum value per rat) for the reaction time (Supp. Fig 1g) and the first reach time (Supp. Fig 1h) separately (lines 1270-1272,1277-1278).
Detailed response to Reviewer #2:
"This paper provides an illuminating analyses of changes in neuronal assemblies in not one but two brain areas associated with success in developing skilled reaching movements. There is a load of work included and each figure is consequently dense with information that is sometimes hard to follow without some experience in the mathematical tools employed. However the paper itself is clearly written and has a convincing argument for the conclusion that these cortical and striatal neurons are deeply involved both in the effective skilled movements and in their modification during training to a different target." We sincerely thank the reviewer for their kind summary regarding the impact of the project described.
Detailed response to Reviewer #3:
"Kondapavulur et al provide a description of how activity in primary motor cortex (M1) and dorsolateral striatum (DLS) change as rats learn to reach to a different target in a single pellet reach-to-grasp task. The key result is that there is a transition period as rats begin to reach to the new target during which reach-related modulation in M1 and DLS is lost, and variability in neural activity increases, before returning to a state similar to "baseline" learning. This is an important addition to the literature on how acquired skills are adapted to changing conditions. I think the main interpretation that there are discrete "exploratory" and "automatic" modes for M1-DLS coordination during skilled reaching is probably correct. Nonetheless, I have several significant questions regarding the methodology and interpretation of results that could confound that interpretation." We thank the reviewer for their positive review regarding the impact of this study and appreciate the feedback.
"MAJOR COMMENTS: 1a. I think a more detailed characterization of the behavior would improve the interpretation of the physiology. Before the switch, rats generally reach directly to position "A". After the switch, what is classified as a reach to "A" is really a "non-B, non-low amplitude" reach. Thus, the post-switch variability in "A" reach endpoint may be much higher than for "B" reaches, which is constrained by the definition that more than half the paw must cover the pellet. This is partially quantified in Figure 1C for a single rat, but I think should be quantified across rats as a function of session type (auto, var, rel). Is it possible that the changes in M1/DLS activity are related to variability in reach endpoints (as opposed to variability in whether reaches were directed to the "A" or "B" target)." We thank the reviewer for this observation and introduction of alternate behavioral interpretations. For animals with top-down videos, we quantified spread via standard deviation in the X and Y directions of the camera image, as well as collapsed average standard deviation, categorized by session type. Average spread was greatest in the baseline and automatic conditions. In the relearned condition, there was significantly less deviation in the Y and averaged X-Y directions than there was at baseline; no other conditions (automatic, variable) had significantly different deviations than baseline. We have added this data into Supplementary Figure 2 (lines 1280-1285), and have shifted the remaining supplementary figures accordingly. We interpret this to mean that variability in end-points are relatively stable as the mean target goal is shifting during exploration, as detailed in Results, Loss of consistent reach-locked M1 and DLS neural spiking during exploration (lines 209-216).
"1b. The authors assert that the rats are making "smooth, fast reaches" across all session types, and this is key to the argument that the changes in M1/DLS activity are not linked to reach kinematics, but rather to "exploration" vs "automatic" modes. The speed of reaches is preserved based on supplemental fig 1d, e. However, it's not clear to me that the reaches are necessarily "smooth" based on these data. For example, rats could make "wobbly" reaches with similar durations, or there could be increased variability in reach trajectories despite preserved speed. There are many ways "smoothness" and trajectory variability could be quantified across rats separately from reaction time and reach duration ( We appreciate the reviewer's comment asking for better characterization of reaching kinematics. In E Azim, et al, Nature 2014, the authors quantified velocity over distance, and they statistically compared "direction reversals" during reaching, which we qualitatively do not see visually in this experimental paradigm. In A Bova, et al, ELife 2020, the comparable metric for our study would be maximum reach velocity. Thus, we have added the following to Supplemental Figure 1: 1) panel 'i' demonstrating velocity profiles locked to grasp across session types for an example animal, additionally demonstrating that there are no "direction reversals" on average after switching location, and 2) panel 'j' demonstrating that maximum velocity during reach-to-grasp does not change across session types (for all animals) (lines 1272-1275). These findings were also added to Results, Loss of consistent reach-locked M1 and DLS neural spiking during exploration (lines 146-148). With regard to better characterization of reach endpoints, we have quantified the spread of reach endpoints across session types, detailed in the response to point 1a.
"1c. Another possibility is that the modulation of M1 activity is related to the act of grasping. For example, Hyland et al (Hyland BI, Seeger-Armbruster S, Smither RA, Parr-Brownlie LC. When we qualitatively reviewed the videos, there was no difference in grasping movements after switching the pellet location -that is the grasping was still quick, and most every reach was followed by a grasp. To further explore whether the variable sessions had more reaches and fewer grasps, we examined the proportion of reaches that had grasps by animal across session types. There was no significant difference in the proportion of reaches followed by grasping movements across session types, as seen in Supp Fig 1e (lines 1268-1269). Additionally, we have included a series of videos demonstrating reaching behavior across the session types, with qualitatively preserved reach-to-grasp sequences (Supp Vid 1, lines 1326-1336). Finally, we would also like to clarify that both M1 and DLS modulation are changing during this behavioral paradigm, not just M1. Canonically, DLS is involved non-dexterous forelimb movement (AK Dhawale, et. al, The basal ganglia control the detailed kinematics of learned motor skills, Nature Neuroscience 2021, SM Lemke, et. al, Emergent modular neural control drives coordinated motor actions, Nature Neuroscience 2019), and this patterning simultaneously changes alongside that of M1, indicating that this likely represents a change in reach-to-grasp combined strategy rather than grasping behavior alone.
"1d. Learning curves (% success vs number of trials/sessions) for initial training to target A and relearning after the switch to target B should be shown. During initial training, rats learn to transport their paw to the pellet and grasp it sequentially (e.g., Lemke et al, 2019). Once the rats have learned to reach and grasp at target A, do they only have to learn to reach to target B, or do they need to relearn the grasp as well? My prediction would be that the learning curve is steeper for target B once A has already been acquired because the rats already know how to grasp once they figure out where the pellet is." We have added these learning curves for our cohort as Supplementary Figure 1, panel A (top), in addition to the curve for relearning after switch to target B (panel A, bottom) (lines 1264-1265). To address the reviewer's question, the rats only need to learn reach to target B, and quickly apply grasping knowledge as seen by the higher accuracy (>50%) by post-switch session 2 in animals that reached the variable state more quickly. Thus, as the reviewer predicted, there is a quicker learning curve for target B if they are able to reach the variable state.
"2. Related to point 1c, when does the rat make a grasping attempt with respect to changes in single unit activity? This could be checked by locking PETHs to the end of the reach rather than reach onset. If single unit modulation is more closely locked to grasping than reach onset, this would suggest that the changes in neural activity may be more linked to changes in grasping than "exploration" vs "automatic" modes." We took this advice and re-ran the analyses from Figure 2 locked to first grasp instead of to first reach. We found that there was a similarly significant increase in minimum Fano factors during the variable session across 4/5 animals. One animal did not demonstrate this increase, likely due to high baseline variability when aligned to first grasp. Using the same variable sessions as previously identified, we also calculated trialaveraged z-scored unit modulation in M1 and DLS during the period -0.5s : first grasp : 0.5s. In M1, there was a significant decrease in unit modulation from baseline and automatic to variable, and significant increase in unit modulation from variable to relearned. In DLS, there was a significant decrease in unit modulation from the automatic to variable sessions, and significant increase from variable to relearned. We have added these findings to the Results section and to Supplemental Figure 5. Please see Results, Loss of consistent grasplocked M1 and DLS neural spiking during transfer learning (lines 237-253) and Supplemental Figure 5 (lines 1305-1311).
"3. Line 144: "…there was no change in 3-6 Hz M1-DLS LFP coherence between session types (data not shown)". This seems important enough to show and have statistics, since the claim is that there are specific physiologic changes related to variability in reach targets." We appreciate the insight that this data would enrich the discussion regarding neurophysiologic correlates related to movement variability. We have included a Supplementary Figure 4 (lines 1300-1304) that reflects the finding that baseline session reaches to A and variable session reaches to B have no significant difference in 3-6Hz M1-DLS LFP coherence and have updated the corresponding results section: Results, Stability of 3-6Hz M1-DLS coherence during variable state (lines 226-235).
"4. I don't think the ICMS studies contribute to the interpretation of the recording studies. Suppose reach trajectories had remained consistent during variably timed ICMS. How would that change the interpretation of the recording results? In fact, it's hard for me to conceive of a model in which variably timed stimulation doesn't lead to variability in movement endpoints. I'm willing to be convinced that the outcomes of variable stimulation timing add new or confirmatory information to the neural recordings, but right now I don't see it."
We have added additional text to better frame our approach and to motivate these experiments (lines 440-450); please also see the response to R1, point 4. We agree that the likely outcome was that variable ICMS patterning in M1 (as compared to reliable patterning) leads to more variable movements. However, as the reviewer points out, it was still possible that this is not the case (especially as, to the best of our knowledge, this experiment has not been done). Such a result is particularly conceivable because there are multiple other downstream areas (e.g., the red nucleus, reticular nucleus and the spinal cord) which could easily filter the output of M1 to make movements more similar. We thus felt it important to demonstrate that M1 variability (albeit using ICMS) may in fact be directly mapped to end-point variability. In our view, when the ICMS results are considered together with our recording results, they really highlight the full potential of M1 as a controller, not just as a reliable pattern generator but also an active modulator of behavioral variability and exploration. One could also consider simply removing Figure 6. However, we would like to point out that, in that case, we are less directly able to claim that M1 patterning (consistent or variable) has the potential to be directly mapped to movement control. Finally, in our view, the ICMS results perhaps help motivate future experiments in awake behaving animals where M1 patterning can be modulated with emerging methods in order to assess how ensemble patterns may be directly mapped to goal-directed movement control.
"5. I think the terms "transfer learning" and "motor adaptation" are ambiguous (line 619). At least, I cannot find clear, consistent definitions for them. I think of "transfer learning" as the acquisition of one task making it easier (or harder) to learn another, while I think of adaptation as learning new parameters for a modified version of a
previously acquired task. This task seems more like an "adaptation" than "transfer learning" since this is basically the same task (maybe transfer learning would be learning a pasta-grasping task after a single pellet task?). I think that the authors are arguing that because rats were trained only on target A, the introduction of a second target makes this more like learning a new task than adapting the parameters of an old one (if they had been trained on multiple targets, then a novel target was introduced, would that be adaptation?). The evidence that this is "transfer learning" is that the neural dynamics associated with reaches to target A disappear in "var" sessions. I'm not sure if that follows without having shown that dynamics are preserved in tasks that would more clearly be classified as "adaptation" as opposed to "transfer learning". Maybe this is just semantics. In any case, if the authors wish to make a distinction between transfer learning and adaptation, they should define what they mean by each term and clarify why they believe their observations are more consistent with "transfer learning" than "adaptation"." We thank the reviewer for bringing up this point. First, we can begin with why this behavior is less consistent with typical notions of adaptation: adaptation is the recovery of motor performance within a changed environment, a process which occurs through error-based learning (R Shadmehr, FA Mussa-Ivaldi, J Neurosci 1994; RD Seidler, et. al, Adv Exp Med Biol 2013); moreover, most experimental paradigms include testing of an "aftereffect" following the period of adaptation. Given our paradigm is so different from such experiments, we did not want to create confusion by adopting the terminology of adaptation. We also do agree that this behavior is not transfer learning in the classical sense either, as detailed by the reviewer above. Thus, we have updated the entire text to describe the paradigm as behavioral exploration in response to errors, leading to "relearning" or convergence to a new strategy.
Minor points/clarifications: 1. Line 99 "…after reaching to location A was > 50%." Does this mean the success rate for reaches to A was > 50%, or that > 50% of all reaches were targeted to location A? We have clarified this sentence to reflect the former, that success rate for reaches to A was greater than 50% (lines 98-99).
2. Were the central and lateral locations the same across rats (corrected for paw preference)? How were central and lateral locations defined? In Figure 1, it looks like the A and B locations are both "lateral" -is this an artifact of the illustration or were the A and B sites symmetric around the midline of the reaching slot? The central and lateral locations were not exactly the same across rats, as different rats had different "preferred" reaching locations with regard to cross-directionality and amplitude (i.e. a right paw-reaching rat could reach either straight centrally or across to the left without encountering the wall, with sufficient distance away from the wall such that reaching was encouraged over licking); this first location was defined as 'A.' 'B' was a location that was one pellet arm width (1 cm) over from reach location A, either towards the center or laterally, with the same distance away from the center of the slit in the wall as A was (i.e. different angle from midline, same reach amplitude). We thank the reviewer for pointing out the ambiguity between locations A and B within the illustration. The A and B locations were not symmetric around midline, and this has now been updated in the illustration to be more accurate. We have additionally clarified the A/B pellet locations within Methods, Behavioral training (lines 750-756).
3. Did success rate systematically differ between central and lateral locations, regardless of which location was assigned as "A" or "B"? A switch from the "hard" to the "easy" location may have different characteristics electrophysiologically and behaviorally than a switch from the "easy" to the "hard" location. Also, it would just be interesting to know if one location is more difficult than the other. Success rate was similarly > 50% to A at baseline, whether the location was central or lateral. Additionally, to control for this possibility of differences in difficulty, both types of switches were represented (i.e. from central to lateral, and lateral to central), as detailed in Results: Transfer learning of an automatic skill is a multi-day process (lines 100-102).
4. How far apart are the "A" and "B" locations? As now detailed in point 2, the distance between A and B was 1cm, with the same amplitude away from the wall.
5. Are there ever trials where a pellet isn't delivered (intentionally or unintentionally)? Variable reward schedules cause more habit-like behavior. If there is a significant proportion of trials during training in which a pellet is not present (and especially if this fraction differs across rats), this could explain why rats take different amounts of time to switch to reaching to target B. That is, I would expect rats that took longer to switch to the B target to have had a larger fraction of absent pellet trials during training. We thank the reviewer for bringing up this nuance. We calculated the proportion of trials during early learning in which no reward was presented in error and compared this to the number of days it took to reach the variable session. We have added this as Supplemental Figure 1d, demonstrating that among the 5 animals with a variable session, the relationship between no-reward-available trials and variable day has R^2 = 0.743, p = 0.0604 (lines 1267-1268). It is possible that variable reward schedules could play some role in causing habit-like, automatic behavior, as does number of trials to A.
6. Lines 237-239. I had to read this sentence several times to understand it -"In M1, when comparing the first reach to Location A, spiking to the average neural template for first reaches to Location A, there was a drop in neural pattern consistency during the variable session as compared to the other sessions." I think it means that the pattern of spiking activity in M1 was less consistent during "var" sessions than BL, auto, or rel sessions. We have simplified this sentence to reflect that during the variable session, there was a drop in trial-template neural pattern consistency in M1 (lines 267-269).
7. Lines 258-259: "but that there's a consistent temporal ordering of unit modulation soon after,…". I could not follow what the authors were trying to convey here. It sounds to me like they're suggesting that individual units fire in the same order before and after the "var" session, but I don't think that's what they mean. I think they mean that there is a similar pattern of consistent modulation (BL, auto), then variability (var), then consistent modulation (rel) after the target switch, but I'm not sure. We have edited this sentence to clarify that we mean the latter point, that there is a return to temporal consistency of firing across units at first reach onset after the variable session (lines 287-289).
8. Line 310 - "Across all animals is shown in Fig 4f." Not clear what is shown in figure 4f from this text. Presumably how the timing of peaks changed between session types? Figure 4f details the peak population firing rate for a trial, across trials, across animals, grouped by session type; this has been clarified in the text (lines 1215-1216).
9. Lines 567-570 -"Thus, this "unlearning" process of switching from previously rewarded ensemble activity to variable firing patterns likely involves a global network state shift, within both cognitive and motor circuits, towards generation of newer ensemble activity for reward." I wouldn't use the term "unlearning", which to me suggests that the task has disappeared from memory (maybe it has, but my guess is rats would quickly be successful if the pellet were moved back to target A). Perhaps "adapting"? We thank the reviewer for pointing out this distinction and have updated the phrasing per the response more fully explained in Major Comment 5 above (lines 620-623).
10. Figure 2d -would make the vertical scale the same for both plots to facilitate comparison. We thank the reviewer for this comment. We have decided to leave the vertical scale as is to highlight the point of the figure, that each animal has an increase in Fano factors of unit spiking during the variable session. Comparison across animals is less meaningful, due to the units being sampled having different baseline modulation from animal to animal.
11. In the Figure 2a caption, it is implied that the same unit was recorded across sessions. This may be true, but no analyses are presented to support this assertion, other than that the units were recorded on the same electrode. Of course, it is impossible to be certain that the same unit was recorded, but there should be some indication that they are in fact the same unit if that is the claim. That said, I don't think it's critical to know whether the same units were recorded across days, so I think it would be fine to just change the caption so it's clear that there was not an attempt to verify the unit's identity across days. Fraser and Schwartz (Fraser GW, Schwartz AB. Recording from the same neurons chronically in motor cortex. J Neurophysiol 2012; 107: 1970-1978) described a fairly straightforward algorithm to test single unit identity across recording sessions, but again I don't think this is necessary. We thank the reviewer for highlighting the ambiguity, and we are not claiming that the same unit was recorded across days. We have updated the figure legend accordingly (lines 1182-1183, 1194).
12. It may be worth commenting on how rats identify pellet location. There is evidence that they primarily use olfaction, and may not rely on vision at all (Whishaw IQ, Tomie JA. Olfaction directs skilled forelimb reaching in the rat. Behav Brain Res 1989; 32: 11-21.). I don't think this significantly changes the authors' interpretation that M1/DLS variability reflects an "exploratory" state (in fact, probably enhances it). However, it may be helpful for readers not familiar with rodent skilled reaching. We agree with the evidence from Whishaw and Tomie, 1989, that rodents identify pellet location via olfaction, and with the suggestion that this would be helpful for the broader reading audience. Thus, we have included this information within Methods, Behavioral training (lines 766-767).
Detailed response to Reviewer #4:
This reviewer has been asked to comment specifically about the CCA analyses. The authors use CCA to find that activity shared between M1 and DLS appears to be more task-related in the automatic and relearned sessions, compared to variable sessions. This finding fits in well with the overall storyline of figuring out what is happening during the variable sessions. This is a reasonable use of CCA, although there are several points of interpretation that are currently unclear: We thank the reviewer for the positive review on use of CCA and appreciate the feedback.
"1) The authors consider only the top canonical variable (CV). To find the top CV, CCA was fit to the trial window (-2s to +0.5s, line 883), which includes the pre-reach period (-1 to -0.5s), the reach period (-0.1 to +0.4s), and other trial epochs. How does this choice affect interpretation? One could imagine, for example, that the M1-DLS "communication subspace" varies (e.g., rotates) considerably across these trial epochs, spanning multiple dimensions of population activity space in either M1 or DLS. How then, can we be confident that, as a 1D summary over multiple trial epochs, the top CV is indeed a good representation of M1-DLS interactions during both the pre-reach and reach trial epochs?" Our motivation to use the broad model was first driven by our past work. Use of the top CV determined from a broad time period for comparison across animals and time has been used across two different experimental paradigms, including early learning and stroke recovery (T Veuthey*, K Derosier*, et al, Nature Communications 2020;L Guo, et al, Cell Reports 2021). One of the findings from these papers was there was always some degree of modulation in the CCA subspace, i.e. both prior to and during reach. Learning appears to substantially increase subspace activation. A second reason to use the broad time period here is that we did not want to only fit a model during reach for the variable session, which would limit conclusions surrounding when M1-DLS interactions are most modulated. Based on the past results, we anticipated that the broader model is perhaps able to better capture temporal coordination, and we can examine the RMI as a measure of changes in task-related subspace activation for each of the session types.
Here, we also followed a similar approach. One of the findings of this approach for the "broad window" CCA analysis is that the reach epoch is when M1-DLS cross-area subspace activity is maximal during baseline and relearned states (i.e. Fig 5c). This allowed us to then calculate the RMI by comparing pre-reach to reach period activations of this subspace. We then utilized the RMI to determine how CCA subspace reach-related modulation changed during the variable session; we found that dominant CV subspace activity was less increased by reaching, as evidenced by decreased RMI.
We can also answer this question of broad-time period CCA model validity by comparing R^2 values of submodels built on pre-reach (-1.5s : -0.5s relative to reach onset) and reach (-0.5s : reach onset : 0.5s) periods to that of the full model. For this analysis, only sessions with all three R^2 values were included for analysis. For models built on reach to A, there was no significant difference between R^2 for pre-reach (pre) vs broad (p=0.677), or reach vs. broad (p=0.585). This indicated that both epoch-based CCA models and broad time period-based CCA models are similarly generalizable. These results are consistent with our past results, that there is some degree of cross-area transmission across time during the experiment; the extent of transmission can be modulated by learning and exploratory behaviors. Of note, many sessions only had one significant CV, and thus the top CV was used for similarity of comparison across animals and sessions. 17 sessions across 5 animals met criteria for CCA analysis inclusion for reaches to A, as detailed in the Methods section - 2 had no significant CVs (11.8%), 10 had 1 significant CV (58.8%), 4 had 2 significant CVs (23.5%), and 1 had 3 significant CVs (5.88%). 16 sessions across 5 animals with reaches to B met criteria - 3 had no significant CVs (18.8%), 9 had 1 significant CV (56.3%), 3 had 2 significant CVs (18.8%), and 1 had 3 significant CVs (6.25%).
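To make the model-comparison procedure concrete, here is a minimal sketch of computing held-out R^2 for the top canonical pair (Python with scikit-learn as an illustrative stand-in for the MATLAB canoncorr pipeline named in the Methods responses; the arrays, fold count and windowing are assumptions). The same routine can be run on samples restricted to the pre-reach, reach, or broad window to compare submodels:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.model_selection import KFold

def cv_cca_r2(m1, dls, n_folds=10):
    """Cross-validated R^2 between the top M1 and DLS canonical projections.

    m1:  (n_samples, n_m1_units) binned spiking, trials concatenated in time
    dls: (n_samples, n_dls_units)
    Note that sklearn's CCA mean-centers each variable internally, matching
    the mean subtraction step discussed in the responses below.
    """
    r2 = []
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(m1):
        cca = CCA(n_components=1).fit(m1[train], dls[train])
        u, v = cca.transform(m1[test], dls[test])  # held-out projections
        r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
        r2.append(r ** 2)
    return float(np.mean(r2))
```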
Finally, to examine whether reach-epoch limited models might demonstrate preserved reach-related subspace activations in contrast with our findings in Fig. 5f,g, we compared trial peak subspace modulation at -0.2s to 0.2s around reach onset across session types. Strikingly, across M1 and DLS we also found lower subspace modulation for both A and B reaches during the variable session, as compared to the automatic state and relearned state, respectively. This data has been added as Supplemental Figure 7b,c (lines 1321-1324), and to Results, Changes in M1-DLS cross-area subspace modulation (lines 421-430), demonstrating that whether the broader time window or reach-related window is used to build the CCA model, there is a drop in reach-related cross-area activity.
"2) Related to the point 1), is it possible that task-related activity is present during the variable state, but it shows up in CV2 or CV3 rather than CV1?" We thank the reviewer for pointing out this alternate possibility. For the valid variable sessions with reaches to A, only one session (from one rat) had a second significant CV. Because we only had a single session, we tested whether the distribution of pre-reach and reach-related activity was significantly different using the Kolmogorov-Smirnov test (K-S test) of two samples, and found no significant difference (M1: ks2stat = 0.0769, p = 0.411; DLS: ks2stat = 0.0538, p = 0.835).
For the valid variable sessions with reaches to B, we similarly performed K-S tests by session to determine whether there was significantly different activity during the reach period in non-top CVs. As detailed in the response to point 1, 17 sessions across 5 animals met criteria for CCA analysis inclusion for reaches to A, as detailed in the Methods section - 2 had no significant CVs (11.8%), 10 had 1 significant CV (58.8%), 4 had 2 significant CVs (23.5%), and 1 had 3 significant CVs (5.88%). 16 sessions across 5 animals with reaches to B met criteria - 3 had no significant CVs (18.8%), 9 had 1 significant CV (56.3%), 3 had 2 significant CVs (18.8%), and 1 had 3 significant CVs (6.25%). We have updated Methods, Cross-area neural subspace accordingly (lines 950-955).
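A minimal sketch of the epoch-comparison test described above (Python; scipy's ks_2samp plays the role of MATLAB's kstest2, whose output naming, "ks2stat", appears in the reported values; the projection array and epoch indices are hypothetical):

```python
import numpy as np
from scipy.stats import ks_2samp

def cv_epoch_ks(projection, pre_idx, reach_idx):
    """Two-sample K-S test comparing the distribution of subspace activity
    between the pre-reach and reach epochs for one canonical variable.

    projection: (n_trials, n_bins) trial activity projected onto a CV.
    pre_idx, reach_idx: bin indices for the two epochs.
    """
    pre = projection[:, pre_idx].ravel()
    reach = projection[:, reach_idx].ravel()
    stat, p = ks_2samp(pre, reach)  # analogous to MATLAB's kstest2
    return stat, p
```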
"4) In Fig 5d, the scatter of neural activity looks similar for Automatic (A reaches) and Relearned (A reaches). One might postulate that the neural activity should be quite different for Automatic (which is pre-learning) and Relearned (which is post-learning). Why is it reasonable that the activity looks similar in the two cases? What might that indicate about how the interactions between the two areas support (or not support) learning? (Same question for B reaches.)"
What we're seeing here might just be mass driving of M1 to DLS rather than the details of A vs. B; that is, M1-DLS communication during reaching is a consistent M1 to DLS drive in the Automatic and Relearned states.
An alternate possibility is that for the Automatic state, there is motor noise (i.e. the model for A and the model for B are similar) such that reaches to B are within the motor noise of reaches to A, and for the Relearned state there could be two co-existing models. However, given that the overwhelming majority of neurons aren't selectively modulated for either reach type (Supp. Fig. 6a), evidence points towards the "mass driving" hypothesis.
"[…] Fig 5f and 5g, which show some positive and some negative RMI?" The sign of CV1 is arbitrary; therefore we chose the sign of CV1 that enabled mean activity along CV1 within a session to be the same direction across sessions (e.g. positive). Thus, we could directly compare RMI of trials across sessions and across animals. In those trials where the RMI is negative, the projected trial spiking on CV1 is less correlated during reach than it is prior to reach.
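A minimal sketch of the sign-alignment convention and the RMI computation (Python/NumPy; windows follow the definitions given later in these responses, with the reach period at -0.1 s to 0.4 s and the pre-reach period at -1 s to -0.5 s around reach onset; array names are illustrative):

```python
import numpy as np

def align_cv_sign(projection):
    """Flip the (arbitrary) sign of a canonical variable so that mean
    activity along it within a session is positive, making RMI directly
    comparable across sessions and animals."""
    s = 1.0 if projection.mean() >= 0 else -1.0
    return projection * s

def relative_modulation_index(projection, pre_idx, reach_idx):
    """RMI per trial: median subspace activity during the reach period
    minus median activity during the pre-reach period. A negative RMI
    means activity along CV1 is higher before the reach than during it."""
    proj = align_cv_sign(projection)
    return (np.median(proj[:, reach_idx], axis=1)
            - np.median(proj[:, pre_idx], axis=1))
```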
Minor comments: -Line 76 and 347: If the authors are going to use the phrase "communication subspace," they should probably cite Semedo et al., Neuron 2019. They should also be careful about the use of this phrase (as in the Discussion, starting at Line 577), since a communication subspace refers to more than just the canonical variables returned by CCA, but also their relationship to the other activity patterns (if any) in M1 and DLS (i.e., are there activity patterns within each area that are not captured by the canonical variables?). We thank the reviewer for bringing up this oversight, and have cited Semedo et al. accordingly when introducing the concept. Given that we are only examining the CCA subspace and not the relationship to other local activity patterns, we have corrected references to "communication subspace" to "cross-area subspace" or "CCA subspace" throughout the manuscript.
-Line 882: References 10-12 are cited for CCA. This might be a typo. We thank the reviewer for catching this error, the citations have been updated accordingly.
-Line 885: "mean activity in each group was subtracted." The authors should justify why this is needed and how it affects interpretation. Subtracting mean firing rate is a defined step in the canonical correlation analysis, as the process involves comparing data with zero mean and unit standard deviation; it is built into the MATLAB function canoncorr (Guo et al., Cell Reports 2021; Veuthey* & Derosier*, et al., Nature Communications 2020). Theoretically, by mean subtracting, one can identify patterns of communication to downstream regions that are independent of absolute firing rate. We have clarified this point in Methods, Cross-area neural subspace (lines 939-941).
-Supp Fig 3B: Explain what the reader is supposed to take away from these CCA R^2 values. Also, it's unclear how they support the title of this figure: "Majority of M1 and DLS spiking not related to reach direction." The R^2 values measure the predictive power of the model, that is, how well the model generalizes to held-out data. These R^2 values come from cross-validation: we randomly partition the full dataset into 10 folds and cycle through each fold, assigning one fold to be the test data and the other nine to be the training data. We fit a CCA model to the training data, then project the test data onto this model, and compute the R^2 between the M1 and DLS projections. We find that R^2 is consistent across sessions, leading us to conclude that communication along the M1 and DLS CCA subspace occurs at the same "strength" across the relearning process.
"1) Supp Fig 7a: More could be done to guide the reader through this figure, both regarding basic details and, more importantly, the primary conclusion of the analysis represented."
"Regarding basic details: What do individual points represent? Why are there a different number of points in each panel (one can infer the answer after reading L948-954 of Methods, but descriptions here would be helpful)." There are a different number of points in each panel because only sessions with enough units (at least 5 in each region) to run CCA analyses and models with significant R^2 values were included. These were built separately for reaches to A versus B; therefore, if a relearned session no longer had reaches to A, a CCA model for reaches to A during the relearned session type could not be built. We have updated the figure legend (Lines 1324-1325) to clarify that the individual points represent top CV R^2 values only from sessions that met the CCA inclusion criteria as detailed here and in the Methods.
"Regarding the primary conclusion: If I did not read the authors' conclusion, the most salient feature to me is the variability in R^2 values. Take, as one example, the "rel" column of models fit to B-reach trials: predictive performance varies by a factor of 5 between the least predictive and most predictive models. Could the authors clarify why one should conclude from these plots that the M1-DLS CCA subspace is "similarly generalizable across sessions"?" We thank the reviewer for bringing up the alternate interpretation of the plots. R 2 values can range from 0 to 1, and the scales of the graph may in part have created the appearance of strong variability. Because we have only focused on the automatic, variable, and relearned sessions for CCA analyses, we have modified Supplemental Figure 7a to be more representative of the data: in the left panel, we demonstrate R 2 values for CCA models of reach to A across the automatic, variable, and relearned session types, with the right panel showing the variable and relearned sessions for reaches to B. Additionally, we have added the mean and standard deviation to better enable comparison of R 2 across session types. There is no significant difference in R 2 values across different session types demonstrated, for models built to A or models built to B. Thus, the reliability of the CCA model in being generally applicable is not significantly different for a given session type.
Of note, the number and quality of units in each region, which can vary from day to day, can contribute to variability of the R^2 value. The alternative hypothesis would be that as we see breakdown in spatiotemporal spiking consistency in both M1 and DLS during the variable session, CCA models could have a lower R^2 during this day as well, if M1-DLS correlated activity was not maintained in some structured fashion. However, we do not see evidence of this in the data we have. That being said, we have modified the Supplemental Figure 7 description to more broadly capture that the panels represent further CCA analyses beyond Figure 5.
"2) Could the authors more carefully define "peak subspace modulation" (used, e.g., in L423-424, L963, Supp Fig 7bc)? How does this metric differ from the original RMI metric (Fig 5e), if at all?" We thank the reviewer for raising this point. Peak subspace modulation is defined as the maximum subspace activity during -0.2s to 0.2s around reach onset, whereas the relative modulation index is the difference in median subspace activity from the reach period (-0.1s to 0.4s around reach onset) compared to the pre-reach period (-1s to -0.5s before reach onset). We have clarified the definition of peak subspace modulation in the main text (Lines 424-425), Methods (Lines 974-976), as well as Supplemental Figure 7 legend (Lines 1327-1328). " 3) The RMI metric should be explained in more detail in Methods. In the authors' response, they write: "In those trials where the RMI is negative, the projected trial spiking on CV1 is less correlated during reach than it is prior to reach." To clarify, it is not that the activity is less correlated, but rather the activity pre-reach is higher than the activity during reach, correct? If this is true, it would be helpful to make this clear to the reader." We thank the reviewer for pointing out that the RMI metric was unclearly defined in the Methods and have clarified this (Lines 969-970). For the second comment, we thank the reviewer for pointing out the nuance | 2022-05-06T06:23:44.213Z | 2022-05-04T00:00:00.000 | {
"year": 2022,
"sha1": "5c1548cdc5629f6ea9289c549a4209be91f574bf",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f2c74ff7f212409fd26a464fd644f8d100a65be8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The impact of injuries study: a multicentre study assessing physical, psychological, social and occupational functioning post injury - a protocol
Background: Large numbers of people are killed or severely injured following injuries each year and these injuries place a large burden on health care resources. The majority of the severely injured are not fully recovered 12-18 months later. Psychological disorders are common post injury and are associated with poorer functional and occupational outcomes. Much of this evidence comes from countries other than the UK, with differing health care and compensation systems. Early interventions can be effective in treating psychological morbidity, hence the scale and nature of the problem and its impact on functioning in the UK must be known before services can be designed to identify and manage psychological morbidity post injury.
Methods/Design: A longitudinal multi-centre study of 680 injured patients admitted to hospital in four areas across the UK: Nottingham, Leicester/Loughborough, Bristol and Surrey. A stratified sample of injuries will ensure a range of common and less common injuries will be included. Participants will complete a baseline questionnaire about their injury and pre-injury quality of life, and follow-up questionnaires 1, 2, 4, and 12 months post injury. Measures will include health and social care utilisation, perceptions of recovery, physical, psychological, social and occupational functioning and health-related quality of life. A nested qualitative study will explore the experiences of a sample of participants, their carers and service providers to inform service design.
Discussion: This study will quantify physical, psychological, social and occupational functioning and health and social care utilisation following a range of different types of injury and will assess the impact of psychological disorders on function and health service use. The findings will be used to guide the development of interventions to maximise recovery post injury.
Background
Worldwide, 5.8 million people die [1] and more than 45 million are moderately or severely disabled following injury each year [2], making injuries responsible annually for 10% of all deaths [1] and 16% of all disabilities [2]. Injuries were the leading cause of preventable years of life lost between the ages of 0 and 74 years in 2005 [3]. The scale of the problem is likely to increase, as injury related deaths are projected to rise by 28% between 2004 and 2030, predominantly due to deaths from road traffic injury [2]. Unintentional injuries also place a large burden on health care resources. They result in more than 11,000 deaths in England and Wales [4], three-quarters of a million hospital admissions in England, resulting in more than 3.6 million bed days [5] and 5.8 million Emergency Department (ED) attendances in the UK [6]. Working age adults comprise 40% of unintentional injury deaths, 35% of hospital admissions and 50% of ED attendances [5][6][7].
Recent reviews [27][28][29][30] suggest the prevalence of psychological disorders post injury is high and that these may be associated with poorer functional and occupational outcomes [10,26,31]. A review of psychiatric morbidity after motor vehicle collisions found the most commonly reported disorders were depression (21% to 67% across studies), anxiety (4% to 87% across studies), driving phobia (2% to 47% across studies) and PTSD (0% to 100% across studies) [30]. A second systematic review reported rates of PTSD after traumatic injury ranging across studies from 2-30%, depression from 6-42%, with up to half of those with PTSD also having co-morbid depression. Anxiety disorders were reported to range from 4-24%, with up to 60% of those with PTSD also having co-morbid anxiety disorders. Specific travel phobias for those injured in motor vehicle collisions were reported to range from 4-29% [28]. A third review found rates of PTSD to range from 2% to 50% across studies [29].
Few large prospective studies have measured psychological morbidity following injury in the UK. Psychiatric disorders were found to be common in injured male ED attenders in the short (48% at 6 weeks) and medium (43% at 6 months) term [32]. A second study of road traffic injured ED attenders found 8% had developed PTSD by 3 months, and nearly a quarter had psychiatric complications at one-year [33]. A third study by the same authors in a similar study population found 36% reported psychological problems at 3 months and 32% at 1 year, with PTSD being reported in 23% at 3 months and 17% at 1 year [16]. The generalisability of the findings of these studies to wider population groups and to those suffering a range of injuries is unclear. Although a recent large UK prospective study of injury related disability has been undertaken [34], this does not measure psychological outcomes.
Evidence suggests that screening tools may be useful in health care settings for identifying those at risk [35], and early interventions can be effective in treating psychological morbidity following injury. Individual trauma-focused cognitive behavioural therapy (TFCBT), stress management and group TFCBT are effective in the treatment of PTSD [36] and pharmacotherapy, particularly selective serotonin reuptake inhibitors (SSRIs), is effective in reducing symptoms of PTSD and associated depression [37]. There is also limited evidence that psychosocial interventions may not help prevent physical, psychological and social disability post injury but that an intervention based on complex collaborative care may do so [38]. Current UK guidelines propose that health and social care workers should understand the psychological impacts of trauma and, as an immediate response, offer practical, social and emotional support. In addition, the guidance supports the use of TFCBT and the use of antidepressants [39]. A health service model has been proposed for identifying those who may benefit from such interventions [28], but in order to design such a service, the prevalence of such morbidity, and its impact on functioning and costs, must be known. The importance of, and need for, qualitative research in establishing the needs of injured patients, areas of unmet need, gaps in service provision and barriers and facilitators to accessing services for the purposes of informing service provision has also been highlighted [40][41][42][43][44]. Further exploration of the experiences of service users, carers and service providers in the UK is required in order that services can be designed which will maximise recovery post injury.
Aims
The aim of the study is to measure and characterise physical, psychological, social and occupational outcomes post unintentional injury and identify service use, gaps in service provision and information needs, and barriers and drivers to accessing services.
Objectives
The objectives of this study are to: • Measure physical, psychological, social and occupational outcomes post unintentional injury • Measure health and social care provision, use and cost • Quantify the impact of psychological problems on recovery from a range of unintentional injuries • Identify service use, gaps in service provision and information needs, and barriers and drivers to accessing services from the perspective of those with injuries, their carers and service providers.
Methods/Design
Participants
This is a longitudinal multi-centre study with a nested qualitative study recruiting participants admitted to hospital with a wide range of unintentional injuries from 4 UK study centres (Nottingham, Leicester/Loughborough, Bristol, and Surrey). A stratified sampling frame (Table 1) will be used to guide recruitment to ensure a range of common and less common injuries will be included and to allow comparison with other studies of injury morbidity [20,22,34]. Participants will be recruited in Emergency Departments (EDs), on hospital wards, in outpatient departments (OPDs), or by post following hospital discharge. Participants with upper and lower limb injuries and those with multiple injuries, and their carers along with representative service providers, will be eligible for, and recruited by post to, the qualitative study.
Centres for recruitment
Recruitment will be undertaken in NHS Trusts at the 4 study centres: Nottingham University Hospitals NHS Trust Queens Medical Centre campus, Leicester Royal Infirmary, Bristol Royal Infirmary, Frenchay Hospital (Bristol), and the Royal Surrey County Hospital.
Exclusion/Inclusion Criteria
Inclusion criteria: patients aged between 16 and 70 years, who are admitted to hospital in one of the participating centres following an unintentional injury, which occurred up to 3 weeks prior to the date of recruitment, and who are able to give consent will be eligible to participate in the longitudinal study. Participants with upper and lower limb injuries and those with multiple injuries and their carers and service providers will be eligible for the qualitative study.
Exclusion criteria: patients will be excluded if they are below the age of 16 or above the age of 70 at the time of their injury, do not have an address (due to inability to follow-up these patients), are not admitted to hospital, do not allow access to their medical notes, or are unable to give consent. Patients with significant head injuries (defined as loss of consciousness, amnesia or a Glasgow coma scale of < 15 at presentation) will be also excluded due to the difficulty of distinguishing between the sequelae of even mild head injury and psychological morbidity [45,46].
Measures
At baseline (day of recruitment to the study) participants will be asked to complete a questionnaire covering circumstances surrounding their injury, socio-demographic and occupational details, health status, quality of life and social and occupational functioning in the 4 weeks prior to their injury. The following standardised tools will be used: the Alcohol Use Disorders Identification Test (AUDIT) [47], the Drug Abuse Screening Test (DAST) [48], the Hospital Anxiety and Depression Scale (HADS) [49], an adaptation of the Accident Fear Questionnaire (AFQ) [50], the EQ5D [51], the HUI-3 [52], the Work Limitations Questionnaire [53] and the Social Functioning Questionnaire [54]. Participants will also undergo a shortened structured clinical diagnostic interview (SCID) [55] to determine pre-injury psychological morbidity. A small incentive (£2 high street gift voucher) will be given to participants on receipt of completed questionnaires.
At 1 month, 2 months, 4 months, and 12 months post injury [56], participants will be asked to complete follow-up questionnaires covering whether they are still affected by their injury, perceptions of recovery, factors that helped or hindered recovery, health and social care resource use, time off work and litigation and compensation. In addition, they will be asked to complete the standardised tools used at baseline as well as the Impact of Event Scale (IES) [57], the Trauma Screening Questionnaire (TSQ) [35], the Changes in Outlook Scale (CIO) [58], the Crisis Support Scale (CSS) [59], the List of Threatening Events (LTE) [60] and a visual analogue pain scale. Participants scoring above threshold values on the AUDIT, DAST, HADS, IES or TSQ will be contacted to undertake a shortened SCID administered face-to-face or by telephone, containing questions related only to the tool(s) for which they scored above the threshold value. Follow-up questionnaires will be administered by post, phone, or via email depending on participant preference. (Table 1 footnote: where there are < 10 expected participants in any cell of the sampling frame, attempts will be made to recruit as many people as possible, but these cells have not been included in the total number of participants.) Non-responders will be followed up by 2 mailed questionnaires and/or telephone reminders. A small incentive (£2 high street gift voucher) will be given to participants on receipt of completed questionnaires. Data will be extracted from the medical records to allow injury severity scoring using the Abbreviated Injury Scale (AIS) [61]. Socio-economic status will be based on area deprivation scores derived from the postcode of residence using the 2010 Index of Multiple Deprivation [62]. Aggregated data on age group, gender and injury type will be collected for a 6 month period from patients who do not consent to the study to explore the generalisability of findings.
Qualitative study
Semi-structured interviews will explore participants' experiences of their injury and their post-injury care, including factors that facilitate or hinder recovery, such as access to healthcare and social support, and issues surrounding the effects of litigation and compensation. Interviews will be conducted in participants' homes or by telephone and will be audio recorded and transcribed. A maximum of 48 interviews will be undertaken in total across the 4 study centres. Maximum variation sampling will be used to obtain a sample of injured participants with injuries of varying types and severities, varying degrees and types of psychological morbidities, and varying levels of deprivation, social support, age and gender. Interviewed participants will also identify carers and representative service providers to be interviewed. A maximum of 32 carers will be interviewed to explore perceptions of the recovery process and factors that facilitate or hinder recovery from a carer's perspective. A minimum of 32 service providers will be interviewed to explore factors that facilitate or hinder recovery from the perspective of people who deliver services. Additional interviews will be undertaken with managers or commissioners of services where these exist.
Ethical Considerations
The study has multi-centre research ethics committee approval from the Nottingham Research Ethics Committee 1 (number: 09/H0407/29).
Analysis
Baseline characteristics of participants will be described using frequencies and percentages for categorical variables and means (and standard deviations (SD)) or medians (and inter-quartile ranges (IQR)) depending on the shape of their distributions, for continuous variables.
At each follow-up time-point the prevalence of binary and categorical physical, psychiatric, social and occupational outcomes will be described using percentages (and 95% Confidence Intervals). Scores for standardised scales will be described using means (and SDs) or medians (and IQRs) depending on the shape of their distributions. Changes from baseline pre-injury health status, quality of life, social and occupational functioning will be calculated and described using means (and SDs) or medians (and IQRs) depending on the shape of their distributions. As the use of a multi-centre study design will affect the precision of estimates of prevalence and means, this will be accounted for in the estimation of 95% confidence intervals.
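One way to reflect the multi-centre design in the precision of a prevalence estimate, sketched below in Python under assumed variable names (df, "outcome", "centre"), is an intercept-only logistic model with cluster-robust standard errors by centre; this is an illustrative approach rather than the study's specified estimator.

```python
# Minimal sketch (not the study's analysis code): prevalence of a binary
# outcome with a 95% CI inflated for clustering by study centre.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def prevalence_ci(df: pd.DataFrame, outcome: str, centre: str):
    y = df[outcome].astype(float)          # 0/1 outcome indicator
    X = np.ones((len(df), 1))              # intercept-only design
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit(
        cov_type="cluster", cov_kwds={"groups": df[centre]})
    lo, hi = fit.conf_int()[0]             # CI on the log-odds scale
    expit = lambda z: 1.0 / (1.0 + np.exp(-z))
    return expit(fit.params[0]), (expit(lo), expit(hi))
```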
Random-effects generalised linear models will be used to quantify the association of psychological morbidity with EQ5D, HUI, work limitations, time off work due to injury and social functioning. This analysis will use repeated measures of both the outcomes and the psychological morbidity variables at 1, 2, 4 and 12 months, with participant as a level 2 unit (cluster) and measurement occasion as a level 1 unit, to allow for correlations of measurements within patients. The exposure variables of interest are psychiatric diagnoses, defined as meeting the Diagnostic and Statistical Manual (DSM) criteria for each disorder measured by the SCID. Analyses will be adjusted for study centre (Nottingham, Bristol, Surrey, Loughborough/Leicester). Causal diagrams will be drawn to identify confounders for inclusion and effect mediators for exclusion from models. Follow-up time will also be included in the models. Tests of interaction will be carried out between having a psychiatric diagnosis and confounding variables using likelihood ratio tests, to examine whether any association between having a psychiatric diagnosis and each outcome of interest differs according to the level of the confounding variable. Tests of interaction will also be carried out between psychiatric diagnosis and follow-up time to see whether the associations change with time after injury.
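As a concrete illustration of this modelling strategy, a random-intercept model with participant as the level-2 unit can be fitted as sketched below. This is a minimal sketch, not the study's analysis code: the long-format layout and the column names (eq5d, diagnosis, months, centre, pid) are assumptions, and the real models would also include the confounders identified from the causal diagrams.

```python
# Minimal sketch of the repeated-measures random-effects analysis.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("followup_long.csv")  # hypothetical: one row per
                                       # participant per follow-up occasion

# Random intercept per participant allows for within-person correlation;
# centre and follow-up time are included as covariates.
base = smf.mixedlm("eq5d ~ diagnosis + months + C(centre)",
                   data=df, groups=df["pid"]).fit(reml=False)

# Diagnosis-by-time interaction tested by a likelihood-ratio comparison of
# nested models, analogous to the tests of interaction described above.
full = smf.mixedlm("eq5d ~ diagnosis * months + C(centre)",
                   data=df, groups=df["pid"]).fit(reml=False)
lr_stat = 2 * (full.llf - base.llf)    # refer to chi-squared with df equal
                                       # to the number of extra terms
```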
Checks of assumptions
We will assess multicollinearity through calculation of correlations and VIF values. We will calculate residuals at both levels and assess these for normality; if they do not show an approximately normal distribution then transformations will be applied. We will compare results with and without excluding observations with large standardised residuals (< -3 or > 3 standard deviations from the mean of a normal or normalised random variable).
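A hedged sketch of these checks, assuming a covariate DataFrame X and an array of model residuals (both names illustrative), might look as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: pd.DataFrame) -> pd.Series:
    # VIF per covariate; values above roughly 5-10 are commonly read as
    # signalling problematic multicollinearity.
    Xc = sm.add_constant(X)
    return pd.Series([variance_inflation_factor(Xc.values, i)
                      for i in range(1, Xc.shape[1])], index=X.columns)

def flag_outliers(resid: np.ndarray) -> np.ndarray:
    # Standardised residuals outside +/-3 SD, for the planned comparison of
    # results with and without these observations.
    z = (resid - resid.mean()) / resid.std()
    return np.abs(z) > 3
```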
Additional analyses
Factors associated with psychiatric diagnoses will be explored using univariate and multivariate random-effects generalised linear models. This analysis will use repeated measures of the psychiatric diagnosis variables at 1, 2, 4 and 12 months, with participant as a level 2 unit (cluster) and measurement occasion as a level 1 unit, to allow for correlations of measurements within patients. The main outcome variable will be a binary variable for any psychiatric diagnosis. Further analyses for specific diagnoses will only be undertaken where the sample size is sufficient.
Economic Analysis
The economic analysis will be carried out in accordance with the statistical analysis outlined above: random-effects generalised linear models will be used to quantify the association of psychological morbidity with resource use and costs from the NHS, Personal Social Services (PSS) and societal perspectives. Subject to sufficient statistical power being established a posteriori, resource use and costs will be compared across several a priori sub-groups, including those with a previous history of psychiatric diagnosis (prior to or at the time of injury). Separate analyses will be undertaken from each of the perspectives. Costs will be derived by assigning unit costs to units of patient-reported resource use; unit costs will be collected from published sources: the BNF, the NHS Reference Cost Schedule, the PSSRU and the ONS [63-66]. This analysis will also estimate the costs attributable to psychological morbidity.
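The costing step itself (assigning unit costs to units of patient-reported resource use) reduces to a weighted sum per participant, as in the sketch below; the unit-cost figures are placeholders, not values from the cited BNF, NHS Reference Cost, PSSRU or ONS sources.

```python
import pandas as pd

unit_costs = {                     # hypothetical GBP unit costs
    "gp_visits": 36.0,
    "ed_attendances": 108.0,
    "inpatient_days": 275.0,
    "physio_sessions": 34.0,
}

def total_cost(use: pd.DataFrame) -> pd.Series:
    # 'use' holds one row per participant, one column per resource item;
    # multiply each count column by its unit cost and sum across items.
    return use[list(unit_costs)].mul(pd.Series(unit_costs), axis=1).sum(axis=1)
```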
A sub-sample of 100 patients will have their medical records audited to compare with self-reported resource use, using a pre-existing data extraction form. If it appears from the sub-sample that self-reported resource use is biased systematically (i.e. consistently under- or over-reporting resource use), we will model this bias in the sub-sample, then use the model to correct the resource use reported in the full sample. We will conduct sensitivity analyses by comparing the results of the economic analyses using different estimates of resource use.
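One simple form such a bias model could take, assuming the audit sub-sample holds both record-based and self-reported counts under illustrative column names, is a linear calibration fitted on the sub-sample and applied to the full sample:

```python
import pandas as pd
import statsmodels.formula.api as smf

def correct_self_report(audit_df: pd.DataFrame,
                        full_df: pd.DataFrame) -> pd.Series:
    # Fit on the ~100-patient audit sub-sample: an intercept far from 0 or
    # a slope far from 1 would indicate systematic mis-reporting.
    calib = smf.ols("records_count ~ selfreport_count", data=audit_df).fit()
    # Corrected counts for the full sample; compared with the uncorrected
    # figures in the planned sensitivity analyses.
    return calib.predict(full_df).clip(lower=0)
```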
Missing data
Missing data will be subjected to sensitivity analysis with respect to the outcome and exposure variables to determine whether it is reasonable to assume missingness at random. If appropriate, we will use multiple imputation to replace missing values at baseline or follow-up. We will compare results with a complete case analysis. If data are not missing at random, either sensitivity to inclusion/exclusion/imputation will be reported, or selection models will be explored.
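For example, imputation by chained equations with pooling under Rubin's rules might be sketched as follows; the formula, column names and the restriction to numeric analysis variables are assumptions for illustration:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

df = pd.read_csv("analysis_vars.csv")       # numeric analysis variables only

imp = mice.MICEData(df)                     # chained-equations imputation
mi = mice.MICE("eq5d ~ diagnosis + months", sm.OLS, imp)
pooled = mi.fit(n_burnin=10, n_imputations=20)   # Rubin's-rules pooling
print(pooled.summary())

# Complete-case comparison, as planned:
cc = sm.OLS.from_formula("eq5d ~ diagnosis + months", data=df.dropna()).fit()
```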
Sample size
680 participants will be recruited, with an estimated 456 (67%) followed up for 1 year. This provides 80% power (alpha = 0.05) to detect differences in the EQ5D between those with and without the condition of interest of between 0.08 (anxiety) and 0.13 (depression), assuming a standard deviation of 0.23 based on population norms [67], or differences in the EQ5D of between 0.10 (anxiety) and 0.17 (depression), assuming a standard deviation of 0.3, as the standard deviation in an injured population may be larger than that in the general population. This is illustrated in Table 2.
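The order of magnitude of this calculation can be checked as below. The split of the 456 participants between those with and without each condition is not stated here, so the 1:2 exposed:unexposed ratio is an assumption for illustration, not a protocol figure.

```python
# Illustrative check of the stated power calculation (SD = 0.23 scenario).
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for label, diff in [("anxiety", 0.08), ("depression", 0.13)]:
    d = diff / 0.23                      # standardised effect size
    n1 = solver.solve_power(effect_size=d, power=0.80, alpha=0.05, ratio=2.0)
    print(f"{label}: ~{round(n1)} with condition, ~{round(2 * n1)} without")
```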
Time scale
Participants will be recruited from June 2010 to June 2012.
Discussion
This will be the first UK study to provide detailed estimates of the prevalence of psychological morbidities following a wide range of injuries in working age adults, and to assess their effect on functioning and health and social care resource use. It will use a range of validated standardised outcome measures and unlike many previous studies, will not rely solely on the use of screening tools for measuring psychological morbidity but will use the SCID to make psychiatric diagnoses. Measurement of physical, social and occupational functioning will allow an assessment of the contribution of psychological morbidity to delayed or sub-optimal recovery. The economic analysis will allow quantification of the health and social care costs and the contribution of psychological morbidity to those costs. Such information is vital if services are to be further developed to maximise recovery post injury. The nested qualitative study is a unique addition to previous quantitative studies of psychological morbidity post injury and the experiences of those with injuries, their carers and service providers will provide valuable insights into service development.
Funding Source
This paper presents independent research commissioned by the National Institute for Health Research (NIHR).
Comparative transcriptome analysis of melon (Cucumis melo L.) reveals candidate genes and pathways involved in powdery mildew resistance
Powdery mildew is a major disease in melon, primarily caused by Podosphaera xanthii (Px). Some melon varieties are resistant to powdery mildew, while others are susceptible. However, the candidate genes associated with resistance and the mechanism of resistance/susceptibility to powdery mildew in melon remain unclear. In this study, the disease-resistant melon cultivar TG-1 and the disease-susceptible melon cultivar TG-5 were selected for comparative transcriptome analysis. The results suggested that the number of differentially expressed genes (DEGs) in TG-5 was always greater than that in TG-1 at each of the four time points after Px infection, indicating that their responses to Px infection may differ and that the active response of TG-5 to Px infection may be earlier than that of TG-1. Transcription factor (TF) analysis among the DEGs revealed that the bHLH, ERF, and MYB families in TG-1 may play a vital role in the interaction between melon and powdery mildew pathogens. GO enrichment analysis of the DEGs in TG-5 showed that the SBP, HSF, and ERF gene families may play important roles in the early stage of the melon response after Px infection. Finally, we speculated on the regulatory pathways underlying melon powdery mildew resistance and found that PTI and ABA signaling genes may be associated with the response to Px infection in melon.
Moreover, Gao et al. compared the lncRNAs between susceptible and resistant melon cultivars in response to PM infection 21. Zhu et al. also analyzed the comparative transcriptome of the melon-resistant MR-1 and susceptible Topmark cultivars to identify candidate genes 17-19,21,22. Extensive studies have reported that loss-of-function mutations in one or more appropriate mildew resistance locus (MLO) genes can protect plants from powdery mildew fungal infection 23,25.
In this study, a comparative transcriptome analysis of the resistant cultivar TG-1 and the susceptible cultivar TG-5 was performed to identify some candidate DEGs related to powdery mildew resistance. These results provided several new insights into the molecular defense mechanisms of melon cultivars exhibiting strong resistance to Px infection and valuable information for breeding powdery mildew resistant melon cultivars.
Materials and methods
Plant maintenance and pathogen infection. Cucumis melo cultivars TG-1 (powdery mildew resistant) and TG-5 (powdery mildew susceptible) were used for inoculation with powdery mildew in this study (Fig. 1). TG-1 and TG-5 were grown in a greenhouse under a controlled temperature of ~28 °C/~22 °C (day/night), a 14 h/10 h light/dark cycle, and an average of 60% humidity. At the 3-5 leaf stage, when plants were approximately 15-20 cm high, three plants of each cultivar were inoculated with powdery mildew by cutting the leaf blades at mid-length with sterile scissors previously dipped in the spore suspension. Three plants of each cultivar were mock-inoculated with sterile liquid medium to serve as negative controls. To clarify the changes in the gene expression levels of TG-1 and TG-5 during the first stages of their compatible interactions with Px, leaf samples distal from the wound site were taken before infection (0 days, control) and after 1, 2, 3 and 5 days, respectively. Three biological replicates were prepared at each time point. In total, 30 samples were immediately frozen in liquid nitrogen and stored at −80 °C for total RNA isolation and further analysis.

RNA extraction, library construction and RNA sequencing. Total RNA was extracted using TRIZOL reagent (Invitrogen, Gaithersburg, MD, USA) and purified with an RNeasy mini kit (QIAGEN, Germantown, MD, USA) as described in the manufacturer's instructions. The Qubit RNA Assay Kit with a Qubit 2.0 Fluorometer (Life Technologies, CA, USA) was used to measure the RNA concentration. High-quality RNA was then used for library construction using the Illumina TruSeq Stranded RNA Kit (Illumina, San Diego, CA, USA) following the manufacturer's instructions. The purified cDNA libraries were further enriched by PCR. Transcriptome sequencing of the prepared libraries was performed on an Illumina 2500 platform with paired-end 150 bp reads (Novogene Bioinformatics Institute, Beijing, China).
In total, 30 cDNA libraries were generated using the Illumina HiSeq 2500 platform in this study. All raw reads were assessed for quality using FastQC (v0.11.3) (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/) with the default parameters and filtered using Trimmomatic (v0.38) 26 (parameters: ILLUMINACLIP:TruSeq3-PE-2.fa:2:30:10 SLIDINGWINDOW:5:20 LEADING:5 TRAILING:5 MINLEN:50) to acquire clean data. The clean reads of all samples were then aligned to the melon (Cucumis melo L.) reference genome v3.6.1 (ftp://cucurbitgenomics.org/pub/cucurbit/genome/melon/v3.6.1/CM3.6.1_pseudomol.fa.gz) 1 using HISAT2 (v2.0.5) 27 with the default parameters. The HTSeq 28 software with the default parameters was used to count the number of RNA-seq reads that mapped to each gene of the C. melo reference. The count files were then merged into a count table containing read-count information for all samples. DESeq2 (v1.30.1) 29 was subsequently applied to the count table to calculate the gene expression levels. We measured the expression levels as log2-transformed expression values. To allow log2-transformation of genes with expression values of zero, we added 0.01 to the expression values before log2-transformation.

Identification of differentially expressed genes and GO enrichment analysis. Differential gene expression analysis between pairs of samples was performed on the normalized data obtained above using the DESeq2 (v1.30.1) 29 package, and the adjusted p-values were calculated using the Benjamini and Hochberg method to control the false discovery rate. The R function prcomp was used to perform Principal Component Analysis (PCA) on the expression matrix of all samples, and the R package ggord (v1.1.6) 30 was used to visualize the results. The thresholds for screening DEGs were set as p-value < 0.05 and log2(fold change) > 1 or < −1. We used two different grouping comparison methods to identify DEGs. In the first grouping, the disease-resistant material was compared with the susceptible material at 1, 2, 3, and 5 days after inoculation with the pathogen, with the susceptible material used as the control. In the second grouping, the disease-resistant and susceptible cultivars at days 1, 2, 3 and 5 were each compared with the respective control group (day 0, sterile liquid medium) to find DEGs. Trend analysis is a method of clustering gene expression patterns (the shape of the expression profile over multiple phases) for multiple "continuous" samples, using read counts, with the STEM 31 software and the default parameters. Trend analysis was used to divide the expression patterns at the different time points into differential clusters and thus find genes with the same expression pattern. It was applied to at least three or more consecutive samples (samples with a specific temporal, spatial or treatment-dose order, etc.).
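As a minimal illustration of the two post-processing steps described above, the sketch below applies the log2 transform with the 0.01 offset and the stated DEG screen. It is not the authors' code: the file names and column names ("log2FoldChange", "padj") are assumptions, as is the reading of the p-value threshold as applying to the Benjamini-Hochberg adjusted values.

```python
import numpy as np
import pandas as pd

# Normalised expression matrix (genes x samples); the offset allows zeros
# to be log-transformed.
expr = pd.read_csv("normalized_counts.csv", index_col=0)
log_expr = np.log2(expr + 0.01)

# Exported DESeq2 results table; screen at padj < 0.05 and |log2FC| > 1.
res = pd.read_csv("deseq2_results.csv", index_col=0)
degs = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() > 1)]
up = (degs["log2FoldChange"] > 0).sum()
print(f"{len(degs)} DEGs: {up} up-regulated, {len(degs) - up} down-regulated")
```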
Identification and annotation of differentially expressed genes. Compared to control plants inoc
To further identify the function of the notable transcripts differentially expressed between the two cultivars under Px infection, we performed GO enrichment analysis of the DEGs from the over-represented profiles. In TG-5, the most abundant GO terms were negative regulation of endopeptidase activity (GO:0010951), cysteine-type endopeptidase inhibitor activity (GO:0004869), response to stress (GO:0006950), protein serine/threonine phosphatase activity (GO:0004722), and peptidase inhibitor activity (GO:0030414) in profiles 1 or 2, in which gene expression decreased from day 0 to day 1 and low expression levels were maintained thereafter. Molecular function analysis showed that these target genes were mainly enriched in enzyme activity (Fig. 4, Table S8). In TG-1, genes involved in protein phosphatase inhibitor activity (GO:0004864), hydrolase activity acting on glycosyl bonds (GO:0016798), defense response (GO:0006952), intramolecular transferase activity (GO:0016866), negative regulation of peptidase activity (GO:0010466), peptidase inhibitor activity (GO:0030414), negative regulation of endopeptidase activity (GO:0010951), cysteine-type endopeptidase inhibitor activity (GO:0004869) and transmembrane transporter activity (GO:0022857) were enriched in clusters 1, 5, or 11. Interestingly, genes enriched in cluster 7 of TG-5 and cluster 2 of TG-1 were mainly related to photosynthesis, such as photosystem I (GO:0009522), photosystem II (GO:0009523), photosynthesis (GO:0015979), photosynthesis, light harvesting (GO:0009765), chlorophyll binding (GO:0016168), chloroplast avoidance movement (GO:0009903) and chloroplast accumulation movement (GO:0009904) (Fig. 4, Table S8). In addition, the GO enrichment results for DEGs between TG-5 and TG-1 at each time point remained largely consistent with the above results (Fig. S6, Table S9), indicating that these DEGs were actively expressed after Px infection.
Characterization of transcription factors among DEGs.
A total of 536 TFs among the DEGs were identified in TG-1 and TG-5: 97 TFs were identified in TG-1 and 439 TFs in TG-5 (Table S10). The TF families that were differentially expressed across all four time points in TG-1 were bHLH, ERF, MYB_related and TALE (Fig. 5). Enrichment analysis of the transcription factors differentially expressed on day 1 (ST1) in TG-5 found that these TFs were significantly enriched in the SBP, HSF, and ERF families (Fig. S7A). Two differentially expressed transcription factors were common to the resistant material (TG-1) and the susceptible material (TG-5) across the four time points (Fig. S7B). These two genes, MELO3C004556 and MELO3C006431, belong to the HSF and ERF transcription factor families, respectively.
Signal transduction pathways in response to Px infection. Phytohormones responsible for signal transduction can modulate systemic defense responses, such as PTI, ETI, JA, SA, and ABA signaling 18, and play important roles in disease resistance. Based on these results, PTI and ABA signaling genes were found to be potentially involved in the reactions of the TG-1 and TG-5 melon cultivars against the Px pathogen in this research. In the ABA signaling pathway, three genes encoding PYR/PYL proteins were down-regulated in TG-1 after Px infection. Genes encoding PYR/PYL and SnRK2 were down-regulated in TG-1 at all four time points, especially at 1 day post Px infection (Fig. 6A,C). In contrast, genes encoding SnRK2 in TG-5 were up-regulated at an early stage after Px infection, then down-regulated at subsequent stages (Fig. 6B,C). In the PTI signaling pathway, seven genes encoding CML/CDPK and one gene encoding MAPK were up-regulated in TG-1 at all four time points (Fig. 6A,C). In TG-5, eight CML/CDPK-encoding genes, three Rboh-encoding genes, and all four MAPK-encoding genes were down-regulated after Px infection, while only four CML/CDPK-encoding genes were up-regulated (Fig. 6B,C).
Discussion
In this study, comparative transcriptome and trend analysis revealed fundamental changes in gene expression patterns between resistant and susceptible melon cultivars at four different time points after inoculation with the fungal pathogen Px. Some GO terms, such as response to stress and defense response, exhibited a pattern in which the gene expression level decreased at day 1 in TG-5 (cluster 2, Fig. 4) but increased at day 1 in TG-1 (cluster 1, Fig. 4), suggesting these gene functions may play vital roles in the resistance response to Px infection in melon. Although the MLO gene is an important weapon in the fight against powdery mildew 23, none of the MLO genes was differentially expressed in both TG-5 and TG-1 at all four stages, and no SNPs were found in the MLO genes between the two cultivars. As Howlader et al. 25 reported, MLO genes show different expression patterns after infection by powdery mildew pathogens, with some being up-regulated and others down-regulated. It is therefore difficult to say that TG-1 resistance or TG-5 susceptibility is related to these differentially expressed MLO genes.
Systemic acquired resistance (SAR) is one of several induced defense responses in plants. It is regulated by plant hormones responsible for signal transduction and plays a vital role in disease resistance 34. The phytohormone abscisic acid (ABA) plays a vital role in plant responses to biotic and abiotic stresses 35,36. ABA receptors comprise three families of proteins, pyrabactin resistance (PYR), pyrabactin resistance-like (PYL) and regulatory components of ABA receptors (RCAR), which form a complex to mediate ABA signaling 37,38. Plant protein phosphatase 2C (PP2C) family members and SNF1-related protein kinase 2 (SnRK2) are key components of the ABA signal transduction pathway 39,40. PP2C is a negative regulatory element that normally binds to the ABA receptor protein, leaving downstream signaling in an inhibited state 41. Once the plant is affected by adverse external factors and the intracellular ABA level is elevated, ABA binds to PYR/PYL; the ABA-bound receptor in turn binds PP2C, releasing the inhibition exerted by PP2C. This allows SnRK2 phosphorylation, which activates the downstream transcription factor ABF to regulate cellular transcription. The phosphatase PP2C acts as a constitutive negative regulator of the SnRK2 kinase family when ABA is absent; SnRK2 autophosphorylation is required for its kinase activity on downstream targets 42-44. We found that DEGs encoding the ABA receptor protein, such as MELO3C018394, showed an increasing expression trend after Px inoculation in the susceptible cultivar, while the opposite was true for the resistant cultivar. Expression of this gene was significantly lower at each time point in the susceptible cultivar than in the disease-resistant cultivar. More importantly, the expression levels of the ABF-encoding genes (MELO3C018458 and MELO3C010850) were significantly decreased on days 1 and 2 after Px inoculation in the susceptible cultivar, indicating that the transcript levels in the susceptible material were significantly repressed by Px infection. These results indicate that the up-regulation of genes encoding the ABA receptor was associated with the susceptibility of melon to this PM pathogen.
ABA also induces an increase in intracellular calcium ion concentration 20. Ca2+ can be derived from intracellular calcium pools or extracellular sources, and Ca2+ usually acts as an intracellular secondary messenger that activates protective enzymes and improves photosynthesis to alleviate the damage caused by low or high temperatures, drought, high salinity and pests 45,46. Currently, Ca2+ receptors in plants are divided into three major families: calcium-dependent protein kinases (CDPK), CaM-like proteins (CML) and calcineurin B-like proteins (CBL) 47,48. When the intracellular Ca2+ concentration increases, it promotes Ca2+ binding by calcium-binding proteins, thereby activating them, after which calcium-binding proteins indirectly activate NADPH oxidase (NOX) to generate reactive oxygen species (ROS), which further activate the MAPK cascade 49,50. Bivi et al. showed that sprayed calcium nitrate treatments significantly controlled the occurrence of stem rot in oil palms 51. Madani et al. demonstrated that pre-harvest spraying of calcium chloride on papaya reduced the germination of anthracnose spores, thus controlling disease incidence 52. In this study, 9 DEGs in the disease-resistant cultivar encoded CML/CDPK proteins, of which 7 showed a decreasing expression trend at days 1 and 2, followed by a certain degree of increased expression for most of them. In the disease-susceptible cultivar, 16 DEGs encoded CML/CDPK proteins, 11 of which showed a decreasing expression trend and 5 an increasing trend at day 1, with most of them showing some degree of decrease after Px inoculation. On day 5 after Px inoculation, the expression levels of the DEGs MELO3C012195 (7.11 in TG-1; 3.96 in TG-5) and MELO3C015280 (11.79 in TG-1; 8.67 in TG-5), encoding CML/CDPK proteins, were higher in the disease-resistant cultivar than in the disease-susceptible cultivar. Exhibiting a similar trend, expression of one DEG (MELO3C007565) encoding a MAPK protein in the disease-resistant cultivar continued to decrease from the 1st to the 3rd day, after which it significantly increased on the 5th day after Px inoculation; 4 DEGs (MELO3C007543, MELO3C009916, MELO3C020535, MELO3C006511) encoding MAPKs identified in the disease-susceptible cultivar showed a decreasing expression trend, after which transcripts were sustained at a low expression level. These results suggested that these up-regulated genes encoding CML/CDPK and MAPK proteins may contribute to the resistance response of melon to PM, and that the regulatory network of TG-1 in response to Px infection was more complex and diverse than that of TG-5. Utilizing effective defense pathways comprising a complex resistance network is necessary for melon in response to Px infection. Moreover, further investigations will focus on the functional validation of the selected DEGs, which could provide a helpful tool for the development of melon varieties resistant to Px.
Conclusion
In this study, a total of 6366 and 1660 DEGs were identified in the susceptible melon cultivar TG-5 and the resistant melon cultivar TG-1, respectively, across the four treatment groups after Px infection. Further analysis showed that 8 DEGs identified at all four time points in TG-1 were primarily involved in the xyloglucan metabolic process, hydrolase activity, and the response to oxidative stress, which relate to melon resistance to powdery mildew. Furthermore, GO enrichment analysis suggested that the bHLH, ERF, and MYB TF families in TG-1, and the SBP, HSF, and ERF gene families in TG-5, may play vital roles in PM resistance.
Data availability
Raw data from this study were deposited in the NCBI SRA (Sequence Read Archive) database under accession number PRJNA791790.
The Political Potential of the Return Directive
This paper demonstrates how the legitimate interests of immigrants are gradually being recognized through the judicial application of EU immigration law. A philosophical and theoretical introduction demonstrates how this recognition constitutes a political momentum. After a brief review of the impact of the ECtHR, we discuss the case law of the CJEU on the Return Directive to show how, through the principles of proportionality and sincere cooperation, this legitimate interest is indirectly being taken into account by the Luxembourg court. This means that national courts will have to follow suit, as is demonstrated in the last section of this paper. Hence the title of the article: the political potential is due to this indirect recognition. In the conclusion, a suggestion is made to further develop this potential.
La Mésentente-Jacques Rancière's Political Philosophy
Five elements constitute Rancière's political philosophy: the idea of a 'part without part' in society, and the notions of disagreement, injustice, police, and politics.
According to Rancière, the Ancient Greeks identify the beginning of the (political) community with the recognition (or counting) of the different parts of society and their share therein, that is, their value as well as their contribution to it. Thus, the common power should be distributed according to each part's share. Yet, this is but an ideal representation of the good community, as it lies in stark contradiction with the empirical observation of things. In fact, this counting of the parts may conceal a […] sayable that sees that a particular activity is visible and another is not, that this speech is understood as discourse and another as noise ([2], p. 29). By today's standards, this notion of policing is equivalent to the management or governance of society. It has a very technical connotation, as can be deduced from the various tasks of assigning places and tasks and the arbitration between legitimate discourse and illegitimate noise.
Politics and police are opposites because whereas the latter governs an order of things, implying that everything and everyone has the place and share they deserve in the community, the former disrupts this order, demanding an equality test because of a fundamental injustice ([1], pp. 52-53). Hence, the author concludes that: The party of the poor embodies nothing other than politics itself as the setting-up of a part of those who have no part. Symmetrically, the party of the rich embodies nothing other than the antipolitical. From Athens in the fifth century B.C. up until our own governments, the party of the rich has only ever said one thing, which is most precisely the negation of politics: there is no part of those who have no part ([2], p. 14).
The Exclusion without Justification of Normal Migrants
In On the Right of Exclusion, Bas Schotel introduces the notion of "normal migrants", "i.e., migrants who do not have a legal right to admission" ([3], pp. 1, 11). Whereas asylum seekers and family members of permanent immigrants have a legal right to be admitted, normal migrants can only be admitted at the discretion of the receiving state. Furthermore, "migrant" is a socio-economic term that grasps the reality and quantity of movement and policies better than its legal equivalent "alien". In fact, normal migrants constitute the largest group of migrants. Lastly, as a socio-economic term, the word "migrant" fits easily within policy discourses, where the shift is easily made from "migrant" to "migration flows" to "management of flows". Hence, whereas "alien" somehow indicates a legal subject, with the term "migrant" a slip occurs toward "an object of policy" ([3], pp. 11-13).
Schotel describes European (and other Western) admission policies as practices of the exclusion of normal migrants without justification. Based on figures of Eurostat and of the Council of the European Union (EU), he estimates that on a yearly basis Member States of the EU exclude approximately two million normal migrants ([3], pp. 8-27). Thus, exclusion is the normal practice when it comes down to admission policies. This is proven also by both the legal doctrine and case law of the past 150 years, which "converge in stating that the exclusion and expulsion of aliens are matters of State sovereignty and discretionary in nature" ([4], p. 328). Schotel argues that for laws and administrative acts and decisions to be legitimate, they should be justified properly. In cases of non-admission, proper justification would mean going through the specific merits of each case to show that there are imperative reasons for excluding the individual in question. The authorities must have shown that the exclusion is necessary to obtain the objectives of immigration policy and that exclusion was the only and the least burdensome measure available. In other words, the principle of proportionality should apply to acts of exclusion. The abstract and general justifications referring to provisions of the law that accompany refusals do not count as proper, as they do not go beyond this mere formality. Nonetheless, as mentioned above, the standard case law of various national, supranational and international courts has supported the absence of proper justification by systematically referring to the inherent sovereign power of states to decide whom they admit onto their territory ([3], pp. 27-36). For there to be proper justification, the legitimate interest of the normal migrant must be taken into consideration and balanced against the interest of the state. In fact, for laws to have authority they must be able to "have the capacity to have legitimate authority", which means that "the law must reflect the reasons that are directly applicable to the relevant individuals from whom it requires obedience", or, to put it simply, that "the law can show that it took into account the reasons that concern the relevant individuals" ([3], p. 120). Yet, since admission policies are characterized by state discretion, they lack this capacity to have legitimate authority. Moreover, a crucial aspect of legitimate authority depends on whether, in general, its norm-subjects benefit more from following that law than from disobeying it, as it would mean that their legitimate interest is taken into account. Once more, precisely because of the discretionary nature of admission policies and of the exclusionary paradigm, normal migrants rather disobey, which means that the laws lack the capacity to have authority ([3], pp. 120, 133-35).
The Normal Migrants as the Part without Part
The reason for this somewhat abstract political philosophical introduction is that, in the light of Schotel's analysis, it captures well the current framework of Western immigration policies. The normal migrants represent the part without part, whereas the practice of exclusion, backed up by the laws, policies and case law, is a clear testimonial of the denial of this part without part. So each claim normal migrants, or defense associations on their behalf, make to their human rights in order to legalize and justify their presence on the territory can be seen as a demand for an equality test of anyone with anyone else.
In fact, if we take into consideration that under international law the right to liberty is guaranteed ([3], pp. 32-33; [5], p. 575), how do we explain that this human right does not apply to normal migrants except by asserting that they have no part in it? In this respect, Schotel's observation that "[t]he law does not organize the rights of the alien but only the rights over the alien" is wholly accurate and corresponds to the field of policing described by Rancière. Against that, recognising the legitimate interest in migration of normal migrants, even if only indirectly, corresponds to the political moment where it is recognized that they constitute a part without part.
The analogy goes further. Like the freedom of the demos, human rights are the empty and negative quality of all displaced persons. They are intended to protect all humans and are therefore common to all of them. Nonetheless, in terms of Hannah Arendt's paradox, as soon as human beings were no longer able to count on the protection of their own government, their human rights were no longer guaranteed either ([6], pp. 291-92). Thus, human rights are this empty property, which migrants can invoke to claim they are part of the human family, but which will not better their situation, although they have no choice but to do so in order to become visible and disrupt the order of domination. In fact, though Arendt's human rights paradox is certainly still valid, it needs nuancing, as the standing and enforcement of human rights have evolved since. As Louis Henkin pointed out, with the adoption of the UN Charter not only would human rights increasingly gain in importance, but slowly they would delimit state sovereignty so that "resistance to 'enforcement' [has become] the last bastion of 'sovereignty'" ([7], p. 33).
As such, just like the freedom of the demos was this negative quality limiting the rights of the wealthy and the noble over it, from the second half of the twentieth century onward, especially in Europe with the adoption and implementation of the European Convention on Human Rights and Fundamental Freedoms (ECHR), human rights started to limit state sovereignty also in immigration cases. This limitation did not come through the recognition of a human right to immigration. Rather, it came along thanks to the European Court of Human Rights' (ECtHR) theory of the convention as a "living instrument". In fact, the ECtHR developed a kind of corpus iuris of immigration, reasoning on the basis of "the case law it had developed in different but similar domains such as extradition, detention or the general obligation on Contracting Parties to secure the rights and freedoms of the ECHR" ([4], pp. 350-51). What is often at stake in immigration cases before the ECtHR are the prohibition of torture and degrading treatment (article 3 ECHR), the right to liberty (article 5 ECHR), the right to family life (article 8 ECHR) or any of these rights in combination with the right to effective remedies (article 13 ECHR). None of these establish a right to immigration, which is the substantial claim of each excluded normal migrant, but they set a normative framework within which immigration policies are to be carried out. In this way, the sovereign right of states is recognized, but it is limited at the same time by this empty and negative quality we call human rights. This limitation indirectly recognizes the legitimate interest of the normal migrants and thus represents a political potential. So in recognizing their rights under the ECHR, their legitimate interest is recognized too, though only indirectly. Consequently, the normal practice (police) of detention is disrupted by the affirmation of liberty. This is a political stake.
The ECtHR has played a fundamental role in the development of a corpus iuris of migrants' human rights and deserves some attention here. Thus, we now look briefly at the impact of the human rights case law of the ECtHR on the immigration policies of Member States and of the EU, as it sets the framework within which the Court of Justice of the EU (CJEU) interprets the Return Directive (RD) and, in extenso, should interpret other EU migration policies.
Delimiting Sovereignty-The Human Rights Spill-Over
As the ECHR was not intended as an instrument of immigration policies and laws, but nonetheless impacted on them, the limiting of the margin of appreciation of states in implementing these laws and policies can be considered the result of a spill-over effect of human rights norms ([4], p. 321). In fact, the safeguards that were built throughout the case law can be considered to have impacted at the procedural level on the implementation of migration and asylum policies of the Member States. Thus, first the ECtHR established that the Soering doctrine was applicable to expulsion cases ([8], §§ 69-70). The Court then added that this principle prevailed irrespective of the conduct of the individual in question and of issues of national security, as the prohibition under article 3 "is absolute in expulsion cases" ([9], §§ 79-80). Later it found that Contracting Parties had, within the scope of article 3, a duty to rigorously scrutinize the risks of ill-treatment following expulsions and were held liable under article 13 (effective remedies) if they failed to do so ([10], §§ 39-42, 44-50). Furthermore, on numerous occasions, the ECtHR found that the conditions of detention raised issues under article 3, namely when the detention center was dilapidated or unhygienic, or when the detainees had no access to the open air (possibility to walk) and no leisure or recreational activities ([11], §§ 216-22; [12], §§ 49-65; [13], §§ 43-54). To summarize the above, although the ECtHR looks into (possible) violations of a substantive nature of the ECHR, it has developed guidelines for the Contracting Parties to apply if they want to be sure they do not breach the Convention when detaining and expelling aliens (normal migrants and asylum seekers alike). Hence, these substantive rights closely resemble the procedural safeguards states must comply with. This was strengthened, reaching EU policy, with the judgment in M.S.S. v. Belgium & Greece [11]. Not only did the Court impose obligations on the Contracting Parties, but in doing so it indirectly impacted on substantive Union law, in casu the Dublin regulation. Having considered that membership of the EU does not suffice to guarantee asylum seekers the safeguards under article 3 ECHR, the ECtHR imposed the duty to make use of article 3(2) of the Dublin II regulation (sovereignty clause). Thus, it put a halt to automatic transfers of asylum seekers, based on mutual trust, to the Member State responsible for the examination of the asylum claim, mutual trust being one of the cornerstones of European integration [14-16]. The Court reasoned that since Belgium could have acted according to the sovereignty clause to examine whether M.S.S. would be subjected to degrading treatment, the presumption of equivalent protection did not apply ([11], §§ 339-40).
In its reception of M.S.S., the CJEU managed to avoid the potential conflict between Union law and the ECHR by turning the sovereignty clause into a duty of the Member States whenever there are substantial grounds for believing that there are systemic flaws in the asylum and reception conditions for asylum applicants in the Member State responsible, resulting in inhuman or degrading treatment, within the meaning of Article 4 of the Charter [of Fundamental Rights] (…) ([17], §86).
In such cases, the transfer would be incompatible with Article 4 of the Charter of Fundamental Rights (CFR). The CJEU considered the substantial grounds taking into account the evidence gathered before the ECtHR in the M.S.S. case. Hence, the CJEU concludes that the Dublin regulation prohibits Member States, "where they cannot be unaware" of systemic deficiencies in the reception conditions of the responsible Member State, from sending the applicant to that Member State ([17], §94). In such cases, the sending Member State must continue to examine the criteria of the regulation to see whether another Member State might be responsible. If there is no other responsible Member State, the sending Member State will have to use the sovereignty clause and itself examine the asylum claim ([17], §§ 107-08).
The interesting shift operated in this preliminary ruling is that Article 3(2) of the Dublin regulation, the so-called sovereignty clause, becomes a duty under Union law [15]. Thus, in the EU order too, human rights imperatives impose new duties on Member States, since these duties amount in effect to procedural safeguards, which limit their sovereignty. As it will have to interpret specific matters of EU law in the field of migration and asylum, the further development of the rights of normal migrants lies in the hands of the CJEU. In the following section, we will illustrate this through the case law of the RD.

After the spill-over of human rights norms into immigration and asylum matters of sovereign states, the general principles of law of the EU further limited the discretion of states in those matters. Immigration and asylum having become competencies of the Union, Member States must act according to both principles of proportionality when executing their policies in areas in which the EU has laid down rules: the first, in the measures taken to attain the objectives of the Union (article 5 of the Treaty on European Union (TEU)) in the light of the principle of sincere cooperation (articles 4(3) and 13(2) TEU); and the second, in the measures taken in relation to individuals, guaranteeing their fundamental rights (article 6 TEU and article 52 CFR). With the adoption of Directive 2008/115/EC, also known as the Return Directive (RD) or even the "Shame Directive" among human rights activists, both principles of proportionality were inscribed into the procedures of detention and expulsion of normal migrants, where they had previously not applied. Hence, recital 20 of the RD inscribes the first principle of proportionality, stating that the issues of removal, return, entry bans, and so on, of irregular migrants can be addressed better at the Union level. Thus, while legislating, the Union should not adopt measures that go beyond those necessary to achieve the set objectives. This first proportionality is measured against the second one, relating to the fundamental rights (recitals 13 and 16). Coercive measures and detention should only be used as a last resort and in full respect of the fundamental rights. Hence, the provisions of the RD include, and are directly limited by, the fundamental rights. For example, article 15 explicitly limits the possibilities of detention and article 16 sets out the conditions in which this must happen.
These provisions, which include fundamental rights safeguards, have thus become objectives of the EU. In other words, not just the effective removal procedures are objectives of the RD, but also the respect of the procedural safeguards. This is evident in the case law of the CJEU that developed soon after the entry into force of the Directive, and even more so once the deadline for transposition had expired. The CJEU never recognizes the fundamental rights of normal migrants directly, but only indirectly as objectives of the Union. When Member States go too far in their zeal to expel, the CJEU will consider this a breach of the proportionality principle in the light of the principle of sincere cooperation, because the excess goes against the objectives of the EU. It does not mention the heavy burden it puts on the individual.
In the following paragraphs, we review this case law and argue that this new instrument of EU policy has opened some political space for normal migrants. I argue that since the recognition of human or fundamental rights indirectly recognizes the legitimate interest of normal migrants, and since the principles of proportionality and sincere cooperation indirectly recognize these rights as they are part of Union law, by transitivity the case law of the CJEU recognizes the legitimate interest of normal migrants in immigration. In doing so, a political momentum is created on the basis of legal reasoning and challenges to be taken to court (as much as possible). This is the subject of the final section of the paper, where we focus on two domestic cases in Italy and France respectively. (We review all the Return Directive cases up to 30 September 2013, with the exception of the Mehmet Arslan case (C-534/11), as it concerns an asylum seeker and the referring court asked whether the said Directive is applicable, which it is not; and the case of M.G. and N.R. (C-383/11), as it did not explain further duties or set procedures to be followed by the Member States, contrary to the other cases, but merely sheds light on a point of law for the referring court. Thus, out of the eight cases, only six are of importance here; two of these are only mentioned at the end of the paragraph on the Achughbabian case, as they are very similar. For the analysis of the case law, we use the terminology of the CJEU and speak of third-country nationals (TCN) instead of normal migrants; however, they must be considered equivalent, as none of the cases involves migrants without a legal right to admission [3].)
Setting the Procedural Safeguards: The Kadzoev Case [18]
Kadzoev, a Chechen national, was arrested at the Bulgarian-Turkish border on 21 October 2006 by the Bulgarian authorities. He had no identity documents. The following day a measure of deportation was imposed on him, but it could not be carried out immediately, as he still needed to be identified, travel documents found, tickets to Chechnya bought, and so on. For the preparation of his removal, he was placed in a detention center on 3 November 2006. Only on 14 December did Kadzoev declare his real name. His identity was considered proven by the Bulgarian courts, and in the period between January 2007 and April 2008 the Bulgarian and Russian authorities consulted each other concerning Kadzoev. Whereas the former validated his identity, the latter would not recognize the documents presented to them by their Bulgarian counterparts. Between May 2007 and March 2008 Kadzoev applied three times for asylum. His first and last applications were rejected and he withdrew the second one. In the meanwhile, he was still being detained in view of expulsion. His lawyer petitioned twice for a less severe measure than detention to be applied, namely the obligation for Kadzoev to periodically sign a register kept by the police. At the end of October 2008, this last application was also rejected. On 12 March 2009, the Supreme Administrative Court (SAC) judged that since it was not possible to ascertain the nationality of Kadzoev, he was to be considered stateless. Furthermore, several NGOs and the UNHCR found it credible that Kadzoev was a victim of torture. Several attempts were made, in collaboration with Kadzoev himself and the NGOs, to send him to a safe third country, but no agreement was reached, nor any travel documents obtained.
In the meantime, Kadzoev was still being detained in view of his removal. In fact, before the transposition of the RD, Bulgarian law did not provide a maximum period for detention. In 2009, the Directorate for Migration at the Ministry of Interior asked the SAC to rule on the continued detention of Kadzoev. By then the said directive had been transposed into Bulgarian law. The SAC decided to stay the proceedings and ask the CJEU for a preliminary ruling.
The first two questions of the referring court are quite similar. In question 1(a) the referring court asks whether detention completed before the rules of the Directive became applicable is to be included when calculating the maximum duration of detention. The CJEU applies its typical teleological interpretation to say that it is, because otherwise people in similar situations could be detained longer than the maximum period mentioned in the Directive, and this would not be consistent with its objectives ([18], §§ 36-39).
In questions 1(b) and 2 the referring court asks, respectively, whether the period during which an asylum claim is being examined and the period during which the execution of the deportation decree is suspended due to judicial review should be included in the calculation of the maximum period. The CJEU recalls that the detention of asylum seekers is governed by Directive 2003/9/EC and that as a rule they should not be detained. So if Kadzoev were to be detained as an asylum seeker, this should have happened on the basis of a new decision and in accordance with that Directive. If this were not the case, meaning that Kadzoev was being kept in detention on the basis of the same decision, then the period of examination of the asylum application must be included in the calculation ([18], §§ 41-48).
In its answer to question 2, the CJEU applies its teleological interpretation again. It states that in the RD a suspension of removal for judicial review is not mentioned as a ground for extending the period of detention. Hence, this period must be taken into account. If this were not so, the duration of detention could vary from one Member State to another, which would run counter to the objective of said Directive ([18], §§ 51-54).
Question 3, which is divided into three parts, seeks clarification of the concept of "reasonable prospect of removal". In questions 3(a) and 3(b) the referring court asks whether the fact that no agreement has been reached with the state of which the detainee is a national, nor with another third country, should be considered as meaning there is no longer a reasonable prospect of removal, even though the authorities keep looking for a country that will receive the migrant. The CJEU's answer is vague. Though it states that detention ceases to be justified and that the individual must be released when there is no reasonable prospect of removal due to legal or other considerations, it does not seem to answer the referring court's question. The CJEU adds that detention may be maintained if arrangements are in progress and it is necessary for successful removal. Furthermore, this must happen within the time limits set in the Directive ([18], §§ 63-66). Thus, the somewhat tautological answer of the CJEU is that only a real prospect that removal can be carried out successfully, having regard to the periods laid down in Article 15(5) and (6), corresponds to a reasonable prospect of removal, and that that reasonable prospect does not exist where it appears unlikely that the person concerned will be admitted to a third country, having regard to those periods ([18], §67).
The CJEU seems to imply that it does not suffice that a Member State is looking for a country to which to remove the individual. For there to be a reasonable or real prospect, arrangements must be in progress between that Member State and that other country. What is more, where the maximum period of detention has already expired, the individual has to be released immediately. In such a case, the concept of reasonable prospect no longer applies, which is also the answer to question 3(c) ([18], §§ 60-62).
With its last question the referring court wants to know whether detention can be maintained even though the maximum period has expired, on the grounds that the person concerned is not in possession of valid documents, has no means of subsistence and his conduct is aggressive. The CJEU's answer is clear and effective: in no case does the Directive allow the maximum period to be exceeded ([18], §69).
It is interesting to note how in both questions 2 and 4 the referring court added, as elements to be considered, that Kadzoev had no valid identity documents, no means of subsistence and that his conduct was aggressive. These are elements usually used to invoke public order or national security. However, the CJEU did not pay the slightest attention to these elements, thereby implying that they are not at all relevant for detention and removal under the Directive. In fact, in its answer to question 4, the Court added, without mentioning the conduct or status of Kadzoev, that the Directive may not be used for detaining someone on grounds of public order or safety ([18], §70).
This judgment is interesting for us because it sets procedural safeguards Member States must comply with to achieve the objectives of the RD. In fact, whenever the safeguards are not complied with, "the detention ceases to be justified and the person concerned must be released immediately". If this were not so, Union law would be breached. The principle of proportionality implicit in this judgment seems to be the one relating to the achievement of Union goals, rather than the one relating to fundamental rights ([18], §§ 37, 54). Hence, only indirectly, the CJEU is protecting the fundamental right to liberty, which, in expulsion cases, seems to prevail over public order and safety.
Limiting Criminal Detention: The El Dridi Case [19]
El Dridi, a TCN who entered Italy illegally and who did not hold a residence permit, was issued a deportation decree in May 2004. A new deportation decree was issued in May 2010, at the moment of his release after having served a sentence for drug crimes in the meantime. At the end of September 2010, a check revealed that El Dridi had not complied with that order, and so he was sentenced to a year's imprisonment. He subsequently appealed that decision. The Appeal Court of Trento stayed the proceedings, as it was not sure whether a criminal penalty may be imposed during an administrative return procedure, it being potentially contrary to the attainment of the scope of the Directive and to the principle of sincere cooperation. Furthermore, it doubted whether such a penalty was proportionate. In these circumstances, the Appeal Court wanted to know whether the Directive precluded the possibility of a criminal penalty, i.e., imprisonment, even before the administrative procedure had been completed; and whether it precluded the possibility of imprisonment of up to four years for a simple failure to cooperate with a deportation procedure ([19], §§ 18-25; [20], p. 478).
The CJEU starts by addressing the issue of the criminalization of illegal stay. It recalls that under article 2(2)(b) of the Directive, Member States may decide not to apply it to TCNs subjected to removal as a criminal sanction, but adds an important nuance, namely that criminal legislation and rules of criminal procedure should not jeopardize the objectives pursued by Union law, as that would deprive the Directive of its effectiveness. With all this in mind, the CJEU concludes that Member States may not imprison an illegally staying TCN on the sole ground that he is staying illegally and that he has failed to leave the territory after an order had been issued. The Member State must then pursue its efforts to enforce the return decision. A criminal penalty, in casu imprisonment, would frustrate the removal procedure and as such jeopardize the attainment of the objectives of the Directive. The national courts are thus called upon not to apply such provisions and to "take into account the principle of the retroactive application of more lenient penalties". However, where coercive measures have failed, Member States may adopt criminal law provisions to deter and dissuade TCNs from remaining on the territory illegally ([19], §§49-61; [20], pp. 481-82).
The impact of this judgment on the national laws of Member States that criminalized illegal stay is clear: such criminalization is unlawful if it frustrates the Directive. The judgment is also very important as it establishes the successive stages and various possibilities of the removal procedures, which should go from the least coercive (granting the individual a period for voluntary departure) to the most coercive (pre-removal detention). Where Member States have not followed this procedure, TCNs who are being detained must be released, as that detention is unlawful. Hence, in the name of the effet utile of the Directive, the fundamental rights of normal migrants have been safeguarded, albeit indirectly. This is strengthened by the explicit reference the CJEU makes to article 4(3) TEU, i.e., to the principle of sincere cooperation ([19], §§56-59). In fact, it did not construct its reasoning on the grounds of fundamental rights, but its reasoning produces this effect ([20], pp. 484-86), since the respect of these rights is an objective of the RD.
However, the judgment left open a possibility to criminalize illegal stay and migration, once coercive measures have failed to ensure removal, as a way of dissuading TCNs from staying illegally. Moreover, since the CJEU did not specify when the return procedure is considered to have started, Member States have tried to use the possibility of criminalizing irregular immigration and stay to circumvent the Directive, as can be seen in the following case.
Forbidding the Circumvention of the Directive: The Achughbabian Case [21]
During an identity check on the public highway on 24 June 2011, Achughbabian, an Armenian national, was suspected of staying illegally in France. On the basis thereof, he was placed in police custody. An examination of his situation revealed that he had applied for a residence permit in April 2008, but that it had been rejected in November of that same year, a rejection confirmed in January 2009, at which point he had been ordered to leave the territory within one month. Hence, on 25 June 2011 a deportation order and an administrative detention order were adopted by the French authorities. On 27 June, the competent court reviewed this detention and prolonged it beyond the 48 hours provided for by law. Achughbabian appealed this decision to the Court of Appeal in Paris, which stayed the proceedings, asking the CJEU whether the Directive precluded "national legislation which provides for the imposition of a sentence of imprisonment on a third-country national on the sole ground of his illegal entry or residence in national territory?" The background of this question is that in France, police custody may only be applied for offences punishable by imprisonment, which is the case for illegal entry and stay in France. However, since the CJEU ruled that such imprisonment jeopardized the attainment of the objectives of the RD, the question remained whether this was still applicable before the start of a return procedure [22].
The CJEU opens its analysis by reiterating the possibility for Member States to adopt criminal law provisions in matters of illegal immigration, though they may not conflict with Union law. Furthermore, the Directive allows Member States to resort to detention in view of determining the status of the individual. In fact, the scope of the Directive would be undermined if Member States could not resort to such measures. However, this deprivation of liberty may only last for a "brief but reasonable time to identify the person under constraint and to research the information enabling it to be determined whether that person is an illegally-staying third-country national." Though the identification process may prove to be difficult, especially in cases where the individual does not cooperate or invokes the status of asylum seeker, the authorities must act with diligence "and take a position without delay on the legality or otherwise of the stay of the person concerned." In fact, once "it has been established that the stay is illegal, the said authorities must (…) adopt a return decision" ([21], §§28-31). The CJEU then turns toward the analysis of the French legal provision for imprisonment of illegally staying TCNs, as it may conflict with Union obligations. It reiterates its analysis of the El Dridi case, stating that such imprisonment hampers the return procedure, depriving it of its effectiveness. In the present case, the CJEU notices that the first deportation order (of 2009) was no longer operative and therefore the French authorities adopted a new one. At any rate, the provisions of the Directive (article 8) compelling Member States to assure that removal takes place "in an effective and proportionate manner", whether using coercive measures or not, are applicable. In that respect, a detention measure of the person concerned is allowed for the purposes of preparing and permitting the removal, and it may not last more than six months, to which an additional year can be added "only where non-implementation of the return decision during the said 6 months is due to a lack of cooperation from the person concerned or delays in obtaining the necessary documentation from third countries." Furthermore, the CJEU argues that an imprisonment during the return procedure does not contribute to the removal and thus cannot be considered to be a measure or a coercive measure within the meaning of the Directive ([21], §§33-37).
For the CJEU it is therefore obvious that such an imprisonment measure is precluded by the Directive, as it would deprive it of its effectiveness. Such an imprisonment is only allowed where the person concerned is subject to criminal law expulsion. The latter, which does not fall under the scope of the Directive, is only applicable where the person concerned committed one or more offences other than that of illegal entry or stay, which was not the case of Achughbabian. Hence, even though according to the French authorities imprisonment in practice usually does not follow in cases where no other offence was committed, the CJEU finds that this mere possibility is still theoretically available and that this should not be possible, as it may "compromise the application of the common standards and procedures" introduced by the Directive ([21], §§38-44).
In the final part of its judgment the CJEU rebuts the claims according to which, although a prison sentence may not be imposed during a removal procedure, it may be imposed before the removal is carried out. This would be contrary to the obligation to start a return procedure as soon as possible once it has become clear that the person concerned is staying illegally. However, referring to El Dridi, the CJEU recalls that a prison sentence may be provided for in cases where "coercive measures have not made it possible for the removal of an illegally staying third-country national to be effected", provided that there is no justified ground for non-return ([21], §§44-48).
As Raffaelli [20] points out, in the El Dridi case the CJEU had left open the possibility of criminalizing illegal stay after coercive measures of removal had failed. In such cases, the matter would no longer fall within the scope of the Directive. This gave rise to practices of Member States trying to circumvent the application of the Directive. Hence, in France, as was the case for Achughbabian, but also in Italy and Germany for example, illegal entry as such was punished with imprisonment and a fine ([20], pp. 486-87). As such, the TCN in question fell within the scope of criminal law and it was argued that he would not fall under the scope of the Directive.
The importance of the Achughbabian case is that, though Member States may, under national legislation, detain an irregularly staying TCN in order to identify him and his situation, this detention must be brief and may not lead to imprisonment. In fact, as soon as it appears that the person concerned is staying illegally in the country, the removal procedure begins. This seems to be irrespective of whether the nationality of the person is ascertained. From the judgment it seems to suffice that the person appears to be staying illegally, which can be deduced from lack of cooperation, silence, lack of identity documents, and so on. Though the case may not be a victory in the eyes of human rights activists, it remains important as it further limits the margin of appreciation of Member States in how they criminalize migration. Here too, what matters to the CJEU is the correct application of the provisions of the RD so as not to jeopardize the achievement of its objectives, explicitly referring to the principle of sincere cooperation ([21], §§33, 43). The CJEU does not forbid criminalization, but it certainly limited the scope and possibilities thereof.
After the Achughbabian case, two Italian courts referred to the CJEU asking whether national legislation criminalizing illegal stay as such, which, according to the wording of one of the referring courts, was intended "to circumvent or, in any event, limit the scope of the directive", was precluded by the principle of sincere cooperation ([23], §26). In both cases, the courts asked whether in the light of that principle the Directive allowed for the criminal fine imposed upon illegal stayers to be replaced with an expulsion order. The CJEU stated that this was allowed, but emphasized that this fell under the scope of the Directive even though it is regarded as a criminal sanction under domestic law [23,24].
Limiting Discretion under Article 2(2)(b) RD: Filev and Osmani [25]
In the case of Filev and Osmani, a referring German court asked whether the Directive precluded criminal sanctions for the breach of re-entry bans that had been handed out before the coming into force of the Directive and which provided for a ban of more than five years. Filev's asylum application was rejected in 1992, after which he had to leave Germany and received a re-entry ban that was not limited in time. When he came back to Germany in April 2012, he was checked at the border and subsequently subjected to criminal proceedings for non-compliance with that re-entry ban. He was held in police custody and received a fine. Osmani received an expulsion order in 1999 due to a conviction for a drug crime. In 2003 he was convicted again, served part of his sentence and was subsequently removed from Germany with a re-entry ban not limited in time. His early release was made conditional upon him serving the remaining 474 days in case of re-entry. When he came back in April 2012, he was checked at the border and criminal proceedings were initiated.
As the Directive only provides for entry bans of a maximum of five years, and as the German re-entry bans had been handed out more than five years before the entry into force of the Directive, the referring court asked for a preliminary ruling. The court wanted to know whether the Directive precludes Member States "from making breaches of administrative law expulsion or removal orders subject to criminal law sanctions, where the expulsion or removal order was made more than 5 years prior to re-entry" and more than five years before the entry into force of the Directive in the domestic legal order (questions 1 and 2). It also wanted to know whether national legislation, which does not provide for a time limit on re-entry bans unless the person concerned explicitly asks for one, is compatible with the Directive (question 3). Lastly, regarding only Osmani, the court wanted to know whether new criminal proceedings may be started on the same grounds as an expulsion order that predates the entry into force of the Directive, as well as its transposition into the domestic order, by more than five years (question 4).
With regard to the third question, the CJEU says that the wording of the Directive is clear and imposes a duty upon Member States to determine a period during which the entry ban is valid and that may not exceed five years. Furthermore, had the EU "legislature intended to provide Member States with a discretionary power in relation to determining a limit to the length of an entry ban, it would have done so expressly", as it did in other articles. The objective of the Directive is that entry bans do not exceed five years except for reasons of public order or national security ([25], §§25-34).
The CJEU then moves on to analyze the first and second questions together. It examines whether the entry bans comply with the Directive. In that respect, as the Directive does not provide for any transitional arrangements in relation to entry ban decisions, its settled case law applies, meaning that the new rules are immediately applicable except in the event of a derogation. Thus, to see if the old entry bans are still applicable, one must examine for how long they had already been applied before the entry into force of the Directive, lest the entry ban exceed five years. In the present cases, the continued effect of the old entry bans would be contrary to the provisions of the Directive, and they are therefore precluded by it, except in cases where they were issued against a person who posed a threat to national security or public order. Hence, not only may such entry bans no longer be applied, neither may criminal proceedings follow from a breach of these entry bans, except in the cases of threat to national security and public order ([25], §§35-45).
With its fourth question, the referring court basically wanted to know whether new criminal proceedings may be started on the same grounds as an expulsion order that predated the entry into force of the Directive, as well as its transposition into the domestic order, by more than five years. Moreover, it asked whether such a case may fall under article 2(2)(b) of the Directive, which would remove it from the scope of the Directive.
In the view of the CJEU, a Member State may apply the exception under article 2(2)(b) so as to exclude criminal cases, such as Osmani's, from the scope of the Directive. However, if a Member State has not made use of this discretion by the expiry of the time period for implementation, older decisions will automatically fall under the scope of said Directive and the protection it confers upon the subjects of such decisions. Hence, after the expiry date for implementation Osmani was covered by the Directive and the five years rule applied. As Germany only made use of its discretion under article 2(2)(b) after this expiry date, applying it to Osmani would suddenly worsen his situation. Consequently, no criminal proceedings may be started on the basis of that old removal or expulsion order ([25], §§50-56). Once more, the CJEU affirmed the direct effect of the Directive and the binding force of its provisions. As these provisions are precise and confer rights upon their norm-subjects, these rights are considered objectives that must be achieved and respected ([25], §§32-37). Though its principal aim is not to protect the fundamental rights of TCNs, the scope of harmonizing practices and procedures has led the Court to preclude old entry bans that were much more severe, and to preclude further criminalization for breach of such bans, as they are no longer lawful. The consequences of this judgment can be great, as in the past many normal migrants have been expelled and received entry bans such as the ones in the case at hand. It will mean that they can try to enter the EU again without having to face criminal proceedings. Once more, the fundamental rights of TCNs have been recognized indirectly.
The CJEU does, however, allow exceptions in cases of serious threats. In the case at hand it did not examine this question, nor did it refer to any case law on the matter. Yet, it is reasonable to suppose that it would be loyal to its settled case law and that such a threat would be gauged case by case as a present and sufficiently serious threat, and not simply based on past convictions ([26], §§28-32; [27], §§26-28; [28], §§24-28; [29], §§67-82; [30], §§44-55).
Intermediate Conclusions
The purpose of this section was to show how, after the ECtHR indirectly recognized the legitimate interest of normal migrants by taking into consideration their human rights, the CJEU's interpretation of the RD on the basis of the principles of proportionality and sincere cooperation further limited the sovereignty of states in migration matters. This has been made possible at two distinct moments. First, when the principle of proportionality relating to the fundamental rights of individuals was inscribed in the RD. As such, these rights became objectives of the EU. Secondly, when the CJEU found Member States in breach of the principles of proportionality and sincere cooperation for taking disproportionate measures that impeded the achievement of the objectives of the RD, which include fundamental rights safeguards. As mentioned above, the CJEU indirectly recognizes these fundamental rights, which in turn indirectly recognize the legitimate interest of TCNs. So, paradoxically, the RD recognizes the fundamental rights of normal migrants and thus indirectly their legitimate interest, since it limits the possibilities of the Member States.
The most important limitation is that regarding the detention of migrants. In many, if not all, Member States pre-removal detention was applied (almost) automatically, even though national legislation provided less severe alternatives. Furthermore, the CJEU explicitly stated that no derogations on the basis of national security, public order, and so on, may be grounded in the RD, as was often the practice in Member States to prolong detention or refuse entry. As to the prolongation of detention, the CJEU clearly stated that after six months, the additional twelve months provided for by law may only be invoked under specific conditions, namely that removal is still possible and that it has failed due to lack of cooperation of the person concerned. Last but not least, though it has asserted the Member States' right to criminalize irregular immigration and stay as a deterrent, the CJEU clearly affirmed the supremacy of EU law over national criminal law. In other words, these criminal provisions may not hamper the achievement of the objectives of the RD. In particular, this is the case for imprisonment during or before a return procedure. As to imprisonment after such a procedure has failed, this will only be possible if the person concerned is still on the territory without a justified ground for non-return. Though the CJEU did not list such possibilities, it is reasonable to consider that whenever removal failed for reasons independent of the TCN (because s/he is stateless, because the Member State was not able to find a country willing to accept him/her, or because it is not able to expel without violating fundamental rights, etc.), criminal sanctions will not be applicable, as this would be disproportionate. Therefore, the possibilities of legalistic criminalization, which is a competence of the Member States, have been vastly reduced.
The political potential of the RD has been realized as it disrupted the normality of detention and other measures (disproportionate entry bans, criminalization). So though normal migrants are mostly objects of policy (and police), thanks to the RD and its interpretation by the CJEU they have gained extra ground for legal standing, thus slowly becoming ever more subjects of law.
The Impact in Italian and French Case Law: Two Cases of Political Momentum
In this section we discuss an Italian and a French case where the political potential of the Return Directive, i.e., the (indirect) recognition of the legitimate interest of normal migrants, is not only particularly evident, but might actually change national policies and practices regarding the detention of migrants.
Self-Defense Against Detention: The Italian Judgment [31]
Aarrassi, Ababsa and Dhifalli were three undocumented migrants held in the detention center of Crotone in the South of Italy. On 9 October 2012, they started a protest against their detention and against a routine search by the police during which the people and rooms were checked for items that might be used for escape. The protest turned violent when the three occupied the rooftop of the center, from where they started throwing window frames, bricks, furniture, taps, and so on, in the direction of the police and the staff of the center so as to impede its normal functioning. They organized watches in order to get some rest. Attempts to mediate failed, as it was clear that any negotiation would have ended in arrest. The protest continued until 15 October. After a six-day fast, exhausted, they gave up and were subsequently arrested.
For these acts the three were prosecuted for demolition of state property and resistance to police officers. The prosecutor asked for a prison sentence of one year and eight months for these offences. The lawyers of the three men, on the other hand, asked that all be acquitted, as they had acted out of necessity, i.e., self-defense, and that they be freed.
The single-judge court thus proceeded to examine the illegality of the detainees' actions, the legality of the detention orders, the legality of the conditions of detention and the material conditions for invoking self-defense.
As to the illegality of the protests, the three men stated that they had decided to (join the) protest because they all lived and worked in Italy with their families and because the detention conditions were appalling (terrible conditions of hygiene, such as mattresses on the floor with no or filthy blankets, having to eat on the floor for lack of chairs and tables, filthy toilets and showers, etc.). All three declared they would rather be in prison, where the detention conditions are far superior, than in the detention center. None of them denied having committed the facts for which they stood accused. Against this background, the judge decided to examine the legality of the detention measures and conditions in order to assess, in the light of national and supranational sources of law, whether the accused were acting in defense of their fundamental rights ([31], §4).
The judge recalls that Union law is an integral part of the domestic legal order and that, where national law is in conflict with Union law, as is the case with non- or badly transposed directives, the latter has supremacy over the former. More precisely, those provisions that are unconditional and sufficiently clear are self-executing, and individuals can avail themselves of these rights before the state. The judge then turns to the El Dridi judgment and argues first of all that the CJEU explicitly established the several steps in the return procedure, ranging from the least coercive to the most restrictive measure ([31], §5.1).
The judge then looks at recital 16 and articles 15 and 16 of the Return Directive. According to recital 16, "the purpose of removal should be limited and subject to the principle of proportionality with regard to the means used and objectives pursued", and detention "is justified only to prepare the return or carry out the removal" if less coercive measures have failed. Article 15 of the Directive explicitly mentions the strict conditions that apply to detention. The judge then emphasizes that a detention measure must be "ordered in writing with reasons being given in fact and law" and that the TCN "shall be released immediately if the detention is not lawful" or when a prospect of removal no longer exists. In that respect, he recalls and reviews the El Dridi judgment ([31], §§5.2-5.5).
In the light of the foregoing, the judge analyzes the written detention orders, reminding that not only national law must comply with Union law, but also the practices of the public administration. Hence, given the fact that Italian law does not provide for less coercive measures than detention, article 7(3) of the Directive should be applied, which provides for less afflictive alternatives to detention. Furthermore, the written orders must be duly motivated, explaining the particular reasons why in each concrete case it is not possible to apply such less coercive measures. Where this is not the case, the detention is unlawful and the TCN has to be released immediately ([31], §5.6).
In the present cases, the judge considered that the written detention orders had not been duly motivated or founded. Consequently, the detention measures were illegitimate. In the case of Aarrassi, the written order stated that "it was not possible to concretely apply less coercive measures", but, notes the judge, it did not give any specific reasons why this was not possible. Thus, the order is not properly motivated and therefore null. The written order provided to Dhifalli has similar deficiencies. It merely stated that he could not be immediately removed and therefore that a detention measure was the most suitable option to secure an effective removal. However, it omitted to specify why in this particular case it was the most suitable option and why other less coercive measures could not be adopted. Hence here too, the judge finds that the detention measure was unlawful. As to Ababsa's situation, the written order justified the detention measure on the alleged danger he represented due to past convictions and on the risk of absconding. In fact, he had declared to have no permanent residence in Italy. According to the judge, though these motivations were sufficient to deny Ababsa a period for voluntary departure, not the slightest indication was given as to why in this concrete case no less coercive measure could be applied. In fact, the judge notices that according to the expulsion order Ababsa had already been on the Italian territory for fifteen years, implying that he was well settled and contradicting the risk of absconding. Furthermore, the risk of absconding by itself does not constitute an exception to the principle of proportionality. The authorities are thus not exempted from their duty to search for less coercive measures. As to the danger represented by Ababsa, the judge reminds that the Directive provides for a detention measure only so as to assure an effective removal procedure, and that detention cannot be grounded on the alleged danger of the person concerned. For all these reasons, this written decision was also considered unlawful ([31], §5.7).
After having reviewed the legality of the detention orders, the judge reviews the legality of its conditions in the light of the case law of the ECtHR and the conditions it laid out: the attainment of a minimum level of severity with regard to the objective circumstances (such as the length of the detention and the severity thereof) and the subjective characteristics of the victim (age, gender, psychological conditions, etc.) ([31], §6.1). According to the judge, both the pictures of the detention center and the direct inspection of the premises prove that the accused had been detained in conditions that are "at the limits of decency", not suited to receive human beings. The judge underlines that, though irregular migrants may be used to precarious living conditions, the standards against which the conditions of detention should be gauged are those of the average citizen, without distinction as to status, nationality or race. In particular, the judge finds that the level of severity was reached where the detainees were obliged (1) to sleep on filthy mattresses, without linen and with equally filthy covers; (2) to wash themselves in poor hygienic conditions (filthy toilets, sinks and towels); (3) to eat without chairs or tables, receiving too little food to provide the necessary nutrition. These conditions are detrimental to human dignity, especially since the detainees had not been "deprived of their liberty for having committed a crime" and "were compelled to leave their countries of origin in view of improving their situation" ([31], §6.2).
In the light of both the unlawful detention orders and conditions, the judge then examines whether the three detainees, given these breaches of their fundamental rights, had no other choice but to act as they did. The judge reminds that according to Italian case law the essential conditions that constitute self-defense are an unjust aggression and a legitimate reaction. The first criterion has clearly been met, as shown by the violations mentioned above. As to the reaction, it stands beyond doubt that the rights, liberty and dignity of the detainees were at stake and were being violated at the moment of their protest. As to the proportionality of the protest acts, the defended rights (human dignity and personal liberty) are much more important and valuable than the offended goods, such as the prestige of the public administration or state property. It is also obvious that the detainees would not have obtained their release through the use of other means or methods, such as a hunger strike, for example. Furthermore, this would require a value judgment, which in a secular state belongs exclusively to the acting individual. Lastly, the judge criticizes the dominant attitude of the Italian prosecution according to which the public administration merely applies the law, which provides for detention. This would render any other method futile. In fact, in the case at hand, the detainees had already asked in vain to be released. For all these reasons, the judge finds that the accused acted in self-defense and orders their immediate release if they were not being held for other, non-immigration-related, facts ([31], §7).
As revolts in Italian detention centers are frequent, and as Italy has had to amend its immigration law several times since El Dridi, it is no exaggeration to say that this judgment is revolutionary. Besides recognizing the procedural safeguards stipulated by the Directive and the strict interpretation given to them by the CJEU, the judge recognizes not only the legitimate interest of the detainees in liberty and human dignity, but also their reasons to immigrate.
In fact, in his analysis of the detention conditions in the light of article 3 ECHR, the judge finds it aggravating that that provision was violated considering that the detainees had not committed any crime, but simply immigrated in search of a better life. Furthermore, in assessing the severity of the breach of article 3, the judge explicitly states that the standards against which the conditions should be gauged are not those of irregular migrants, who may be accustomed to precarious living conditions, but those of the average citizen ([31], §6.2). Therefore, focusing on both procedural safeguards and substantive fundamental rights, and having recognized that the accused acted out of necessity and self-defense, the Italian judge recognized their actions as legitimate. Hence, political space and voice have been given where hitherto only further criminalization would have applied.
Refusing to Cooperate in Defense of Detained Migrants: The French Cases
In France too, the reception of the Return Directive has been the subject of much legal discussion. Next to the issue of police custody that was at stake in the Achughbabian case, the question whether article 16 of the Directive, which spells out the conditions of detention, has direct effect has been much debated. This concerns in particular paragraphs 4 and 5 of that article, as they spell out the rights of TCNs to be informed of the rules of the detention facilities and of their entitlement to contact relevant and competent organizations, which also have the right to visit such facilities. In this respect, the issue of direct effect is quite pertinent, as the police officers of these centers have not yet been properly informed of these procedural safeguards. Hence, several appeal courts and the French Court of Cassation have found the detention of TCNs unlawful because of procedural errors [32].
This situation is further exacerbated as the right of visit of the competent and relevant national and international bodies and organizations (article 16(4) of the Directive) may be made conditional upon authorizations determined by national law. This qualified right is at the core of the case law we discuss below. On the one hand, the authorities claim that due to this conditionality article 16, in particular its fourth paragraph, cannot have direct effect. On the other hand, the competent and relevant associations have united into a platform for the observation of the detention of migrants, the Observatoire de l'enfermement des étrangers (OEE), and refuse to apply for the authorization to visit, as they find the conditions set out by the government too restrictive, lacking transparency and impeding their true independent control functions. The OEE criticizes the authorization decree for the fact that the administration has the right to refuse authorizations on the grounds that other associations are present in the center; that each association may only designate five visitors (for the whole of France); that different associations may not access the same detention center on the same day; that the associations must announce their visit twenty-four hours in advance; that the decree does not mention the extent of the access to the facilities; and, lastly, for the conflict of interests, as the Minister of the Interior is in charge of both the management of the detention centers and the authorizations. They call for, among other things, the authorization to be in the hands of an independent body; the criteria to be set out in a law (instead of a decree); all relevant associations to be granted authorization; and access to the whole of the detention facilities to be granted [32,33].
The combination of both issues has led the French authorities to be in permanent breach of article 16(4) and (5), as there are almost no authorized associations that can thus be contacted by the detained TCNs [32,33].
The first cases go back to January 2011. In the light of the direct effect of article 16(5) RD, the Juge des libertés et de la détention, i.e., the first judge to review the detention of irregular migrants, annulled the removal order and ordered the release of the TCN, as in the administrative files presented before the court there was no indication that the rules of the detention center had been notified [34]. On the same day the same court pronounced a similar judgment, which was appealed. The court of appeal confirmed the release of the detainee on the grounds that the administration had informed him only of the one organization that is permanently present in the detention center, the Order of Malta, but not of other organizations that could provide assistance if called upon. This irregularity in the procedure entailed that the detention was unlawful [35]. Exactly the same reasoning has become standard case law of the Court of Cassation, which kept rejecting the appeals filed by the administration [36-38].
As the associations of the OEE refuse to ask for authorization under the current decree, the administration finds itself in a dilemma, because it cannot provide contact information for several associations though it is required to do so. Hence, the courts find that the rights of normal migrants are not being respected, as they are systematically not properly informed [39]. Furthermore, as the RD mentions organizations in the plural, the French courts consider that providing contact information for only one authorized organization breaches the safeguards of said directive and thus the rights of the detainees. Hence, the members of the OEE have been able, through their position on the decree, to raise the issue of the rights (and thus indirectly of the legitimate interest) of normal migrants. The result is that either the detainees have to be released or the authorization conditions have to be brought up to standard. Either way there would be an improvement.
Intermediate Conclusions
The political potential of the Return Directive finds a particular expression in these two cases. The French case displays very formal and procedural reasoning, much resembling that of the CJEU. It enabled French migrant defense associations to take a stand against conditions of detention, and forms of control thereof, that they found unacceptable. In the Italian case, both the case law of the ECtHR and that on the Return Directive play an important role, but the judge also explicitly recognizes the legitimate interest of the rioting detained normal migrants. The great impact of the RD on national (case) law is that it has brought a degree of legal certainty where hitherto discretionary practices prevailed, and still do [39]. Legal certainty means that norm subjects can calculate the effects of both their own and the administration's actions. This allows them to organize in order to have their legitimate interest recognized, to raise their voice.
Conclusions and Way Forward
The case law on the Return Directive has been, is and will continue to be of much relevance for the legal position of normal migrants, as the principle of proportionality has entered the field of migration. Though only marginally considering the fundamental rights of normal migrants, in applying the principle of proportionality as it has always done, the CJEU has indirectly recognized their legitimate interest in liberty, in dignity and maybe even in immigrating. In fact, until now each violation by a Member State examined before the CJEU has led to the release of normal migrants and thus de facto to a prolonged stay.
From a more legalistic point of view, through its different cases the CJEU has affirmed the direct effect of many provisions where Member States had been reluctant to transpose them (properly). More importantly, in the light of the effet utile of the Directive, the CJEU gave strict interpretations of these provisions, banning more severe state practices that had hitherto been current in many Member States, in particular the use of detention as a first measure in the removal procedure, or the omission of seeking less severe alternatives, but also regarding old orders that still produced their effects, as was the case with the re-entry bans.
Yet the most important impact of the Directive and its case law is that, in the light of the principles of sincere cooperation and proportionality, old orders, actions or omissions that no longer comply with the Directive but still produced their effects in the Member States have become unlawful and no longer apply. Hence, in many cases this means that normal migrants must be released from detention or allowed back into the country, and also that the room left for criminalizing illegal immigration has been drastically reduced (from a legal perspective). Throughout the case law this has been the case irrespective of matters of public order or national security, which are often invoked by Member States in the hope of obtaining some exception to the rule of law.
The combination of these elements has given normal migrants, and anyone else supporting their cause, a certain degree of legal certainty and as such an extra ground to protest, to raise their voice so as to denounce the injustice of their situation. This is new. However, this novelty is inherent in the relatively recent entry into force of the RD. Member States still have to adjust to it and might even modify it in the future. At any rate, this moment of adjustment has opened opportunities for normal migrants to improve their situation where Member States previously had much discretion.
This situation, especially in the examples of the Italian and French judgments discussed above, has clearly interrupted, at least at different intervals, the 'domination' of normal migrants as mere objects of policy with very little to no legal standing, and has given space to the request to be treated equally, with dignity. In Rancière's words: to test their equality with that of anyone else. The protests in the detention center in Italy and the coordinated action of the French organizations have made it possible to reject the order of things, i.e., the policing of the community, in order to work towards an emancipation of the part without part: the normal migrants. This is clearly a political stake in the meaning given by Rancière, and it is inherent to EU immigration policies where the principle of proportionality applies.
Depending on the evolution of the matter in the future, this political moment might be the beginning of more openness toward normal migrants and thus toward equal rights, or at least toward equal standards of protection before courts. In that respect, possible improvements have been suggested on the mere basis of the principle of non-discrimination, there where "legitimate and appropriate analog[ies]" apply ([4], p. 352).
Just to mention one of the suggested improvements: in civil and criminal matters the right to remain silent and the privilege against self-incrimination apply. This means that the accused has the right not to cooperate. Though this right is not absolute, one cannot be found guilty on the mere fact that one has remained silent or has refused to cooperate. There must always be some other objective evidence to infer guilt. According to the standard case law of the ECtHR,4 silence on behalf of the accused only corroborates guilt if he or she cannot explain his or her implication against the great amount of evidence acquired independently by the authorities. Yet this is not the case in immigration matters where, as we have seen in the case law of the CJEU, non-cooperation or silence is considered a legitimate ground for extending detention or even for criminal imprisonment where coercive removal has failed. Yet the authorities are capable of finding evidence on the provenance of normal migrants without their cooperation, and can initiate a removal procedure without knowing where they came from, as they could reach an agreement with a safe third country. Thus, it is hardly only due to lack of cooperation of the normal migrant that a removal procedure fails. Rather, it is due to lack of due diligence or investigative skills on behalf of the authorities. Hence, where such removal fails, there should be no legitimate ground left for (detention in view of) expulsion and criminal detention ([4], pp. 352-56). This means that for similar practices of gathering evidence normal migrants are not being treated equally; they are being discriminated against. In other words, the standards of protection applicable to them are lower than those applied to legal subjects in civil and criminal matters. On what grounds is this justifiable? If such analogies are not being applied today, or if they are regarded as non-applicable, is it not because, as things are today, the normal migrant has no part? Demanding an analogous treatment is a legitimate political act that can be grounded on the principles of law that are applied elsewhere, yet discarded, for no justified reason, when it comes down to immigration matters. These interesting legal challenges seem inevitable and will grow in the future along with the number of normal migrants. Therefore, the question arises: will Member States continue to be stubborn and remain antipolitical, as Rancière would put it, or will they extend the principles and ideals of democracy to anyone within their jurisdiction?
4 See ECtHR 25 February 1993, Funke v. France, §§44-45; ECtHR 8 February 1996, John Murray v. the United Kingdom, §§47-51. See also ECtHR 17 December 1996, Saunders v. the United Kingdom, §69. The right to remain silent and the privilege against self-incrimination "does not extend to the use in criminal proceedings of material which may be obtained from the accused through the use of compulsory powers but which has an existence independent of the will of the suspect such as, inter alia, documents acquired pursuant to a warrant, breath, blood and urine samples and bodily tissue for the purpose of DNA testing." (Italics added.)
Influence of sapphire pretreatment conditions on crystalline quality of AlN epilayers has been investigated by metal organic chemical vapor deposition (MOCVD). Compared to alumination treatment, it is found that appropriate sapphire nitridation significantly straightens the surface atomic terraces and decreases the X-ray diffraction (0002) full width at half maximum (FWHM) to a minimum of 55 arcsec, indicating a great improvement of the tilting feature of the grain structures in the AlN epilayer. More importantly, there is no inversion domains (IDs) found in the AlN epilayers, which clarifies that optimal sapphire nitridation is promising in the growth of high quality AlN. It is deduced that the different interfacial atomic structures caused by various pretreatment conditions influence the orientation of the AlN nucleation layer grains, which eventually determines the tilting features of the AlN epilayers.
AlGaN-based ultraviolet (UV) emitters and detectors have drawn much attention due to a number of applications, e.g., water purification, disinfection of medical tools and UV curing [1][2][3] . Due to the lack of low-cost bulk AlN substrates, commercial devices are usually fabricated on AlN/sapphire templates grown by metal organic chemical vapor deposition (MOCVD). However, the large lattice and thermal mismatch between AlN and sapphire generally leads to high dislocation density, which acts as nonradiative recombination centers and then seriously restricts device performances 4 . Therefore it is crucial to obtain high crystalline quality AlN epilayers on sapphire substrates. Considerable efforts are devoted to enhancing surface migration of Al adatoms, and techniques such as pulsed atomic layer epitaxy (PALE) 5 , modified migration-enhanced epitaxy (MEE) 6 and high-temperature (> 1300 °C) MOCVD growth 7 have been adopted. Sapphire nitridation pretreatment originating from the GaN growth 8,9 has also been essayed to improve AlN quality. However, unlike GaN growth on sapphires, where it is well established that nitridation is the key point to obtain high-quality epilayers, sapphire nitridation for AlN growth is still controversial and suggested to prohibit, as formation of inversion domains (IDs) in AlN epilayers and the resulting rough surfaces 10,11 are the biggest obstacles in the procedure.
In this paper, we have studied the impact of sapphire either alumination or nitridation pretreatments on AlN growth by MOCVD. Our results show that compared to the case of alumination, appropriate sapphire nitridation considerably improves the AlN quality, e.g., straightening the surface atomic terraces and decreasing the X-ray diffraction (0002) full width at half maximum (FWHM). More importantly, the aforesaid nitridation condition is demonstrated to effectively avoid the generation of inversion domains (IDs). The surface morphology and orientation of AlN NL grains on sapphires under different pretreatments is further investigated to seek after the mechanism accounting for the improved quality of AlN epilayers on nitrided sapphires.
A series of AlN samples (A-F) were prepared on (0001) sapphire substrates pretreated under different conditions. The pretreatment conditions were shown in Table 1, including sapphire alumination (Sample A), none pretreatment (Sample B) and sapphire nitridation (Sample C-F). Figure 1 shows the typical in-situ monitoring curves for AlN growth, where the black and red lines correspond to the growth temperature and optical reflectance curve, respectively. For Samples A-E (all similar to Sample C as shown in Fig. 1(a)), the average reflectance value stays constant at high temperature, indicating that the expectant layer-by-layer growth mode dominates the HT-AlN growth. While for prolonged nitrided Sample F, the damping reflectance in Fig. 1b) suggests rough surface morphology 12,13 , which may be caused by island growth of the HT-AlN epilayer.
To further explore the surface morphology of the HT-AlN epilayers on sapphires under different pretreatments, AFM images with a 3 × 3 μm² scan size were taken for all samples, as shown in Fig. 2, where serial numbers (a)-(f) correspond to Samples A-F, respectively. Well defined step terraces are observed in Samples A-E, and it is noted that the terraces on the 7 s nitrided sapphire (Sample C) are much straighter than the others. This means that appropriate sapphire nitridation effectively reduces the planar tensions as well as the density of screw threading dislocations, since both have been demonstrated to be responsible for terrace meander 10,14. However, when the sapphire nitridation time is prolonged to 100 s (Sample E), obvious terrace meander can be observed, suggesting that an optimal sapphire nitridation condition exists; here it is 7 s with 2400 sccm NH3 at 950 °C. Besides, it is also observed that Sample F features a rough surface morphology delineated by hexagonal faceted nanocolumns, similar to observations in the literature 10,15, which suggests the possible formation of IDs in AlN epilayers on long-time nitrided sapphire. This three-dimensional (3D) surface morphology is consistent with the damping reflectance shown in Fig. 1(b).
The crystalline quality is further checked via the FWHM values of the XRD symmetric (0002) and asymmetric (10-12) ω-scan curves for Samples A-F, listed in Table 1. Compared to the alumination (Sample A) and no-pretreatment (Sample B) cases, proper initial nitridation of the sapphire (Samples C-E) dramatically reduces the FWHM values of both the (0002) and (10-12) scan curves, suggesting a lower dislocation density in the HT-AlN epilayers. It can be found that when the sapphire nitridation time is prolonged from 7 s (Sample C) to 100 s (Sample E), the (0002) FWHM values increase, while the (10-12) values decrease. The minimum FWHM values of (0002) and (10-12), 55 and 734 arcsec, are obtained for Samples C and E, respectively. A similar variation trend was also reported for GaN epilayers 16, though little related physical mechanism has been put forward so far. When the nitridation time is prolonged to 600 s (Sample F), the (0002) FWHM changes little, but the (10-12) value increases to 922 arcsec, which is believed to result from the coalescence of 3D nanocolumns with different polarities, as shown below.
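As a rough cross-check of what such rocking-curve widths imply, the FWHM values can be translated into order-of-magnitude dislocation densities via the commonly used mosaic-model relation ρ = β²/(4.35b²), where β is the FWHM in radians and b the relevant Burgers vector. The sketch below is our own back-of-envelope estimate, not a calculation reported in the paper; the AlN lattice constants (c = 0.4982 nm, a = 0.3112 nm) are standard literature values, and taking the (10-12) width as a direct proxy for the edge-type density is a simplification that ignores the twist contribution.

```python
import math

# Mosaic-model estimate (Dunn-Koch type): rho = beta^2 / (4.35 * b^2),
# with beta the rocking-curve FWHM in radians and b the Burgers vector.
ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

def dislocation_density(fwhm_arcsec: float, burgers_nm: float) -> float:
    """Return an order-of-magnitude dislocation density in cm^-2."""
    beta = fwhm_arcsec * ARCSEC_TO_RAD   # FWHM converted to radians
    b_cm = burgers_nm * 1.0e-7           # Burgers vector in cm
    return beta ** 2 / (4.35 * b_cm ** 2)

# Sample C: (0002) FWHM = 55 arcsec  -> screw-type density (b = c)
# Sample E: (10-12) FWHM = 734 arcsec -> edge-type density (b = a), rough
print(f"screw: {dislocation_density(55.0, 0.4982):.2e} cm^-2")   # ~7e6
print(f"edge:  {dislocation_density(734.0, 0.3112):.2e} cm^-2")  # ~3e9
```

Under these assumptions, the 55 arcsec (0002) width corresponds to a screw-type density in the 10⁶-10⁷ cm⁻² range, consistent with the claim of greatly improved tilt.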
Taking into account both the AFM and XRD results, it can be found that HT-AlN epilayers grown on 7-100 s nitrided sapphires present better crystallographic quality than those on aluminized or prolonged nitrided sapphires, of which 7 s is identified as the optimal condition for subsequent HT-AlN growth.
In addition, the polarity of these AlN samples has also been checked. Wet etching was performed in molten KOH for 4 minutes to verify the existence of IDs in Sample F, as Al-polarity AlN crystals are more inert than N-polarity ones in this process 17. Figure 3 displays the AFM image of the etched Sample F, where 20-nm-deep triangular etch pits appear in place of the 3D nanocolumns. This phenomenon manifests that the fast-growing nanocolumns correspond to N-polarity domains, while the surrounding regions are Al-polarity. The same treatment was also carried out for all the other samples, but no significant change of surface morphology was observed, suggesting that Al-polarity dominates Samples A-E. This indicates that the formation of IDs is directly dependent on the nitridation degree of the sapphire; that is, only excessive sapphire nitridation results in IDs in HT-AlN epilayers.
Possible mechanisms of the sapphire pretreatments have been further investigated. For the case of alumination, it is generally recognized that excess Al atoms from TMAl adhere to the surface of the sapphire by weak metallic Al-Al bonds; the saturated Al atom film should modify the surface energy of the sapphire substrate and further affect the surface migration of Al and N atoms 13. For the case of nitridation, controversy still exists: a thin intermediate-phase Al-O-N compound with cubic 18, rhombohedral 19 or amorphous 8 structure, as well as a surficial hexagonal AlN layer 20, has been reported. In any case, the effect of the different sapphire pretreatments is directly displayed by the surface morphology of the AlN nucleation layer (NL). Figure 4(a and b) shows the AFM images, with a 1 × 1 μm² scan size, of AlN NLs on 7 s aluminized and 7 s nitrided sapphires, respectively. It is found that dense grains with larger dimensions are observed in Fig. 4(a), while only isolated slim grains are present in Fig. 4(b).
The orientations of the NL grains under the different sapphire pretreatments are further investigated by XRD ω-scans of the (0002) peak. For the AlN NL on 7 s aluminized sapphire in Fig. 5(a), an obvious multi-curve superposition can be observed; therefore two superposed Gaussian functions are adopted to fit the measured curve. Both peaks are located near 18.02°, consistent with the expectation for an AlN measurement. Compared with the result of bare sapphire in the same measurement range, we further confirm that the two peaks come from the AlN NL. Peak 1 has a FWHM value of 335 arcsec, indicating the uniform orientation of grains with [0001]AlN || [0001]Al2O3, while the FWHM of the stronger Peak 2 is extracted to be 3438 arcsec. Similar FWHM broadening has been reported in ref. 21, where crystallographic tilt characterized by scanning electron microscopy (SEM) resulted in a significant increase of the (0002) FWHM. Studies of grain tilt have been reported for GaN growth on sapphires 22,23, where the GaN plane parallel to the surface of Al2O3 was confirmed to be (3-302) instead of (0001), and a tilt angle of 19° was observed by transmission electron microscopy (TEM). This disorientation was attributed to the lattice mismatch between the epilayers and substrates. Therefore, AlN NL grains on aluminized sapphire will also incline, even though the mismatch of AlN/Al2O3 is smaller than that of GaN/Al2O3, which broadens the FWHM of the XRD (0002) ω-scan. Besides, the much stronger intensity of Peak 2 suggests that a mass of grains incline off the sapphire c-axis.
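The two-peak deconvolution described here is a standard nonlinear least-squares fit; the sketch below illustrates the procedure on synthetic data. The peak position (~18.02°) and the two FWHMs (335 and 3438 arcsec) are taken from the text, while the amplitudes, noise level and scan range are our own assumptions rather than the paper's actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # Gaussian relation
ARCSEC_TO_DEG = 1.0 / 3600.0

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def two_gauss(x, a1, m1, s1, a2, m2, s2):
    # Superposition of a narrow and a broad component, as in Fig. 5(a).
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

# Synthetic omega-scan: narrow peak (well-oriented grains) sitting on a
# broad one (tilted grains), plus measurement noise.
omega = np.linspace(17.0, 19.0, 600)
s_narrow = 335 * ARCSEC_TO_DEG * FWHM_TO_SIGMA
s_broad = 3438 * ARCSEC_TO_DEG * FWHM_TO_SIGMA
signal = two_gauss(omega, 1.0, 18.02, s_narrow, 3.0, 18.02, s_broad)
signal += np.random.normal(0.0, 0.02, omega.size)

p0 = [1.0, 18.0, 0.05, 1.0, 18.0, 0.5]   # crude initial guesses
popt, _ = curve_fit(two_gauss, omega, signal, p0=p0)
for amp, mu, sigma in (popt[:3], popt[3:]):
    fwhm_arcsec = abs(sigma) / FWHM_TO_SIGMA / ARCSEC_TO_DEG
    print(f"peak at {mu:.3f} deg, FWHM = {fwhm_arcsec:.0f} arcsec")
```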
For the 7 s nitridation case, two similar Gaussian peaks are observed in Fig. 5(b), but the intensity ratio of the two peaks changes significantly. The narrow Peak 3, with a FWHM of 270 arcsec, dominates, while Peak 4, with a FWHM of 2905 arcsec, is suppressed compared with the results for the alumination case. We conclude that the optimal nitridation condition effectively relieves the lattice mismatch between the AlN NL and sapphire, so that a majority of the NL grains have the uniform orientation [0001]AlN || [0001]Al2O3. Based on the above results, the physical mechanisms of the sapphire alumination and nitridation pretreatments are schematically depicted in Fig. 6. For aluminized sapphires, a saturated Al film adheres on the surface, keeping the internal atomic structure of the sapphire unchanged 13. This Al film would reduce the atomic migration energy on the surface of the sapphire, which is beneficial for the formation of NL grains with large dimensions. However, this atomic configuration would maintain the lattice mismatch between AlN and Al2O3, further resulting in the tilted orientation of part of the NL grains, as shown in Fig. 6(a). Given the smaller lattice mismatch between AlN and Al2O3 than that of GaN on Al2O3, it is reasonable that the disorientation of AlN grains exists, but with a much smaller tilt angle than in the GaN case.
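For orientation, the relative size of the two mismatches invoked here can be reproduced from standard room-temperature lattice constants, assuming the usual 30° in-plane rotation of the nitride lattice on c-plane sapphire (effective sapphire spacing a/√3). The constants below are textbook values, not numbers quoted in the paper.

```python
import math

# In-plane lattice-mismatch comparison for nitrides on c-plane sapphire
# with the conventional 30-degree in-plane rotation.
a_AlN = 3.112   # Angstrom, standard value
a_GaN = 3.189   # Angstrom, standard value
a_sap = 4.758   # Angstrom, standard value

a_eff = a_sap / math.sqrt(3.0)   # effective sapphire spacing after rotation

for name, a_epi in (("AlN", a_AlN), ("GaN", a_GaN)):
    mismatch = (a_epi - a_eff) / a_eff
    print(f"{name}/sapphire mismatch: {100 * mismatch:.1f} %")
# -> roughly 13 % for AlN vs 16 % for GaN, consistent with the smaller
#    expected grain tilt for AlN noted in the text.
```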
For the nitridation case shown in Fig. 6(b), the formation of an AlN/AlON composite layer is favoured from the point of view of reducing the lattice mismatch between the AlN NL and sapphire. When sapphire is exposed to ammonia, the topmost O atoms are the most likely to be substituted by N. The formation of AlN on the sapphire surface is energetically stable according to theoretical calculations 24 and well established in experimental studies 20,25 , and its thickness depends on the NH3 flow and nitridation time 20,24,25 . Beneath the AlN, an AlON intermediate layer is proposed to result from the substitution of O by N in Al2O3, which has been verified by X-ray photoelectron spectroscopy (XPS) 26 and transmission electron microscopy (TEM) 19 . The AlN/AlON stepwise structure effectively relieves the lattice mismatch between the AlN NL and sapphire, so that most NL grains have the uniform orientation [0001]AlN ∥ [0001]Al2O3, as analyzed in Fig. 5(b). This leads to much less tilt between different grains during the coalescence process compared to the alumination case, which corresponds to the small (0002) FWHM of the samples on nitrided sapphires. Moreover, recent research shows that a specific AlON phase acts as the planar inversion domain boundary (IDB) that changes the polarity from N/O to Al, consistent with the results for Samples C-E in this paper.
A possible generation mechanism of the IDs in Sample F can be proposed. Excessive sapphire nitridation greatly increases the substitution of O by N, and thus thoroughly changes the AlON structure in comparison with the structure proposed for Sample C, leading to the generation of IDs. Similar destruction and disappearance of the AlON IDB layer has been reported 19 , where it was attributed to elevated ambient temperatures, excessive annealing of the buffer, etc. Moreover, when the nitridation time is prolonged, the AlN layer on the sapphire surface thickens, forming a barrier to inward N diffusion, since N has a lower diffusion coefficient in AlN (1.33 × 10⁻¹⁶ cm²/s) than in Al2O3 (8 × 10⁻¹⁶ cm²/s) 15 . Thus N atoms accumulate on the substrate surface, creating an N-rich condition. Theoretical calculations indicate that IDs can form under such N-rich conditions, as Al-polarity and N-polarity structures then have very similar formation energies 27 .
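To illustrate the asymmetry between the two diffusion coefficients quoted above, the following minimal Python sketch evaluates the characteristic diffusion length √(Dt) of N in AlN and in Al2O3; the nitridation times used are hypothetical examples, not values from this work.

```python
import math

# Diffusion coefficients of N quoted in the text (cm^2/s)
D_AlN = 1.33e-16    # N in AlN
D_Al2O3 = 8e-16     # N in Al2O3

def diffusion_length_nm(D_cm2_s, t_s):
    """Characteristic diffusion length sqrt(D*t), returned in nm."""
    return math.sqrt(D_cm2_s * t_s) * 1e7  # 1 cm = 1e7 nm

for t in (7, 60, 240):  # hypothetical nitridation times (s)
    print(f"t = {t:3d} s: L(AlN) = {diffusion_length_nm(D_AlN, t):.2f} nm, "
          f"L(Al2O3) = {diffusion_length_nm(D_Al2O3, t):.2f} nm")
```

The sub-nanometre diffusion lengths in AlN, several times shorter than in Al2O3, are consistent with the picture of a thickening AlN skin blocking further inward N diffusion.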
Based on the optimal sapphire nitridation condition, three alternation cycles of low- and high-temperature (LT-HT) growth 28 were adopted to further improve the AlN crystal quality. The same growth conditions and film structures as in ref. 28 were used. It is found that the combination of these two growth techniques can effectively decrease both the (0002) and (10-12) FWHM values. Compared to the conventional LT-HT alternation technique (311 arcsec for the (0002) FWHM, 548 arcsec for the (10-12) FWHM), the introduction of the sapphire nitridation pretreatment decreases the (0002) and (10-12) FWHM to 130 and 457 arcsec, respectively. Besides, straight atomic steps and a root mean square (RMS) roughness of 0.257 nm (3 × 3 μm² AFM scan) indicate that the atomically smooth surface of the AlN epilayers can be maintained by combining these two growth techniques.
In summary, the influence of sapphire pretreatment conditions on the crystalline quality of AlN epilayers has been investigated. It is found that appropriate sapphire nitridation significantly straightens the surface atomic terraces and decreases the XRD (0002) FWHM to a minimum of 55 arcsec, indicating a great improvement in the tilt of the grain structures in the AlN epilayers. More importantly, no inversion domains are found in the AlN epilayers, which shows that sapphire nitridation is a promising method for the growth of high-quality AlN. It is deduced that the different strain states caused by the different interfacial atomic structures influence the orientation of the AlN NL grains, which eventually determines the tilt of the AlN epilayers.
Methods
Samples Preparation. The samples (A-F) were grown on 2-in. 0.2° off-cut c-sapphire substrates by MOCVD, using an AIXTRON 3 × 2 in. close coupled showerhead (CCS) system. Trimethylaluminum (TMAl) and ammonia (NH3) were used as the Al and N precursors, respectively. The growth pressure was maintained at 85 mbar. A two-step growth procedure was adopted as follows: first, a 20 nm-thick AlN nucleation layer (NL) was deposited on sapphire at 950 °C, and then the chamber temperature was raised to 1240 °C for the growth of a 1 μm-thick high-temperature AlN (HT-AlN) epilayer. The V/III ratio was 7500 for the NL and 500 for the epilayers. Prior to the NL growth, the sapphire substrates were pretreated under different conditions, while the other growth parameters were kept the same for all samples.
Measurements. A LayTec EpiTT was used to monitor in situ the reflectance (405 nm) as well as the emissivity-corrected surface temperature of the susceptor. The surface morphology was characterized by a Bruker Dimension ICON-PT atomic force microscope (AFM). The symmetric (0002) and asymmetric (10-12) ω-scan curves of all samples were measured by a Bruker AXS D8 Discover HRXRD. | 2018-04-03T01:38:31.906Z | 2017-02-21T00:00:00.000 | {
"year": 2017,
"sha1": "084e72beb8fe203985e5d36100688a0a0fe4887b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep42747.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "084e72beb8fe203985e5d36100688a0a0fe4887b",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
59457519 | pes2o/s2orc | v3-fos-license | An analysis of the influence of grain size on the strength of FCC polycrystals by means of computational homogenization
The effect of grain size on the flow stress of FCC polycrystals is analyzed by means of a multiscale strategy based on computational homogenization of the polycrystal aggregate. The mechanical behavior of each crystal is given by a dislocation-based crystal plasticity model in which the critical resolved shear stress follows the Taylor model. The generation and annihilation of dislocations in each slip system during deformation is given by the Kocks-Mecking model, which was modified to account for the dislocation storage at the grain boundaries. Polycrystalline Cu is selected to validate the simulation strategy; all the model parameters are obtained from dislocation dynamics simulations or experiments at lower length scales, and the simulation results are in good agreement with experimental data in the literature. The model is applied to explore the influence of different microstructural factors (initial dislocation density, width of the grain size distribution, texture) on the grain size effect. It is found that the initial dislocation density, $\rho_i$, plays a dominant role in the magnitude of the grain size effect and that the dependence of the flow stress on an inverse power of the grain size ($\sigma_y - \sigma_\infty \propto d_g^{-x}$) breaks down for large initial dislocation densities ($>10^{14}$ m$^{-2}$) and grain sizes $d_g>$ 40 $\mu$m in FCC metals. However, it was found that the grain size contribution to the strength followed a power-law function of the dimensionless parameter $d_g\sqrt{\rho_i}$ for small values of the applied strain ($<$ 2 \%), in agreement with previous theoretical considerations for size effects in plasticity.
Abstract
The effect of grain size on the flow stress of FCC polycrystals is analyzed by means of a multiscale strategy based on computational homogenization of the polycrystal aggregate. The mechanical behavior of each crystal is given by a dislocation-based crystal plasticity model in which the critical resolved shear stress follows the Taylor model. The generation and annihilation of dislocations in each slip system during deformation is given by the Kocks-Mecking model, which was modified to account for the dislocation storage at the grain boundaries. Polycrystalline Cu is selected to validate the simulation strategy; all the model parameters are obtained from dislocation dynamics simulations or experiments at lower length scales, and the simulation results are in good agreement with experimental data in the literature. The model is applied to explore the influence of different microstructural factors (initial dislocation density, width of the grain size distribution, texture) on the grain size effect. It is found that the initial dislocation density, ρ_i, plays a dominant role in the magnitude of the grain size effect and that the dependence of the flow stress on an inverse power of the grain size (σ_y − σ_∞ ∝ d_g^{−x}) breaks down for large initial dislocation densities (> 10¹⁴ m⁻²) and grain sizes d_g > 40 µm in FCC metals. However, it was found that the grain size contribution to the strength followed a power-law function of the dimensionless parameter d_g√ρ_i for small values of the applied strain (< 2%), in agreement with previous theoretical considerations for size effects in plasticity.
Introduction
The bonds between metallic atoms lead to crystalline materials with high stiffness that can withstand plastic deformation and dissipate large amounts of energy before failure. These properties are ideal for structural applications, but the stress necessary to promote plastic deformation is very low in most metals. Different strategies have been developed to overcome this limitation, and solid-solution, precipitation and strain hardening are often combined to increase the density and strength of obstacles to dislocation motion and to enhance the flow stress of metals and metallic alloys. Moreover, metallic alloys are often used as polycrystals and it is well established that the strength of polycrystalline metals can also be increased by reducing the grain size. The pioneering work of Hall [1] and Petch [2] established a phenomenological dependence of the yield strength, σ_y, on the grain size, d_g, of the form

σ_y = σ_∞ + C_HP d_g^{−1/2},    (1)

where σ_∞ is the yield strength of a polycrystal with very large grain size and C_HP is a material constant. Eq. (1) was supported by the analysis of Eshelby et al. [3] for the stress necessary to move a dislocation in front of a dislocation pile-up formed at the grain boundary, and also by work hardening models that assume that the flow stress increases with the square root of the dislocation density [4]. Further support for eq. (1) was provided by Ashby [5], who analyzed the plastic incompatibility between grains with different orientation within the polycrystal. The increase in dislocation density that leads to hardening can be separated into two different contributions. Statistically stored dislocations (SSDs) account for a uniform deformation, while geometrically necessary dislocations (GNDs) are required to preserve the lattice continuity between grains with different orientation. The density of the former is grain-size independent, while that of the latter is concentrated around the grain boundaries and depends on the grain size.
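As a quick numerical illustration of eq. (1), the short Python sketch below evaluates the Hall-Petch relation over the grain sizes considered later in the paper; the values of σ_∞ and C_HP are illustrative placeholders, not parameters fitted in this work.

```python
import numpy as np

def hall_petch(d_g_um, sigma_inf=20.0, C_HP=0.14):
    """Eq. (1): sigma_y = sigma_inf + C_HP * d_g**(-1/2).

    sigma_inf in MPa, C_HP in MPa*sqrt(m), d_g_um in micrometres.
    Both numerical values are illustrative, not fitted to Cu data.
    """
    d_g = np.asarray(d_g_um, dtype=float) * 1e-6  # um -> m
    return sigma_inf + C_HP / np.sqrt(d_g)

for d in (10, 20, 40, 80):
    print(f"d_g = {d:3d} um -> sigma_y = {hall_petch(d):6.1f} MPa")
```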
However, the generality of eq. (1) was challenged, as many authors reported that most of the experimental data could also be fitted with d_g^{−x} with 0 < x ≤ 1 [6]. Other authors [7,8] found that the experimental data better supported x = 1, or a dependence on grain size of the type (ln d)/d [4]. The former exponent was in agreement with a representation of the grain as a soft core surrounded by a hard shell around the grain boundary [9,4], while the latter was consistent with a mechanism in which the grain size constrains the size of the dislocation sources [10].
Today it is acknowledged that the increase of the strength of polycrystals with decreasing grain size is a manifestation of the general size effect found in plasticity [10,11], and the dominant mechanism(s) (and, thus, the value of the exponent x) depend on many factors, such as the elastic anisotropy of the crystal, the range of grain sizes examined, the texture, the number of slip systems, the initial dislocation density, the presence of other obstacles to dislocation motion, etc. [12,13]. As the specific influence of each of these factors is very difficult to account for separately in experiments, numerical simulations become very useful to understand the role played by each one. In the particular case of polycrystalline aggregates, computational homogenization in combination with crystal plasticity has demonstrated its potential to simulate the effective properties of polycrystals while the details of the deformation within the grains are taken into account by the crystal plasticity constitutive equation [14,15,16,17].
Several attempts can be found in the literature to simulate the effect of grain size on the mechanical behavior of polycrystals. The first was due to Weng [18], who introduced a grain-size dependent constitutive equation for the slip systems. However, the influence of grain size is a macroscopic result and should be an outcome, not an input, of the model. Another attempt to capture grain size effects was based on the self-consistent homogenization scheme, in which each grain of the polycrystal was represented as a two-phase composite: a core region, in which the strain hardening results from the evolution of SSDs, and an interphase layer corresponding to the grain boundary region, where plastic strain gradients and the associated GNDs are present [19]. This model was successfully applied to predict the effect of grain size on the flow stress of ferritic steels with different grain sizes (in the range 5.5 µm to 120 µm), but it should be noted that the thickness of the grain boundary region was an adjustable parameter used to fit the experimental data.
Homogenization models of polycrystals based on classical plasticity cannot capture the grain size effect because the constitutive equation does not involve an intrinsic material length scale. This limitation can be overcome by introducing a length associated with strain gradients in continuum crystal plasticity models [20,21,22,23,24]. Hardening around the grain boundaries comes about as a result of the strain gradients (and the associated density of GNDs) which arise to maintain the lattice compatibility between grains with different orientation. However, a direct comparison of these models with the actual hardening found in polycrystals has not been carried out, and the physical origin of the length scale included in the formulation is not clear in the case of phenomenological models, although this parameter controls the magnitude of the size effect [24,25]. More recently, Wagoner and co-workers [26,27] presented another approach that did not invoke any arbitrary length scale. Polycrystal simulations were carried out using a dislocation-based crystal plasticity model, and this information was used at another scale to enforce local slip transmission criteria at the grain boundaries depending on the orientation and on the grain boundary strength.
In this investigation, a multiscale approach is used to analyze the effect of grain size on the flow stress of FCC polycrystals within the framework of the computational homogenization of polycrystals. The mechanical response of each crystal follows a rate-dependent, physically-based crystal plasticity model in the context of finite strain plasticity. The critical resolved shear stress on each slip system is linked with the dislocation densities by a Taylor model [28], in which the strengthening provided by the different types of interactions among dislocations is obtained from dislocation dynamics simulations. The evolution of the dislocation density in each slip system is governed by a Kocks-Mecking law [29,30], in which the term that controls the multiplication of dislocations, inversely proportional to the dislocation mean free path, also takes into account the dislocation storage at the grain boundaries [31]. The model parameters in the case of Cu were obtained from simulations at lower length scales, so the predictions of grain size strengthening in polycrystals are free of adjustable parameters. The multiscale approach was validated by comparison with experimental data in the literature, and the influence of different microstructural factors (grain size, grain size distribution, texture, initial dislocation density, etc.) on the Hall-Petch behavior was ascertained.
The outline of the paper is the following. After the introduction, the crystal plasticity model is presented in Section 2 and the computational homogenization strategy in Section 3. The simulation results and the corresponding comparison with experimental data are included in Section 4, while the main conclusions of the paper are summarized in the last section. In the following, vectors, second-rank and fourth-rank tensors are denoted by a, A and A, respectively. A Cartesian coordinate system is used with respect to the orthonormal basis (e_1, e_2, e_3). The notations for the tensor product, contraction and double contraction are: a ⊗ b = a_i b_j e_i ⊗ e_j; A · B = A_ik B_kj (e_i ⊗ e_j) and A : B = A_ij B_ij. Finally, 1 and I stand for the second- and fourth-order identity tensors, respectively.
Crystal plasticity model
The crystal plasticity model assumes a multiplicative decomposition of the deformation gradient F into elastic F^e and plastic F^p parts according to [32]

F = F^e · F^p,    (2)

where the configuration defined by F^p is called the relaxed or intermediate configuration. The velocity gradient L can be expressed as

L = Ḟ · F^{−1} = L^e + F^e · L^p · F^{e−1},    (3)

where the superposed dot denotes the total derivative with respect to time and L^e and L^p are defined as

L^e = Ḟ^e · F^{e−1},  L^p = Ḟ^p · F^{p−1}.    (4)

Plastic deformation in the single crystal takes place along the different slip systems α = 1, ..., n, where n is the total number of slip systems. The crystallographic split of the plastic flow rate is given by

L^p = Σ_α γ̇^α s^α ⊗ m^α,    (5)

where γ̇^α stands for the plastic shear strain rate on the slip system α, and s^α and m^α denote, respectively, the unit vectors along the slip direction and the slip plane normal in the intermediate configuration.
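The crystallographic split in eq. (5) is straightforward to evaluate numerically. The following sketch (a minimal NumPy illustration, not code from the paper) builds L^p from slip rates and Schmid tensors and verifies the isochoric character of slip (trace L^p = 0, because s^α · m^α = 0):

```python
import numpy as np

def plastic_velocity_gradient(gamma_dot, s, m):
    """Eq. (5): Lp = sum_alpha gamma_dot[alpha] * s[alpha] (x) m[alpha].

    gamma_dot: (n,) slip rates; s, m: (n, 3) arrays of unit slip
    directions and slip-plane normals in the intermediate configuration.
    """
    return np.einsum('a,ai,aj->ij', gamma_dot, s, m)

# Single FCC-like system: slip direction lying in the (111) plane.
s1 = np.array([[1.0, -1.0, 0.0]]) / np.sqrt(2.0)  # slip direction
m1 = np.array([[1.0, 1.0, 1.0]]) / np.sqrt(3.0)   # slip-plane normal
Lp = plastic_velocity_gradient(np.array([1e-3]), s1, m1)
print(np.trace(Lp))  # ~0: slip preserves volume since s . m = 0
```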
The second Piola-Kirchhoff stress tensor S is expressed in terms of the elastic Green-Lagrange strain tensor E^e, both relative to the intermediate configuration, as

S = C : E^e,    (6)

where C is the elastic stiffness tensor of the crystal. The resolved shear stress, τ^α, can be defined as the projection of the Piola-Kirchhoff stress on the corresponding slip system, and it is given in the intermediate configuration by

τ^α = S : (s^α ⊗ m^α).    (7)

Finally, the Cauchy stress σ can be obtained as

σ = det(F^e)^{−1} F^e · S · F^{eT}.    (8)

The relationship between the resolved shear stress in the slip system α, τ^α, and the corresponding plastic strain rate, γ̇^α, is given by dislocation theory according to [33,34]

γ̇^α = γ̇_0 |τ^α/τ_c^α|^{1/m} sign(τ^α),    (9)

where m is the strain-rate sensitivity coefficient, γ̇_0 the reference shear strain rate and τ_c^α the critical resolved shear stress on the slip system α. Physically-based hardening models assume that the CRSS is proportional to the square root of the dislocation density [28]. This relationship was generalized by Franciosi et al. [35] to account for the anisotropy of the interactions between different slip systems according to

τ_c^α = μ b √(Σ_β a_αβ ρ^β),    (10)

where μ and b denote the shear modulus and the Burgers vector, respectively, and ρ^β stands for the dislocation density in the slip system β. The dimensionless coefficients a_αβ of the dislocation interaction matrix represent the average strength of the interactions between dislocations in pairs of slip systems.
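Eqs. (9) and (10) can be evaluated with a few lines of Python. In the sketch below the shear modulus, Burgers vector and interaction coefficients are Cu-like placeholder values (the actual coefficients are those of Table 1, obtained in refs. [37,38]), and the exponent convention 1/m in the flow rule is an assumption of this illustration:

```python
import numpy as np

MU, B = 42e9, 0.256e-9  # Cu-like shear modulus (Pa) and Burgers vector (m)

def crss(a, rho, mu=MU, b=B):
    """Eq. (10): tau_c[alpha] = mu*b*sqrt(sum_beta a[alpha,beta]*rho[beta])."""
    return mu * b * np.sqrt(a @ rho)

def slip_rate(tau, tau_c, gamma0=1e-3, m=0.05):
    """Eq. (9), power-law viscoplastic flow rule (exponent 1/m assumed)."""
    return gamma0 * np.sign(tau) * (np.abs(tau) / tau_c) ** (1.0 / m)

n = 12  # FCC slip systems
a = np.full((n, n), 0.1)   # placeholder interaction coefficients;
np.fill_diagonal(a, 0.12)  # the actual values are given in Table 1
rho = np.full(n, 1e11)     # initial density per system (m^-2)
tau_c = crss(a, rho)
print(f"tau_c = {tau_c[0]/1e6:.2f} MPa, "
      f"gamma_dot at tau = tau_c: {slip_rate(tau_c[0], tau_c[0]):.1e} 1/s")
```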
Recent 3D dislocation dynamics simulations [36], carried out in cylindrical single crystals with a diameter D in the range 0.25 µm ≤ D ≤ 20 µm, have shown that the traditional Taylor model in eq. (10) should be modified by adding another term of the form βμ/(D√ρ), where β = 1.76 × 10⁻³ is a constant and D the diameter of the cylinder. This new term accounts for the strength of the weakest dislocation source in the crystal and is relevant in the case of small crystals with low dislocation densities. In the range of crystal sizes (> 10 µm) and dislocation densities (> 10¹² m⁻²) analyzed in this investigation, the magnitude of this hardening contribution is negligible and, thus, this term was not included in eq. (10).
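A quick evaluation of this source-limited term shows how rapidly it decays as the dimensionless product D√ρ grows; the shear modulus used below is a typical value for Cu, taken here only for illustration:

```python
import numpy as np

BETA, MU = 1.76e-3, 42e9  # beta from ref. [36]; Cu-like shear modulus (Pa)

def source_term_MPa(D_um, rho):
    """Weakest-source contribution beta*mu/(D*sqrt(rho)), in MPa."""
    return BETA * MU / (D_um * 1e-6 * np.sqrt(rho)) / 1e6

for D, rho in [(10, 1.2e12), (20, 1.2e13), (80, 1.2e14)]:
    print(f"D = {D:3d} um, rho = {rho:.1e} 1/m^2 -> "
          f"{source_term_MPa(D, rho):.2f} MPa")
```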
FCC crystals have 12 {111}<110> slip systems, but only six independent coefficients are necessary to determine the 12 × 12 interaction matrix due to symmetry considerations [37]. Three of them account for different types of forest interactions between dislocations: the self-interaction of dislocations in the same slip system (same slip plane and Burgers vector), coplanar dislocations (same slip plane but different Burgers vector) and the collinear interaction (dislocations on different planes with the same Burgers vector). The remaining three coefficients stand for the effect of dislocation junctions in FCC crystals. They include the formation of glissile junctions between dislocations on intersecting planes with different Burgers vector (leading to a glissile dislocation), the Hirth lock formed by the intersection of two perfect dislocations with non-coplanar Burgers vectors that glide on intersecting planes, and the Lomer-Cottrell lock that develops between Shockley partial dislocations on two intersecting {111} planes [37,38]. The magnitude of the interaction coefficients for the different types of interactions in various lattices (FCC, HCP, BCC) can be determined by means of discrete dislocation dynamics simulations [39,40,41]. In the particular case of FCC crystals, they were obtained in [37,38] and can be found in Table 1.
The overall hardening of the crystal during deformation is controlled by the evolution of the dislocation density. According to Kocks and Mecking [29,30] and Teodosiu [42], the accumulation rate of dislocations in each slip system α, ρ̇^α, can be expressed as

ρ̇^α = (1/b) (1/Λ^α − 2 y_c ρ^α) |γ̇^α|.    (11)

The first term within the parenthesis expresses the dislocation accumulation rate and depends on the dislocation mean free path (MFP), Λ^α, which stands for the distance travelled by a dislocation segment before it is stopped by an obstacle. The second term within the parenthesis stands for the dislocation annihilation due to dynamic recovery and depends on the actual dislocation density ρ^α and on y_c, the critical annihilation distance for dislocations. This annihilation distance depends on the type of dislocation (either edge or screw) and on the deformation regime. Experimental observations in Cu single crystals [43,44] have indicated that the annihilation distance for edge dislocations is around 1.5 nm during stage I and stage II deformation. In the case of screw dislocations, the annihilation distances were much larger due to cross-slip: in the range of 10-15 nm during stage I and below 50 nm in stage III. Thus, an average value of y_c = 15 nm was selected.
The dislocation MFP can be expressed as [45,46]

1/Λ^α = √(Σ_{β≠α} ρ^β) / K,    (12)

where ρ^β is the dislocation density on a latent system β and K is a dimensionless constant. In the case of Cu, K = 6 was obtained from the experimental relationship between the dislocation MFP and the critical resolved shear stress for dislocation slip, assuming that the latter follows the Taylor model [45,47].
Experimental results [48] as well as dislocation dynamics simulations [11,49] have shown that the storage rate of dislocations increases as the grain size decreases, and this behavior can be explained following simple arguments [30,11]: a dislocation loop that sweeps a cubic grain of dimensions d × d × d produces a shear strain ∆γ ≈ b/d. The associated increase in dislocation density is ∆ρ ≈ 1/d², and thus ∆ρ/∆γ ∝ 1/(bd). Hence, the dislocation storage rate is not only governed by the dislocation MFP in the bulk but also by the grain size [50,30,11]. Moreover, dislocation dynamics simulations in polycrystals with different grain sizes [31] have shown that the dislocation density is not constant within the grain but increases as the distance to the grain boundary decreases. Based on these observations, Lefebvre [47] modified eq. (11) to include the distance from the material point considered to the grain boundary, d_b, according to

1/Λ^α = √(Σ_{β≠α} ρ^β) / K + 1/(K_s d_b),    (13)

where K_s is another dimensionless constant that controls the storage of dislocations at the grain boundary. Dislocation dynamics simulations of FCC crystals with different sizes have shown that K_s ≈ 5 [31]. Thus, this physically-based, phenomenological modification of the Kocks-Mecking law takes into account the increase in dislocation density near the grain boundaries, which naturally leads to a grain size effect.
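To make the interplay between eqs. (11)-(13) concrete, the sketch below integrates the density evolution with a simple explicit Euler scheme for a material point at a fixed distance d_b from a grain boundary. The parameter values (b, K = 6, K_s = 5, y_c = 15 nm) are those quoted above for Cu; the constant slip rate, the time step, and the use of the total density in place of the latent-system sum are illustrative simplifications.

```python
import numpy as np

def rho_dot(rho, gamma_dot, d_b, b=0.256e-9, K=6.0, K_s=5.0, y_c=15e-9):
    """Eqs. (11)-(13): density rate per slip system at distance d_b (m)
    from the grain boundary, for a common slip rate gamma_dot (1/s)."""
    # eq. (13); the latent-system sum of eq. (12) is approximated
    # here by the total dislocation density for simplicity
    inv_mfp = np.sqrt(rho.sum()) / K + 1.0 / (K_s * d_b)
    return (inv_mfp - 2.0 * y_c * rho) * np.abs(gamma_dot) / b  # eq. (11)

rho = np.full(12, 1e11)   # initial density per system (m^-2)
gdot, dt = 1e-3, 1.0      # slip rate (1/s) and time step (s)
for _ in range(100):      # accumulated slip = 0.1 per system
    rho = rho + rho_dot(rho, gdot, d_b=5e-6) * dt
print(f"total density near the boundary: {rho.sum():.2e} m^-2")
```

Repeating the integration with a larger d_b shows the storage term 1/(K_s d_b) fading away, i.e. the bulk Kocks-Mecking behavior is recovered far from the boundary.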
The strain hardening rate for the slip system α, τ̇_c^α, can be obtained by differentiation of eq. (10) with respect to time. Taking into account eqs. (12) and (13), this leads to

τ̇_c^α = Σ_β h_αβ |γ̇^β|,    (14)

where the hardening matrix h_αβ is expressed as

h_αβ = (μ a_αβ / 2√(Σ_γ a_αγ ρ^γ)) (1/Λ^β − 2 y_c ρ^β).    (15)

This constitutive model was implemented in Abaqus/Standard as a UMAT following the strategy presented in [51].
Polycrystal homogenization framework
The mechanical behavior of the polycrystal is obtained by means of the finite element simulation of the deformation of a Representative Volume Element (RVE) of the microstructure, following the standard procedures in computational homogenization [14,15,16,17]. The cubic RVE is made up of a regular mesh of N × N × N cubic finite elements or voxels (C3D8 elements in Abaqus, with 8 nodes at the cube corners and full integration).
The grain size distribution of the polycrystal followed a lognormal distribution characterized by the average grain size, d_g, and the corresponding standard deviation, d_SD. The grains were equiaxed and the microstructure in the RVE was generated using Dream3D [52] (Fig. 1). Most simulations were carried out in RVEs with random texture, but one set of analyses was carried out with the typical rolling texture of Cu to assess the influence of this factor on the Hall-Petch effect.
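For readers who want to reproduce the grain-size statistics without Dream3D, the snippet below samples equivalent grain diameters from a lognormal distribution with a prescribed mean and standard deviation; it is a stand-alone illustration, not the Dream3D generation pipeline used in the paper.

```python
import numpy as np

def sample_grain_diameters(d_g, d_sd, n, seed=0):
    """Draw n diameters from a lognormal law with mean d_g and
    standard deviation d_sd (both in the same units, e.g. microns)."""
    rng = np.random.default_rng(seed)
    sigma2 = np.log(1.0 + (d_sd / d_g) ** 2)  # variance of log(d)
    mu = np.log(d_g) - 0.5 * sigma2           # mean of log(d)
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=n)

d = sample_grain_diameters(d_g=20.0, d_sd=4.0, n=200)  # as in Fig. 2a)
print(f"mean = {d.mean():.1f} um, std = {d.std():.1f} um")
```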
The microstructure of the RVE was periodic along the three directions of the cube, and periodic boundary conditions were applied to the cube faces according to

u(x + L e_i) − u(x) = (F̄ − 1) · L e_i,    (16)

where L is the length of the cube side, u the displacement vector, F̄ the far-field macroscopic deformation gradient and e_i, i = 1, 2, 3 the orthonormal basis with corresponding coordinates x_i, i = 1, 2, 3.
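A minimal sketch of how eq. (16) is enforced in practice: for each pair of matching nodes on opposite faces, the difference of their displacements is constrained to the value returned below. The geometry and the applied F̄ are arbitrary examples.

```python
import numpy as np

def pbc_rhs(x_plus, x_minus, F_bar):
    """Right-hand side of eq. (16) for a pair of periodic nodes:
    u(x+) - u(x-) = (F_bar - 1) . (x+ - x-)."""
    return (F_bar - np.eye(3)) @ (x_plus - x_minus)

# Two matching nodes on opposite x-faces of a unit cube (L = 1),
# under a far-field 1% uniaxial stretch along e1:
F_bar = np.diag([1.01, 1.0, 1.0])
x_minus = np.array([0.0, 0.3, 0.7])
x_plus = np.array([1.0, 0.3, 0.7])
print(pbc_rhs(x_plus, x_minus, F_bar))  # -> [0.01, 0.0, 0.0]
```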
The far-field deformation gradient F̄ applied to the RVE is imposed by prescribing the displacements of three master nodes M_i corresponding to three different faces of the RVE,

u(M_i) = (F̄ − 1) · L e_i.    (17)

If some components of the far-field deformation gradient are not known a priori (mixed boundary conditions, as under uniaxial tension), the corresponding components of the effective stresses σ̄ are set instead. This is carried out by applying a nodal force P_j to the master node M_i and degree of freedom j according to

P_j = σ̄_ij A_i,    (18)

where A_i is the projection of the current area of the face perpendicular to e_i in this direction.
Finally, the macroscopic Cauchy stresses acting on any cube surface can be computed by dividing the reaction forces F_j of the master nodes M_i by the actual area A_i of the face perpendicular to that master node.
The constitutive equation developed in the previous section includes the distance to the nearest grain boundary for each slip system. This information was computed and stored at the beginning of the simulations for each slip system at each Gauss point. The deformation gradient in these simulations was small, and it was assumed that this distance to the nearest grain boundary did not change during the analysis.
Results and discussion
The computational homogenization strategy was used to analyze the influence of grain size on the tensile response of Cu polycrystals with average grain sizes in the range 10 to 80 µm. The elastic constants, strain rate sensitivity and reference strain rate of single crystal Cu are well known from the literature and are shown in Table 1. The parameters that control the hardening, storage and annihilation of dislocations during deformation were also determined for Cu using results in the literature from dislocation dynamics simulations and experiments, and they are included in Table 1. All the simulations presented below were carried out at a constant strain rate of 7.0 × 10⁻⁴ s⁻¹.
In order to check the critical size of the RVE, preliminary simulations were carried out using 27000 (N = 30) voxels and 50 grains, and 125000 (N = 50) voxels and 200 grains in the RVE. These numbers were selected so that the same number of voxels was used to discretize each grain in both models. The initial dislocation density in each slip system was 10¹¹ m⁻², leading to a total initial dislocation density ρ_i = 1.2 × 10¹² m⁻², and the grain size distribution (d_g = 20 µm, d_SD = 4 µm) is depicted in Fig. 2a). Three different grain size realizations with random texture were simulated for each discretization and the corresponding stress-strain curves are plotted in Fig. 2b). The differences in the stress-strain curves among the three realizations for each discretization are small (below 5% in the case of the finest discretization), as are the differences between the curves obtained with 27000 and 125000 voxels. These results indicate that the homogenized properties are independent of the RVE size and can be used to obtain the effective properties of the polycrystals, in agreement with previous results [56,51,57].
All the stress-strain curves reported below were obtained with RVEs including 125000 voxels and 200 grains. Each grain in the polycrystal was discretized with ≈ 625 voxels, and the voxel length was ≈ 1 µm in the case of a polycrystal with an average grain size of 10 µm, which is equivalent to the average distance between dislocations (1/√ρ) for a dislocation density of 10¹² m⁻². The finite element model assumes that the plastic deformation is homogeneously distributed in all the voxels within the grain, but this assumption may not represent adequately the inhomogeneous plastic deformation that occurs in small grains (below 10 µm) with low dislocation densities. Moreover, the standard Taylor model (see eq. (10)) is no longer valid below this grain size for dislocation densities < 10¹² m⁻², according to the dislocation dynamics simulations [36]. Thus, the minimum average grain size of the polycrystals in the simulations was 10 µm and the minimum value of the initial dislocation density was 1.2 × 10¹² m⁻².
Influence of the grain size on the flow stress of Cu polycrystals
The tensile behavior of polycrystals with d_g = 10, 20, 40 and 80 µm and d_SD = 0.2 d_g was computed for three initial values of the dislocation density, ρ_i = 1.2 × 10¹² m⁻², 1.2 × 10¹³ m⁻² and 1.2 × 10¹⁴ m⁻², and the corresponding stress-strain curves are plotted in Figs. 3a), b) and c), respectively. The results obtained neglecting the effect of dislocation storage at the grain boundaries (K_s = 0) are also plotted as broken lines in these figures. The stress-strain curves in this case were superposed, regardless of the grain size, because the constitutive equation does not include any size-dependent term. Thus, they were considered representative of a polycrystal with "infinite" grain size.
The initial flow stress of the polycrystals in Fig. 3 is independent of the grain size and depends only on the initial dislocation density. However, the initial strain hardening rate after yielding increases rapidly as the grain size decreases, following the experimental trends, due to the accumulation of dislocations at the grain boundaries. The strengthening induced by grain boundaries is associated with the region near the grain boundary, in which the storage of dislocations induced by the presence of the boundary reduces the actual dislocation MFP. The thickness of this region and the magnitude of the size effect mainly depend on K_s, which controls the storage of dislocations at the grain boundaries. Thus, it is obvious from these simulations that the grain size as well as the initial dislocation density are key parameters to take into account regarding the influence of grain boundaries on the strengthening of polycrystals.
The strain hardening rate drops very rapidly for applied strains > 2%, and this reduction is faster in the polycrystals with small grain size. This phenomenon is controlled by the annihilation of dislocations at the grain boundaries and depends on the critical distance for dislocation annihilation, y_c. Finally, the hardening rate seems to be independent of the grain size for applied strains > 4% (and very similar to that found in polycrystals with infinite grain size), indicating that the storage and annihilation of dislocations at the grain boundaries have reached a steady state which is independent of the grain size at this stage.
The influence of the grain size on the deformation pattern of the polycrystal can be assessed from Figs. 4, 5 and 6, in which the contour plots of the accumulated plastic slip on all the slip systems (Γ = Σ_α ∫ |γ̇^α| dt), the total dislocation density and the Von Mises stress are plotted, respectively, for polycrystals with average grain sizes of 10 µm, 40 µm and "infinite" grain size. In the case of polycrystals with "infinite" grain size, the accumulated plastic slip, the dislocation density and the Von Mises stress are fairly homogeneous throughout the microstructure, Figs. 4a), 5a) and 6a). Isolated "hot spots", in which the dislocation density and the Von Mises stress are higher, can be seen at a few grain boundaries as a result of the elastic anisotropy and of the incompatibility of the plastic deformation between grains with different orientation. Nevertheless, their contribution to the overall flow stress of the polycrystal is negligible. On the contrary, the plastic strain distribution becomes more heterogeneous throughout the microstructure as the grain size decreases, Figs. 4b) and c). Thus, plastic deformation tends to localize in large grains which are suitably oriented for slip, while it remains low in small grains because of the constraint of the grain boundaries. This is clearly shown in Fig. 5, in which the dislocation densities are plotted for the three cases. They are homogeneous and around 10¹⁴ m⁻² in most of the microstructure in the simulations with "infinite" grain size, Fig. 5a), and much higher around the grain boundaries in the other two cases, reaching values > 10¹⁵ m⁻² when the average grain size is around 10 µm, Fig. 5c). As a result, the stresses necessary to promote plastic deformation at the grain boundaries increase with respect to the stresses within the grains, and the contour plots of the Von Mises stresses show very clearly the network of grain boundaries in the polycrystal, Figs. 6b) and c). The volume of material affected by this strengthening mechanism (as well as the maximum stress values) increases as the average grain size decreases, leading to the grain size effect on the flow stress.
Figure 7: The experimental data for an applied strain of 0.5% can be found in [58], while those corresponding to an applied strain of 5% were obtained from [59]. The simulations were carried out using the parameters in Table 1 with an initial dislocation density of 1.2 × 10¹² m⁻².
Comparison with experiments
One critical test of the approach presented is its ability to provide a good estimation of the experimental evidence, taking into account that there are no adjustable parameters in the model. Li et al. [10] recently reviewed the experimental results available in the literature on the effect of grain size on the flow stress of polycrystalline Cu, and those from Armstrong et al. [58] for an applied strain of 0.5% and from Hansen and Ralph [59] for an applied strain of 5% could be directly compared with the simulations in this paper. They are shown in Figs. 7a) and b), in which the flow stress after 0.5% and 5% applied strain is plotted as a function of d_g^{−0.5} and d_g^{−1}, respectively. The polycrystal homogenization simulations were carried out using the parameters in Table 1 and an initial dislocation density of 1.2 × 10¹² m⁻², which corresponds to a well-annealed polycrystal.
It should be noted that the experimental data and the numerical predictions of the flow stress can be fitted by both d_g^{−0.5} and d_g^{−1} within the range of grain sizes and applied strains studied. There is no information in the experimental reports about the initial dislocation density, but the Cu polycrystals were well annealed, so values of ρ_i ≈ 10¹² m⁻² are reasonable. The numerical results obtained with this initial dislocation density are very close to the experimental data for grain sizes ≥ 20 µm, although they slightly overestimate the flow stress at an applied strain of 0.5%. This difference may be explained by the fact that the grain boundary strengthening model in the constitutive equation assumes that all grain boundaries store dislocations and does not take into account the orientation of the crystals on both sides of the grain boundary. However, the contribution of some grain boundaries to the storage of dislocations is minimal because slip transfer between neighbour grains can be easily accommodated. The anisotropy of grain boundaries from the viewpoint of dislocation transmission and storage is very important for applications where the relative grain boundary fraction is significant, e.g. ultra-fine-grained metals, thin films, micro-devices, and in low-symmetry crystals (because of the limited number of slip systems and the differences in the critical resolved shear stresses among the different systems), but it is very challenging from the simulation viewpoint [60]. The influence of this mechanism is more limited in FCC polycrystals and, thus, the model predictions for FCC Cu are in good agreement with the experimental data.
The model tends to overestimate the flow stress of the polycrystals with an average grain size of 10 µm, and this difference can be attributed to two factors. Firstly, the overestimation of the strengthening effect of the grain boundaries by neglecting easy slip transfer, as indicated above. Secondly, the finite element crystal plasticity model may not represent adequately the inhomogeneous plastic deformation that occurs in small grains (below 10 µm) with low dislocation densities, because the voxel size is equivalent to the average dislocation distance.
Scaling laws for the flow stress
As indicated in the introduction, the experimental results for the effect of grain size on the flow strength of polycrystals are often approximated by a generalized Hall-Petch equation,

σ_y = σ_∞ + C d_g^{−x},    (20)

where σ_y is the polycrystal flow stress at a given applied strain, σ_∞ the flow stress of the polycrystal with "infinite" grain size at the same applied strain, and C and x are material constants with 0 < x ≤ 1 [6]. It should be noted, however, that large discrepancies are found in the experimental literature in the value of x, even for nominally identical metals and alloys [6], and the simulations in this paper can provide valuable information about the range of validity of eq. (20). To this end, the results of the numerical simulations for σ_y − σ_∞ vs. the average grain size, d_g, are plotted in bilogarithmic coordinates in Figs. 8a), b) and c) for microstructures with initial dislocation densities of 1.2 × 10¹² m⁻², 1.2 × 10¹³ m⁻² and 1.2 × 10¹⁴ m⁻², respectively. The first value represents a well-annealed polycrystal with an initial yield stress of ≈ 10 MPa, while the third one represents a work-hardened material with an initial yield stress close to 100 MPa (Fig. 3). Data for three different values of the applied strain (1%, 2% and 5%) are plotted in each figure. The numerical results for ρ_i = 1.2 × 10¹² m⁻² and ρ_i = 1.2 × 10¹³ m⁻² (Figs. 8a and b) can be well approximated by eq. (20), with x ≈ 0.85 in the former and x ≈ 1 in the latter, for applied tensile strains of 1% and 2%. However, the linear relationship between log(σ_y − σ_∞) and log(d_g) begins to disappear for both initial values of the dislocation density for ε = 5%. The breakdown of the linearity expressed by eq. (20) in bilogarithmic coordinates is more obvious in the polycrystal with ρ_i = 1.2 × 10¹⁴ m⁻² (Fig. 8c), where the strengthening provided by the grain boundaries drops very rapidly for large grain sizes (> 40 µm), regardless of the applied strain.
The results in Fig. 8 show the competition between the two mechanisms that dictate the effect of grain boundaries on the mechanical properties of the polycrystal. Strengthening is induced by the storage of dislocations at grain boundaries, but this process is limited by the annihilation of dislocations around the grain boundaries when the dislocation densities reach very high values. The former process dominates when the initial dislocation density and the applied strain are small (ρ_i ≤ 10¹³ m⁻² and ε ≤ 2%, respectively) and the strengthening provided by the grain boundaries follows the generalized Hall-Petch law expressed by eq. (20). However, the annihilation of dislocations at the grain boundary becomes relevant for large applied strains (ε > 2%) and/or high values of the initial dislocation density (ρ_i > 10¹⁴ m⁻²), and the strengthening contribution of the grain boundaries becomes irrelevant for large grain sizes (> 40 µm), leading to a breakdown of the Hall-Petch effect. It should be noticed, however, that a good correlation between the numerical results and eq. (20) could still have been found if the data set had been limited to grain sizes ≤ 40 µm.
Thus, the simulations presented above indicate that the strengthening provided by grain boundaries in polycrystals does not depend only on the average grain size but also on the initial dislocation density. In the case of well-annealed polycrystals (with initial dislocation densities < 10¹³ m⁻²), the effect of grain size on the flow stress of FCC polycrystals can be represented by eq. (20), and the exponent x is closer to 1 than to the original value of 0.5 proposed by Hall and Petch, in agreement with experimental observations [7,8]. This scaling law breaks down, however, for FCC polycrystals with large initial dislocation densities (> 10¹⁴ m⁻²) and grain sizes larger than 40 µm. This result is in agreement with theoretical results [61] and dislocation dynamics simulations [36], which show that the strengthening associated with size effects in plasticity has to be expressed as

σ_y − σ_∞ = ∆(d_g √ρ),    (21)

where ∆(d_g √ρ) is a function of the ratio between two length scales: the physical length scale (d_g in the case of polycrystals) and the average dislocation spacing (1/√ρ). This hypothesis is checked in Fig. 9, in which the strengthening of the polycrystals due to the grain size, σ_y/σ_∞ − 1, is plotted vs. d_g√ρ_i, where ρ_i is the initial dislocation density. The simulation results for an applied strain of 1% or 2% are shown in Fig. 9a) and support this hypothesis. Regardless of the initial dislocation density, the strengthening due to the grain size can be approximated by an expression of the form

σ_y/σ_∞ − 1 = C (d_g √ρ_i)^{−x},    (22)

where C = 15.6 and x = 0.87 for ε = 1%, and C = 8.61 and x = 0.78 for ε = 2%.
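A fit of the form of eq. (22) is easy to reproduce with SciPy. The sketch below generates synthetic data from the constants reported for ε = 1% (C = 15.6, x = 0.87), perturbs them with noise, and recovers the parameters; the data points are illustrative, not the actual simulation results.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(z, C, x):
    """Eq. (22): relative strengthening = C * (d_g*sqrt(rho_i))**(-x)."""
    return C * z ** (-x)

# Synthetic values of the dimensionless parameter d_g*sqrt(rho_i):
z = np.array([11.0, 22.0, 44.0, 88.0, 220.0, 880.0])
rng = np.random.default_rng(1)
y = power_law(z, 15.6, 0.87) * rng.normal(1.0, 0.03, z.size)  # + noise

(C_fit, x_fit), _ = curve_fit(power_law, z, y, p0=(10.0, 0.8))
print(f"fitted C = {C_fit:.2f}, x = {x_fit:.2f}")  # ~15.6, ~0.87
```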
In the case of an applied strain of 5% (Fig. 9b), the strengthening provided by the grain size decreases as d_g√ρ_i increases, but the actual magnitude of σ_y/σ_∞ − 1 also depends on the initial dislocation density.
The results in Fig. 9a) point out that eq. (22) is able to capture the strengthening due to grain size for small applied strains, when dislocation storage at the grain boundaries is the dominant mechanism and the annihilation of dislocations at the grain boundaries is negligible. As the applied strain increases up to 5%, dislocation annihilation at the grain boundaries starts to play an important role that is not included in the dimensionless parameter d_g√ρ_i. Thus, the strength provided by grain boundaries still decreases as d_g√ρ_i increases at large applied strains, but the results no longer collapse onto a single line in bilogarithmic coordinates.
Effect of microstructural features: grain size distribution and texture
The polycrystal homogenization strategy allows the exploration of the influence of different microstructural factors on the strengthening due to the grain size, and two of them (grain size distribution and texture) are addressed in this section. RVEs with 200 grains and random texture were generated using the three different grain size distributions indicated in Fig. 10a). The average grain size, d_g, was constant and equal to 20 µm in all cases, but the standard deviation of the grain size distribution, d_SD, varied from 2 µm (a narrow distribution with d_SD = 0.1 d_g) to 8 µm (a wide distribution with d_SD = 0.4 d_g). The influence of the width of the grain size distribution on the stress-strain curve is plotted in Fig. 10b) for simulations carried out with an initial dislocation density of 1.2 × 10¹² m⁻². Two sets of simulations were carried out for each grain size distribution, with and without the effect of dislocation storage at the grain boundaries. The former are shown with solid lines and the latter with a broken line, because the grain size distribution did not influence the flow stress of the polycrystal when the dislocation storage at the grain boundaries was not included in the model. However, narrower grain size distributions led to higher strengths when this effect was accounted for in the simulations. The effect of the width of the grain size distribution was not large, but it was noticeable, and this is another factor - together with the initial dislocation density - that may be responsible for the large scatter found in the experimental data on the grain size effect.
The analysis of the influence of the initial texture on the grain size effect was carried out using an RVE with 200 grains. Representative {001}, {110} and {111} pole figures are plotted in Fig. 11a) for the 200 grains in the RVE, whose orientations were obtained from the experimental texture of a rolled sample using a Monte Carlo lottery to assign the grain orientations within the RVE. They show the typical texture of Cu with respect to the RD, TD and ND (rolling, transverse and normal directions of the sheet), respectively. The {111} pole figure clearly indicates that the material is highly textured and that the {111} planes lie parallel to the rolling plane, which is a common rolling texture developed in pure FCC metals [62,63].
The stress-strain curves obtained by computational homogenization along the rolling direction (RD), normal direction (ND) and transverse direction (TD) are plotted in Fig. 11b) for a grain size distribution characterized by d_g = 20 µm and d_SD = 4 µm and an initial dislocation density of 1.2 × 10¹² m⁻². The grains were assumed to be equiaxed (although it is known that this is not the case for rolled Cu) in order to account only for the grain orientation effect. Two simulations were carried out in each orientation with different texture realizations obtained by means of the Monte Carlo lottery. The corresponding stress-strain curves were very close in all cases, indicating that simulations with 200 grains were large enough to capture the effect of texture. In addition, polycrystal simulations in which the storage of dislocations at the grain boundaries was not accounted for are also included in this figure for the three orientations. The simulation results show the expected influence of the texture on the mechanical behavior: the polycrystal was slightly stronger along the RD and the softest response was found along the ND. However, the differences in the flow stress are small, as is typical of FCC alloys because of the large number of slip systems, which leads to a rather isotropic plastic response even in the presence of a strong texture. Storage of dislocations at the grain boundaries led to a similar size effect in the three orientations and, thus, texture did not influence the magnitude of the grain boundary strengthening.
Conclusions
The influence of grain size on the mechanical response of FCC polycrystals has been studied using a multiscale approach based on computational homogenization of the polycrystal behavior. The constitutive equation of the single crystals was given by a rate-dependent, physically-based crystal plasticity model in the context of finite strain plasticity. The critical resolved shear stress to produce plastic slip was obtained by a Taylor model in which the strengthening mechanisms due to dislocation/dislocation interactions and junctions were included. The generation and annihilation of dislocations in each slip system during deformation was given by the Kocks-Mecking model, which included an extra term to account for the dislocation storage at the grain boundaries. All the model parameters have a clear physical meaning and could be obtained from dislocation dynamics simulations or experiments in the case of Cu.
The results of the numerical simulations showed that the yield stress was controlled by the initial dislocation density and was independent of the grain size. However, the strain hardening rate showed a strong effect of the average grain size, which was mainly attributed to the storage of dislocations at the grain boundaries. In the absence of this mechanism, the effect of the grain size on the mechanical behavior due to the elastic anisotropy and to the plastic deformation incompatibility between neighbour grains was negligible. The model predictions effectively captured the experimental trends for the grain size effect in polycrystalline Cu, validating the multiscale computational homogenization strategy. Two main factors were found to determine the strengthening provided by grain boundaries in polycrystals: the average grain size and the initial dislocation density. Other microstructural factors (width of the grain size distribution, texture) played a secondary role in the magnitude of the size effect. It was found that the scaling law σ_y − σ_∞ ∝ d_g^{−x} was fulfilled for well-annealed polycrystals (with 0.85 ≤ x ≤ 1) but did not hold in polycrystals with large initial dislocation densities (> 10¹⁴ m⁻²) and grain sizes larger than 40 µm. These results explain the large differences in the literature in the proportionality constant and the exponent of the size-effect law, because very different values can be obtained as a function of the initial dislocation density or of the range of grain sizes explored. Finally, the simulation results showed that the contribution of the grain size to the strength followed a power-law function of the dimensionless parameter d_g√ρ_i for small values of the applied strain (< 2%), in agreement with previous theoretical considerations for size effects in plasticity [61].
Acknowledgments
This investigation was supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (Advanced Grant VIRMETAL, grant agreement No. 669141). Support from the Spanish Ministry of Economy and Competitiveness (DPI2015-67667) is also gratefully acknowledged. | 2018-01-16T08:47:13.000Z | 2018-01-16T00:00:00.000 | {
"year": 2018,
"sha1": "f2b75fb95bd8c52394db925fb1b6429fb75065e3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1801.05155",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f2b75fb95bd8c52394db925fb1b6429fb75065e3",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
231603240 | pes2o/s2orc | v3-fos-license | The history of LHCb
In this paper we describe the history of the LHCb experiment over the last three decades, and its remarkable successes and achievements. LHCb was conceived primarily as a b-physics experiment, dedicated to CP violation studies and measurements of very rare b decays; however, the tremendous potential for c-physics was also clear. At first data taking, the versatility of the experiment as a general-purpose detector in the forward region also became evident, with measurements achievable in areas such as electroweak physics, jets and searches for new particles in open states. These were facilitated by the excellent capability of the detector to identify muons and to reconstruct decay vertices close to the primary pp interaction region. By the end of the LHC Run 2 in 2018, before the accelerator paused for its second long shutdown, LHCb had measured the CKM quark mixing matrix elements and CP violation parameters to world-leading precision in the heavy-quark systems. The experiment had also measured many rare decays of b and c quark mesons and baryons down to below their Standard Model expectations, some at branching ratios of order 10⁻⁹. In addition, world knowledge of b and c spectroscopy had improved significantly through the discovery of many new resonances already anticipated in the quark model, as well as of new exotic four- and five-quark states.
Introduction
LHCb is an experiment at the CERN LHC, dedicated to the study of heavy flavours with large statistics. The resulting high precision makes possible the observation of tiny deviations from the predictions of the Standard Model (SM) in CP violation and rare phenomena, deviations which could hint at New Physics (NP) processes. LHCb started taking data in 2010. The so-called Run 1 commenced at an initial centre-of-mass energy of √s = 7 TeV, which was then increased to √s = 8 TeV, collecting an integrated luminosity of 3.23 fb⁻¹ until the end of 2012. After a two-year shutdown, LHC operation continued from 2015 to 2018 (Run 2), when the experiment took data at √s = 13 TeV, recording an integrated luminosity of ∼6 fb⁻¹. Throughout the running periods, LHCb collected and analysed an unprecedented number of b decays, and also enlarged its scope to include charm physics, W and Z measurements, jets and nuclear collisions.
In this paper the motivation for the LHCb experiment is described, recalling how its design was developed and evolved. The experiment's major achievements in terms of physics results are then summarised. Finally we discuss the plans for the future upgrade, in the period when the Super-KEKB collider will also operate.
The layout of this paper is as follows. The Introduction (Sect. 1) describes the status of the b-physics programme at the time when the LHCb detector was conceived, and provides an account of the evolution of its design. In Sect. 2 the basic elements of the detector are described, optimised for the requirements of heavy-flavour physics measurements, together with a description of the triggers. The measurements which were originally the major aims of the experiment, i.e. the CKM matrix and CP violation measurements and the very rare decays of the b-quark, are reviewed in Sects. 3 and 4, respectively. In Sect. 5, a summary of the wide-ranging results in b- and c-spectroscopy is presented. A review is given in Sect. 6 of the many physics areas beyond the original plans, such as results on jets, electroweak (EW) physics, and searches for new particles not associated with heavy flavours. In all these domains, LHCb has proved to be an extremely versatile detector, providing measurements complementary to those of the LHC General Purpose Detectors (GPDs). The paper concludes in Sect. 8 with a short description of the upgrade plans, which will ensure LHCb operation beyond 2030.
The paper presents the authors' subjective summary of LHCb's many major physics results, however the review inevitably omits a substantial number of important measurements. To this end, additional information can be found in the paper's exhaustive bibliography.
b physics at the end of the XXth century
In 1970 Glashow, Iliopoulos and Maiani postulated the existence of a fourth quark, charm, necessary to explain the smallness of K⁰ oscillations in the framework of the Standard Model of Weak Interactions [1]. The so-called Glashow-Iliopoulos-Maiani (GIM) mechanism generalized Cabibbo's idea of rotated weak currents to the quark model with two doublets, introducing a 2 × 2 mixing matrix written in terms of the Cabibbo angle. An experimental confirmation of the fourth-quark hypothesis came in 1974 with the discovery of the J/ψ [2,3], soon identified as a cc bound state, and followed by the discovery of open charm. In 1973 Kobayashi and Maskawa [4] proposed a third heavy-quark doublet in order to describe CP violation in the framework of the SM, thus generalizing the Cabibbo matrix to the 3 × 3 CKM (Cabibbo, Kobayashi, Maskawa) matrix. The discovery of the Υ in 1977, followed by that of the charged and neutral B-mesons in 1983 [5], proved the validity of their idea and held out the prospect of understanding CP violation quantitatively.
The level of CP violation in b quark decays was expected to be orders of magnitude larger than in the neutral K system, but unfortunately the relevant decay channels had only tiny branching fractions, so the lack of intense "b-quark sources" slowed the progress of beauty physics: in particular, in 1986 the PDG listed only five decay modes of the B⁰ and B±. However, in 1987 a new impetus came from the discovery at ARGUS of B⁰-B̄⁰ oscillations [6]. It was clear that the forthcoming LEP machine, designed for an entirely different purpose, and the symmetric CESR collider could not yield an exhaustive answer to all the questions related to the CKM hypothesis, despite their valuable contributions to many facets of b-physics [7,8]. When in 1989 P. Oddone proposed an asymmetric e⁺e⁻ collider [9] operating at the Υ(4S) energy with a luminosity above 10³³ cm⁻²s⁻¹, an intense period of accelerator studies ensued. This gave birth to the PEP-II and KEKB B Factories, which were approved in 1994 and started operating in 1998, soon reaching and surpassing their design luminosities.
Around this time, proponents pursued the idea of exploiting hadron beams to attack the problem of detecting CP violation in the b sector. The idea was that the large hadronic b production cross-section, combined with the high-intensity hadron beams at the existing and planned proton accelerators, would produce a number of bb pairs sufficient to gather evidence for CP violation at least in the so-called "golden channel" B0 → J/ψK0S. Achieving adequate background rejection, however, presented formidable experimental difficulties because of the small ratio of the b cross-section to the total hadronic cross-section at the √s values available in fixed-target and collider experiments.
In 1985, the fixed-target WA75 hybrid experiment at the CERN SPS [10] observed in emulsions the first partially reconstructed bb pair produced by an extracted pion beam of 350 GeV/c, confirming that the production cross-section at such low energies was very small. To circumvent this problem, simple experiments were proposed [11] for the CERN SPS and for the planned UNK machine [12] at Serpukhov, with a brute-force approach based on a high-intensity extracted beam and a minimalist detector designed to reconstruct the B0 → J/ψK0S decay and to provide flavour tagging (Fig. 1.1). There was no time-dependent analysis of the decay, since the beam-dump character of the experiments made the use of a microvertex detector impossible. There were exploratory fixed-target experiments, at CERN (WA92) [13] and at Fermilab (E653, E672, E771, E789) [14], which tried to observe and measure beauty events, albeit with little or no success.
In 1989 P. Schlein proposed a dedicated Beauty experiment exploiting the large b cross-section expected at the CERN SPS proton-antiproton collider ( √ s = 630 GeV) [15].
The authors of the proposal (P238) remarked that the bulk of bb production occurred at very small angles with respect to the beams, therefore making a compact experiment practical. The heart of the detector was a Silicon Microvertex Detector operating very close to the beam (1.5 mm), coupled to fast readout and track-reconstruction electronics. The Microvertex Detector provided the trigger by requiring that accepted events be inconsistent with a single vertex. P238 was not approved, but the CERN R&D Committee, established to support new detector developments in view of the LHC, approved in 1991 a test of the Microvertex Detector [16] in the SPS Collider. This proved very successful [17] and paved the way towards the future COBEX experimental proposal.

At the time when the e+e− colliders were approved, the HERA-B experiment had been conceived at DESY [18]. Approved in 1994, HERA-B exploited the 920 GeV HERA proton beam on a fixed target made of metallic wires, placed inside Roman Pots in the vacuum pipe and immersed in the beam halo. HERA-B was due to take data in 1998, one year before PEP-II and KEKB. The sophisticated apparatus consisted of a single-arm spectrometer, including a RICH, a large microvertex silicon detector and a high-resolution tracker, plus an electromagnetic calorimeter. HERA-B was designed primarily for the detection of the B0 → J/ψK0S decay, and its trigger was based on J/ψ reconstruction at the first level.
At √s = 40 GeV, the bb cross-section is about 10⁻⁶ of the total hadronic cross-section, hence HERA-B had to achieve a background rejection of around 10⁻¹¹ for the B0 → J/ψK0S decay. Data-taking conditions were similar to those of the current LHCb experiment (a 40 MHz interaction rate), as were the requirements of radiation resistance. HERA-B started data taking in 2000, but it soon emerged that the detector did not have sufficient rejection power against background and that the track reconstruction was not as efficient as expected. The large number of detector stations and their total thickness in terms of radiation lengths made secondary interactions an important issue for event reconstruction. Eventually HERA-B could not observe b events efficiently, but it taught several valuable lessons for any future experiment working in a crowded hadron-collision environment: the need for robust and efficient tracking and for a flexible trigger system able to adapt to harsher environments than expected, as well as the need to design the thinnest and lightest detector possible (in terms of radiation and interaction lengths).
Towards the LHC
Over the same period, the planned LHC and SSC machines, with their large energies, promised spectacular increases of the bb cross-section, thus making the task of background rejection much simpler. This was particularly true for operation in collider mode, but even in fixed-target mode ( √ s ≈ O(130) GeV) the b cross-section was expected to be a respectable 1 µb at the LHC and 2 µb at the SSC [19]. The large cross-section and corresponding good background rejection would facilitate a hadron B Factory, competitive and complementary with e + e − colliders, which could focus primarily on the measurement of CP violation and also allow the study of the spectrum of all b particles. Given the intrinsically "democratic" nature of hadronic production, the new hadron machines would also give access to large samples of B 0 s and of b baryons, something not possible at e + e − colliders operating at the Υ resonances.
Two schools of thought soon emerged: one pursuing a fixed-target (FT) strategy and the other based on a collider mode. The more favourable ratio of the bb to the total hadronic cross-section, about two orders of magnitude larger in collider mode, gave the collider mode a competitive advantage.
There was, however, a strong argument in favour of the FT concept: a collider B experiment could not operate at the design luminosity of the machine (10³³–10³⁴ cm⁻² s⁻¹) because of the significant number of overlapping interactions (pile-up) with multiple vertices. This would have required dedicated low-luminosity running, creating a potential conflict with the major experiments and considerably reducing the data-taking time. Later it was ascertained that the individual experiment luminosities could be tuned over a broad range with an appropriate design of the beam optics in the interaction regions, so this would become a moot point, but at the time it was a serious one.
Moreover, for the advocates of the FT approach, the advantage of the larger cross-section in collider mode was partially offset by the higher event multiplicity and by the shorter flight path of beauty particles. In addition, while the pT of the b decay products would to a good approximation be the same in the two modes, the pT of the other collision products would be smaller in FT mode, thus making the trigger simpler. Active silicon targets were also possible with an extracted beam, where the track of a charged b-hadron would be directly measured.
Finally, bb production kinematics is forward peaked in the centre-of-mass (CM) system, and the Lorentz boost of the centre-of-mass in FT mode (βγ > 60) concentrates the event products at smaller angles than in collider mode. It was therefore possible, in principle, to build a more compact detector, achieving a larger angular acceptance at lower cost. There was also the possibility of recycling components (in particular dipole magnets) from existing detectors. Cost was an important consideration, since a dedicated B experiment at the LHC (or SSC) was generally considered to be of secondary importance with respect to the general-purpose experiments.
The anticipated demise of the SSC led three groups to study and propose dedicated b experiments at the LHC. The three Letters of Intent (LoI) were presented in 1993. COBEX [20], an acronym for Collider Beauty Experiment, with P. Schlein as spokesperson, was a collider experiment with a backward-forward geometry. The other two proposals, Gajet [21] (spokesperson T. Nakada) and LHB [22] (spokesperson G. Carboni), were fixed-target experiments, the first, as the name suggests, using a gas-jet target, the latter exploiting an extracted beam. Since no traditional beam extraction was foreseen for the LHC, LHB (Large Hadron Beautyfactory) used a parasitic extraction technique based on channeling in a bent silicon crystal placed close to one of the circulating beams (Fig. 1.2). A dedicated R&D experiment, RD22, approved by the CERN DRDC [23] to test the feasibility of this idea at the SPS, demonstrated that high-efficiency beam extraction (larger than 10%) was possible [24]. The three proposed experiments were presented in their final form in 1994 at the Beauty '94 Conference [25]. It should also be noted that, by then, CDF had already contributed important b-physics results, offering a glimpse of what would later prove to be the extraordinary success of b physics at hadronic machines.

Following the submission of the LoIs, the LHC Committee (LHCC) considered the proposals in June 1994. One of its concerns was the small size of the three collaborations: the total number of physicists involved was barely one hundred. In addition, the Committee remarked that beam extraction by channeling could not be guaranteed at that stage, because accelerator experts feared possible interference with normal LHC operation. Finally, the following recommendation was issued: "The collider model approach has the greater potential in view of the very high rate of b production, the much better signal/background ratio and the possibility of exploring other physics in the forward direction at 14 TeV". The LHCC encouraged the three collaborations to join together and design a new experiment, incorporating attributes of each and operating in collider mode. The LHCC noted that its ambitious request was justified by the fact that, at the startup of the LHC, the experiment would not be an exploratory one, since CP violation would already have been observed "at HERA-B, FNAL or B Factories". The LHCC also issued guidelines requiring a number of issues to be addressed and solved. The three experiments combined into the new LHC-B Collaboration (as it was then named), which published its Letter of Intent in 1995 [26].
The new experiment derived several of its characteristics from the parent proposals: notably the concept of a silicon vertex detector in retractable pots, the calorimeter design, and the high p T first-level trigger.
In the transition to LHC-B, a number of proponents from the former three collaborations decided not to continue. This unfortunately included P. Schlein and his UCLA colleagues, who had pushed very strongly for the collider-mode idea with COBEX and who had been a driving force up to that point. The LHCC decision sent a strong signal to the high-energy physics community: CERN was prepared to give its strong support to one dedicated B experiment. This encouraged many physicists from institutions around the world to join the new collaboration over the subsequent years. T. Nakada, who was an instigator of the Gajet proposal, was elected LHC-B spokesperson. The 1995 Letter of Intent of LHC-B established the basis for the new detector design, which was refined in the following years until its approval in 1998. By that time, the name had changed from LHC-B to simply LHCb. Fig. 1.3 shows the detector as proposed in 1995; all the basic components shown would be part of the final detector, albeit with many refinements and optimisations that are described in the following Section.

Soon afterwards, a competing B-physics experiment, BTeV [27], was proposed to run at the Fermilab Tevatron, incorporating a single magnet, a double-arm spectrometer and a vertex trigger at the first level, in order to compensate for the smaller b production cross-section. Following the decision to shut down the Tevatron, BTeV was not approved; however, several of the experiment's innovative ideas were carried through to LHCb.
The LHCb Detector
The basic mechanism for heavy-quark production at the LHC is gluon-gluon fusion. The angular distribution of cc or bb pairs is peaked at small angles with respect to the beam-line, with a high correlation between the constituents of the pair. This allows the resulting hadrons to be detected with good acceptance in a rather limited solid angle. QCD calculations give cross-section values of σ(cc) ≈ 1.5 mb and σ(bb) ≈ 0.5 mb, respectively. The large event multiplicity requires a high granularity of the detector, together with minimal thickness in terms of radiation and interaction lengths to reduce secondary interactions.
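As an order-of-magnitude illustration, the quoted bb cross-section translates into an enormous production rate; the short Python sketch below assumes the 2 × 10³² cm⁻² s⁻¹ luminosity figure quoted later in the trigger discussion.

# Back-of-the-envelope bb production rate at LHCb
sigma_bb = 0.5e-3 * 1e-24     # 0.5 mb expressed in cm^2
lumi = 2e32                   # cm^-2 s^-1, typical LHCb running value (assumed)
print(sigma_bb * lumi)        # ~1e5 bb pairs produced per second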
The LHC-B LoI [26] presented a forward spectrometer with 400 mrad acceptance and a single large dipole magnet. The apparatus would be located inside the former DELPHI cavern at LEP with few modifications to the existing infrastructure but, for reasons of cost, the detector sacrificed half of the solid angle by being a single-arm spectrometer. The LoI design inherited important features from the three ancestor experiments and from the contemporary HERA-B, which had rate and radiation issues similar to those expected at the LHC. In contrast to the latter experiment, LHC-B had in addition a hadron calorimeter and a second (upstream) RICH with two radiators. Initially it was thought that an efficient tracking system in a harsh environment would require a large number of tracking stations, so, paralleling HERA-B, LHC-B had twelve tracking stations in the large-angle region. This number was reduced to ten in the Technical Proposal presented in 1998 [28], by which time the experiment had changed its name to LHCb.
The disappointing performance of HERA-B was largely ascribed to the large amount of material in the detector, which prompted the LHCb collaboration to perform a thorough review of the apparatus, with the aim of reducing material without sacrificing performance. A Technical Design Report submitted in 2003 presented the LHCb "Reoptimized" Detector [29]. This is the basis on which the experiment was eventually built; it is described in the following subsections.
Overview
The LHCb detector [30] is a forward spectrometer, shown in Fig. 2.1, installed at Interaction Point 8 of the LHC. A modification to the LHC optics, shifting the interaction point by about 11 m from the centre, allowed maximum use of the cavern space. The detector has a length of approximately 20 m and maximum transverse dimensions of about 6 × 5 m². The angular acceptance ranges from approximately 10 mrad to 300 mrad (Fig. 2.2) in the horizontal (magnetic-bending) plane, and from 10 mrad to 250 mrad in the vertical plane (Fig. 2.1). With this geometry the detector is able to reconstruct approximately 20% of all bb pairs produced.
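For readers more used to pseudorapidity, the angular acceptance quoted above can be converted with η = −ln tan(θ/2); a minimal sketch using the two angles quoted in the text:

import math

def eta(theta):
    # pseudorapidity for polar angle theta (radians)
    return -math.log(math.tan(theta / 2))

print(eta(0.300), eta(0.010))   # ~1.9 and ~5.3: roughly 2 < eta < 5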
To measure the momenta of charged particles, a dipole magnet producing a vertical magnetic field is used. It is a warm magnet providing an integrated field of 4 Tm, with saddle-shaped coils in a window-frame yoke, and with sloping poles in order to match the required detector acceptance. The design of the magnet allows for a level of fringe field inside the upstream Ring Imaging Cherenkov detector (RICH 1, see Sec. 2.4) of less than 2 mT whilst providing a residual field in the regions between the upstream tracking stations.
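As a hedged back-of-the-envelope check, the standard relation ∆pT [GeV/c] ≈ 0.3 |q| ∫B dl [T m] applied to the quoted 4 Tm gives the transverse kick imparted by the dipole, which is why very soft tracks are swept out of the downstream acceptance (compare the ~1.5 GeV/c threshold quoted below):

# Transverse momentum kick of the LHCb dipole for a unit-charge track
field_integral = 4.0            # T m, integrated field quoted above
print(0.3 * field_integral)     # ~1.2 GeV/c transverse kick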
The tracking system consists of a silicon VErtex LOcator detector (VELO) [31] surrounding the interaction region, and four planar tracking stations: the TT tracker upstream of the dipole magnet and three tracking stations T1–T3 downstream of the magnet [32,33]. The T1–T3 stations consist of an Inner Tracker (IT), located at the centre of the stations and surrounding the beam-pipe, and an Outer Tracker (OT) for the outer regions. A minimum momentum of around 1.5 GeV/c is required for a track to reach the downstream stations [34]. Particle identification (PID) is fundamental to the goals of the LHCb experiment, which require the separation of pions, kaons and protons produced in heavy-flavour decays. Accurate reconstruction of electrons and muons is crucial for flavour tagging. All aspects of PID are accomplished by a set of specialized detectors.
The PID system is based around two RICH detectors, designed to cover almost the full momentum range of tracks in LHCb. The upstream detector, RICH 1, covers the low momentum region from 2 to 60 GeV/c using aerogel (in Run 1 only) and C 4 F 10 radiators, whilst the downstream detector, RICH 2, covers the high momentum range from 15 GeV/c up to and beyond 100 GeV/c using a CF 4 radiator.
Two calorimeters, one electromagnetic (ECAL) and the other hadronic (HCAL), supplemented by the Scintillating Pad and Preshower detectors (SPD/PS) [35], provide identification of electrons, photons and hadrons and a measurement of their energy. This measurement is used at the trigger level to select candidates on the basis of their transverse energy. Muons play a crucial role in many of LHCb's measurements because of the cleanliness of their signature. Their identification is achieved by five muon stations (M1–M5), interspersed with iron filters. The muon system also supplies measurements of muon transverse momenta for the trigger.
The Vertex Locator
The role of the VELO is to measure the impact parameters of all tracks relative to the primary vertex (PV), to reconstruct the production points and decay vertices of hadrons containing b- and c-quarks, and to allow precision measurements of their mean lifetimes. The subdetector accepts particles with pseudorapidities in the range 1.6 < η < 4.9 and with PVs within |z| < 10.6 cm of the nominal collision point along the beam direction. The VELO is split into two halves surrounding the beam-pipe, each containing 21 modules. Each module is made up of two silicon half discs of 300 µm thickness, one with strips in the radial (r) coordinate, the other in the azimuthal (φ) coordinate. This cylindrical geometry allows fast track and vertex reconstruction at the second stage of the trigger. The strip segmentation is such as to limit the highest strip occupancy to less than 1.1%.
The VELO is positioned, with an accuracy better than 4 µm, at the closest possible distance from the beam, about 7 mm during data taking. The sensors operate in a so-called Roman-pot configuration, located inside a secondary vacuum at a pressure of less than 2 × 10⁻⁷ mbar, separated from the primary LHC vacuum. The sensors are retracted during beam injection and are quickly moved in for physics operation once the LHC beams are stable.
The vessel containing the silicon discs and the front-end electronics (RF-box) has aluminium walls of 300 µm thickness to minimize multiple scattering. The average material budget of the detector for tracks in the LHCb acceptance is 0.22 X0. In order to minimize radiation damage and to dissipate the produced heat, a cooling system keeps the temperature between −10 and 0 °C. Fig. 2.3 summarizes the VELO performance in terms of impact-parameter and decay-time resolution [31].
The TT and Downstream Tracking System
Following the VELO, the tracking system is composed of the TT station, located between RICH 1 and the magnet, and three stations (T1, T2, T3) downstream of the magnet. The TT is composed of four detection layers grouped in pairs, called TTa and TTb, spaced by 30 cm. Each layer consists of silicon microstrip planar modules covering a rectangular area of 150 cm × 130 cm (width times height), matching the LHCb acceptance of 300 mrad in the horizontal plane and 250 mrad in the vertical. The strips of the first and fourth layers are vertical and measure the bending (x) coordinate, whilst the second and third layers have stereo angles of +5° and −5°, respectively.
Tracking stations T1–T3 each consist of an inner part (IT) surrounding the beam pipe and an outer part (OT) beyond it. Each IT station consists of four overlapping silicon layers, two rotated by stereo angles of ±5° and two aligned with the vertical (y) axis. Each layer is made up of four independent modules placed around the beam pipe, covering an area of about 120 × 40 cm², as shown in Fig. 2. The spatial resolutions of both the TT and the IT are approximately 50 µm per hit, with strip pitches of about 200 µm. The hit occupancies vary between 1.9% for the inner sectors and 0.2% for the outermost modules. To minimize radiation damage, the sensors operate at a temperature of 5 °C.
The OT is a drift detector [32] consisting of straw tubes with internal diameters of 4.9 mm, each filled with an Ar/CO2 gas mixture in a 70:30 ratio. The straws provide a maximum drift time of 35 ns and a spatial resolution of 205 µm, with a maximum straw occupancy of 17%. Each of the three stations is made of four modules, shown schematically in Fig. 2.5; a picture of the assembled OT is shown in Fig. 2.6. In the first and fourth modules the straw tubes are aligned with the vertical axis, while the second and third modules have stereo angles of ±5°. The total active area is about 5.97 × 4.85 m², covering the full LHCb acceptance.
The overall tracking efficiency for "long" tracks (i.e. those measured in all the tracking detectors, including the VELO) is greater than 96% for 5 < p < 200 GeV/c. The momentum resolution ∆p/p is 0.5% at low momentum, increasing to 1.1% at 240 GeV/c. The mass resolution for the J/ψ resonance is 14.3 MeV/c².
The RICH system
The role of the RICH system [36] is to provide π/K/p discrimination for LHCb, which is essential for most CP-violation studies, background rejection and flavour tagging. The momentum range which contains 90% of the kaons, pions and protons from B-meson decays is between 2 and 150 GeV/c and, to achieve this separation, two Cherenkov detectors, RICH 1 and RICH 2, are employed.
The RICH 1 detector differentiates particles with low and intermediate momenta, from 1 to ∼60 GeV/c. It is located close to the interaction region, upstream of the magnet, and covers the acceptance from ±25 mrad to ±300 mrad (horizontal plane) and to ±250 mrad (vertical plane). RICH 1 initially contained two different radiator materials: an aerogel layer 5 cm thick with refractive index n = 1.03 and a C4F10 gas layer of length 85 cm with refractive index n = 1.0014. Aerogel can provide π/K discrimination from about 1 up to 10 GeV/c; however, it was removed for Run 2 because of occupancy problems. The C4F10 radiator extends positive π/K identification from about 10 GeV/c to 60 GeV/c, and π/K discrimination below 10 GeV/c is still possible by operating the RICH in kaon-veto mode. RICH 2 has a smaller angular acceptance, from ±15 mrad to ±120 mrad (horizontal plane) and to ±100 mrad (vertical plane), and covers the region where high-momentum particles are most abundant. It is located downstream of the magnet, between T3 and the first muon station M1. RICH 2 uses a CF4 gas radiator with refractive index n = 1.00046.
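The quoted momentum ranges follow directly from the Cherenkov threshold condition β > 1/n, i.e. p_thr = m/√(n² − 1). A short sketch using the refractive indices given above, with PDG particle masses:

import math

MASSES = {"pi": 0.1396, "K": 0.4937, "p": 0.9383}   # GeV/c^2 (PDG)

def p_threshold(n, m):
    # Cherenkov light requires beta > 1/n, i.e. p > m / sqrt(n^2 - 1)
    return m / math.sqrt(n * n - 1)

for radiator, n in [("aerogel", 1.03), ("C4F10", 1.0014), ("CF4", 1.00046)]:
    thr = {part: round(p_threshold(n, m), 1) for part, m in MASSES.items()}
    print(radiator, thr)   # kaon thresholds: ~2.0, ~9.3 and ~16.3 GeV/c

The kaon thresholds of roughly 2, 9 and 16 GeV/c reproduce the momentum ranges quoted for the three radiators.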
In both detectors, Cherenkov photons are detected by a combined system of plane and spherical mirrors to focus photons onto a pair of photo-detector planes, where Hybrid Photon Detectors are employed to detect the Cherenkov rings. The photo-detectors are located outside the detector acceptance, in regions of low magnetic field and relatively low radiation.
The Cherenkov angles for the three RICH radiators and for different particle species are shown as a function of momentum in Fig. 2.7 (left). A measurement of the RICH performance in LHCb data is shown for the two gaseous radiators in Fig. 2.7 (right) [36].

Calorimeters

The SPD and PS are used at the trigger level and offline, in association with the ECAL, to indicate the presence of electrons, photons and neutral pions. The detectors have two plastic scintillator layers separated by a 15 mm thick lead plate in which electrons and photons can radiate; the downstream scintillator then samples the radiated energy. The light from the scintillators is sent to photomultipliers by wavelength-shifting (WLS) optical fibres.
The ECAL employs the Shashlik technology, in which independent modules are constructed from alternating scintillating tiles and lead plates (see Fig. 2.9). Each ECAL module has 66 such layers, each consisting of 2 mm of lead followed by 4 mm of scintillator material. The ECAL also uses WLS optical fibres to guide the light from the detector to photomultipliers placed on the back face of each module. The energy resolution achieved [35] is of the form σ(E)/E = a/√E ⊕ b, where E is the electron energy expressed in GeV and a and b denote the stochastic and constant terms.

The HCAL is also a sampling calorimeter, made of iron absorber with scintillating tiles as the active material. The innovative feature of this sampling structure is the orientation of the scintillating material: the tiles run parallel to the beam axis. In the lateral direction, tiles are spaced by 1 cm of iron, while longitudinally the lengths of the tiles and iron spacers correspond to the hadronic interaction length λI ≈ 20 cm in steel. Light is collected by WLS optical fibres running along the detector towards the back side, where the photomultiplier tubes are located (see Fig. 2.9). The HCAL is used to measure the transverse energy of hadronic showers for the Level-0 trigger and to improve the electron/hadron separation at high momentum. The energy resolution has the same functional form, with correspondingly larger stochastic and constant terms, where E is the hadron energy in GeV.
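To illustrate how such a resolution function is used, the sketch below combines a stochastic and a constant term in quadrature; the 10% and 1% values are placeholders for illustration, not the measured LHCb terms:

import math

def calo_res(E, a, b):
    # sigma(E)/E = a/sqrt(E) (+) b, combined in quadrature; E in GeV
    return math.hypot(a / math.sqrt(E), b)

print(calo_res(10.0, a=0.10, b=0.01))   # ~0.033, i.e. ~3.3% at 10 GeV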
The Muon System
The Muon System consists of five stations, M1–M5, of rectangular shape. The complete system is made up of 1368 Multi-Wire Proportional Chambers, supplemented by 12 triple-GEM chambers in the inner region of the first station to cope with the very high particle rate there. The chambers employ a variety of readouts, optimized for a precise pT measurement for the trigger. The complete system has an acceptance in the bending plane from 20 mrad to 306 mrad, and in the non-bending plane from 16 mrad to 258 mrad. This results in a total acceptance of about 20% for muons from semileptonic inclusive b decays. The M1 station is located in front of the calorimeters and is used to improve the pT measurement for the trigger. The geometry of the five stations is projective: all the transverse dimensions scale with the distance from the interaction point. Stations M2–M5 are placed downstream of the calorimeters and are interleaved with 80 cm thick iron absorbers. The total absorption thickness, calorimeters included, is about 20 interaction lengths, so that the minimum momentum for muons crossing the five stations is about 6 GeV/c.
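A rough consistency check of the quoted 6 GeV/c threshold, assuming the PDG minimum-ionising energy loss in iron of ~11.4 MeV/cm; this ignores the detailed material map and is illustrative only:

# Range-out estimate for the muon filter (illustrative assumptions)
dEdx_iron = 11.4e-3            # GeV per cm of iron (PDG, minimum ionising)
iron_cm = 3 * 80               # three 80 cm absorbers between M2 and M5
print(dEdx_iron * iron_cm)     # ~2.7 GeV lost in the filter iron alone
# The calorimeter material upstream adds a comparable loss, roughly
# consistent with the ~6 GeV/c threshold quoted above.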
Each muon station is designed to achieve an efficiency above 99% in a 20 ns time window with a noise rate below 1 kHz per physical channel, as described in [37]. To reach such an efficiency, four chamber layers per station are used in M2–M5 (two layers in M1). The time resolution is achieved with a fast Ar/CO2/CF4 gas mixture in the ratio 40:55:5; a ratio of 45:15:40 is employed in the triple-GEM chambers.
The trigger
Even with the relatively large bb cross-section at LHC energies, only approximately 1% of visible pp interactions result in a bb event. Moreover, only about 15% of those events produce at least one b-hadron with all decay products passing within the acceptance of the spectrometer. The branching fractions of decays used to study CP violation are typically less than 10⁻³. Further reductions are unavoidable in the offline selection, where stringent cuts must be applied to enhance signal over background. The purpose of the LHCb trigger is therefore to achieve the highest efficiency for the events later selected in the offline analysis while drastically rejecting most of the uninteresting background events. To achieve this goal, the trigger uses information from all LHCb sub-detectors.
The trigger is organised in two different levels: the Level-0 (L0) trigger based on custom electronic boards, and the High-Level Trigger (HLT), implemented in a computer farm. Level-0 uses the information from the calorimeter and muon systems, performing a selection in order to reduce the event rate from 40 MHz to below 1 MHz, which is the maximum frequency allowed to read out the entire detector. The HLT is a software application running on a processor farm that further reduces the rate of events in the kHz range for storage (see Fig. 2.10).
The HLT has evolved significantly over time: from the original design in the LHCb Technical Proposal (TP) [38] in 1998, to the trigger design in the Technical Design Report (TDR) [39] in 2003, to the actual implementation in Run 1 (2010-2012) [40], and finally to the additional features introduced during Run 2 (2015-2018) [41]. In the TP it was assumed that a first HLT trigger level (L1) would reduce the 1 MHz input rate to a 40 kHz output rate with a variable latency of less than 256 µs, using coarse information from the vertex detector to reconstruct vertices and tracks with no momentum information (the VELO r-φ geometry was designed for this purpose). A second HLT trigger level (L2) was fashioned to extrapolate VELO tracks into the magnetic field, to the tracking stations downstream of the magnet, and reduce the output rate to 5 kHz with an average latency of 10 ms. Finally, a third level (L3) would implement the full event reconstruction and a set of exclusive selections to bring the rate down to 200 Hz.
By the time of the trigger TDR in 2003, it had become clear that the LHC was not going to start before the end of the decade, when much more powerful processing units would be available. In addition, a series of test-beam and detailed simulation studies convinced the collaboration of the need to have momentum information at the first stage of the HLT. Therefore a new tracking station just upstream of the magnet was introduced (the TT station). In addition, a shield which had been protecting RICH 1 from stray magnetic fields was removed to allow a rough estimate of the momentum of tracks reconstructed between the VELO and TT stations. The software trigger then had two levels: Level-1, able to reduce the output rate to 40 kHz using L0, VELO and TT information with an average latency of 1 ms, and the HLT, to reduce the output rate to 200 Hz with a combination of inclusive and exclusive selections. Between the time of the trigger TDR and the first physics run (Run 1), the interest in a more capable HLT for charm physics (the cc production cross-section being a factor 20 larger than that for bb) and in a much more robust system convinced the collaboration to push for much more inclusive selections in the final trigger stage and a much larger trigger output rate (3–5 kHz). This implied a complete redefinition of the offline data-processing model. Furthermore, it had been assumed that the LHC would operate with a 25 ns bunch separation, limiting the number of overlapping events to a mean of µ ≈ 0.4 per bunch crossing at a luminosity of 2 × 10³² cm⁻² s⁻¹. When, from 2011, a separation of 50 ns was adopted for early LHC operation, the experiment decided to run at µ ≈ 1.4 to compensate for the lower number of bunches. The HLT therefore had to adapt to running conditions rather different from those first assumed, which was made possible by its highly flexible design.
After the success of the LHCb trigger performance in Run 1, the good understanding of the trigger reconstruction allowed the introduction of the "real-time analysis" concept during Run 2. After the first HLT trigger level (HLT1), events are buffered to disk storage in the online system. This is done for two purposes: firstly, events can be processed further during inter-fill periods, and secondly, the detector can be calibrated and aligned run-by-run before the HLT2 stage. Once the detector is calibrated and aligned, events are passed to HLT2, where a full event reconstruction of "offline quality" is performed. This allows a wide range of inclusive and exclusive final states to trigger and obviates the need for further offline processing. In addition, new techniques to reduce the amount of information saved per event [42] allowed the output rate to be increased significantly, to 10-15 kHz as in Fig. 2.10, while the output of HLT1 could be increased to O(110 kHz). The decrease in requests for offline reconstruction also helped to mitigate the pressure on the offline computing model.
Level-0 hardware trigger
The L0 trigger is divided into three independent components: the L0-Calorimeter trigger, the L0-Muon trigger and the L0-PileUp trigger. The latter is used to reject multiple visible interactions in a bunch crossing by means of the dedicated Pile-Up System detector housed in the VELO. The first two components are briefly described below.
The L0-Calorimeter part of the trigger obtains information from the SPD, PS, ECAL and HCAL subdetectors and computes the transverse energy deposited by incident particles, E_T = E0 sin θ, where E0 is the energy of the particle and θ is the polar angle given by the cell hit in the detector. Together with the energy information, the total number of hits in the SPD (the SPD multiplicity) is also determined, in order to veto high-multiplicity events that would take too large a fraction of the available processing time in the HLT. From the calorimeter information, three types of candidates are built and selected according to specific E_T criteria: i) hadron candidates (L0Hadron); ii) photon candidates (L0Photon); and iii) electron candidates (L0Electron).
The L0-Muon part of the trigger requires a muon candidate to have a hit in all five muon stations. The L0 muon processor boards select the two highest-pT muon tracks in each quadrant of the muon system, with a maximum of eight candidates. The trigger sets a single threshold either on the largest muon pT (L0 muon trigger) or on the product of the largest and second-largest (L0 dimuon trigger). Events with SPD multiplicity > 600 are excluded in the L0 muon trigger in order to limit the track multiplicity. This limit is raised to 900 in the L0 dimuon trigger, at the expense of a small increase in rate.
The total output rate of the L0 trigger is limited to 1 MHz, which is the maximum rate accepted by the HLT1. Such an output rate consists of about 400 kHz of muon triggers, about 450 kHz of hadron triggers and about 150 kHz of electron and photon triggers (the individual triggers have an overlap of about 10%).
High Level Trigger
Data from L0 are sent to the Event Filter computer Farm (EFF) which runs the HLT algorithms. The HLT is a software application whose 29500 instances run on the EFF. Each instance is made up of independently operating trigger lines; each line consists of selection parameters for a specific class of events.
The HLT is divided into two stages. The first stage (HLT1) processes the full L0 output rate and uses a partial event reconstruction to reduce the rate to about 110 kHz. The second stage (HLT2) reduces the rate to about 12.5 kHz, performing a more complete event reconstruction [41].
HLT1 reconstructs the trajectories of charged particles traversing the full LHCb tracking system which have a p T larger than 500 MeV. The hits in the VELO are combined to form straight-line tracks loosely pointing towards the beam line. Next, at least three hits in the TT are required in a small region around a straight-line extrapolation from the VELO. The TT is located in the fringe field of the LHCb dipole magnet, which allows the momentum to be determined with a relative resolution of about 20%, and this estimate is used to reject low p T tracks. Tracks are then extrapolated to the T-stations downstream of the magnet. The search window in the IT and OT is defined by the maximum possible deflection of charged particles with p T larger than 500 MeV. The search is also restricted to one side of the straight-line extrapolation by the charge estimate of the track. Subsequently, all tracks are fitted with a Kalman filter to obtain the optimal parameter estimate using a simplified geometry description of the LHCb detector. The set of fitted VELO tracks is re-used to determine the positions of the PVs.
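To illustrate the fitting technique named above (not LHCb's actual implementation, which propagates through the field map and adds multiple-scattering noise), a minimal Kalman filter for a straight-line track in one projection might look as follows:

import numpy as np

# Toy Kalman filter: state = (position y, slope), hits at planes z_i.
def kalman_line_fit(zs, ys, sigma_y=0.05):
    x = np.array([ys[0], 0.0])            # initial state estimate
    P = np.diag([sigma_y**2, 1.0])        # loose initial covariance
    H = np.array([[1.0, 0.0]])            # we measure y only
    R = np.array([[sigma_y**2]])          # hit resolution
    for i in range(1, len(zs)):
        dz = zs[i] - zs[i - 1]
        F = np.array([[1.0, dz], [0.0, 1.0]])  # transport to next plane
        x = F @ x                               # predict state
        P = F @ P @ F.T                         # predict covariance
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K.ravel() * (ys[i] - x[0])      # update with the residual
        P = (np.eye(2) - K @ H) @ P
    return x, P

state, cov = kalman_line_fit([0, 1, 2, 3], [0.00, 0.11, 0.19, 0.31])
print(state)   # fitted (y, slope) at the last plane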
Tight timing constraints in HLT1 mean that most particle-identification algorithms cannot be executed. The exception is muon identification due to its clean signature. Hits in the muon stations are searched for in momentum-dependent regions of interest around the track extrapolations. Tracks with p < 3 GeV cannot be identified as muons, as they would not be able to reach the muon detectors.
HLT1 has two inclusive trigger lines which select events containing a particle whose decay vertex is displaced from the PV: a line which selects a single displaced track with high p T , and a line which selects a displaced two-track vertex with high p T . Both lines start by selecting good quality tracks that are inconsistent with originating from the PV. The single-track trigger then selects events based on a hyperbolic requirement in the 2D plane of the track displacement and p T . The two-track displaced vertex trigger selects events based on a multivariate discriminant whose input variables are the vertex-fit quality, the vertex displacement, the scalar sum of the p T of the two tracks and the displacement of the tracks making up the vertex. The two-track line is more efficient at low p T , whereas the single track line performs better at high p T , such that in combination they provide high efficiency over the full p T range.
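A toy version of the single-track selection, with invented thresholds, illustrates how a hyperbolic cut trades displacement against pT:

def passes_single_track(ip_chi2, pt):
    # Invented thresholds: require both displacement and pT, with a
    # hyperbolic trade-off between them (not LHCb's tuned values).
    return ip_chi2 > 1.0 and pt > 1.0 and ip_chi2 * pt > 4.0

print(passes_single_track(ip_chi2=8.0, pt=1.2))   # True: very displaced, modest pT
print(passes_single_track(ip_chi2=1.5, pt=1.2))   # False: product below the hyperbola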
The HLT1 muon lines select muonic decays of b and c hadrons, as well as muons originating from decays of W and Z bosons. There are four main lines: one that selects a single displaced muon with high pT; a second single-muon line that selects very high pT muons without displacement, for electroweak physics; a third that selects a dimuon pair compatible with originating from the decay of a charmonium or bottomonium resonance or from Drell-Yan production; and a fourth that selects displaced dimuons with no requirement on the dimuon mass. During Run 2, typically about 80 kHz were allocated to the inclusive HLT1 lines, while about 20 kHz went to the muon lines. The rest of the HLT1 output is dedicated to special low-multiplicity triggers and calibration trigger lines.
HLT2 can perform the full event reconstruction, since the output of HLT1 is buffered. The full event reconstruction consists of three major steps: the track reconstruction of charged particles, the reconstruction of neutral particles, and particle identification. The HLT2 track reconstruction exploits the full information from the tracking sub-detectors, performing additional steps of the pattern recognition which are not possible in HLT1. Tracks with pT larger than 80 MeV are reconstructed in HLT2, without the requirement of hits in the TT station. This avoids inefficiencies due to the TT acceptance and is crucial for part of the charm and kaon physics programme. In addition, tracks produced by long-lived resonances that decay outside the VELO are reconstructed using T-station segments that are extrapolated backwards through the magnetic field and combined with hits in the TT. Similarly, the most precise neutral-cluster reconstruction algorithms are executed. Finally, in addition to the muon identification available in HLT1, HLT2 exploits the full particle identification from the RICH detectors and the calorimeter system.
The HLT2 inclusive b-hadron trigger lines look for a two-, three-, or four-track vertex with sizeable p T , significant displacement from the PV, and a topology compatible with the decay of a b-hadron, using a multivariate discriminant. Whenever one or more tracks are identified as muons, the requirements on the discriminant are relaxed to increase the efficiency. As in the case of HLT1, several muon lines are used to select muonic decays of b and c hadrons and of W and Z bosons. However in HLT2, the muon reconstruction is identical to the offline procedure, having access to exactly the same information. During Run 2, typically about 3 kHz of the trigger rate is from the inclusive b-hadron trigger while the muon lines take about 1 kHz. A large fraction of the trigger bandwidth (2-4 kHz) is allocated to exclusive selection of charm decays, where a reduced amount of information is saved per event. The rest of the trigger bandwidth is due to other special triggers and calibration trigger lines.
LHCb contributions to CKM measurements and CP violation
The violation of the combined operation of charge conjugation and parity, CP, was first observed in 1964 in decays of neutral kaons [43]. The BaBar [44] and Belle [45] B Factory experiments and the CDF experiment [46] established CP violation in the decays of neutral B0 mesons. LHCb now extends these measurements to much greater precision, and also probes the Bs system, which is vital for exploring the full range of CP-violation measurements.
In the Standard Model, the Cabibbo-Kobayashi-Maskawa (CKM) unitary matrix [4,47], V_CKM, describes the electroweak coupling strength V_ij of the W boson to quarks i and j:

V_CKM = | V_ud  V_us  V_ub |
        | V_cd  V_cs  V_cb |        (3.1)
        | V_td  V_ts  V_tb |

CP is violated in the Standard Model if any element of the CKM matrix is complex. The parametrisation of the CKM matrix due to Wolfenstein [48], in terms of the four Standard Model parameters (λ, A, ρ, η), reads at lowest non-trivial order

V_CKM = | 1 − λ²/2            λ           Aλ³(ρ − iη) |
        | −λ                  1 − λ²/2    Aλ²         |  + O(λ⁴)        (3.2)
        | Aλ³(1 − ρ − iη)     −Aλ²        1           |

The expansion parameter, λ, equal to the sine of the Cabibbo angle, has the value |V_us| = 0.22 [49]; the expansion of Eq. 3.2 can be extended up to terms of order λ⁵.
The unitarity of the CKM matrix leads to six orthogonality conditions between any pair of columns or any pair of rows of the matrix, each of which can be represented as a triangle in the complex plane. The relations of interest for CP violation are:

V_ud V_ub* + V_cd V_cb* + V_td V_tb* = 0   (the B0 "unitarity triangle"),
V_us V_ub* + V_cs V_cb* + V_ts V_tb* = 0   (the Bs triangle),
V_ud V_cd* + V_us V_cs* + V_ub V_cb* = 0   (the charm triangle).

The unitarity triangle has sides whose lengths are all of the same order in λ, namely O(λ³), which implies large CP asymmetries in B0 and B± decays. The Bs triangle has two sides of O(λ²) and the third of O(λ⁴); hence CP violation in Bs mixing is significantly smaller than in the B0 system. Moreover, the charm triangle has two sides of O(λ) and the third of O(λ⁵), so CP violation in the charm system is expected to be extremely small. Note that all three triangles have equal area [50].
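A quick numerical check of these orthogonality relations using the Wolfenstein form of Eq. 3.2; the parameter values below are approximate and purely illustrative:

import numpy as np

lam, A, rho, eta = 0.225, 0.82, 0.16, 0.35   # illustrative values

V = np.array([
    [1 - lam**2 / 2,                    lam,            A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2, A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,    1.0],
])

# Orthogonality of the d and b columns: the B0 unitarity triangle.
triangle = sum(V[i, 0] * np.conj(V[i, 2]) for i in range(3))
print(abs(triangle))   # small: the triangle closes up to neglected higher orders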
To study CP violation, the B-physics experiments measure the complex phases of the CKM elements and the lengths of the sides of the triangles, checking for a self-consistent picture. CP violation is predicted in many (often very rare) B-hadron decays, hence LHCb utilises large samples of B0, Bs and Bc mesons and of b-baryons. New physics can be discovered and studied when new particles appear in, for example, virtual loop processes of rare B decays, leading to observable deviations from Standard Model expectations in both branching ratios and CP observables. Hence the LHCb strategy is to determine the CKM elements with high precision and to compare measurements of the same parameters, especially those where one is sensitive to new physics and the other to Standard Model processes.
The status of the unitarity triangle before LHCb
The first-generation B Factory experiments built to study CP violation in the B system, BaBar and Belle, made huge inroads into testing the Standard Model description of CP violation; the status was summarised extensively at the Beauty 2009 Conference [51]. CDF and D0 extended these studies at the Tevatron and made the first explorations of the Bs sector. Fig. 3.1 shows the status of the unitarity-triangle measurements compiled by the CKM-Fitter Group [52] in 2009, when the B Factories had been running for around ten years. The graphical results are displayed in the ρ–η plane, and the best fit to the apex of the triangle (Eq. 3.3) at the 95% confidence level is shown. The fit to the CKM parametrisation includes measurements of the sides of the triangle through the CKM elements and the angles, information from rare K and B meson decays, and B0s–B0s mixing. Before 2009, when the LHC turned on, the B Factory experiments, CDF and, to a lesser degree, D0 had measured the parameters of the unitarity triangle with varying degrees of precision:

• The quantity sin 2β was measured in all channels, including the "gold-plated" channel B0 → J/ψK0S, to a precision of around ∼0.03;

• The sides |Vtd/Vts| and |Vub/Vcb| were known from B0s–B0s mixing and from b → u decays, respectively, each to ∼10%, but limited by theory; the Bs mixing phase (φs) was unmeasured;

• The angle α was measured in the channels B → ππ, ρπ and ρρ with a statistical precision of ∼5°;

• There was a statistics-limited measurement of the angle γ in B → DK modes, to around 20–25°; a measurement of γ from Bs modes such as B0s → D+sK− was completely unexplored.
• The parameter ε_K, measured in kaon decays, provided a very loose constraint on the triangle apex;

• The B Factories were statistics-limited for very rare processes with branching ratios ≲ 1 × 10⁻⁶, such as b → s flavour-changing neutral-current (FCNC) transitions, e.g. b → sγ and b → sl+l−. Super-rare transitions such as B(s,d) → µ+µ− were unobserved.
In contrast, Fig. 3.2 shows the status of the unitarity triangle measurements today [52].
Heavy quark mixing measurements
Since flavour is not conserved in the weak interaction, mixing between B0q and B̄0q mesons (where q = d or s) is possible via the box diagrams shown in Fig. 3.3. The probability of finding an unmixed (Bq) or mixed (B̄q) state, given that the initial state was a Bq, at a time t after production is given by

P_unmixed(t) = (1/2) e^(−t/τ_Bq) [1 + cos(∆m_q t)],
P_mixed(t)   = (1/2) e^(−t/τ_Bq) [1 − cos(∆m_q t)].

Here ∆m_q is the mass difference m_H − m_L, where m_H,L are the masses of the heavy and light mass eigenstates, and τ_Bq is the B lifetime. At the LHC, the two neutral B mesons produced can oscillate independently at any time after production. Any CP measurement from a time-dependent analysis of neutral-B decays needs the determination of the B flavour (b or b̄) at production. This requires b-quark "tagging", and several algorithms have been developed by LHCb, involving the combination of so-called opposite-side [53] and same-side taggers [54].
The time-dependent fit is shown in Fig. 3.4 (lower) for 1 fb⁻¹ of data; all distributions rely on flavour tagging, and the curves correspond to the fitted oscillations. The no-mixing scenario is excluded at 9.1σ [58].
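A toy illustration of such an oscillation signal: the tagged mixing asymmetry follows cos(∆m t), diluted by the mistag probability ω as (1 − 2ω). The numbers below are illustrative, with ∆m_d ≈ 0.51 ps⁻¹:

import numpy as np

dm_d = 0.51                    # ps^-1, B0 oscillation frequency
omega = 0.35                   # illustrative mistag probability
dilution = 1 - 2 * omega       # tagging dilution factor

t = np.linspace(0.0, 10.0, 6)          # decay time, ps
print(dilution * np.cos(dm_d * t))     # observable tagged asymmetry vs time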
LHCb measurements of the unitarity triangle parameters
The LHCb experiment performs high-statistics studies of CP violation with unprecedented precision in many different and complementary channels, providing sensitive tests of the Standard Model and of physics beyond it.
Measurements of the CKM angle β
The time-dependent decay asymmetry of the channel B0 → J/ψK0S allows a measurement of the angle β. This is known as the "golden" decay mode because the channel is virtually free of penguin pollution (which enters with the same overall phase), resulting in a very small theoretical uncertainty, of order 1% [59]. CP violation in this channel occurs in the interference between mixing and decay, where the mixing process introduces a relative CP-violating weak phase of 2β. Experimentally, the CP asymmetry is measured from the numbers of B̄ and B mesons, N_B̄→f and N_B→f, decaying into the final state f:

A(t) = [N_B̄→f(t) − N_B→f(t)] / [N_B̄→f(t) + N_B→f(t)] = S sin(∆m_d t) − C cos(∆m_d t).

The LHCb measured and fitted asymmetries for the J/ψ(1S) and ψ(2S) states are shown in Fig. 3.5 for 3 fb⁻¹ of data at 7 and 8 TeV [60]. The measured coefficients of the cos(∆m_d t) and sin(∆m_d t) terms are −0.017 ± 0.029 and 0.760 ± 0.034, respectively; an observation of a direct CP-violation contribution proportional to cos(∆m_d t) would be an indication of new physics [59]. The LHCb measurement is now competitive with the BaBar and Belle measurements; the current world average of sin 2β = 0.695 ± 0.019 [61] is dominated by LHCb together with the B Factory measurements in the complementary channels B0 → J/ψK0S and B0 → J/ψK0L. The measurement by LHCb of sin 2β in gluonic penguins will further contribute to this study.
Measurements of the CKM angle α
The primary method at LHCb for the measurement of α is an amplitude analysis of the B → ρπ decay modes [62]; however, these channels are difficult at LHCb owing to the need to reconstruct π0s efficiently. Penguin pollution is present and must be constrained, with the additional application of isospin symmetry. The precision on α at LHCb is expected to be dominated by systematic uncertainties, and no LHCb measurement is expected to improve on the combination of the B Factory measurements, α = 86.4 +4.5
Measurements of the CKM angle γ
A precise measurement of the angle γ is key to testing the closure (or otherwise) of the unitarity triangle. Constraints on the unitarity-triangle apex largely come from loop-level measurements, which are very sensitive to the presence of new physics. γ is the only angle accessible at tree level and hence forms a SM benchmark against which the loop measurements can be compared (assuming no significant new physics in tree decays). The γ measurement also relies on theoretical input which is very well understood [63,64]. Determination of γ from a combined fit to all measured parameters of the unitarity triangle currently gives γ = 65.8 +1.0 °; hence reaching degree-level precision in direct γ measurements is crucial.
LHCb makes measurements of γ by a variety of methods, where complementarity is vital. Examples of the most sensitive LHCb measurements are outlined below.
The measurement of γ is made through direct CP violation in B± → D0K± decays by three different methods: the GLW method (decay into a CP eigenstate) [65,66], the ADS method (decay into a flavour-specific mode) [67], and the GGSZ method (Dalitz analysis) [68]. These all access γ through the interference between the B± → D0K± and B± → D̄0K± decay paths, where the D0 and D̄0 decay to the same final state. With these methods the decay modes are self-tagging, and time-dependent analyses are not necessary. For the ADS and GLW modes, the charge-conjugate event yields are simply counted to determine the CP asymmetries (effectively a "counting experiment") [69].
Here the D0 is produced in a Cabibbo-favoured mode (V_cb) but decays via a suppressed mode (V_cd) into K+π−. This interferes with the charge-conjugate D̄0 state, which is produced in a suppressed mode (V_ub) but decays to the same final state K+π− via a favoured mode (V_cs). The branching fraction of the favoured B decay is only ∼10⁻⁴, so these measurements require high statistics. The asymmetry observed in Fig. 3.6 has a magnitude of around 40% and a significance of 7σ. A specific example of an LHCb analysis using the GGSZ Dalitz method exhibits a rich Dalitz-plot structure with large interference effects. The Dalitz space is divided into symmetric bins, chosen to optimise the sensitivity, and an amplitude analysis can then be used to extract γ.
In all B ± → D 0 K ± modes and decays listed above, γ can also be extracted from the corresponding B ± → D 0 K ± modes, albeit with reduced γ sensitivity. In addition B 0 → D ( * ) K ( * ) GGSZ modes are also included in the global fit to extract the γ average value.
• γ from the "time-dependent" B0s → D−sK+ mode
The channel B0s → D−sK+, together with its charge-conjugate states, provides a theoretically clean measurement of the angle (γ + φs), where φs is the (small-valued) Bs mixing phase, with no significant penguin contribution expected [71]. Here both the B0s and, via the mixing diagram, the B̄0s can decay to the same final state D−sK+, resulting in interference which is sensitive to γ; the same is true for decays into the charge-conjugate state D+sK−. Hence four time-dependent decay rates are measured: those of B0s and B̄0s decays to each of D−sK+ and D+sK−. The method is then to fit the two time-dependent CP asymmetries built from these rates. These measurements yield values for the strong phase difference δ_QCD between the amplitudes B → f and B̄ → f, the amplitude ratio, and (γ + φs).
The current measurement by LHCb with 1 fb⁻¹ of data yields γ = 115 +28 −43 °, which complements the measurements in B± → D0K± modes, although with less statistical precision.
• The γ combination
The LHCb measurement of γ averaged over all the above methods, which includes all B0, B± and B0s modes, is γ = 74.0 +5.0 −5.8 ° [73]. This measurement dominates the current world average. The confidence limits as a function of γ for the combination are shown in Fig. 3.7 for the various measurement channels. The agreement between the Bs and B± initial states is currently at the 2σ level.
The sides of the triangle
• The side opposite to β
Currently the closure test of the unitarity triangle is limited mainly by the side opposite to β, which has a length proportional to |Vub|/|Vcb| in the Standard Model. This limitation is a consequence of the tension between the B Factory inclusive and exclusive |Vub| measurements, which differ by ∼3.5σ [61]. |Vub|² is directly proportional to the rate of the decay B0 → Xuµ−νµ, where Xu is a meson containing a u quark. Theoretical input from Heavy Quark Effective Theory and lattice calculations is also necessary to extract |Vub|, although several of the theoretical uncertainties cancel in the ratio used to determine the side.
|Vub|/|Vcb| is a very difficult measurement at LHCb because of the presence of a neutrino, the identification of which was never in LHCb's original plans. Although the B Factory favoured channel B0 → π+µ−νµ cannot currently be isolated at LHCb, the equivalent baryonic channel Λb → pµ−νµ has been measured, with the signal separated from the lower-mass backgrounds as shown in Fig. 3. The resulting value is to be compared with the world average of |Vub| = (3.94 ± 0.36) × 10⁻³ [61].

• The side opposite to α
The mass difference ∆ms measured in B0s mixing (Fig. 3.3), which is dominated by the top-quark loop, provides a measurement of the third side of the triangle, proportional to |Vtd|/|Vts|; this ratio is determined from the ratio of mixing frequencies ∆md/∆ms. Corrections are calculated on the lattice with a theoretical error of ∼5-10%, and systematic errors largely cancel in the ratio.
Following the measurement by LHCb of the mixing parameters presented above, the ratio |V td /V ts | is 0.210 ± 0.001 ± 0.008 [49]. Systematic errors can be reduced in the future by improved lattice QCD calculations.
Other CP violation measurements
B0s → J/ψφ

It can be seen in Fig. 3.3 that Vts appears twice in the B0s–B̄0s mixing process, introducing a relative "weak mixing phase" φs at fourth order in λ. The Bs mixing phase can be measured in the channel B0s → J/ψφ, which is governed by a single tree-level diagram with a negligible penguin contribution; this mode is thus the strange-quark analogue of the golden mode B0 → J/ψK0S in the B0 system. In the Bs system the CP asymmetry arises from the interference of the decay B0s → J/ψφ with the mixed process B0s → B̄0s → J/ψφ. In the Standard Model, φs is expected to be very small, ∼ −0.036 ± 0.002 rad [52], hence this channel is a very sensitive probe for new physics.
LHCb reconstructs B0s → J/ψφ events in the decay modes J/ψ → µ+µ− and φ → K+K− [75]. This B0s final state is an admixture of CP-even and CP-odd contributions, therefore an angular analysis of the decay products is required. Good tagging performance for B0s and B̄0s is important; the total tagging power in this analysis is (4.73 ± 0.34)%. The fitted value of φs is correlated with ∆Γs, the width difference of the B0s mass eigenstates. The decay B0s → J/ψπ+π− is also added to improve the sensitivity [76]. Contours in the (φs, ∆Γs) plane are plotted in Fig. 3.9. The LHCb measurements are ∆Γs = 0.0816 ± 0.0048 ps⁻¹ and a CP-violating phase φs = −0.041 ± 0.025 rad.
CP violation in charm
The Standard Model prediction for CP violation in the charm system is very small, O(10⁻⁴) to O(10⁻³); CP violation can arise in Cabibbo-suppressed (CS) decays through the interference between tree and penguin amplitudes. In particular, LHCb has measured asymmetries in the direct CP-violating channels D0(D̄0) → π+π− and D0(D̄0) → K+K−. In the LHCb analysis, D0 and D̄0 decays are identified via two self-tagging decay paths. "Prompt" decays (D decays originating at the primary vertex) are characterised by the presence of a "soft" low-momentum pion from a D*, i.e. D*+ → D0π+soft and the charge-conjugate mode. "Semileptonic" decays are secondary D's which originate from B decays, i.e. B+ → D0µ+X and its charge-conjugate state.
The raw asymmetry for a final state f includes both physics and detector terms:

A_raw(f) = A_CP(f) + A_D + A_P ,

where the detection asymmetry A_D arises from small charge-dependent differences associated with the π±soft or µ±, and the production asymmetry A_P arises from the different production rates of D* and B states in pp collisions. To eliminate these two contributions and cancel the associated systematics, LHCb measures the difference

∆A_CP = A_raw(K+K−) − A_raw(π+π−) = A_CP(K+K−) − A_CP(π+π−).    (3.10)

The raw asymmetries are obtained from mass fits, by simply counting the numbers of D's decaying to π+π− and K+K−, respectively. A measurement performed with the Run 1 and Run 2 LHCb data combined gives ∆A_CP = (−15.4 ± 2.9) × 10⁻⁴. This constitutes a 5.3σ observation of CP violation in the charm system and opens a new window for the study of CP violation in the future.
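A sketch of the counting-experiment logic, with invented yields chosen only to illustrate the size of the effect:

def a_raw(n_d0, n_d0bar):
    # raw asymmetry from signal yields of D0 and anti-D0 decays
    return (n_d0 - n_d0bar) / (n_d0 + n_d0bar)

# Yields below are invented for illustration, not LHCb's measured ones.
a_kk   = a_raw(n_d0=612_000, n_d0bar=614_500)   # D0 -> K+ K-
a_pipi = a_raw(n_d0=205_000, n_d0bar=205_200)   # D0 -> pi+ pi-
print(a_kk - a_pipi)   # ~ -1.5e-3: production/detection terms cancel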
CP violation in beauty baryons
CP violation has been observed in B, K and D decays, but not yet in baryon decays. A search for CP violation in the multi-body mode Λ0b → pπ−π+π− was performed on LHCb Run 1 data [78]. This decay proceeds via tree and loop diagrams with similar contributions and through numerous intermediate resonances, enhancing the possibility for CP violation, albeit in regions where re-scattering effects can play a role. A 3.3σ deviation from CP symmetry was observed; however, the addition of 6.6 fb−1 of Run 2 data has not confirmed this result [79]. Hence this measurement awaits further statistics, and will be improved when cleaner two-body b-baryon decays can be added to the study.
Rare decays
Within the SM, the interplay of the weak and Higgs interactions implies that Flavour-Changing Neutral Currents (FCNCs) can occur only at higher orders in the electroweak interaction, and they are further strongly suppressed by the GIM mechanism. This strong suppression makes FCNC processes natural candidates in searches for physics beyond the SM: if the new degrees of freedom do not share the flavour structure of the quark/lepton-Higgs interactions present in the SM, they could contribute to FCNCs at a level comparable to (or even larger than) the SM amplitudes.
In B-meson decays, experimenters have measured b → s and b → d quark transitions, while c → u and s → d transitions have been measured in D-meson and K-meson decays, respectively. At first order, these transitions can occur through the two kinds of Feynman diagram shown in Fig. 4.1. The first is the so-called "box" diagram and describes the mixing between neutral mesons, discussed in Section 3; the example in Fig. 4.1 shows B0s mixing. The second kind, the so-called "penguin" diagram, is responsible for a large variety of FCNC rare decays; the example shown in Fig. 4.1 is a b → sℓ+ℓ− transition. In particular, if the radiated bosons are of electroweak type (Z, W or γ-like), the uncertainties in the calculation of the SM predictions due to non-perturbative QCD effects are drastically reduced as compared with the case where a gluon is radiated. These "electroweak penguins" are the subject of this section. Before the first physics run of the LHC accelerator in 2010, the main contributors to the study of rare B- and D-meson decays were the B Factory experiments (BaBar and Belle) and the Tevatron experiments (CDF and D0). However, the production rate of bb̄ pairs at the e+e− B Factories was typically five orders of magnitude smaller than at the LHC. In addition, the lower pp̄ collision energy of the Tevatron (with a correspondingly reduced bb̄ cross-section, which is roughly proportional to the collision energy) and the reduced trigger acceptance of its detectors for rare B- and D-meson decays implied that LHCb would already be the most sensitive experiment after only 1 fb−1, accumulated in 2011. For example, prior to the LHC, the rarest B-meson decay ever measured was B(B+ → K+µ+µ−) ∼ 5 × 10−7. The very rare decay B0s → µ+µ− has been searched for ever since the discovery of B mesons, around 40 years ago. Thanks to the ingenuity and persistence of the experimenters, it has eventually been measured at the LHC and found to be in agreement with the SM within current uncertainties, as shown in Fig. 4.3 [84], where the combined fit (blue solid line) and its components are superimposed on the data points. Over the next decade it will be extremely interesting to see how the measurement of B(B0 → µ+µ−) evolves, for which only upper limits are currently available.
The language of effective field theory is used to parameterise NP contributions in terms of a sum of local four-fermion operators (Qi), which depend only on SM fermions, modulated by Wilson coefficients (Ci), which in turn encode the heavy degrees of freedom, i.e. possible NP particles. The decay B0 → K∗0µ+µ− is the so-called "golden mode" to test new vector and axial-vector couplings, i.e. the C9 and C10 Wilson coefficients contributing to the b → s transition. The B0 → K∗0µ+µ− channel thus complements the b → sγ decay, which is mostly sensitive to NP dipole operators (i.e. C7), and the B0s → µ+µ− decay, which is mostly sensitive to NP (pseudo-)scalar operators (i.e. CS and CP). The charge of the pion in the decay K∗0 → K+π− defines the flavour of the B meson, so an angular analysis can be performed unambiguously to test the helicity structure of the electroweak penguin.
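Schematically, the effective Hamiltonian for b → s transitions reads (normalisation conventions vary between references):
\[
\mathcal{H}_{\rm eff} = -\frac{4G_F}{\sqrt{2}}\,V_{tb}V_{ts}^{*}\sum_i C_i(\mu)\,Q_i(\mu),\qquad
Q_{9(10)} = \frac{e^2}{16\pi^2}\,(\bar{s}\gamma_\mu P_L b)\,(\bar{\ell}\gamma^\mu(\gamma_5)\ell),
\]
so that NP at a high scale manifests itself as a shift of the Ci away from their SM values.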
The above system is completely defined by four variables: q2, the square of the invariant mass of the dimuon system; θl, the angle between the positive lepton and the direction opposite to the B meson in the dimuon rest frame; θK, the equivalent angle of the K+ in the K∗0 rest frame; and φ, the angle between the two planes defined by the (K, π) and (µ+, µ−) pairs in the B-meson rest frame. The four-fold differential distribution contains a total of eleven angular terms, which can be written in terms of seven q2-dependent complex decay amplitudes. These amplitudes can in turn be expressed in terms of five complex Wilson coefficients (CS, CP, C7, C9 and C10), their five helicity counterparts, and six form factors, which play the role of nuisance parameters in the fit.
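Schematically, in the commonly used notation the distribution reads
\[
\frac{{\rm d}^4\Gamma}{{\rm d}q^2\,{\rm d}\!\cos\theta_l\,{\rm d}\!\cos\theta_K\,{\rm d}\phi}
= \frac{9}{32\pi}\sum_i I_i(q^2)\,f_i(\theta_l,\theta_K,\phi),
\]
a sketch rather than the full eleven-term expression; the observables discussed below enter, for example, through the terms \(\tfrac{4}{3}A_{FB}\sin^2\theta_K\cos\theta_l\) and \(S_5\sin 2\theta_K\sin\theta_l\cos\phi\).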
The LHCb experiment, with 3 fb−1 of data collected in Run 1, has triggered and selected about 2400 B0 → K∗0µ+µ− candidates in the range 0.1 < q2 < 19 GeV2, with a signal-to-background ratio (S/B) > 5. This is about one order of magnitude larger than the samples available at previous experiments (BaBar, Belle and CDF), and similar to the samples collected by ATLAS and CMS with ten times the luminosity, albeit with significantly worse S/B.
The statistics and the quality of the data accumulated by the LHCb experiment allow a full angular analysis of B0 → K∗0µ+µ− decays to be performed for the first time.
The results [87] of this "tour de force" analysis mostly agree with SM predictions, albeit with some hints of disagreement in specific distributions. In Fig. 4.4, two examples of the CP-averaged angular coefficients (i.e. the average of the coefficients measured with B0 and B̄0 decays) are shown as a function of q2. For these two examples, AFB (modulating the sin2θK × cosθl angular term) and S5 (modulating the sin(2θK) × sinθl × cosφ angular term) agree less well with the SM predictions. However, these are early days: more data will be required (the Run 2 data analysis will be released soon), and a careful reassessment of the SM uncertainties is needed before drawing definitive conclusions.
Several authors have already attempted to see if the overall pattern of the angular measurements is consistent with a given value of the relevant Wilson coefficients. As previously discussed, the inclusive b → sγ measurements strongly constrain non-SM values for C 7 . The scalar C S and pseudo-scalar C P coefficients are constrained, for example, by the measurement of the branching fraction of the very rare decay B 0 s → µ + µ − . Therefore, the small disagreements observed in the angular analysis of the decay B 0 → K * 0 µ + µ − and other decays, seem to be consistent with a non-SM value of the C 9 Wilson coefficient, as can be seen in Fig. 4.5 taken from Ref. [88].
Lepton Universality
In the SM, the electroweak couplings of leptons are flavour independent, or lepton "universal". However, this may not necessarily be the case for new particles beyond the SM. In particular, if the hints described in the previous section are an indication of new particles modifying the penguin diagram in Fig. 4.1, it is interesting to measure ratios of branching fractions into different lepton families. For example, the ratios between B decays to final states with muons and electrons,
\[
R_X = \frac{\mathcal{B}(B\to X\mu^+\mu^-)}{\mathcal{B}(B\to Xe^+e^-)},
\]
where X is a hadron containing an s- or d-quark, are predicted to be very close to unity in the SM [90-92]; the uncertainties from QED corrections are found to be at the percent level [93]. LHCb has measured this ratio in several channels. Using Run 1 and part of the Run 2 data (4.4 fb−1), LHCb measures [94] RK = 0.846 +0.060 −0.054 (stat) +0.014 −0.016 (syst) in the range 1.1 < q2 < 6 GeV2, about 2.5σ below the SM prediction, and using only Run 1 data (3 fb−1) it measures [95] RK∗ = 0.69 +0.11 −0.07 (stat) ± 0.05 (syst) in the same q2 range, with a similar level of disagreement with the SM prediction. The latest RK results from LHCb in bins of q2 are shown in Fig. 4.6, compared with previous results from the BaBar and Belle collaborations. The consistency between different experiments and different channels, although with very different precision, has motivated many theoretical studies relating these hints of lepton non-universality to the discrepancies described in the previous section. Figure 4.5 shows the compatibility of the two sets of measurements when assuming that only the bsµµ Wilson coefficients are modified (with the bsee coefficients as predicted by the SM). Whilst the initial results showed remarkable consistency between the different sets of measurements, as shown by the dotted lines in Fig. 4.5, the latest (2019) updates show a less clear picture. As of today it is difficult to draw reliable conclusions, and more data is eagerly awaited.
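The quoted 2.5σ can be reproduced with a naive pull calculation, combining the statistical and systematic uncertainties in quadrature (using the upper errors, i.e. those towards the SM value); this simple estimate neglects the slight asymmetry of the error intervals, which the full analysis takes into account:
\begin{verbatim}
import math

# Naive significance of the R_K deviation from the SM expectation of ~1.
r_k, stat_up, syst_up = 0.846, 0.060, 0.014
sigma_up = math.hypot(stat_up, syst_up)              # quadrature sum
print(f"pull = {(1.0 - r_k) / sigma_up:.1f} sigma")  # ~2.5 sigma
\end{verbatim}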
Spectroscopy
A deep understanding of quantum chromodynamics (QCD), the theory of the strong interaction, is vital for precision tests of the Standard Model and for searches for new physics beyond it. QCD is intensively tested in deep-inelastic-scattering processes and in heavy vector-boson production; however, in the low-energy regime precise QCD predictions are lacking. Being non-perturbative at these scales, QCD does not allow hadron properties, namely masses and decay widths, to be calculated from first principles. Alternative theoretical approaches have been developed, such as heavy-quark effective theory, the heavy-quark expansion, and lattice calculations. These approaches require verification with experiment in various regimes, e.g. testing the agreement with data for hadrons with different quark content and quantum numbers. Spectroscopic measurements of hadron masses and widths or lifetimes provide a wide variety of tests for QCD models.
The huge production cross-sections of charm and beauty in high-energy pp collisions in the forward region at LHCb [96-112], together with a good reconstruction efficiency, a versatile trigger scheme, and excellent momentum and mass resolution, open up exciting opportunities for spectroscopy measurements. LHCb's powerful hadron-identification system [34,36,113] enables a substantial reduction of the combinatorial background specific to high-energy hadron-hadron collisions; this unique hadron identification is especially important for spectroscopy measurements involving charged kaons and/or protons in the final state. The excellent momentum and vertex resolution provided by the LHCb tracking system allows unprecedented precision on mass and width measurements: indeed, the most precise mass measurements for all open-beauty particles, and lifetime measurements for all open-heavy-flavour particles, currently come from the LHCb experiment [114]. Good control over the momentum scale and the detector alignment [115,116] also allows the natural widths of hadronic resonances to be probed with world-leading sub-MeV precision [117-123].
Charm hadrons
Two complementary methods for the study of charm-hadron spectroscopy have been exploited at LHCb:
• the study of promptly produced charm hadrons;
• the study of charm particles produced in weak decays of beauty hadrons, e.g. from exclusive B → D(∗)ππ decays.
The first technique allows the most efficient exploitation of the huge prompt cc production cross section in high energy hadron collisions, but this is usually affected by a large background from light hadrons produced at the pp collision vertex. The second technique exploits a full amplitude analysis of the exclusive decays of beauty hadrons and therefore involves much lower statistics, however the method often allows the determination of quantum numbers of the charm hadrons. The study of Dπ final states enables a search for natural spin-parity resonances, (P = (−1) J , labelled as D * ) whilst the study of D * π final states provides the possibility of studying both natural and unnatural spin-parity states, except for the J P = 0 + case, which is forbidden because of angular momentum and parity conservation. In inclusive D ( * ) π production, the production of any J P state is permitted. An amplitude analysis of B decays allows a full spin-parity analysis of the charmed mesons present in the decay. Both the above approaches are complementary, and have resulted in discoveries of several new charm hadrons, amongst them the excited charm mesons. Many previous meson and baryon states, discovered earlier by other experiments, have been confirmed with high statistical significance, and their masses and widths have been measured with high precision. For many, the quantum numbers were either measured or constrained.
Excited Λ+c baryons were studied in their decays to the D0p final state via an amplitude analysis of Λ0b → D0pπ− decays, using an integrated luminosity of 3 fb−1 collected at √s = 7 and 8 TeV [153]. The analysis uses a sample of 11212 ± 126 signal decays, where the D0 mesons are reconstructed in the K−π+ final state.
The amplitude fit is performed in four phase-space regions of the Dalitz plot. In the near-threshold m(D0p) region, an enhancement in the D0p amplitude is studied. The enhancement is consistent with being a resonant state, dubbed the Λc(2860)+, with quantum numbers JP = 3/2+, where the parity is measured relative to that of the Λc(2880)+ state.
The other quantum-number assignments are excluded with a significance greater than 6 standard deviations. The phase motion of the 3/2+ component with respect to the non-resonant amplitudes is obtained in a model-independent way and is consistent with resonant behaviour. The mass of the Λc(2860)+ state is consistent with predictions for an orbital D-wave Λ+c excitation with quantum numbers 3/2+, based on the non-relativistic heavy quark-light diquark model and on QCD sum rules in the HQET framework. The fit also allowed the most precise determination of the masses and widths of the known resonances Λc(2880)+ and Λc(2940)+, as well as constraints on their quantum numbers.
Three excited Ξ∗0c baryons have been observed in the Λ+cK− mass spectrum using the Run 2 data-set [154]. The distribution of the mass difference δm ≡ m(Λ+cK−) − m(Λ+c) − m(K−) is shown in Fig. 5.1, where fits accounting for three (left) and four (right) excited Ξ∗0c states are superimposed [154]. Three narrow structures, denoted Ξc(2923)0, Ξc(2939)0 and Ξc(2965)0, are clearly visible, with a significance exceeding 20σ for each signal. The data and fit are least compatible in the region δm ≈ 110 MeV, which could be evidence for a fourth Ξ∗0c state: when a structure in this region is added into the fit, shown in Fig. 5.1(right), a large improvement in the fit quality is achieved.
Five narrow excited Ω∗c baryons have been observed in the Ξ+cK− mass spectrum using the Run 1 data-set [155]. A large sample of Ξ+c candidates was reconstructed in the Cabibbo-suppressed mode Ξ+c → pK−π+; in total, around 1.05 × 106 Ξ+c → pK−π+ candidates with a purity of 83% were selected, shown in Fig. 5.2(left). The mass distribution of Ξ+cK− combinations is shown in Fig. 5.2(right), where five narrow peaks are clearly visible. The natural widths of the peaks are found to be between 0.8 and 8.7 MeV, and two of them, named Ωc(3050)0 and Ωc(3119)0, are found to be extremely narrow, with 95% CL limits of 1.2 and 2.8 MeV, respectively. The fit improves if an additional broad Breit-Wigner function is included in the 3188 MeV mass region; this broad structure may represent a single resonance, the superposition of several resonances, feed-down from higher states, or some combination of the above. The interpretation of the narrow states is still an open question. The naive quark model expects five states in this region, but some of them would have to be relatively broad. A molecular model predicts two pairs of narrow states, and three of the observed states are in remarkable agreement, both in mass and width, with this hypothesis [156].
Double-charm baryons
Three weakly decaying states with charm number C = 2 are expected in the quark model: one isospin doublet, Ξ++cc (ccu) and Ξ+cc (ccd), and one isospin singlet, Ω+cc (ccs), each with spin-parity JP = 1/2+.
The properties of these baryons have been calculated with a variety of theoretical models. In most cases, the masses of the Ξcc states are predicted to lie in the range 3500 to 3700 MeV/c2 [157], and the masses of the Ξ++cc and Ξ+cc states are expected to differ by only a few MeV/c2 due to approximate isospin symmetry. Most predictions for the lifetime of the Ξ+cc baryon are in the range 50 to 250 fs, while the lifetime of the Ξ++cc baryon is expected to be three to four times longer, at 200 to 700 fs. While both are expected to be produced at hadron colliders, the longer lifetime of the Ξ++cc baryon should make it significantly easier to observe experimentally than the Ξ+cc baryon. Experimentally, there is a long-standing puzzle in the Ξcc system. Observations of the Ξ+cc baryon in the Λ+cK−π+ final state at a mass of 3519 ± 2 MeV/c2, with signal yields of 15.9 events over a background of 6.1 ± 0.5 events (6.3σ significance), and of 5.62 events over a background of 1.38 ± 0.13 events in the final state pD+K− (4.8σ significance), were reported by the SELEX collaboration [158,159]. The SELEX results included a number of unexpected features, notably a short lifetime and a large production rate relative to that of the singly charmed Λ+c baryon: the lifetime was reported to be shorter than 33 fs at the 90% confidence level, and SELEX concluded that 20% of all Λ+c baryons observed by the experiment originated from Ξ+cc decays, implying a relative Ξcc production rate several orders of magnitude larger than theoretical expectations. Searches by the FOCUS [160], BaBar [161] and Belle [162] experiments did not find evidence for a state with the properties reported by SELEX, and neither did a search at LHCb with data corresponding to an integrated luminosity of 0.65 fb−1 [163]. However, because the production environments at all the above experiments differ from that of SELEX, which studied collisions of a hyperon beam on fixed nuclear targets, these null results do not exclude the original observations. LHCb has searched for the Ξ++cc decaying into Λ+cK−π+π+ using a sample of pp collision data at 13 TeV, corresponding to an integrated luminosity of 1.7 fb−1. A highly significant structure is observed in the mass spectrum, where the Λ+c baryon is reconstructed in pK−π+, shown in Fig. 5.3(left). The structure is consistent with originating from a weakly decaying particle, identified as the doubly charmed baryon Ξ++cc. The observation was confirmed using an additional sample of data collected at 8 TeV, and soon afterwards by observing the same state in the decay Ξ++cc → Ξ+cπ+, shown in Fig. 5.3(right), with a mass in very good agreement with the value measured in the Ξ++cc → Λ+cK−π+π+ decay channel [164]. The lifetime and mass of the Ξ++cc baryon have also been precisely measured [165,166], where the lifetime favours the smaller values in the range of theoretical predictions.
Beauty hadrons
Excited B+ and B0 mesons have been investigated in the mass distributions of B+π− and B0π+ combinations, using a 3 fb−1 data sample at 7 and 8 TeV. The B+ and B0 candidates were reconstructed through the B+ → D̄0π+, B+ → D̄0π+π+π−, B+ → J/ψK+, B0 → D−π+, B0 → D−π+π+π− and B0 → J/ψK∗0 decay chains. Samples of about 1.2 million B0 and 2.5 million B+ candidates were obtained, with a purity depending on the decay mode but always better than 80%. The B+π− and B0π+ mass spectra, with the requirement pT > 2 GeV/c, are shown in Fig. 5.4, where ten peaking structures are reconstructed. Of these, six narrow low-mass structures correspond to the decays of the four B1(5721)0,+ and B∗2(5747)0,+ states observed by the CDF and D0 collaborations [168-170]: B1(5721) → B∗π and B∗2(5747) → B(∗)π. The high statistics of LHCb has allowed the most precise measurements of the masses and widths of the B1(5721)0,+ and B∗2(5747)0,+ states to be made. In addition to the six low-mass structures, four wider high-mass structures are observed, particularly prominent at high pion transverse momentum. These are consistent with the presence of four new excited B mesons, labelled BJ(5840)0,+ and BJ(5960)0,+, whose masses and widths are obtained under different hypotheses for their quantum numbers [171]. Orbitally excited B0s mesons have been studied using only 1 fb−1 of data, collected at √s = 7 TeV. The B+K− mass spectra were investigated, with the B+ mesons reconstructed in four decay modes. Previously, two narrow peaks had been observed in the B+K− mass distribution by the CDF and D0 collaborations [172,173], named the Bs1(5830)0 and B∗s2(5840)0, putatively identified as members of the jq = 3/2 HQET doublet [174]. The two states are also visible in the LHCb data, here as three narrow peaks shown in Fig. 5.5(left), corresponding to the decays Bs1(5830)0 → B∗+K−, B∗s2(5840)0 → B∗+K− and B∗s2(5840)0 → B+K−, where the soft photon from B∗+ → B+γ is undetected. This is the first observation of the B∗s2(5840)0 → B∗+K− decay mode, and a JP = 2+ assignment is favoured for this state. Large statistics, low background and LHCb's excellent mass resolution have allowed the first determination of the B∗s2(5840)0 width, as well as the most precise mass measurements of both states. Due to the small energy release, the position and shape of the peaks depend on the mass of the B∗+ state, allowing the most precise determination of the B∗+ mass, as well as of the mass difference m(B∗+) − m(B+).
Excited B+c mesons have been searched for via their decays into the B+cπ+π− final state. A wide peak, interpreted as the Bc(2S)+, was observed by the ATLAS collaboration using a sample of about 300 reconstructed B+c → J/ψπ+ candidates [175], with a large production rate relative to the ground-state B+c. LHCb searched for this state using 2 fb−1 of data, and an upper limit on the relative production rate was obtained [176]; this upper limit is smaller than the relative production rate reported by ATLAS.
In 2018 the CMS collaboration, using a huge data-set corresponding to 143 fb−1 collected at √s = 13 TeV and containing 7629 ± 225 signal B+c → J/ψπ+ decays, reported the observation of a doublet of two narrow states, interpreted as the spin-triplet Bc(2S)∗+ and spin-singlet Bc(2S)+ states [177]. These observations were confirmed by LHCb using an 8.5 fb−1 data-set collected at √s = 7, 8 and 13 TeV [178], with 3785 ± 73 signal B+c → J/ψπ+ decays. Two narrow peaks with widths compatible with the detector resolution are seen in the mass-difference spectrum m(B+cπ+π−) − m(B+c), shown in Fig. 5.5(right). The local (global) significances of the two peaks are estimated to be 6.8σ (6.3σ) and 3.2σ (2.2σ) for the low-mass and high-mass states, respectively. The low-mass signal is interpreted as the spin-triplet state Bc(2S)∗+, decaying into B∗+cπ+π− with the subsequent decay of the B∗+c into B+cγ; the high-mass peak is attributed to the decay of the spin-singlet Bc(2S)+ state into the B+cπ+π− final state.
Using the full Run 1 and 2 data-sets, the mass spectrum of Λ0bπ+π− combinations, with Λ0b → Λ+cπ− and Λ0b → J/ψpK−, was explored at higher masses. A significant broad structure is found at m ≈ 6.150 GeV/c2, with a width of around 10 MeV, shown in Fig. 5.7(left) [179]. The mass and width agree well between the two decay modes Λ0b → Λ+cπ− and Λ0b → J/ψpK−, where the significance exceeds 26 and 9 standard deviations, respectively. Since the mass of the new structure is above the Σ(∗)±bπ∓ kinematic thresholds, the Λ0bπ+π− mass spectrum is investigated in the Λ0bπ± mass regions populated by the Σb resonances. The data are split into three non-overlapping regions: candidates with a Λ0bπ± mass within the natural width of the known Σ±b states, candidates with a Λ0bπ± mass within the natural width of the known Σ∗±b states, and the remaining non-resonant (NR) region. The Λ0bπ+π− mass spectra in these three regions are shown in Fig. 5.7(right). The spectra in the Σb and Σ∗b regions look different and suggest the presence of two narrow peaks with very similar widths; the two-signal hypothesis is favoured over the single-signal hypothesis with a statistical significance exceeding seven standard deviations. The measured masses of the two states are consistent with predictions for the doublet of Λb(1D)0 states with quantum numbers JP = 3/2+ and 5/2+.
In 2020 a fifth excited Λb state was observed in the Λ0bπ+π− mass spectrum, using the full LHCb Run 1 and 2 data-sets. Two decay modes of the Λ0b baryon were used, Λ0b → Λ+cπ− and Λ0b → J/ψpK−, and the significance of the new state, denoted Λ∗∗0b, is in excess of 14 and 7 standard deviations in the two decay modes, respectively, as shown in Fig. 5.8. Unlike the four previously observed narrow Λb states, Λb(5912)0, Λb(5920)0, Λb(6146)0 and Λb(6152)0, the new state is rather broad, with Γ = 72 ± 11 ± 2 MeV. The measured mass and width agree with the interpretation of this state as the first radial excitation, the Λb(2S)0 resonance [180]. This resonance is also consistent with a broad excess of events in the Λ0bπ+π− mass spectrum previously reported by the CMS collaboration [181].
Excited Σ±b baryons have been studied in the Λ0bπ± mass spectra using the Run 1 LHCb data-set [182]. In total, (234.27 ± 0.90) × 103 signal Λ0b baryons were reconstructed in the decay mode Λ0b → Λ+cπ−. The distributions of the energy release in the decay, Q = m(Λ0bπ±) − m(Λ0b) − mπ, show the known Σ±b and Σ∗±b states, observed and characterised by the CDF collaboration [183,184]. New peaks in the Λ0bπ− (Λ0bπ+) spectra are visible at Q = 338.8 ± 1.7 MeV (336.6 ± 1.7 MeV), with a local significance of 12.7σ (12.6σ), based on the differences in log-likelihood between the fits with zero signal and the nominal fit. In the heavy-quark limit, five Σb(1P) states are expected, and several predictions of their masses have been made. Since the expected density of baryon states is high, it cannot be excluded that the newly observed structures are superpositions of more than one (near-)degenerate state. Taking into account that the predicted masses and widths depend on the as-yet-unknown spin and parity, the newly observed structures are compatible with being Σb(1P)± excitations; other interpretations, such as molecular states, are also possible.
Two excited Ξ−b baryons have been observed in the Ξ0bπ− mass spectrum using the LHCb Run 1 data-set [118]. Signal Ξ0b candidates were reconstructed in the final state Ξ+cπ−, with Ξ+c → pK−π+, and two peaks are clearly visible in the δm spectrum. The Ξ∗0b baryon was first observed at CMS [185], and later studied in detail by the LHCb collaboration using the Run 1 data-set [119], with the Ξ∗0b candidates reconstructed in the decay Ξ∗0b → Ξ−bπ+, shown in Fig. 5.10(right). A narrow peak is clearly visible, with a fitted signal yield of 232 ± 19 events. The non-zero value of the natural width of the peak, Γ = 0.90 ± 0.16 MeV, is also highly significant: the change in log-likelihood when the width is fixed to zero exceeds 30 units. No other statistically significant structures are seen. The peak position and the width are consistent with, and about a factor of ten more precise than, the CMS measurements [185]. The measured width of the state is in line with theoretical expectations: a calculation based on lattice QCD predicts a width of 0.51 ± 0.16 MeV [186], and another using the 3P0 model obtains a value of 0.85 MeV [187]. The production ratio with respect to the Ξ−b state is measured to be (28 ± 3 ± 1)%, suggesting that in high-energy pp collisions at 7 and 8 TeV a large fraction of Ξ−b baryons are produced through feed-down from higher-mass states.
Full reconstruction of the Λ0b baryon allows excellent resolution in the Λ0bK− mass spectrum to be reached. In addition, partial reconstruction of Λ0b and Ξ0b baryons in their semileptonic modes H0b → H+cµ−X, where H+c stands for Λ+c or Ξ+c and H0b for Λ0b or Ξ0b, allows a significant increase in the sample, since the missing neutrino does not prevent a peaking structure in the spectra of the mass differences m(H0bK−) − m(H0b). The resolution is improved by applying a 4-vector constraint to the partially reconstructed H0b candidate. The mass-difference spectra are shown in Fig. 5.11 (where the full fit is superimposed on the data [188]; the top row is for 7 and 8 TeV data and the bottom for 13 TeV, and the symbol M∗ represents the mass after the 4-vector constraint). The peak locations for all three modes agree well, and the statistical significance of the new excited baryon, dubbed the Ξb(6227)−, is found to be 7.9σ in the fully reconstructed mode.
Four narrow excited Ωb states have been observed in the Ξ0bK− mass spectrum using the full Run 1 and 2 LHCb data-sets [189]. The Ξ0b candidates were reconstructed in the Ξ+cπ− final state, with Ξ+c → pK−π+. After a multivariate selection, a low-background sample of (19.2 ± 0.2) × 103 Ξ0b → Ξ+cπ− decays was obtained. Four narrow peaks are seen in the mass-difference spectrum m(Ξ0bK−) − m(Ξ0b); the width of the highest-mass state is found to be 1.4 +1.0 −0.8 ± 0.1 MeV. The peaks have local significances that range from 3.6 to 7.2 standard deviations; after accounting for the look-elsewhere effect, the significances of the two low-mass peaks are reduced to 2.1σ and 2.6σ, respectively, whilst the two higher-mass peaks exceed 5σ. The observed Ξ0bK− peaks are similar to those seen in the Ξ+cK− invariant-mass spectrum [155]. Arguably the simplest interpretation is that the peaks correspond to excited Ω−b states, in particular the L = 1 angular-momentum excitations of the ground state, or possibly an n = 2 radial excitation. Many quark-model calculations predict L = 1 states in this mass region, and at least some of the states should be narrow. In particular, the 3P0 model predicts five states in this mass region, with approximately 8 MeV mass splittings; the four lightest have partial widths Γ(Ξ0bK−) below 1 MeV, whilst the heaviest has Γ(Ξ0bK−) = 1.49 MeV. Conversely, predictions using the chiral quark model indicate that the JP = 3/2− and 5/2− states are narrow but the 1/2− states are wide [190]. Quark-diquark models also predict several excited Ω−b states in the region around 6.3 GeV, with mass splittings similar to those observed here, although without predictions for the decay widths. Molecular models have also been employed, predicting two narrow JP = 1/2− states at 6405 MeV and 6465 MeV [156]; these, however, do not match the LHCb measurements well.
An alternative interpretation for one or more of the observed peaks is that they arise from the decay of a higher-mass excited Ω∗∗−b state through an intermediate excited Ξ0b baryon, where the accompanying π0 meson is undetected. If the mass of a not-yet-observed excited Ξ0b state lies in the appropriate region, each of the observed narrow peaks can be interpreted as having come from such a decay, provided that the corresponding excited Ω∗∗−b state is narrow, Γ(Ω∗∗−b) ≤ 1 MeV. In this case, their masses can be evaluated as m(Ω∗∗−b) = m(Ξ0b) + δm_peak, where δm_peak is the measured position of the corresponding peak.
Conventional charmonia and bottomonia
Charmonium states in the DD̄ mass spectra near threshold have been studied using the full LHCb data sample collected in Runs 1 and 2 [191]. D0 and D+ candidates were reconstructed in the D0 → K−π+ and D+ → K−π+π+ decay modes. Four peaking structures are observed in the spectra. Two of the peaks correspond to the known ψ(3770) and χc2(3930) charmonium states. A narrow peak close to threshold represents partially reconstructed χc1(3872) → D∗0D̄0 decays, with the subsequent decay D∗0 → D0γ or D∗0 → D0π0 and the γ or π0 meson missed. The narrow peak with a mass around 3840 MeV/c2 is identified as a new charmonium state: its mass value and small natural width suggest an interpretation as the ψ3(13D3) charmonium state with quantum numbers JPC = 3−− [192]. In addition, prompt hadroproduction of the χc2(3930) and ψ(3770) charmonium states has been observed for the first time, and precise measurements of their resonance parameters have been performed.
In the fit to these spectra, the relativistic Breit-Wigner function is convolved with the detector resolution, described by the sum of two Gaussian functions with a common mean and parameters fixed from simulation. The effective resolution depends on m(D+D−), increasing from 0.9 MeV/c2 for ψ(3770) → D+D− to 1.9 MeV/c2 for χc2(3930) → D+D− signals, and is approximately 10% larger for the D0D̄0 final state. The background in this region is well described by a second-order polynomial function. An extended unbinned maximum-likelihood fit is performed simultaneously to the D0D̄0 and D+D− mass spectra: the mass and natural width of the X(3842) signal are common parameters in this fit, whilst all other parameters are allowed to vary independently, with all parameters related to the detector resolution fixed to values found using simulation. The statistical significance of the X(3842) signal, evaluated using Wilks' theorem, is above 7σ for the D0D̄0 decay mode and above 21σ for the D+D− decay mode.
The observation of the decays χc → J/ψµ+µ−, using a 4.9 fb−1 data-set collected at √s = 7, 8 and 13 TeV, enabled the most precise direct determination of the masses of the χc1 and χc2 states and of the width of the χc2 [120]. The observation of these decay modes also provides an opportunity for precise measurements of χc1,c2 production and polarisation, which in turn are vital for tests of QCD models of charmonium production.
The observation of ψ2(3823) → J/ψπ+π− in B+ → (ψ2(3823) → J/ψπ+π−)K+ decays, using the full LHCb Run 1 and 2 data-sets, has allowed the most precise determination of the mass of the tensor ψ2(3823) state and the most stringent upper limit on its width [123]. The observed mass distribution is shown in Fig. 5.14. Within the factorisation approach, the branching fraction for the decay B+ → ψ2(3823)K+ vanishes, so a non-zero value for this branching fraction allows an evaluation of the contribution of the D(∗)+s D̄(∗)0 rescattering amplitudes in B+ → cc̄K+ decays.
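The line-shape model used in the DD̄ fits above can be sketched as follows; all numerical values (mass, width, resolution, fit window, background shape) are illustrative placeholders, and the real analysis uses a double-Gaussian resolution and an extended unbinned maximum-likelihood fit:
\begin{verbatim}
import numpy as np
from scipy.stats import cauchy, norm

LO, HI = 3800.0, 3900.0          # illustrative fit window in MeV

def signal_pdf(m, m0=3842.0, gamma=2.8, sigma=1.5):
    """Breit-Wigner numerically convolved with a Gaussian resolution."""
    grid = np.linspace(LO, HI, 2001)
    dx = grid[1] - grid[0]
    bw = cauchy.pdf(grid, loc=m0, scale=gamma / 2.0)   # non-relativistic BW
    kernel = norm.pdf(np.arange(-10.0, 10.0 + dx, dx), scale=sigma)
    smeared = np.convolve(bw, kernel, mode="same") * dx
    smeared /= smeared.sum() * dx                      # renormalise on window
    return np.interp(m, grid, smeared)

def background_pdf(m, a=-2e-3, b=1e-5):
    """Second-order polynomial background, normalised on the window."""
    grid = np.linspace(LO, HI, 2001)
    shape = 1.0 + a * (grid - LO) + b * (grid - LO) ** 2
    norm_c = shape.sum() * (grid[1] - grid[0])
    return (1.0 + a * (m - LO) + b * (m - LO) ** 2) / norm_c

def model_pdf(m, f_sig=0.3):
    """Signal-plus-background mixture with signal fraction f_sig."""
    return f_sig * signal_pdf(m) + (1.0 - f_sig) * background_pdf(m)
\end{verbatim}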
Pentaquarks
The LHCb collaboration studied Λ0b → J/ψpK− decays using the Run 1 data-set [193]. In total, (26.0 ± 0.2) × 103 signal Λ0b candidates were selected, and an anomalous peak is observed in the J/ψp mass spectrum, shown in Fig. 5.15(left). If the peak structure represents a resonance which decays strongly into J/ψp, the minimal valence-quark content would be cc̄uud, i.e. a charmonium pentaquark state. A full six-dimensional amplitude fit, with the resonance invariant masses, three helicity angles and two angles between decay planes, has been applied to describe the data. The amplitude model contains 14 well-defined Λ∗ states and two pentaquark states, labelled Pc(4380)+ and Pc(4450)+; the projections of the fit are shown in Fig. 5.15. The masses and widths of the wide Pc(4380)+ and the narrow Pc(4450)+ states have been measured, and the preferred spin-parity assignments are of opposite parity, with one state having spin 3/2 and the other 5/2. Following the first observation, the exotic hadronic character of the J/ψp structure around 4450 MeV/c2 was confirmed in a model-independent way [141]; this analysis gave similar results and excluded the possibility that the data could be described by pK− contributions alone. Further confirmation comes from the amplitude analysis of the Cabibbo-suppressed decay Λ0b → J/ψpπ− [194], where 1885 ± 50 signal candidates were investigated. Different theoretical interpretations have been suggested, including a tightly bound uudcc̄ pentaquark state, a loosely bound baryon-meson molecular state, or triangle-diagram processes.
A partial update of the above analysis was made using the full Run 1 and 2 data sample [195]. A nine-fold increase in statistics is achieved thanks to the larger data sample, improved selection criteria, and the increased pp → bb̄ cross-section at √s = 13 TeV in Run 2. For candidates with a mass consistent with the nominal Λ0b baryon mass, the J/ψp and pK− mass spectra were investigated. In the J/ψp mass distribution, the previously reported peaking structure around 4450 MeV/c2 is confirmed, and a new narrow peak with a mass around 4312 MeV/c2 is found. The Λ∗ → pK− contributions are clearly seen in the Dalitz plot, shown in Fig. 5.16(left).
Since the newly observed peaks are narrow, a full amplitude analysis faces computational challenges, because resolution effects must be included in the formalism, which complicates the fitting procedure. Conversely, narrow peaks cannot be due to reflections from Λ∗ states, which motivates the validity of a one-dimensional fit to the J/ψp invariant mass. The J/ψp mass in the narrow-resonance region, together with the result of the fit, is shown in Fig. 5.16(right). The previously reported peak around 4450 MeV/c2 is now resolved into a two-peak structure of Pc(4440)+ and Pc(4457)+ states, so that in total three narrow pentaquark states are observed. The statistical significance of the two-peak interpretation of the previously reported single Pc(4450)+ structure is 5.4σ, and the statistical significance of the new Pc(4312)+ state is 7.3σ. The masses and widths of the pentaquark candidates are measured; taking into account systematic uncertainties, the widths are consistent with the mass resolution, hence upper limits on the natural widths at the 95% confidence level (CL) are obtained.
In summary, whilst the existence of pentaquark-like resonances is certainly beyond doubt, their exact nature is still unclear. They can be genuine five-quark bound states, or e.g. near-threshold meson-baryon molecules. More studies are required to clarify this.
Charmonium-like exotic states
The enigmatic X(3872) particle was discovered in B+ decays by the Belle collaboration [196], and its existence has subsequently been confirmed by several other experiments [197-199]. The nature of this state is rather unclear: among the open possibilities are conventional charmonium and exotic states such as D∗0D̄0 molecules [200], tetraquarks [201], or mixtures thereof [202]. Determination of the JPC quantum numbers is important to shed light on this ambiguity. The C-parity of the state is positive, since the X(3872) → J/ψγ decay has been observed [203,204]. The CDF experiment analysed three-dimensional angular correlations in a relatively high-background sample of 2292 ± 113 inclusively reconstructed X(3872) → J/ψπ+π−, J/ψ → µ+µ− decays, dominated by prompt production in pp̄ collisions; the unknown polarisation of the X(3872) limits the sensitivity of this measurement of JPC [205]. A χ2 fit of JPC hypotheses to the binned three-dimensional distribution of the J/ψ and π+π− helicity angles and the angle between their decay planes [206-208] excluded all spin-parity assignments except 1++ and 2−+.
Using √s = 7 TeV pp collision data corresponding to 1 fb−1 collected in 2011, the LHCb collaboration performed the first analysis of the complete five-dimensional angular correlations of the B+ → X(3872)K+, X(3872) → J/ψπ+π−, J/ψ → µ+µ− decay chain [209]. About 38 000 candidates passed the multivariate selection within a ±2σ window around the B+ mass in the m(J/ψπ+π−K+) distribution, with a signal purity of 89%. The ∆m ≡ m(J/ψπ+π−) − m(J/ψ) distribution is shown in Fig. 5.17(left); the fit yields 5642 ± 76 and 313 ± 26 candidates for the ψ(2S) → J/ψπ+π− and X(3872) → J/ψπ+π− signals, respectively. The angular correlations in the B+ decay carry information about the X(3872) quantum numbers. To discriminate between the 1++ and 2−+ assignments, a likelihood-ratio test is used, with the test statistic defined as t = −2 ln[L(2−+)/L(1++)]; positive (negative) values of t favour the 1++ (2−+) hypothesis. The value of the test statistic observed in the data is t_data = +99, thus favouring the 1++ hypothesis. A rejection of the 2−+ hypothesis with greater than 5σ significance is demonstrated using a large sample of pseudo-experiments. As shown in Fig. 5.17(right), the distribution of t is reasonably well approximated by a Gaussian function; based on the mean and r.m.s. spread of the t distribution for the 2−+ pseudo-experiments, this hypothesis is rejected with a significance of 8.4σ. The results therefore correspond to an unambiguous assignment of the X(3872) quantum numbers as 1++.
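Given the Gaussian behaviour of the test statistic, the final significance is simple arithmetic; the mean and r.m.s. below are hypothetical stand-ins for the values obtained from the pseudo-experiments:
\begin{verbatim}
# Sketch of the significance estimate; mean/rms values are hypothetical.
t_data = 99.0                 # observed -2 ln[L(2-+)/L(1++)]
t_mean, t_rms = -85.0, 21.9   # t under the 2-+ hypothesis (pseudo-experiments)
print(f"2-+ rejected at {(t_data - t_mean) / t_rms:.1f} sigma")  # ~8.4
\end{verbatim}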
The above result rules out the explanation of the X(3872) as a conventional ηc2(11D2) state. Among the remaining possibilities are the χc1(23P1) charmonium state, and unconventional explanations such as a D∗0D̄0 molecule, a tetraquark state, or a charmonium-molecule mixture. With the larger 3 fb−1 data-set at √s = 7 and 8 TeV, the analysis was repeated in the decay X(3872) → J/ψρ0, without any assumption on the orbital angular momentum [210]; it confirmed the JPC = 1++ assignment for the X(3872) state and also set an upper limit of 4% at 90% CL on the D-wave contribution. A precise determination of the mass and width of the X(3872) state was performed using two minimally overlapping data-sets: the 3 fb−1 Run 1 data-set, in which the X(3872) particles were selected from decays of hadrons containing b quarks [122], and the full Run 1 and 2 data-sets, using a sample of (547.8 ± 0.8) × 103 B+ → J/ψπ+π−K+ decays [123]. In both cases the X(3872) was reconstructed in the J/ψπ+π− final state, and the mass and width were determined from a fit to the J/ψπ+π− mass distribution assuming a Breit-Wigner line shape for the X(3872) state; the results of Refs. [122] and [123] are consistent, and represent the most precise determination of the mass of the X(3872) state and the first measurements of its width. The measured mass corresponds to a binding energy δE, defined as m(D0)c2 + m(D∗0)c2 − m(X(3872))c2, of 70 ± 120 keV. Whilst the proximity of the measured mass to the D0D̄∗0 threshold [211-215] favours the interpretation of this state as a D0D̄∗0 molecule, the large production cross-section of the X(3872) [197,198,213,216,217] is difficult to reconcile with a purely molecular picture. LHCb has also measured the ratio of branching fractions of the radiative decays X(3872) → ψ(2S)γ and X(3872) → J/ψγ; this result is compatible with, but more precise than, previous measurements [204,218], and strongly disfavours a pure molecular interpretation of the X(3872) state.
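The binding-energy arithmetic is easy to cross-check with PDG-like masses; the X(3872) mass below is an illustrative value chosen to reproduce the quoted central value:
\begin{verbatim}
# Cross-check of dE = m(D0) + m(D*0) - m(X(3872)); masses in MeV.
m_d0, m_dstar0 = 1864.84, 2006.85
m_x = 3871.62  # illustrative central value
print(f"dE ~ {(m_d0 + m_dstar0 - m_x) * 1e3:.0f} keV")  # ~70 keV
\end{verbatim}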
Structures in the J/ψφ system have attracted great experimental and theoretical interest since the CDF collaboration reported 3.8σ evidence (14 ± 5 events) for a narrow (Γ = 11.7 +8.3 −5.0 ± 3.7 MeV) near-threshold X(4140) mass peak in a sample of 75 ± 11 reconstructed B+ → J/ψφK+ decays [219]. Much larger widths are expected for charmonium states at this mass, so possible interpretations as a molecular state, a tetraquark state, a hybrid state or a rescattering effect have been discussed. The X(4140) structure was confirmed by CMS [220] and D0 [221,222]; however, searches in B+ → J/ψφK+ decays by the Belle [223,224] and BaBar [225] experiments were negative.
Using a 0.37 fb−1 data-set at √s = 7 TeV (346 ± 20 signal B+ → J/ψφK+ decays), LHCb initially found no evidence for the narrow X(4140) structure [226], in 2.4σ disagreement with the CDF measurement, as seen in Fig. 5.18(left). However, using the significantly larger sample of 4286 ± 151 B+ → J/ψφK+ decays from the Run 1 data-set, with roughly uniform efficiency across the entire J/ψφ mass region, LHCb performed a full amplitude analysis, including resonant contributions from K∗ resonances decaying into φK+ as well as possible resonances in the J/ψφ system [142,143]. Four resonance contributions, labelled X(4140), X(4274), X(4500) and X(4700), with quantum numbers 1++, 1++, 0++ and 0++, respectively, are observed, as shown in Fig. 5.18(right), where the total fit is given by the red histogram [142,143]. The statistical significances vary from 5.6 to 8.4σ. The widths of the states are found to be between 56 and 120 MeV, significantly exceeding the narrow width of the X(4140) reported by CDF.
Exotic particles with quantum numbers allowing decays into ηcπ− are predicted in several models [229]. Using a 4.7 fb−1 data sample collected at 7, 8 and 13 TeV, the LHCb collaboration performed a Dalitz-plot analysis of B0 → ηcK+π− decays, where the ηc meson is reconstructed in the ηc → pp̄ final state [230]. Evidence was found for a new exotic resonance in the ηcπ− system, subsequently dubbed the X(4100)− by the PDG; its significance exceeds three standard deviations.
Structures in the J/ψJ/ψ mass spectrum
The production of J/ψJ/ψ pairs in high-energy pp collisions was observed for the first time by the LHCb experiment, using a 37.5 pb−1 data sample collected in 2010 at √s = 7 TeV [231]. The J/ψJ/ψ mass spectrum was studied in a sample of 116 ± 16 signal pairs, and no structure was found. The subsequent analysis of 279 pb−1 of data collected in 2015 at √s = 13 TeV [232] showed the dominant role of the double-parton-scattering (DPS) mechanism in J/ψJ/ψ production, over the single-parton-scattering (SPS) mechanism; the latter includes both a non-resonant SPS contribution and possible cc̄cc̄ tetraquark production. Using the full Run 1 and 2 data-set, the J/ψJ/ψ mass spectrum was studied in more detail [233]. The data, shown in Fig. 5.19, were found to be inconsistent with the hypothesis of non-resonant SPS plus DPS in the range 6.2 < m(J/ψJ/ψ) < 7.4 GeV/c2, where cc̄cc̄ tetraquarks decaying into J/ψJ/ψ pairs are expected. A narrow peaking structure at m(J/ψJ/ψ) ≈ 6.9 GeV/c2, matching the line shape of a resonance, and a broader structure close to threshold were found. The global significances of the broader structure near threshold and of the narrow peak around 6.9 GeV/c2 (given that the other structure exists) are each determined to be larger than 5 standard deviations. The structures are consistent with hadronic states made up of four charm quarks; alternatively, they may result from near-threshold rescattering effects, since the χc0χc0 and χc1χc0 thresholds sit at 6829.4 MeV/c2 and 6925.4 MeV/c2, respectively.
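For orientation, the DPS yield is commonly estimated with the standard "pocket formula", which assumes two independent hard scatters and parameterises the transverse proton profile with an empirical effective cross-section σeff:
\[
\sigma^{J/\psi J/\psi}_{\rm DPS} \simeq \frac{1}{2}\,\frac{\big(\sigma^{J/\psi}\big)^{2}}{\sigma_{\rm eff}}.
\]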
Light hadron spectroscopy
η − η′ mixing has been studied by LHCb in B0(s) → J/ψη(′) decays, resulting in four observed decay modes, using the Run 1 data-set [234]. The η and η′ mesons were identified in the decay modes η → π+π−π0, η′ → ηπ+π− and η′ → ρ0γ. For decays of B0s (B0) mesons, the η(′) mesons are formed from initial ss̄ (dd̄) quark pairs, hence the measurement of the ratios of branching fractions of these decays allows a precise determination of the η − η′ mixing angle. It also probes the gluonium component of the η′ meson.
Excited strange mesons have been studied in the φK+ system via the full amplitude fit of B+ → J/ψφK+ decays using the LHCb Run 1 data-set [142,143]. Even though no peaking structures are observed in the φK+ mass distribution, correlations in the decay angles reveal a rich spectrum of K∗+ resonances. In addition to the angular information contained in the K∗+ and φ decays, the J/ψ decay also helps to probe these resonances, as the helicity states of the K∗+ and J/ψ mesons originating from the B+ decay must be equal. Unlike the earlier scattering experiments investigating K∗ → φK decays, good sensitivity to states with both natural and unnatural spin-parity combinations is achieved.
The dominant 1+ partial wave has a substantial non-resonant component and at least one resonance with a significance of 7.6σ; there is also 2σ evidence that this structure is better described by two resonances, matching expectations for the two 2P1 excitations of the kaon. Also prominent is the 2− partial wave, which contains at least one resonance at 5.0σ significance; this structure, too, is better described by two resonances, at 3.0σ significance, whose masses and widths are in good agreement with the well-established K2(1770) and K2(1820) states, matching the predictions for the two 1D2 kaon excitations. The 1− partial wave exhibits 8.5σ evidence for a resonance which matches the K∗(1680) state, well established in other decay modes and matching expectations for the 13D1 kaon excitation; this is the first observation of its decay to the φK final state. The 2+ partial wave has a smaller intensity, but provides 5.4σ evidence for a broad structure consistent with the K∗2(1980) state, previously observed in other decay modes and matching expectations for the 23P2 state. The K(1830) state (a 31S0 candidate), earlier observed in the φK decay mode in K−p scattering, is also confirmed, at 3.5σ significance; its mass and width are now properly evaluated with uncertainties for the first time.
Measurements not originally planned in LHCb
While originally designed to study the production and decay of b and c hadrons, LHCb has extended its physics programme to other areas, such as physics with jets, the production of W and Z bosons, direct ("open") searches for new particles, and nuclear collisions. Selected highlights are summarised below.
Production of EW bosons W and Z
LHCb has measured the production of Z and W bosons both inclusively [235] and in association with jets [236], reconstructed mainly in muonic final states, using the data collected at √s = 8 TeV. Decays to e+e− [237], τ+τ− [238] and eν [239] have also been measured; however, the muon channel is the most efficient, thanks to the excellent performance of the muon system (see Sect. 2.6). The Z → µ+µ− decay shows a spectacularly clean signal, as shown in Fig. 6.1(a) [240], and the W → µν channel also manifests as a clear signal, shown in Fig. 6.1(b) [235]. The absolute and differential cross-sections, their ratios, and charge asymmetries have been measured and compared with theoretical predictions. Figure 6.2(left) shows the comparison of the W and Z cross-section measurements with SM predictions, showing good agreement.
Jets in LHCb
Measurements of jets at LHCb address several interesting areas of study:
• jet properties and heavy-quark jet tagging;
• the constraining of proton parton density functions (PDFs) and the probing of hard QCD in a unique kinematic range;
• direct searches for the Higgs boson decaying to bb̄ and cc̄ final states;
• direct searches for long-lived beyond-the-SM particles decaying into jets.
Jets are reconstructed in LHCb using a particle-flow algorithm [241], clustered using the anti-kT algorithm with R = 0.5 [242]. The jet reconstruction is calibrated in data using Z → µ+µ− decays that also contain a jet, where the jet is reconstructed back-to-back with respect to the Z. The efficiency for reconstructing and identifying jets is around 90% for jets with transverse momentum pT > 20 GeV/c. Furthermore, LHCb has developed a method to tag jets [241] and to determine whether they correspond to a b or c quark or to a lighter parton. Jets are tagged whenever a secondary vertex (SV) is reconstructed close enough to the jet in terms of ∆R = √(∆φ2 + ∆η2). This provides a light-jet mistag rate below 1%, with an efficiency for b (c) jets of ∼65% (∼25%).
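A minimal sketch of the ∆R-based SV-jet association is given below; the event structures are hypothetical, and only the geometry of the matching follows the description above:
\begin{verbatim}
import math

def delta_r(phi1, eta1, phi2, eta2):
    """Angular distance, with the azimuthal difference wrapped to [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(dphi, eta1 - eta2)

def sv_tagged(jet, sv, dr_max=0.5):
    """Tag a jet as heavy flavour if a secondary vertex lies within dr_max."""
    return delta_r(jet["phi"], jet["eta"], sv["phi"], sv["eta"]) < dr_max
\end{verbatim}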
Moreover, using the SV and jet properties, two boosted decision trees (BDTs) have been developed, one to separate heavy from light jets and one to separate b from c jets. A summary of the obtained performance is shown in Fig. 6.4, where the efficiency of flavour identification is plotted as a function of the misidentification of light jets.
Production of W and Z with jets. W and Z production has also been studied in association with jets [236], in the W + j, Z + j, W + bb̄ and W + cc̄ channels. Jets are reconstructed as described above, whilst the Z and W bosons are reconstructed mainly in muonic final states. W-boson production with jets is discriminated from misidentified QCD background processes using a muon isolation variable, built as the ratio between the pT of the jet containing the muon and the pT of the muon alone; Fig. 6.2(b) shows the distribution of this variable, with genuine muons from the W boson peaking at 1. LHCb has measured the W± + bb̄ and W± + cc̄ production cross-sections using a sample of pp collisions taken at √s = 8 TeV, with a high-pT isolated lepton from the W decay (electron or muon) and two heavy-flavour (b or c) tagged jets in the final state. The heavy-quark tagging uses the method described above, and the W + cc̄ channel is studied here for the first time. In order to extract the different signal components, a simultaneous four-dimensional fit is performed on the µ+, µ−, e+ and e− samples, where the electron channels are included to increase the statistics. The four variables used in the fit are the dijet mass, a multivariate discriminator separating tt̄ from W + bb̄ and W + cc̄ events, and multivariate discriminators separating b- from c-jets, used for both accompanying jets [243]. In this fit, the background from QCD multi-jet production is extrapolated from a control sample in data, other background contributions are fixed to SM theoretical expectations, and only the signal components are left unconstrained. The projections of the resulting fit on the four input variables for the µ+ sample are illustrated in Fig. 6.6. The statistical significances of the measured W+ + bb̄, W+ + cc̄, W− + bb̄, W− + cc̄ and tt̄ production cross-sections are 7.1σ, 4.7σ, 5.6σ, 2.5σ and 4.9σ, respectively. The cross-sections measured in the LHCb fiducial acceptance agree well with next-to-leading-order (NLO) theory predictions.
Search for long-lived new particles. The LHCb detector has been designed to measure very rare decays of b quarks, with the aim of detecting the presence of new beyond-the-SM particles through their couplings in loops, which could change the expected SM branching ratios. This approach reaches mass scales higher than those explored in open new-particle searches, where the particle is produced directly in pp collisions. The traditional way to search for new particles is, as for ATLAS and CMS, to reconstruct their decays through exclusive final states in their invariant masses, or with missing-energy techniques, for which hermeticity is a mandatory feature of the detector. In all cases, assumptions on the couplings, the production mechanism and the decay modes must be made, and the search is therefore guided by theoretical models.
For LHCb, the most promising final states are decays which form a secondary vertex, for which the LHCb VELO (see Sect. 2.2) is extremely efficient, i.e. when the new particles are long-lived. Several null results have been published, often less sensitive than those of the General-Purpose Detectors (GPDs); however, in some cases LHCb can extend the exclusion region. An example is given in Fig. 6.7 for a Higgs-like particle H0 decaying into long-lived particles πV, each decaying into jets forming a separated secondary vertex. The LHCb exclusion region is compared with those of ATLAS and CMS, which demonstrates the complementarity of the LHC experiments. The limits in this specific case are competitive, despite a factor of 10 less luminosity.
Production of tt̄ pairs. Top-quark production is an excellent example where the forward acceptance of the LHCb detector has several advantages with respect to the central region instrumented by ATLAS and CMS. The t-quark cross-section can provide important constraints on the large-x gluon PDF, to which the forward kinematic region is particularly sensitive. In addition, the forward region contains a greater fraction of events with quark-initiated production than the central region, enhancing the size of the tt̄ asymmetries visible at LHCb. The challenges for LHCb in measuring tt̄ production are the small acceptance and the impossibility of a missing-energy measurement; moreover, the luminosity is limited by the need to reduce multiple interactions for measurements in the b sector, which reduces the available tt̄ statistics.
Top-quark production is presented here at √s = 13 TeV, which gives an increase in the production rate of an order of magnitude with respect to 8 TeV, bringing these new channels within statistical reach. The tt analysis is based on an integrated luminosity of 2 fb⁻¹, with an eµb final state. Hence the final state is the decay chain tt → bW+bW− → e+µ−bb, where at least one b-jet is reconstructed. This is a very pure final state, as the second lepton suppresses W + bb production and the different-flavoured leptons suppress Z + bb. The signal purity is illustrated in Fig. 6.8(a), and the cross-section is measured to be σtt = 126 ± 19 (stat) ± 16 (syst) ± 5 (lumi) fb, which is compatible with SM predictions.
Z → bb decay. This measurement is an important validation of the LHCb jet reconstruction and b-tagging performance. Two b-tagged jets are reconstructed, with a third balancing jet also reconstructed to help control the QCD background and to define signal and control regions using a multivariate technique. The background-subtracted signal distribution is shown in Fig. 6.8(b) [244]. The signal is observed with a statistical significance of 6σ and the measured cross-section is found to be compatible with SM predictions at next-to-leading order.
Figure 6.6: Projections of the simultaneous four-dimensional fit for the µ+ sample [243] onto: (a) the dijet mass, (b) the discriminator separating tt from W + bb and W + cc, and the discriminator separating b- and c-jets for (c) the leading and (d) the sub-leading jet. In light blue is W + bb, in green tt, in red W + cc and in black the background.
Dark Photons
The possibility that dark matter particles may interact via unknown forces, hardly felt by SM particles, has motivated substantial effort to search for dark-sector forces (see [245] for a review). A dark-force scenario involves a massive dark photon, A′. In the minimal model, the dark photon does not couple directly to charged SM particles, but it can gain a weak coupling to the SM electromagnetic current via kinetic mixing. The strength of this coupling is suppressed by a factor ε with respect to that of the SM photon. If the kinetic mixing arises from processes whose amplitudes involve one or two loops containing high-mass particles, perhaps even at the Planck scale, then 10⁻¹² ≤ ε² ≤ 10⁻⁴ is expected [245].
Constraints have been placed on visible A′ decays by previous beam-dump, fixed-target, collider and rare-meson-decay experiments; the few-loop region is ruled out for dark photon masses m(A′) ≲ 10 MeV/c². Additionally, the region ε² < 5 × 10⁻⁷ is excluded for m(A′) < 10.2 GeV/c², along with about half of the remaining few-loop region below the dimuon threshold. Many ideas have been proposed to further explore the [m(A′), ε²] parameter space, including an inclusive search for A′ → µ⁺µ⁻ decays with the LHCb experiment. A dark photon produced in proton-proton collisions via γ*–A′ mixing inherits the production mechanisms of an off-shell photon with m(γ*) = m(A′); therefore both the production and decay kinematics of the A′ → µ⁺µ⁻ and γ* → µ⁺µ⁻ processes are identical.
Figure 6.7: [...] where πV is a long-lived particle decaying to jets. Exclusion regions for ATLAS and CMS are also shown.
LHCb has performed searches for both prompt-like and long-lived dark photons [246] produced in pp collisions at a centre-of-mass energy of 13 TeV, using A′ → µ⁺µ⁻ decays and a data sample corresponding to an integrated luminosity of 1.6 fb⁻¹ collected during 2016. The prompt-like A′ search is performed from near the dimuon threshold up to 70 GeV, above which the m(µ⁺µ⁻) spectrum is dominated by the Z boson. The prompt-like dimuon spectrum is shown in Fig. 6.9. Three main types of background contribute to the prompt-like A′ search: prompt off-shell γ* → µ⁺µ⁻, which is irreducible; resonant decays to µ⁺µ⁻, whose mass-peak regions are excluded in the search (see Fig. 6.9); and various types of misidentification, which are highly suppressed by the stringent muon-identification and prompt-like requirements applied in the trigger.
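To make the resonance-veto step concrete, the sketch below shows one way such exclusion windows could be applied to dimuon candidates before the mass scan. Every window edge here is a placeholder chosen for illustration, not the selection actually used by LHCb.

```python
# Illustrative only: exclude known dimuon-resonance regions before scanning
# m(mu+mu-) for a dark-photon signal. All window edges are placeholders,
# not the values used in the LHCb analysis.
RESONANCE_VETOES_GEV = [
    (0.52, 0.58),   # eta region (placeholder)
    (0.74, 0.84),   # rho/omega region (placeholder)
    (0.97, 1.07),   # phi region (placeholder)
    (2.9, 3.9),     # J/psi and psi(2S) region (placeholder)
    (9.1, 10.6),    # Upsilon(1S-3S) region (placeholder)
]

def in_veto(mass_gev: float) -> bool:
    """Return True if the candidate mass falls inside a vetoed window."""
    return any(lo <= mass_gev <= hi for lo, hi in RESONANCE_VETOES_GEV)

def scan_candidates(masses_gev):
    """Keep candidates outside the vetoed regions and below the Z peak."""
    return [m for m in masses_gev if m < 70.0 and not in_veto(m)]

print(scan_candidates([0.25, 3.1, 5.0, 9.46, 45.0]))  # -> [0.25, 5.0, 45.0]
```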
For the long-lived dark-photon search, i.e. with displaced dimuon vertices, the stringent criteria applied in the trigger make contamination from prompt muon candidates negligible. The long-lived A′ search is restricted to the mass range 214 ≤ m(A′) ≤ 350 MeV/c², where the data sample potentially provides sensitivity. In this case the background is dominated by photon conversions to µ⁺µ⁻ in the VELO, b-hadron decays where two muons are produced in the decay chain, and the low-mass tail from K0S → π⁺π⁻ decays where both pions are misidentified as muons.
In the dark-photon searches, no evidence for a signal is found, and 90% CL exclusion regions are set on the γ–A′ kinetic-mixing strength, shown in Fig. 6.10. The constraints placed on prompt-like dark photons are the most stringent to date for the mass range 10.6 ≤ m(A′) ≤ 70 GeV/c², and are comparable to the best existing limits for m(A′) ≤ 0.5 GeV/c². The search for long-lived dark photons is the first to achieve sensitivity using a displaced-vertex signature. These results demonstrate the unique sensitivity of the LHCb experiment to dark photons, even using a data sample collected with a trigger that is inefficient for low-mass A′ → µ⁺µ⁻ decays. Using knowledge gained from this analysis, the software-trigger efficiency for low-mass dark photons has been significantly improved for 2017 data taking.
In Run 3 to come, the planned increase in luminosity and the removal of the hardware-trigger stage should increase the number of expected A′ → µ⁺µ⁻ decays in the low-mass region by O(100-1000) compared to the 2016 data sample.
Nuclear Collisions
Ultra-relativistic heavy-ion collisions allow the study of the so-called Quark-Gluon Plasma (QGP) state of matter, a hot and dense medium of deconfined quarks and gluons in which heavy quarks are crucial probes. Produced via hard interactions at the early stage of the nucleus-nucleus collision, before the QGP formation, heavy quarks experience the entire evolution of the QGP. A correct interpretation of these probes requires a full understanding of Cold Nuclear Matter (CNM) effects, which are present regardless of the formation of the deconfined medium. To disentangle CNM effects from genuine QGP effects, heavy-flavour production in proton-nucleus collisions is studied.
The LHCb experiment has collected data from proton-lead (pPb) and lead-lead (PbPb) collisions. Since the LHCb detector covers only one direction of the full acceptance, there are two distinct beam configurations for the pPb collisions. In the forward (backward) configuration, the proton (lead) beam points from the interaction point into the LHCb detector. The proton beam and the lead beam have different energies per nucleon in the laboratory frame, hence the nucleon-nucleon centre-of-mass frame is boosted in the proton direction, i.e. shifted in rapidity, y. This results in an LHCb acceptance of 1.5 < y < 4 for the forward configuration and −5 < y < −2.5 for the backward configuration.
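The size of this rapidity shift follows from the per-nucleon beam energies alone. The short calculation below uses illustrative Run 2 beam energies (6.5 TeV protons, so 6.5 × 82/208 ≈ 2.56 TeV per nucleon for lead) and reproduces the shift of about 0.47 that maps the instrumented lab-frame coverage onto the quoted rapidity windows.

```python
import math

# Per-nucleon beam energies for pPb running (illustrative Run 2 values):
# the magnetic rigidity fixes the Pb energy to Z/A times the proton energy.
E_PROTON_TEV = 6.5
Z_PB, A_PB = 82, 208
E_PB_PER_NUCLEON_TEV = E_PROTON_TEV * Z_PB / A_PB  # ~2.56 TeV

# sqrt(s_NN) and the rapidity shift of the nucleon-nucleon CM frame,
# valid in the ultra-relativistic limit where masses are negligible.
sqrt_s_nn = 2.0 * math.sqrt(E_PROTON_TEV * E_PB_PER_NUCLEON_TEV)
y_shift = 0.5 * math.log(E_PROTON_TEV / E_PB_PER_NUCLEON_TEV)

print(f"sqrt(s_NN) = {sqrt_s_nn:.2f} TeV, CM rapidity shift = {y_shift:.3f}")
# -> sqrt(s_NN) = 8.16 TeV, CM rapidity shift = 0.466
```

Applying this shift, with the appropriate sign for each beam configuration, to a lab-frame coverage of roughly 2 to 4.5 units reproduces the forward and backward acceptance windows quoted above.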
In addition, LHCb provides the unique capability at the LHC to collect fixed-target collisions, utilising the System for Measuring the Overlap with Gas (SMOG) [247].
Originally designed for precise luminosity measurements, SMOG allows the injection of a noble gas, such as argon or helium, inside the primary LHC vacuum around the VELO detector at a pressure of O(10⁻⁷) mbar, enabling measurements of p-gas and ion-gas collisions and the operation of LHCb as a fixed-target experiment. Since 2015, LHCb has exploited SMOG in physics runs using special fills not devoted to pp physics, with a variety of beam (p or Pb) and target configurations. This allows unique production studies which are relevant to cosmic-ray and heavy-ion physics.
The heavy-ion results on heavy-flavour production in pPb, PbPb and fixed-target collisions collected by LHCb bring yet more diversity and complementarity into the field. In this context too, the excellent momentum resolution and particle identification provided by LHCb are especially suited to measuring heavy-quark production. The LHCb collaboration joined the LHC heavy-ion collider programme alongside the other participants with a pPb run at 5 TeV in 2013 and a PbPb run in 2015. Following these pioneering data runs, significantly larger data sets have been successfully recorded.
Fixed-target collisions. LHCb has reported first measurements of heavy-flavour production in fixed-target mode [248]. J/ψ production cross-sections and D0 yields have been measured in pHe collisions at √sNN = 86.6 GeV and pAr collisions at √sNN = 110.4 GeV, over the rapidity range 2 < y < 4.6. The cross-section measurements are made for the pHe data only, since the luminosity determination is only available for this sample.
After correction for acceptances, efficiencies and branching fractions, the cross-sections are extrapolated to the full phase space. The D 0 measurement is used to extract the cc cross-section. The J/ψ and cc measurements are compared in Fig. 6.11 with other experiments at different centre-of-mass energies and with theoretical predictions.
With pHe data, LHCb also measured the antiproton production cross-section [249], a very interesting direct determination that helps the interpretation of the antiproton cosmic-ray flux detected by space experiments [250].
Figure 6.11: J/ψ (left) and cc (right) cross-section measurements as a function of the centre-of-mass energy, compared with other experimental data (black points). The bands correspond to fits based on NLO NRQCD calculations for J/ψ and NLO pQCD calculations for cc, respectively. More details are given in [248].
Collider mode. In collider mode, the LHCb experiment has collected proton-lead collision data at √sNN = 5 TeV in 2013 and at 8.16 TeV in 2016. The 2013 data sample corresponds to an integrated luminosity of 1.06 ± 0.02 nb⁻¹ for the forward and 0.52 ± 0.01 nb⁻¹ for the backward region, whilst the 2016 data correspond to 13.6 ± 0.3 nb⁻¹ for the forward and 20.8 ± 0.5 nb⁻¹ for the backward region. These data samples are used to measure quarkonium and open charm or beauty production. Υ(nS)-meson production is studied in the decay to two opposite-sign muons [251]. The measurements include the differential production cross-sections of the Υ(1S) and Υ(2S) states and the nuclear modification factors, as functions of the transverse momentum and rapidity of the Υ(nS) state in the nucleon-nucleon centre-of-mass frame. The production cross-section of the Υ(3S), integrated over phase space, is also measured, and the production ratios between all three Υ(nS) states are determined.
The three states are well identified in both the pPb and Pbp configurations, as shown in Fig. 6.12. The nuclear modification factors are compared with theoretical predictions, and a suppression of bottomonium production in pPb collisions is observed. The LHCb measurements improve the understanding of cold-nuclear-matter effects down to low pT.
Future prospects
Over the years 2011-2018, both the LHC machine and the LHCb detector performed extremely well, providing great improvements with respect to the B-Factory measurements, in particular the pioneering CP-violation measurements (Sect. 3), the observation of the rarest beauty-meson decays (Sect. 4) and the discovery of pentaquarks (Sect. 5). LHCb also observed and reported a number of interesting hints of anomalies related to the flavour sector, which have generated much theoretical attention, especially relating to rare decays and lepton-flavour universality. The precision achieved by the experiment is in line with prior expectations, as documented in [252], which demonstrates the remarkable understanding of all aspects of the detector.
To further pursue these exciting results and fully exploit the flavour-physics potential of the LHC, the LHCb detector required an upgrade to increase the rate and efficiency of data taking beyond Long Shutdown 2 (LS2). Consequently, the LHCb detector is now undergoing a major upgrade that is well underway and will allow the experiment to maintain its superb performance into the future.
At present, the hardware-based trigger limits the amount of data taken each year to a maximum of about 2 fb⁻¹. In addition, most of the detector sub-systems would not cope with higher luminosity, due either to their outdated readout electronics or to radiation-induced damage sustained during Run 1 and Run 2 data taking. The initial ideas regarding the upgrade were formulated in 2011 [253], and further solidified in 2012 when the Technical Design Report was released [254]. Many of the subdetector components are largely unchanged in the upgrade, with the exception of a new pixel vertex detector replacing the current VELO, the TT stations being replaced by a new silicon micro-strip upstream tracker (UT), and the straw-tube outer chambers being replaced by a scintillating-fibre detector. Details of each subdetector upgrade can be found in refs. [254-258].
The crucial point of the upgrade project is to build a reliable and robust detector capable of operating at higher luminosity without compromising the excellent physics performance of the current detector. This, in turn, cannot be achieved by redesigning the hardware components alone, but has to be augmented by a new, innovative and flexible trigger system. A critical part of the upgrade strategy is the design of a so-called "trigger-less" front-end electronics system capable of reading out the full detector at 40 MHz, i.e. at the LHC clock frequency. Completely new chips have been designed and tested for the pixel sensors [255], the UT [256] and the RICH detectors [257].
The upgraded detector will operate at an instantaneous luminosity of 2 × 10³³ cm⁻² s⁻¹, which allows the collection of around 10 fb⁻¹ of data per year as a target, keeping pace with Belle II [259], the other major flavour-physics experiment. To run efficiently at increased luminosity, the present hardware-based trigger will be replaced, and events will be selected by the software-based HLT alone. To cope with the much higher event rate (typically five proton-proton interactions per beam crossing), a flexible software trigger will be employed, coupled with a re-optimized network capable of handling a multi-terabyte data stream. The upgraded trigger will process every event (the visible rate at LHCb is estimated to reach 30 MHz) using information from every sub-detector to enhance its decision and maximize signal efficiencies, especially for the hadronic channels. The precision of the particle-identification and track-quality information will be identical to that available offline, and will allow the rate to be reduced to 20-100 kHz. The new trigger strategy will increase the triggering efficiency for the hadronic channels by a factor of 2 to 4 with respect to Run 1 [260], corresponding to an increase by a factor of 10 to 20 in the hadronic yields.
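To get a feel for the data volumes involved, the back-of-the-envelope estimate below combines the quoted 30 MHz visible rate with an assumed average event size of roughly 100 kB; the event size is our assumption for illustration, not a number from the text.

```python
# Back-of-the-envelope readout bandwidth for the upgraded detector.
# The 30 MHz visible interaction rate is quoted in the text; the average
# event size of ~100 kB is an assumption made here for illustration.
visible_rate_hz = 30e6
event_size_bytes = 100e3          # assumed

stream_tb_per_s = visible_rate_hz * event_size_bytes / 1e12
print(f"raw stream ~ {stream_tb_per_s:.0f} TB/s")   # ~3 TB/s

# After the software trigger reduces the rate to 20-100 kHz:
for out_rate in (20e3, 100e3):
    print(f"output at {out_rate/1e3:.0f} kHz ~ "
          f"{out_rate * event_size_bytes / 1e9:.0f} GB/s")
# -> 2 GB/s and 10 GB/s under the same event-size assumption
```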
Finally, plans for a further future upgrade (called Upgrade II), to exploit the full potential of flavour physics during the HL-LHC operation, have now started [261,262]. This upgrade would require a complete redesign of the detector, able to take data at instantaneous luminosities of 2 × 10³⁴ cm⁻² s⁻¹ and to collect ∼50 fb⁻¹ of data per year, guaranteeing LHCb operation beyond 2030.
The LHCb Upgrades I and II will significantly improve the reach of key physics measurements. By way of example, the precision quoted in Section 3.3.3 on today's measurement of the CKM angle γ, O(5°), will be improved to 1° with the new Upgrade I hadronic trigger and luminosity increase. This further improves to 0.35° with the large statistics accumulated with LHCb Upgrade II. As discussed in Section 4.1, there is currently not enough sensitivity to measure the rare-decay branching fraction B(B⁰ → µ⁺µ⁻). With LHCb Upgrade I, a first observation should be possible, but Upgrade II will be required to reach a ∼10% precision on the branching ratio. This sensitivity will allow a meaningful measurement of the ratio of B⁰ to B⁰s decays into the µ⁺µ⁻ final state, and will constitute a clean and powerful test of extensions beyond the SM.
Summary and Conclusions
In ten years of operation at the LHC, the LHCb experiment has delivered a remarkably rich programme of physics measurements. In this paper the 25-year evolution of the experiment since its inception has been described, and its successes and achievements have been summarised. The diversity of the physics output has truly shown LHCb to be a "general-purpose detector in the forward region".
Over the last ten years, LHCb has measured the CKM quark-mixing matrix elements and CP-violation parameters to world-leading precision in the b- and c-quark systems. The experiment has measured very rare decays of b and c mesons and baryons, some with branching ratios down to order 10⁻⁹, testing Standard Model predictions at unprecedented levels. Hints of new physics in rare-decay angular distributions and in tests of lepton universality in electron-muon decay modes have generated considerable theoretical interest. The global knowledge of b- and c-quark states has improved significantly, through discoveries of many new resonances already anticipated in the quark model, and also through the observation of new exotic tetraquark and pentaquark states. In addition, many interesting measurements have been made that were not anticipated in the original LHCb proposal, such as electroweak physics, jet measurements, new long-lived-particle searches and heavy-ion physics. An incredibly rich harvest of fundamental results has been produced, many of which will remain in textbooks for years to come.
LHCb has recently been upgraded and will start data-taking early in 2022 at a factor of 5 higher luminosity, incorporating new subdetectors and a software-based trigger. Statistics in hadronic modes will be improved by a factor of 10-20, allowing much more precise measurements, especially of very rare b- and c-hadron decays. In addition, the future planned Upgrade II at the HL-LHC, in the early 2030s, will ensure that LHCb maintains its lead in flavour physics for at least the next two decades.
Acknowledgments
LHCb is at present a collaboration of about 1000 authors. The rich variety of outstanding results has been made possible by the dedicated work of many colleagues: detector builders and operators, data verifiers and analysts.
We would like to acknowledge the important roles played by T. Nakada as first spokesman and by the late H.J. Hilke as Technical Coordinator, who successfully managed the realisation of this very complex detector.
Finally, we would also like to thank our colleagues P. Koppenburg and G. Passaleva, who made helpful and insightful comments on this paper.
"year": 2021,
"sha1": "f67bd89e39236252922edc605bf1e21f38e3433f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjh/s13129-021-00002-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "f696aa33da9964907253e4d5c2d88902916d2557",
"s2fieldsofstudy": [
"Physics",
"History"
],
"extfieldsofstudy": [
"Physics"
]
} |
Microfluidic active pressure and flow stabiliser
In microfluidics, a well-known challenge is to obtain reproducible results, which are often constrained by unstable pressures or flow rates. Existing stabilisers are made for low-pressure microfluidics or high-pressure macrofluidics and often consist of passive membranes, which cannot stabilise long-term fluctuations. In this work, a novel stabilisation method that is able to handle high pressures in microfluidics is presented. It is based on upstream flow capacitance and thermal control of the fluid's viscosity through a PID-controlled restrictor chip. The stabiliser consists of a high-pressure-resistant microfluidic glass chip with integrated thin films used for resistive heating. Thereby, the stabiliser has no moving parts. The quality of the stabilisation was evaluated with an ISCO pump, an HPLC pump, and a Harvard pump. The stability was greatly improved for all three pumps, with the ISCO reaching the highest relative precision of 0.035% and the best accuracy of 8.0 ppm. Poor accuracy of a pump was compensated for in the control algorithm, as it otherwise reduced the capacity to stabilise over longer times. As the dead volume of the stabiliser was only 16 nL, it can be integrated into micro-total-analysis or other lab-on-a-chip systems. With this work, a new approach to improving the control of microfluidic systems has been achieved.
Theory
To reduce fluctuations from a pump there has to be room for a buffer capacitance, meaning that the flow capacitance can both increase and decrease. The buffer capacitance enables fluid to be released when the pressure is too low, and stored when the pressure is too high. Since the flow rate and the pressure are linearly related to each other by the Hagen-Poiseuille equation, the same theory applies for stabilising the flow rate as for the pressure, Eq. (1):

ΔP = 128µLQ/(πD_H⁴) (1)

Here, one can observe how the pressure drop, ΔP, or the flow rate, Q, is also linearly dependent on the viscosity, µ, for a given channel length, L, and hydraulic diameter, D_H.
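As a quick numerical illustration of Eq. (1), the sketch below evaluates the pressure drop across a restrictor channel at two temperatures. The channel geometry is invented for illustration, and the water-viscosity correlation is a standard Vogel-type fit, not a parametrisation from this paper.

```python
import math

def water_viscosity_pa_s(temp_c: float) -> float:
    """Approximate dynamic viscosity of water (Pa*s) from a standard
    Vogel-type correlation; adequate between roughly 0 and 100 C."""
    t_k = temp_c + 273.15
    return 2.939e-5 * math.exp(507.88 / (t_k - 149.3))

def pressure_drop_bar(q_ul_min: float, mu_pa_s: float,
                      length_m: float, d_h_m: float) -> float:
    """Hagen-Poiseuille pressure drop, Eq. (1), returned in bar."""
    q_m3_s = q_ul_min * 1e-9 / 60.0
    dp_pa = 128.0 * mu_pa_s * length_m * q_m3_s / (math.pi * d_h_m**4)
    return dp_pa / 1e5

# Invented example geometry: a 30 mm long restrictor with D_H = 35 um.
for temp_c in (20.0, 60.0):
    dp = pressure_drop_bar(100.0, water_viscosity_pa_s(temp_c), 30e-3, 35e-6)
    print(f"{temp_c:.0f} C: {dp:.1f} bar")
# -> 20 C: 13.6 bar; 60 C: 6.3 bar -- heating roughly halves the restriction
```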
For dampers using a flexible membrane, there is a compressible gas on the opposite side of the fluid. When pressure rises, the membrane dilates and compresses the gas. In this way the volume of the fluid increases, which lowers the pressure. If the pressure gets too low, the membrane moves back while the gas decompresses. The buffer capacitance depends on the volume change the flexible membrane causes. At lower pressures, the same principle can be temporarily used with just an air bubble that compresses without any membrane 17 . Both cases use gas as the compressible media but there are also examples using moving membranes with liquids on both sides 24 .
However, by using a sufficient restriction, it is possible to stabilise the flow using only the compressibility of the fluid itself, instead of changing the volume. Though many liquids are referred to as incompressible, they never fully are. Water normally has a compressibility of 46 ppm/bar, which corresponds to a volume change of 4.6 µL/bar for a pump with a volume of 100 mL. In general, the effect of compressibility is mostly noticed in the long waiting time it causes. When the flow rate or pressure is changed, extra fluid has to be added or removed before the system is stable, limiting several applications. Nonetheless, the compressibility can also be useful, by reducing fluctuations.
Herein, a fluctuation from the pump will change the pressure upstream of the stabiliser, while the downstream pressure is kept constant by adjusting the pressure drop through the stabiliser. Increasing the temperature in the restrictor will decrease the viscosity (Fig. 1), which increases the flow rate and lowers the pressure drop, Eq. (1).
Decreasing the temperature will, on the contrary, increase the restriction and retard the flow. In this way, the use of the buffer capacitance, V_bc, is not tied to the volume of the stabiliser, but to its ability to change the compression of the fluid 25 :

V_bc = β · V_upstream · dP_upstream (2)
In the equation, V_upstream is the total volume upstream of the stabiliser, including the pump volume, while β is the compressibility of the fluid. The change of the upstream pressure, dP_upstream, over time is caused by fluctuations from the pump and by the heater changing the viscosity of the fluid in the stabiliser. If a fluctuation causes the initial flow rate, Q1, to temporarily decrease to Q2, the stabiliser will compensate for this by increasing the temperature and changing the viscosity of the fluid in the stabiliser from µ1 to µ2, resulting in an upstream pressure change of

dP_upstream = 128L/(πD_H⁴) · (µ2·Q2 − µ1·Q1) (3)

Whether a fluctuation can be fully damped depends on its magnitude and frequency in relation to the buffer capacitance. A fluctuation causing a flow difference of dQ for a certain time t requires a buffer capacitance of

V_bc = dQ · t (4)

Heaters and temperature sensors were integrated by sputtering 110 nm Pt on top of a 30 nm adhesion layer of Ta. The thin films were embedded in 150 nm deep trenches and patterned with a lift-off process. Thermal wafer bonding was performed before the bonded wafers were diced into chips. A detailed description of the assembly of the flow and electrical connections to the chip, as well as the full fabrication description, can be found in the Supplementary Information.

Experimental setup. An assembled chip was connected between two pressure sensors sampled at 400 Hz (33X, Keller), with a flow sensor sampled at 3 Hz (SLG1430-480, Sensirion) downstream of the chip. The pressure sensors have a precision of 0.01% F.S. and an accuracy of 0.05% F.S. (30 mbar and 150 mbar, respectively), and the flow sensor has an accuracy of ±10% of reading. Pressure or flow stabilisation was performed by adjusting the heat with a power supply (QL355TP, TTi), based on external feedback signals from the downstream sensors. The resolution of the stabilisation is limited by the feedback-sensor resolution and by the power-supply resolution. The time resolution varied between experiments but was typically around 1.4 s, limited by the computer and the chosen loop pause. The assembled chip was placed on a water-cooled table with cooling paste in between. The cooling-water temperature was set to 8 °C, for us the lowest possible without developing condensation between connections.
To reduce clogging, a 2 µm filter (A-702, IDEX) was placed upstream of the stabiliser and, to simulate an application with a pressure drop, a restrictor capillary was placed at the end of the flow system. Three different pumps were evaluated: an HPLC pump (model 515, Waters), an ISCO pump (100 DM, ISCO Teledyne), and a Harvard syringe pump (PHD 2000 Infusion, Harvard). In the case of the Harvard syringe pump, a 10 mL glass syringe was used (Gastight #1010, Hamilton). Deionised water was degassed with ultrasound for 30 min before use. A flow scheme of the experimental system can be found in Fig. 2, together with a photo of the setup and a 3D illustration of the chip.
Measurements and PID control. In line with the theory, the downstream pressure increases when the temperature in the stabiliser is raised, and decreases when it is lowered. This was used to control the voltage of the power supply via a feedback loop from either the pressure or the flow sensor, placed downstream of the stabiliser. A setpoint of the desired pressure or flow rate was selected and the error between the setpoint and the actual value was calculated. The PID parameters were set and tuned manually for different flow rates and different chips, as the resistance of the heater and the pressure drop varied. An initial power of 0.15-0.2 W was applied before each experiment to enable regulation in both directions.
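A minimal sketch of such a feedback loop is given below, assuming a pressure setpoint. The gains, loop period, and voltage limits are placeholders, and the two hardware hooks stand in for whatever sensor and power-supply interfaces are actually used; the tuning from the paper is not reproduced here.

```python
import time

def read_pressure_bar() -> float:
    return 15.0   # placeholder: replace with the real sensor readout

def set_voltage(volts: float) -> None:
    pass          # placeholder: replace with the real power-supply call

def pid_loop(setpoint_bar: float, kp=2.0, ki=0.1, kd=0.5,
             dt_s=1.4, v_min=0.0, v_max=20.0):
    """Drive the heater voltage so the downstream pressure tracks the
    setpoint. Raising the voltage heats the restrictor, lowers the fluid's
    viscosity and the pressure drop across the chip, and thus raises the
    downstream pressure."""
    integral, prev_error = 0.0, 0.0
    while True:
        error = setpoint_bar - read_pressure_bar()
        integral += error * dt_s
        derivative = (error - prev_error) / dt_s
        volts = kp * error + ki * integral + kd * derivative
        volts = min(max(volts, v_min), v_max)   # clamp to the supply range
        set_voltage(volts)
        prev_error = error
        time.sleep(dt_s)                        # ~1.4 s loop, as in the text
```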
To demonstrate the stabilisation concept, the set flow rate of the ISCO pump was alternated between 75 and 85 µL/min every two minutes. Meanwhile, a pressure corresponding to 80 µL/min was set to be maintained by the stabiliser chip. For comparison, the same experiment was performed with the chip in passive mode, with a constant power of 0.32 W. The pump volume was below 10 mL during the active experiment and between 15 and 12 mL during the passive experiment. The experiment with active stabilisation was also repeated with a smaller pump volume of 4 mL, demonstrating a condition where the buffer capacitance is not sufficient.
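Equations (2) and (4) let one estimate the demands this experiment places on the buffer. The short calculation below uses the stated 5 µL/min offset from the held 80 µL/min, an illustrative two-minute half-cycle, water's 46 ppm/bar compressibility, and the two pump volumes; it shows how much upstream pressure swing each pump volume would need to supply the same buffer volume.

```python
# Worked example with Eqs. (2) and (4): how much buffer does the
# alternating-flow experiment need, and what does the pump volume provide?
beta_per_bar = 46e-6        # compressibility of water, 46 ppm/bar

dq_ul_min = 5.0             # offset between set (75 or 85) and held (80) uL/min
t_min = 2.0                 # duration of one half-cycle (illustrative)
v_bc_needed_ul = dq_ul_min * t_min            # Eq. (4): 10 uL

for v_pump_ml in (10.0, 4.0):
    v_upstream_ul = v_pump_ml * 1000.0
    buffer_per_bar = beta_per_bar * v_upstream_ul   # Eq. (2) per bar of swing
    swing_bar = v_bc_needed_ul / buffer_per_bar
    print(f"{v_pump_ml:4.0f} mL pump: {buffer_per_bar:.2f} uL/bar, "
          f"needs ~{swing_bar:.0f} bar of upstream swing")
# -> 10 mL: 0.46 uL/bar (~22 bar); 4 mL: 0.18 uL/bar (~54 bar)
```

The smaller pump volume thus demands a far larger upstream pressure swing from the heater for the same buffer volume, consistent with the buffer being depleted in the 4 mL run.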
To validate the quality of the stabilising performance, a comparative study was made using three different pumps (HPLC, ISCO, and Harvard). The pressure was measured over time while the stabiliser was either active, passive, or disconnected. The setpoint for the active stabilisation was taken as the mean value of the pressure without regulation. Precision and accuracy were calculated from values recorded after the desired stabilised value was achieved, generally five minutes into each run. In this study, precision is expressed as relative standard deviation (RSD).
If the pump flow has a constant offset from the set value, the stabilisation can only maintain the desired value for a limited time. To extend the applicability and compensate for this, one more parameter was added to the regulation. For the computer-controlled ISCO pump, this parameter was the pump setting of either the pressure or the flow rate.
In these cases, a preferred span of 13-17 V was set for the heating voltage. If the voltage crossed these limits, a PI regulation of the pump setting started, based on the error between the mean value of the span and the actual voltage. A voltage below the span caused the regulation to decrease the pump setting, forcing the voltage to increase to maintain the flow, while the opposite applies for a voltage above the span. In addition to the error, the slope of the voltage was also considered. An ever-increasing voltage indicates that the maximum power will eventually be reached, with a drop in downstream pressure as a result. To prevent this, the pump setting was increased for a positive slope and decreased for a negative slope.
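This supervisory layer can be sketched as a few lines wrapped around the inner PID loop. The window edges below follow the text (13-17 V), while the gains and the adjust_pump_setting() hook are hypothetical placeholders, not the authors' implementation.

```python
# Supervisory regulation of the pump setting, layered on top of the heater
# PID loop. Window edges follow the text (13-17 V); gains and the
# adjust_pump_setting() hook are placeholders for illustration.
V_LOW, V_HIGH = 13.0, 17.0
V_MID = 0.5 * (V_LOW + V_HIGH)

def adjust_pump_setting(delta: float) -> None:
    pass   # hypothetical hook into the computer-controlled pump

def supervise(voltage_now: float, voltage_prev: float,
              integral: float, kp=0.05, ki=0.01, kslope=0.1, dt_s=1.4):
    """If the heater voltage drifts outside its preferred span, nudge the
    pump setting so the voltage is pushed back toward the middle."""
    if V_LOW <= voltage_now <= V_HIGH:
        return integral                      # inside the span: do nothing
    error = V_MID - voltage_now              # > 0 when the voltage is too low
    integral += error * dt_s
    slope = (voltage_now - voltage_prev) / dt_s
    # Low voltage -> decrease the pump setting (negative delta); a steadily
    # rising voltage headed for the supply limit -> increase the setting.
    delta = -(kp * error + ki * integral) + kslope * slope
    adjust_pump_setting(delta)
    return integral
```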
In the case of pressure stabilisation, the setpoint was set to 15 bar, and measurements were made by controlling either the flow rate or the pressure of the pump. In the case of flow stabilisation, the flow sensor was placed closest to the stabiliser for quicker feedback, Fig. 2a. The flow rate was stabilised at either 5 µL/min or 30 µL/min and both measurements were made by controlling the pump flow rate.
For manual pumps, using data from the voltage regulation, the settings could be manually adjusted if an offset was detected. However, in the same manner as changing the pump setting, the setpoint can instead be adjusted to better fit the actual value. This gives a more accurate value of the flow while also stabilising it. To demonstrate a manual pump, the ISCO pump was used without computer control. For pressure stabilisation, the pump flow rate was kept constant at 100 µL/min and the initial setpoint was 17 bar. For flow stabilisation, the pump flow rate was kept constant at 40 µL/min and the initial setpoint was 35 µL/min.
Results
Stabilisation concept. Results from pressure stabilisation with an alternating set flow rate from the pump can be studied in Fig. 3. In (a), the pressure downstream of the chip was kept at a mean value of 9.82 bar ± 0.29% and, to keep the pressure stable, the power alternated between 0 and 0.29 W. In contrast, for the passive stabiliser with a constant power of 0.32 W, the pressure alternated with the alternating flow rate. In (b), the experiment was repeated with less volume in the pump, and the pressure could not be maintained over each alternating cycle. When the flow rate was lowered and the power reached its set maximum value of 0.58 W, the buffer fluid was depleted and the pressure decreased. Accordingly, for a higher flow rate, the buffer was filled when the power reached zero, making the pressure rise. The difference in absolute pressure between the two figures was caused by a change of the downstream restriction. The temperature in the stabiliser rises with increasing power; a simulation in the Supplementary Information indicates that a power of 0.4 W corresponds to a maximum of 70 °C at a flow rate of 100 µL/min. However, the pressure sensors upstream and downstream of the chip did not experience any temperature changes, apart from sub-degree fluctuations of the room temperature. This indicates that the heated volume leaving the stabiliser is small enough to return to its original temperature and not affect the rest of the system.
Quality validation. The precision and accuracy were improved when the stabiliser was connected, both passive and active, Fig. 4. The best relative precision of 0.035% and accuracy of 8.0 ppm (4.9 mbar and 0.11 mbar, respectively) were achieved using the ISCO pump. For the Harvard pump, the pressure drop through the passive chip was 7 bar, which reduced most of the fluctuations. However, there are still some longer-term fluctuations with lower amplitude, which are reduced with an active chip. A zoomed-in and smoothed version of this figure can be found in the Supplementary Information. The HPLC pump had a pressure drop through the passive chip of 11 bar, while the resulting flow rate from the ISCO pump was around 100 µL/min.
Applicability demonstration. To enable stabilisation of pressure using a pump with only flow-rate control, the set flow rate of the pump was included in the regulation, Fig. 5a. When the power is getting critically low, the flow rate of the pump is decreased, and when the power is about to reach its maximum, the flow rate is increased to avoid this. Here, the relative precision was 0.065% with an accuracy of 0.014% (9.1 mbar and 2.0 mbar, respectively).
If the pump has pressure control, this can likewise be integrated into the regulation, Fig. 5b. As seen in Fig. 4d, the pump had a pressure offset of 0.3 bar. However, with an active stabiliser, the relative precision was 0.035% with an accuracy of 8.0 ppm (4.9 mbar and 0.11 mbar, respectively).
To stabilise the flow rate, the flow sensor was used in the feedback loop instead of the pressure sensor. In Fig. 6, stabilisation of two different flow rates is presented. Both were performed by adjusting the flow rate of the pump and the power in the stabiliser. In (a), the relative precision was 0.14% with an accuracy of 0.011% (42 nL/min and 3.3 nL/min, respectively). In (b), the relative precision was 0.14% with an accuracy of 0.086% (7.0 nL/min and 4.3 nL/min, respectively).
Stabilising the pressure or flow using a pump that cannot be computer controlled, such as the Harvard or HPLC pumps, required another approach, Fig. 7. Here, the setpoint of the pressure or flow rate was regulated for stabilisation at the actual level provided by the pump. In (a), the relative precision was 0.048% (8.4 mbar) and in (b) the relative precision was 0.67% (220 nL/min).
Discussion
In this work, it has been shown that the three evaluated pumps suffer from noise, fluctuations, and offsets. These are common issues that should be addressed, as an incorrect or fluctuating value can preclude reproducible and reliable results. The HPLC pump often runs at a higher flow rate than used in this work, which reduces its fluctuations. However, pumps are expensive, making it beneficial to use the stabiliser to expand their working range.
A miniaturised stabiliser enables temperature actuation, as the distances are short and the small volume reduces the heat energy required for a temperature increase. Since only a fraction of the fluid gets warm, it also cools down quickly as it leaves the stabiliser, or when the power is turned off. The stable and unaffected temperature at the pressure sensors also proves the applicability to temperature-sensitive experiments. An experiment or sample injection downstream of the stabiliser will not experience any heating or cooling. Therefore, the temperature only has to be considered if the fluid from the pump is temperature sensitive. However, with a flow rate of e.g. 50 µL/min, the fluid will only be in the heated channel for approximately 19 ms.
The results shown in Fig. 3 demonstrate the concept of the presented stabilisation method. In (a), one can observe that a negative offset from the pump makes the stabiliser increase its power to maintain the pressure downstream. As the upstream pressure constantly decreases, the temperature needs to constantly rise. The upstream pressure is affected by the lowered flow rate and by the temperature needed to access the buffer capacitance. Note that the flow rates shown in the figure are the set flow rates of the pump, not measured values. Because of the compressibility, even a sudden change in the pump setting will result in the flow rate changing gradually along with the pressure. As can be seen for the passive run (the blue dashed line in (a)), the pressure never reaches a plateau between the alternating cycles, indicating that the flow rate never reaches the extrema.
The reason that the pressure can be maintained in Fig. 3a but not in (b) is that the pump volume was lower for the experiment shown in (b), resulting in a lower buffer capacitance. It can also be seen that the first negative pressure peak is deeper than the following ones. This can be explained by the flow rate changing from a constant value, 80 µL/min, to the alternating flow rate, beginning with a negative offset. For the first cycle, the initial flow rate, Q1 in Eq. (3), will be 80 µL/min, while for the second cycle it will be higher, resulting in a larger buffer capacitance. In Fig. 4, the results demonstrate the benefits and limitations of the stabiliser in passive mode, where it operates as a restrictor, and in active mode. For the Harvard pump, most of the fluctuations are reduced using the passive chip, while the HPLC pump needs active stabilisation for a significant improvement.
Whether a passive restrictor is enough to reduce fluctuations depends on the size of the fluctuation, the restriction, and the volume of the system. For example, if a fluctuation causes the flow rate from the pump to increase by 10%, the pressure throughout the system must also increase by 10% before the flow rate at the end equalises with the flow rate from the pump. The use of a restrictor will increase the upstream pressure, which will also increase the absolute change this percentage corresponds to. Because of the compressibility of the fluid, the pressure changes gradually. How steep the gradient is depends on how much fluid needs to be added to compensate for the compression. This volume is proportional to the compressibility of the fluid and the total volume of the system. If the fluctuations are short and shifting, the pressure change will not keep up and the fluctuations will be dampened. Here, the biggest difference between the pumps, which explains the results, is the volume of their systems. The HPLC pump has a very low internal volume of 100 µL, resulting in quick pressure changes, while the Harvard pump has the volume of the 10 mL syringe. The ISCO pump has an even larger volume, up to 100 mL, and its noise is significantly improved with the restrictor. However, the difference in RSD is not as clear, due to smaller original fluctuations. Of course, only an active stabiliser can handle drift in the pump. The quality of the active stabilisation varied slightly between the three pumps. This is a result of the original fluctuations from the pumps and of the absolute pressure levels. In Fig. 4a, the RSD for active stabilisation is higher for the measurements using the Harvard pump, compared to the other two pumps. The absolute deviations are similar between the Harvard pump and the ISCO pump, but the lower the absolute pressure, the higher the RSD. In addition, the PID parameters play a part in the quality of the stabilisation. The sharp pressure peaks seen in Fig. 4b are caused by the two pistons in the HPLC pump and place great demands on the regulation. Here, the response time of the system and the speed of the feedback loops are crucial for the stabilisation. The response time is affected by the time constant of the heating and by the position and dead volume of the sensors. The development of an internal pressure or flow sensor in situ in the stabiliser would help improve the response time.
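The volume scaling just described can be made concrete with a hydraulic RC time constant, τ = R·C, where R = ΔP/Q is the restrictor's resistance and C = βV the compressive capacitance of the upstream volume. The numbers below reuse values quoted in this paper (about 11 bar at roughly 100 µL/min, and 46 ppm/bar), but the time-constant framing itself is our illustration, not a calculation from the paper.

```python
# Hydraulic RC estimate of how quickly the upstream pressure can change:
# tau = R * C with R = dP/Q (restrictor) and C = beta * V (upstream volume).
# The 11 bar @ ~100 uL/min operating point and 46 ppm/bar are from the text;
# casting them as an RC time constant is our own illustration.
beta_per_bar = 46e-6
r_bar_min_per_ul = 11.0 / 100.0          # bar per (uL/min)

for name, v_ul in (("HPLC (100 uL)", 100.0),
                   ("Harvard syringe (10 mL)", 10_000.0),
                   ("ISCO (100 mL)", 100_000.0)):
    c_ul_per_bar = beta_per_bar * v_ul
    tau_s = r_bar_min_per_ul * c_ul_per_bar * 60.0
    print(f"{name}: tau ~ {tau_s:.2g} s")
# -> ~0.03 s, ~3 s and ~30 s: short, shifting fluctuations pass through the
#    HPLC system almost unfiltered, but are strongly damped for the ISCO.
```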
The quality of the stabilisation ultimately depends on the external sensor. For the pressure, the best measured precision and accuracy were 5.1 mbar and 0.14 mbar, respectively. This can be compared with the precision of the sensor, 30 mbar, and its accuracy, 150 mbar. The measured values are well below the sensor specifications, meaning that the best achievable quality is essentially that of the sensor. Consequently, precise and well-calibrated sensors are crucial for higher stabilisation quality.
If the pump has a constant offset, the setpoint cannot be maintained in long-term experiments, even with a stabiliser. The buffer capacitance will over time be either depleted or saturated, corresponding to the power reaching its upper limit or decreasing to zero. This was solved by adding the pump setting of either the flow rate or the pressure to the regulation (Fig. 5), which also enabled pressure control for pumps lacking it. The exact settings required to maintain a desired downstream pressure can be difficult to obtain, due to pump offsets and restrictions that vary between flow systems and over the course of an experiment. With this regulation, only an initial approximation is needed, which is automatically adjusted if the power is heading towards an extreme, indicating an improper setting.
The sudden peak in Fig. 5b is a result of the chosen PID parameters. In the time interval around 10 min, the voltage is only slightly above 13 V, which means that a small fluctuation from the pump can push the voltage below the desired range, resulting in a decrease of the set pump pressure. This decrease was, however, too sharp, making the downstream pressure decrease as well and the voltage spike to compensate for it. This is seen in the figure as the power slightly decreasing and then spiking. After the spike, the power settles at a higher level than before. As the temperature difference between the two levels was very small, the change in upstream pressure is too small to notice in the figure.
The results in Fig. 6 show that the principle of this stabilisation method is the same for flow rate as for pressure. The measured values are noisier in (a) than in (b) because the flow sensor used has an accuracy specified as a percentage of reading, and the flow rate in (b) is larger by a factor of 6. In Figs. 6 and 7, one can observe that the flow rates of the pump differ from the measured flow rates downstream. The probable explanation is that the sensor or pump offset shifted between the experiments, for unknown reasons. Many pumps are manual or cannot be integrated with a control system, like the Harvard pump and the HPLC pump. Therefore, experiments were performed regulating the setpoint instead of a pump setting, with the results shown in Fig. 7. The setpoint is initially an approximation, which is PI-regulated when the voltage is about to get too high or too low, in the same manner as the pump setting was regulated in the previous experiment. It might seem odd to change the desired setpoint but, as this does not change the pump settings, the conditions are still the same. Performing stabilisation on the actual value with this method is a way to enable reproducible results. By doing this, both more stable and more accurate values are achieved. If an offset is revealed, it is also possible to manually change the setting of the pump to achieve the actually desired conditions.
Conclusion
An active microfluidic pressure and flow stabiliser was presented. It delivered a relative precision of 0.035% and an accuracy of 8.0 ppm relative to the external pressure sensor. The device has a very small dead volume of 16 nL and can be integrated with µTAS systems. The applicability is wide, owing to the high-pressure resistance and the chemically inert borosilicate glass material. The device has been used for pressure and flow stabilisation, at higher and lower pressures and for commonly used pumps, such as ISCO and Harvard syringe pumps and HPLC piston pumps. With this work, increased fluid-mechanical control has been achieved for high-pressure microfluidic applications such as extraction, synthesis, and analysis.
"year": 2021,
"sha1": "a4cce1fc178dc82fadb98ef3501fbdaac7d6512c",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-01865-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "677526eaaec2155c09bf150b31e9812ecfd569bb",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Successful detection of a high-energy electrical short circuit and a "rescue" shock using a novel automatic shocking-vector adjustment algorithm
Introduction
A high-voltage (HV) electrical short circuit can be a critical electrical failure in an implantable cardioverter-defibrillator (ICD) system because of the failed delivery of appropriate shocks against fatal arrhythmia. The Food and Drug Administration has classified the St Jude Medical Riata family of ICD leads (St Jude Medical, St Paul, MN) as a class I recall since 2011 because of the "inside-out abrasion" problem. 1 Currently, most of the externalized conductors are not related to an electrical malfunction. 2 However, several reports have pointed out the risk of HV short circuit in Riata leads caused by the inside-out abrasion underneath the shocking coils. In the present report, we describe a case of successful rescue of an HV short circuit via the implementation of the automatic shocking-vector adjustment algorithm that secures HV shock delivery when an HV electrical short circuit is detected.
Case report
A 33-year-old man was admitted for replacement of his ICD generator because of battery depletion 6 years after the initial implantation. He had received a prophylactic ICD implant for the treatment of Brugada syndrome. The ICD system was implanted on the right side because his innominate vein was occluded. An Atlas VR V-193 generator (St Jude Medical) and a Riata 8-F dual-coil lead (1570-65; St Jude Medical) were used. During the initial operation, the right ventricular (RV) lead was implanted at the RV apex using the supraclavicular approach. Thus, the proximal end of the RV lead was brought to the right chest wall via a subcutaneous tunnel across the right clavicle. Neither electrical failure nor externalized conductors had been detected. Lead measurements had been stable, with a pacing impedance of 415-440 Ω and a pacing threshold of 0.75-1.25 V per 0.5 ms. Although R-wave sensing was low at the time of implantation (2.5-3.5 mV), it had remained stable within 3.5-5.1 mV.
During the ICD generator change operation, the Atlas VR was replaced with an Ellipse VR (St Jude Medical) as the new generator. After the operation, defibrillation threshold testing (DFT) was performed. The superior vena cava (SVC) coil and the generator (CAN) were used as cathodes (default shocking configuration: RV to SVC/CAN). Ventricular fibrillation was induced using the direct current fibber method (2.0 seconds). However, the first attempted shock (650 V) was not delivered. Subsequently, the next detection sequence was implemented and the second attempt, an 875-V shock, successfully terminated ventricular fibrillation (Figure 1). According to the test report (Figure 2), the first shock was abandoned with the recognition of a significant problem in the HV lead (HV impedance was <10 Ω). However, immediately after the initial failed shock, another shocking configuration (RV to CAN) was automatically selected. Consequently, the second shock, delivered at its maximum energy, resulted in successful restoration of sinus rhythm. We concluded that the successful rescue shock was delivered via execution of the Dynamic Tx overcurrent detection (OCD) algorithm, with the detection of an HV electrical short circuit between the RV and SVC coils. After DFT, the ICD generator was explanted in order to investigate a mechanical failure or an electrical short circuit inside the subcutaneous pocket. However, no arc was found on the surface of the ICD generator, and there was no apparent lead insulation break. Consequently, the ICD generator was replaced with a new Ellipse VR, and a new RV lead (Endotak Reliance G 4-site 0295-59, Boston Scientific, Natick, MA) was also placed at the RV apex via the right subclavian vein, without removal of the Riata lead. Analysis of the removed ICD generator (Ellipse VR) by the manufacturer did not reveal physical or electrical aberrations.
Discussion
Serious adverse events, including deaths linked to Riata leads, have been reported [3-7]. In these reports, the authors point to the risk of an HV short circuit caused by the inside-out abrasion underneath the shocking coils in the Riata lead family, though the incidence rate of internal abrasion short circuits underneath the SVC shock coil is quite low (0.06%). 8 The concern is that an HV short circuit may not be detected during a routine checkup unless DFT is performed. 4,5 Nevertheless, there are currently no recommendations or expert consensus regarding DFT during follow-up of Riata leads, because of the potential risks of compromised hemodynamics or failed rescue, as well as overcurrent delivery resulting in the destruction of the ICD system. Since we were concerned about the potential risk of an unknown insulation break, even though no apparent defect had been detected, DFT was performed after discussing the risks and benefits with the patient and within the device-care team. Consequently, we found that an apparently normal Riata lead had a fatal electrical failure demonstrating an HV short circuit, and that it occurred between the RV and SVC coils. The reasons are as follows: First, the HV impedance between the RV coil and the SVC/CAN was below the detection limit, whereas during the second attempt, the HV impedance between the RV coil and the CAN was within normal limits (74 Ω; Figure 2). Second, no physical defects (such as arc formation or burn injury) were found on the surface of the CAN and the RV shock lead or inside the subcutaneous pocket. These findings do not suggest that an HV short circuit occurred between the RV coil and the CAN because of an insulation break of the lead in the pocket. We expected that the cables for the RV coil were electrically connected with the SVC coil because of an insulation defect and were short-circuited during the delivery of the first shock (Figure 3). However, strictly speaking, an arc could occur between the RV and SVC coils without physical contact at high voltage; therefore, it cannot be guaranteed that the RV coil cable and the SVC coil underwent a pure "electrical short." We evaluated whether this is a problem specific to the Riata family. Kleeman et al 9 reported that the annual rate of ICD lead defects reaches 20% in 10-year-old leads of any type and that more than half of the lead defects involve insulation failure. We can hypothesize that any type of ICD lead can develop such an electrical failure after long-term use.
In the present case, the patient was saved from the failed shock delivery via the implementation of OCD together with the automatic shocking-vector adjustment algorithm (Dynamic Tx). This novel algorithm is exclusively adopted in the ICD systems of the Ellipse, Fortify Assura, Quadra Assura, and Unify Assura series (St Jude Medical). Importantly, it is feasible only if a dual-coil lead is implanted and the SVC coil is activated. When an overcurrent (>60 A) is detected on the brink of the shock delivery, OCD aborts the attempted shock delivery in order to prevent the destruction of the ICD system. Simultaneously, the Dynamic Tx algorithm checks for compromised vector integrity and finds another viable configuration to ensure HV shock delivery (Figure 1). Thus, if the initial shocking-vector configuration (RV to SVC/CAN) fails, it is changed to the "RV to CAN" setting, followed by the "RV to SVC" setting, until shock delivery is ensured. The sequence can be repeated at most 6 times. The Dynamic Tx algorithm is compatible with any type of dual-coil ICD lead.
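The behaviour can be summarised as a simple state machine. The sketch below is an illustrative reconstruction from the description in this report, not St Jude Medical's implementation; the 60 A threshold and the vector order follow the text, while the hardware hook is a hypothetical, simulated stand-in.

```python
# Illustrative reconstruction of the Dynamic Tx / OCD behaviour described
# above -- not the manufacturer's implementation.
OVERCURRENT_A = 60.0
VECTOR_SEQUENCE = ["RV->SVC/CAN", "RV->CAN", "RV->SVC"]
MAX_ATTEMPTS = 6

def deliver_shock(vector: str) -> float:
    """Hypothetical hardware hook, simulated here: the RV->SVC/CAN path is
    shorted (<10 ohm), so attempting it draws a huge current."""
    return 200.0 if vector == "RV->SVC/CAN" else 15.0

def attempt_therapy() -> str:
    """Try each shocking vector in turn; abort any delivery whose current
    indicates a short circuit, then move on to the next viable vector."""
    for attempt in range(MAX_ATTEMPTS):
        vector = VECTOR_SEQUENCE[attempt % len(VECTOR_SEQUENCE)]
        peak_current = deliver_shock(vector)
        if peak_current > OVERCURRENT_A:
            continue        # OCD: abort, protect the device, switch vector
        return f"shock delivered on {vector}"
    return "therapy exhausted"

print(attempt_therapy())    # -> shock delivered on RV->CAN
```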
We expect that the novel algorithm may overcome the difficulty of detecting an HV short circuit, a fatal complication of long-term ICD use. Since the Dynamic Tx OCD algorithm became available, we routinely perform DFT during the generator-change operation in order to unveil an HV short circuit, but only if the algorithm is available with the newly replaced device and a Riata dual-coil lead is used. Although we should be prudent about performing DFT in every case, this remains a debatable issue. Especially at generator replacement, we have a chance to consider the safety option of additionally placing or removing ICD leads during the procedure if DFT reveals the existence of a possible HV short circuit.
With regard to clinical implications, implementation of the Dynamic Tx OCD algorithm is so far the only safety option for this type of fatal and otherwise undetectable electrical failure of ICD leads. To our knowledge, the present case report is the first to describe a successful rescue shock delivered through this automatic shocking-vector adjustment algorithm.
KEY TEACHING POINTS
St Jude Medical Riata ICD leads recalled in 2011 are prone to externalize their conductor cables due to "inside-out abrasion." While most externalized conductors are not related to an electrical failure, several reports described a fatal high-voltage (HV) short circuit after long-term use because of the inside-out abrasion underneath the superior vena cava coil.
An HV short circuit cannot be detected through routine follow-ups unless defibrillation threshold testing is performed. However, this is not currently recommended, because of the potential risks of compromised hemodynamics or destruction of the implantable cardioverter-defibrillator system.
The automatic shocking-vector adjustment algorithm (Dynamic Tx) automatically finds a viable vector in the dual-coil setting and ensures the shock therapy if an HV short circuit is detected on the brink of the shock delivery.
If the Dynamic Tx algorithm is available, defibrillation threshold testing can be revisited in order to unveil an HV short circuit of Riata dual-coil implantable cardioverter-defibrillator leads.
Figure 1: Intracardiac tracing during defibrillation threshold testing is shown. VF was induced ①. The first shock was implemented but failed to terminate VF. Note the exclamation point at the first shock. This mark denotes overcurrent detection. A high-voltage shock could not be delivered ("0 V") ②. Subsequently, the next detection and charging sequence was executed. VF was successfully terminated using maximum shock delivery (875 V) of the "RV-CAN" shocking-vector configuration ③④. CAN = implantable cardioverter-defibrillator generator; DC = direct current; HV = high voltage; RV = right ventricle; VF = ventricular fibrillation.
Figure 2: The analyzed data of defibrillation threshold testing are presented. A: Alert messages. B: The second shock vector was changed (RV-CAN), and the delivered shock was at its maximum energy (875 V). C: HV impedances and delivered shock energies are shown. CAN = implantable cardioverter-defibrillator generator; CL = cycle length; HV = high voltage; RV = right ventricle; SVC = superior vena cava; VF = ventricular fibrillation.
"year": 2015,
"sha1": "a4826b6757f9df1fd7a7ed30bec81446c16617cc",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.hrcr.2014.10.005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4826b6757f9df1fd7a7ed30bec81446c16617cc",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Mitochondrial Fission and Fusion: Molecular Mechanisms, Biological Functions, and Related Disorders
Mitochondria are dynamic organelles that undergo fusion and fission. These active processes occur continuously and simultaneously and are mediated by nuclear-DNA-encoded proteins that act on mitochondrial membranes. The balance between fusion and fission determines the mitochondrial morphology and adapts it to the metabolic needs of the cells. Therefore, these two processes are crucial to optimize mitochondrial function and its bioenergetic abilities. Defects in mitochondrial proteins involved in fission and fusion, due to pathogenic variants in the genes encoding them, result in disruption of the equilibrium between fission and fusion, leading to a group of mitochondrial diseases termed disorders of mitochondrial dynamics. In this review, the molecular mechanisms and biological functions of mitochondrial fusion and fission are first discussed. Then, mitochondrial disorders caused by defects in fission and fusion are summarized, including disorders related to the MFN2, MSTO1, OPA1, YME1L1, FBXL4, DNM1L, and MFF genes.
Introduction
Mitochondria are double-membrane organelles composed of a mitochondrial outer membrane (MOM) and a mitochondrial inner membrane (MIM) separated by the intermembrane space (IMS). The MIM is impermeable to most solutes, encloses the mitochondrial matrix, and forms cristae that expand its surface area. The electron transport chain (ETC) complexes are embedded in the MIM. They generate ATP via oxidative phosphorylation (OXPHOS), which involves electron transfer via complexes I-IV and ATP synthesis via complex V. Mitochondria are under dual genetic control, with more than 99% of mitochondrial proteins encoded by nuclear DNA (nDNA), while mitochondrial DNA (mtDNA) encodes less than 1% of mitochondrial proteins [1,2].
Mitochondria are dynamic organelles that constantly undergo fusion and fission, which provide mitochondria with a very interactive behavior [3]. Mitochondrial fission and fusion are active processes that occur continuously and simultaneously and are mediated by nDNA-encoded proteins that act on mitochondrial membranes. These specialized proteins include mechanical enzymes that alter the mitochondrial membranes physically and adaptor proteins that facilitate the binding of the mechanical enzymes to the organelles. Mitochondrial fission creates new mitochondria during cell division, allows the redistribution of mitochondria, and facilitates the segregation of damaged mitochondria, whereas mitochondrial fusion enables the exchange of intramitochondrial material between mitochondria. The balance between these two cellular processes determines the mitochondrial morphology and adapts it to the metabolic needs of the cells [4]. These two processes are crucial to optimize mitochondrial function and its bioenergetic abilities. Mitochondrial morphology reflects the respiratory activity of the cell. Maximum respiratory activity necessitates the fusion of the mitochondria, whereas, during cellular nutrient excess or cellular dysfunction, mitochondrial fragmentation occurs. In respiratory-active cells, mitochondria fuse to allow spreading of mitochondrial contents, counteract the effect of mitochondrial mutations that accumulate with aging, and optimize mitochondrial function. Mitochondria fragment in resting cells in order to remove damaged content by autophagy [5].
Defects in mitochondrial proteins involved in fission and fusion due to pathogenic variants in the genes coding them result in disruption of the equilibrium between fission and fusion, leading to a group of mitochondrial diseases termed disorders of mitochondrial dynamics [6]. Herein, the molecular mechanisms and biological functions of mitochondrial fusion and fission are first discussed; then, known mitochondrial disorders caused by defects in fission and fusion are summarized.
Mechanisms and Functions of Mitochondrial Fission and Fusion
Both fission and fusion are mediated by nDNA-encoded proteins that act on mitochondrial membranes (Figure 1). These include a small number of highly conserved guanosine triphosphatase (GTPase) proteins and their interactors, which regulate these opposite processes.
Mitochondrial Fusion
Mitofusins mediate MOM fusion. They are MOM proteins with two proposed topologies. In the initial one, there is a GTPase domain located at the N-terminus, one hydrophobic heptad repeat (HR1), the transmembrane anchor(s), and a second hydrophobic heptad repeat (HR2). Accordingly, both the N- and C-termini face the cytosol, and there are two transmembrane domains [9,10]. In the second proposed topology, the C-terminus does not face the cytosol but resides in the IMS, and there is only one single-spanning membrane domain [11]. The GTPase and tethering actions of MFN1 are greater than those of MFN2; hence, MFN1 is believed to be the principal GTP-dependent membrane tethering protein for mitochondrial fusion [12]. Although MFN2 shares ~80% sequence identity with its homolog MFN1, a proline-rich region (PR domain) following HR1 is only present in MFN2, accounting for particular protein-protein interactions [13]. In fact, alteration in either one of them leads to a different inhibition of the fusion reaction [7]. Embryonic fibroblasts deficient in MFN1 or MFN2 show different forms of fragmented mitochondria. In cells missing MFN1, mitochondria fail to bind, indicating that MFN1 works in mitochondrial tethering, whereas MFN2 operates in a later step of the fusion reaction [13]. The stability and activity of MFN1 are regulated by acetylation and ubiquitination, while MFN2 can undergo ubiquitination only [14,15].
Misato (MSTO1; encoded by the MSTO1 gene) protein is a soluble cytoplasmic protein that translocates to the MOM and interacts with the mitochondrial fusion proteins at the MOM-cytoplasm interface. MSTO1 can support mitochondrial fusion through enhancing or initiating the MOM fusion [8]. Its depletion causes mitochondrial fragmentation [16].
OPA1 is the main regulator of MIM fusion and cristae remodeling [17]. It is present in eight isoforms in humans and contains three highly conserved regions that are exposed to the IMS: the GTP-binding domain, the middle domain, and the GTP-effector domain. In addition, the N-terminus region includes a mitochondria-targeting sequence followed by a transmembrane helix that is needed for anchoring to the MIM [18]. OPA1 directly links mitochondrial structure to bioenergetic function. When the transmembrane potential across the MIM is intact, long OPA1 (L-OPA1) isoforms carry out MIM fusion. When the potential is lost, L-OPA1 is cleaved to short OPA1 (S-OPA1) by the stress-sensitive IMS metalloendopeptidase OMA1 (encoded by the OMA1 gene) [19], with increased S-OPA1 inhibiting the fusion process and promoting mitochondrial fragmentation [20]. The ATP-dependent zinc metalloprotease YME1L1 (encoded by YME1L1) catalyzes the degradation of OMA1 in response to membrane depolarization [21]. This proteolytic mechanism is a regulator of organellar function and structure. It engages directly with apoptotic factors as the main mechanism for mitochondrial participation in the cellular response to stress.
F-Box and Leucine-rich repeat protein 4 (FBXL4; encoded by the FBXL4 gene) is a nuclear-encoded mitochondrial protein located in the IMS. Through its leucine-rich repeat domain, it can engage in protein-protein interactions, allowing it to form quaternary protein complexes. FBXL4 may play a role in mitochondrial fusion via interacting with and regulating mitochondrial fusion proteins [22].
Mitochondrial fusion allows the cells to build interconnected mitochondrial networks by producing tubular or elongated mitochondria. These networks act as united systems, promoting oxidative phosphorylation and leading to efficient dissipation of energy in the cells. Therefore, these networks are frequently found in metabolically active cells [23]. Mitochondrial fusion also enables content mixing within the mitochondrial population, hence avoiding permanent loss of essential components and unifying the mitochondrial compartment, which optimizes mitochondrial function [24]. This mixing also allows the redistribution of mtDNA between damaged and healthy mitochondria, which enables human cells to tolerate high levels of pathogenic mtDNA and prevents mitochondrial elimination via mitophagy [25,26]. Hence, mitochondrial fusion is an essential regulatory process for optimizing mitochondrial function and enhancing mitochondrial integrity by allowing component sharing.
Mitochondrial Fission
MOM fission is a multistep process in which a mitochondrion divides into two smaller mitochondria. It depends on a large cytoplasmic GTPase, dynamin-1-like protein (DNM1L; encoded by DNM1L), which translocates to the MOM in response to cellular and mitochondrial signals. DNM1L acts on several MOM receptors, including mitochondrial fission factor (MFF; encoded by MFF), mitochondrial dynamics protein 49 (MID49), and mitochondrial dynamics protein 51 (MID51) [27]. When DNM1L is recruited to the MOM, it forms a ring-like structure around the mitochondria, leading to MOM constriction, which marks a potential site for future mechanical scission. Other mitochondrial fission sites are also marked by the endoplasmic reticulum and the actin cytoskeleton, which facilitates the oligomerization of the recruited DNM1L [28]. This is followed by GTP binding and hydrolysis, leading to a conformational change in DNM1L that results in membrane scission. Post-translational phosphorylation, SUMOylation, and ubiquitination regulate DNM1L on the mitochondria [29].
Mitochondrial fission creates new mitochondria, which is crucial for rapidly dividing and growing cells to populate them with adequate numbers of mitochondria [30]. Mitochondrial fission also allows the redistribution of mitochondria. Through the formation of smaller mitochondria, mitochondrial fission allows a more efficient redistribution of these smaller sized organelles to the energy-demanding regions. Mitochondrial fission also contributes to quality control by enabling the removal of damaged mitochondria as it can isolate impaired mitochondria to be eliminated by mitophagy, which maintains mitochondrial homeostasis [26]. Therefore, mitochondrial fission is essential for mitochondrial distribution and homoeostasis.
Disorders of Mitochondrial Fission and Fusion
Impaired fusion results in fragmented mitochondria because of imbalanced fission, whereas defects in fission result in elongated mitochondria that are excessively connected because of unbalanced fusion [31,32]. Pathogenic variants in the genes coding proteins mediating fission and fusion result in the disruption of the equilibrium between fission and fusion, leading to impaired mitochondrial energy production. These mitochondrial diseases are called disorders of mitochondrial dynamics [6].
We hereby discuss the diseases of mitochondrial fusion that result from pathogenic variants in MFN2 (Charcot-Marie-Tooth neuropathy 2A and hereditary motor and sensory neuropathy VIA with optic atrophy disease), MSTO1 (mitochondrial myopathy and ataxia), OPA1 (optic atrophy 1, optic atrophy plus syndrome, Behr syndrome, and mitochondrial DNA depletion syndrome 14), YME1L1 (optic atrophy 11), and FBXL4 (mitochondrial DNA depletion syndrome 13), and the diseases of mitochondrial fission that result from pathogenic variants in DNM1L (encephalopathy due to defective mitochondrial and peroxisomal fission 1 and optic atrophy 5) and MFF (encephalopathy due to defective mitochondrial and peroxisomal fission 2) (Table 1).
MFN2-Related Disorders
Charcot-Marie-Tooth neuropathy type 2A (CMT2A) is the most common inherited axonal neuropathy, characterized by distal muscle weakness and atrophy as well as sensory deficits [33]. In 90% of cases, CMT2A is caused by monoallelic pathogenic variants in MFN2 (Charcot-Marie-Tooth type 2A2A) and is inherited as autosomal dominant, while 10% of cases occur due to biallelic pathogenic variants in MFN2 (Charcot-Marie-Tooth type 2A2B) and are inherited as autosomal recessive [34] or semi-dominant (i.e., a pathogenic variant is associated with mild disease in the heterozygous state and more severe disease in the homozygous or compound heterozygous state) [35]. A total of 25% of individuals with monoallelic pathogenic variants may be asymptomatic and have a normal electrophysiological examination, which suggests incomplete penetrance [33]. The phenotype in those individuals could eventually convert to late-onset disease [36].
Age of onset ranges from 1 to 60 years with most of the autosomal recessive cases having early-onset disease (age < 10 years), which is associated with more severe disability than later onset. The initial presenting sign is mainly foot weakness or foot drop. Involvement of the lower extremities is more severe and seen earlier than the upper extremities, which become involved later in the course of the disease. Affected individuals have motor deficits (limping gait, difficulty running, difficulty climbing stairs, postural tremor, and distal muscle weakness and atrophy), which are more prominent than the sensory ones (decreased sensation of pain and vibration in feet). Other neurologic manifestations include lower limb hyporeflexia or areflexia, pyramidal signs (extensor plantar responses, mild increases in muscle tone, preserved or increased reflexes), vasomotor dysfunction, and ocular anomalies (optic atrophy in 7% of autosomal dominant form and pale optic discs in 20% of autosomal recessive form) [37]. Approximately 60% of individuals with early-onset disease develop subacute optic atrophy with consequent slow recovery [38]. Rare findings include hydrocephalus, fatal subacute encephalopathy (vomiting, nystagmus, chorea, clouded consciousness, and dysautonomia), spasticity, sensorineural hearing loss, dysarthria, migraine, and early-onset stroke [39]. Vocal cord palsy with dysphonia, respiratory insufficiency, and skeletal anomalies (scoliosis, kyphosis, contractures, and hammertoes) have also been reported [40]. CMT2A has a progressive course. Nearly 27% of individuals become dependent on a wheelchair [34].
Median nerve motor conduction studies range from normal to slightly reduced. Nerve biopsy, which was previously the key diagnostic step, is being replaced by genetic testing, but it is still important in atypical cases [41]. Findings on nerve biopsy include a loss of large myelinated fibers with no myelin abnormalities, mitochondrial abnormalities, and, although rarely, the presence of onion bulb structures. Electromyography studies show chronic denervation signs in more than 90% of cases. Neuroimaging may show a defect in mitochondrial energy metabolism in the occipital cortex on magnetic resonance spectroscopy, as well as periventricular/subcortical white matter lesions [37]. Muscle imaging may show intramuscular fat accumulation, which may be associated with functional outcomes. Diagnosis is confirmed molecularly by identifying pathogenic monoallelic or biallelic variants in MFN2.
Hereditary motor and sensory neuropathy VIA with optic atrophy disease (HMSN VIA; Charcot-Marie-Tooth disease type 6A; CMT6A) is an autosomal dominant disease caused by monoallelic pathogenic variants in MFN2 [38]. It is characterized by sensorimotor neuropathy and optic atrophy. Fewer than 100 individuals have been diagnosed to date. Peripheral neuropathy is early-onset (childhood to mid-adulthood; typically between 10 and 30 years of age) with later onset of optic atrophy (mean 19 years, range 5 to 50 years), which frequently leads to visual loss. Neurological symptoms include loss of motor skills, hypertonia, hyper/hypo/areflexia, and ataxia. Reported ocular anomalies include optic atrophy, central scotoma, dysmetric saccades, pale optic disks, subacute deterioration of visual acuity, color vision defects, abnormal visual-evoked potentials, cogwheel ocular pursuit, profound visual loss with rod-cone dysfunction, exotropia, nystagmus, and cataract. Neuromuscular symptoms include sensorimotor axonal neuropathy and proximal and distal muscle weakness with atrophy. Additional manifestations include cognitive impairment (cognitive decline, delayed motor and language development, and decreased IQ), sensorineural hearing loss, tinnitus, anosmia, vocal cord paresis, myalgia, and musculoskeletal anomalies (steppage gait, scoliosis, lumbar hyperlordosis, pes cavus, major joint contractures, and foot deformities). Serum creatine phosphokinase (CPK) and lactate may be elevated. Neuroimaging studies may reveal involvement of the periventricular white matter rather than the cerebral cortex [42], diffuse brain and cerebellar atrophy with cerebellar white matter abnormalities, calcifications in the basal ganglia, and chiasm atrophy [43]. Diagnosis is confirmed molecularly by identifying pathogenic monoallelic variants in the MFN2 gene.
Management of MFN2-related disorders involves a multidisciplinary team that includes a neurologist, orthopedic surgeon, psychiatrist, and physical and occupational therapists. Standard medical treatment is supportive and based upon the affected individual's needs. Routine visual assessment should be performed in individuals with or without optic atrophy and annually in children for educational needs. MRI of the legs to assess the amount and location of fat replacing muscle should be performed every few years in specialized centers only [44]. Obesity, which makes walking more difficult, and neurotoxic medications (e.g., vincristine and taxols) should be avoided in affected individuals. Orthotics such as ankle foot orthoses are key in the rehabilitative approach since they improve walking velocity, balance, and ankle range of motion. Musculoskeletal pain can be alleviated with acetaminophen or nonsteroidal anti-inflammatory agents. Tricyclic antidepressants, carbamazepine, or gabapentin may decrease neuropathic pain.
Mitofusin agonists and activators are showing promising results as a therapeutic approach for CMT2A and other diseases of impaired neuronal mitochondrial dynamics [45,46]. Mitofusin agonists stabilize the fusion-permissive open conformation of endogenous normal MFN1 or MFN2. This overcomes the dominant suppression of mitochondrial fusion induced by the dysfunctional proteins and directly stimulates mitochondrial fusion in order to restore the balance between mitochondrial fission and fusion [45,47]. In mice that express human MFN2 T105M, intermittent activation of mitofusin using MiM111, which is a metabolically stable mitofusin activator with good nervous system bioavailability, normalized CMT2A neuromuscular dysfunction through accelerated primary axonal outgrowth and greater postaxotomy regrowth [46]. Gene therapy is another promising approach. It has been shown that in vivo augmentation of MFN1 in the central nervous system of mice, using a transgenic approach, rescued all phenotypes in mutant MFN2-expressing mice [48].
MSTO1-Related Mitochondrial Myopathy and Ataxia
MSTO1-related mitochondrial myopathy and ataxia (MIM#617675) is caused by monoallelic pathogenic variants in MSTO1 (MIM*617619) leading to an autosomal dominant disease or biallelic variants in MSTO1 leading to a recessive disease. To date, a total of 27 cases from 19 families have been reported [49].
Age of onset is variable but mostly in early childhood. Common presenting features of both the autosomal recessive and dominant diseases include intellectual disability, delayed motor development, learning disability, delayed speech, hearing impairment, ataxia with dysmetria and dysdiadochokinesis, hypotonia, tremor, difficulty walking, myalgia, muscle weakness and atrophy, short stature, distinctive facial features (small eyes, close-set eyes, micrognathia, prominent jaw, long face, and myopathic face), and musculoskeletal anomalies (scoliosis, delayed skeletal maturation, joint hyperlaxity, pes cavus, pes varus, and chest asymmetry). Features seen mainly in autosomal dominant cases include behavioral and psychiatric manifestations (anxiety, depression, and schizophrenia), endocrine abnormalities (delayed bone age, hyperthyroidism, hyperprolactinemia, and primary amenorrhea), lipomas, and frontal lobe atrophy [8]. Common features in autosomal recessive cases include poor growth, papillary pallor, hyporeflexia, brain imaging abnormalities (cerebellar hypotrophy and hyperintense white matter abnormalities), distinctive facial features (thick hair and high arched palate), pectus excavatum, and increased serum creatine kinase [50]. Muscle biopsy shows myopathic features with increased fiber size variation, an increased number of abnormal mitochondria, mitochondrial degeneration, and decreased mtDNA content. Fibroblasts from affected individuals display fragmented mitochondria, mtDNA depletion, enlarged lysosomal vacuoles, and reduced nucleoid numbers [51].
OPA1-Related Disorders
Optic atrophy 1 is an autosomal dominant disorder, although recent studies have suggested semi-dominant inheritance [54]. It is characterized by childhood-onset bilateral vision loss, visual field defects, and optic nerve pallor. Its prevalence is 1:12,000-50,000, which makes it the most common inherited optic neuropathy if glaucoma is excluded [55]. Its penetrance is 43 to 100%. Most cases have an affected parent, but de novo pathogenic variants have been reported. Affected individuals are usually detected during vision screening at school in the first decade of life (median 5 years), but later onset (21-30 years) has been reported. Visual impairment is typically bilateral and symmetrical. It ranges from mild to severe (usually moderate, with a visual acuity of 20/80 to 20/120). Legal blindness is rare. Visual loss is usually irreversible and progressive during puberty until adulthood, with very slow chronic progression subsequently [56]. Reported visual field defects are paracentral, central, and centrocecal. Although color vision defects in the blue-yellow or red-green axes are commonly reported [57], over 80% of cases have a mixed-color deficit [58]. Other reported ocular anomalies include strabismus (10%), horizontal nystagmus (5%), ptosis, and progressive external ophthalmoplegia from the third decade of life onwards.
Typical ophthalmologic examination findings include bilateral and symmetrical optic nerve pallor (cardinal sign, temporal, global) with a wedge-like papillary excavation. Optic nerve heads of cases are usually smaller than in age-matched controls [9]. Optical coherence tomography reveals loss of retinal nerve fiber thickness mostly evident in the temporal quadrant with relative sparing of the nasal quadrant [59]. Electrophysiology abnormalities include absent or delayed visual evoked potentials. Histological examination can reveal diffuse atrophy of the retinal ganglion cell layer associated with atrophy and loss of myelin within the optic nerve but without atrophy of the outer retinal layers. Collagen is increased and neurofibrils and myelin sheaths are decreased in the optic nerves, optic chiasm, and optic tracts [60].
Optic atrophy plus syndrome occurs in up to 20% of optic atrophy cases that have additional extraocular neurological complications. Sensorineural deafness is a prominent manifestation that is bilateral and begins in late childhood or early adulthood but may be congenital or subclinical. Other manifestations include adult-onset cerebellar or sensory ataxia (29%), axonal sensorimotor peripheral neuropathy (29%), exercise intolerance, myalgia, muscle weakness, and proximal myopathy (35%) [61]. Rare clinical presentations include spastic paraparesis mimicking hereditary spastic paraplegia, multiple-sclerosis-like illness, and hypotonia with dysphagia and gastrointestinal dysmotility [39,61].
Behr syndrome is a genetically heterogeneous disorder characterized by childhood-onset optic atrophy with ataxia and pyramidal signs (spasticity, weakness, and hyperreflexia). Posterior column sensory loss and intellectual disability may be present. Gradual gait difficulties develop in the second decade of life. Other reported findings include cerebellar signs (dysmetria, dysdiadochokinesis, and nystagmus), hypotonia, delayed development, hearing loss, dysarthria, musculoskeletal anomalies (pes cavus and severe contractures of the lower extremities), and gastrointestinal anomalies (dysphagia, vomiting episodes, intestinal dysmotility, and severe constipation). Reported brain MRI anomalies include cerebellar atrophy, vermian atrophy, atrophy of the optic nerves and chiasm, and mild periventricular leukomalacia [62]. Adult-onset disease, including optic atrophy and ataxia, has rarely been reported [62].
Mitochondrial DNA depletion syndrome 14 (encephalocardiomyopathic-type) is characterized by severe lethal infantile mitochondrial encephalomyopathy and hypertrophic cardiomyopathy. It was reported in two sisters who showed profound neurodevelopmental delay, hypotonia, peripheral hypertonia (opisthotonic posturing, from birth), feeding difficulties, and hypertrophic progressive cardiomyopathy. One sister had abnormal eye pursuits with a weak cry, while the other had sensorineural deafness, optic atrophy, and increased serum and cerebrospinal fluid (CSF) lactate. They died at the age of 10 and 11 months. Electron microscopy from one sister showed incomplete fusion of the MIM along with large mitochondria. Significant mtDNA depletion was found in the muscle biopsies from both sisters [63].
Diagnosis of OPA1-related disorders is confirmed molecularly in suspected cases with the above clinical features by identifying biallelic or monoallelic pathogenic variants in OPA1. Ragged red fibers (RRFs), mtDNA deletions, and COX-deficient fibers may be observed in skeletal muscles.
Treatment is supportive and targeted to the individual's needs. Proposed treatments under investigation include genetic therapy to correct the mutation-induced splice defect [64], antioxidants (vitamin E, superoxide) [65], and idebenone drug therapy [66]. In 74 of 87 individuals with dominant optic atrophy, an increased visual acuity was observed after at least 7 months of administration of idebenone. Tolfenamic acid trial therapy has shown positive effects on mtDNA stability and amelioration of the energetic functions and the mitochondrial network morphology, depending on the type of OPA1 mutation [67]. Recent studies have shown promising results with mesenchymal stem cell therapy using human embryonic stem cells and induced pluripotent stem cells [68,69].
YME1L1-Related Optic Atrophy 11
Optic atrophy 11 (MIM#617302) is an autosomal recessive disease caused by biallelic pathogenic variants in YME1L1 (MIM*607472) and is characterized by delayed psychomotor development, optic atrophy, and leukoencephalopathy. To date, this disease has been reported in four siblings who presented with intellectual disability, developmental delay, hearing impairment, optic anomalies (optic nerve atrophy with visual impairment), and leukoencephalopathy observed on brain MRI. Inconsistent features include ataxia, hyperkinesia, athetotic and stereotypic movements, macro-/microcephaly, and elevated lactate levels in blood and CSF [70]. Muscle biopsy revealed neurogenic changes (grouped fibers indicating denervation) and mitochondria with altered cristae morphology and paracristalline inclusions. Cultured fibroblasts showed an increase in fragmented and shortened mitochondrial networks, which is consistent with mitochondrial network fragmentation [70].
FBXL4-Related Mitochondrial DNA Depletion Syndrome 13
Affected individuals present with early infantile onset of encephalopathy, hypotonia, global severe developmental delay, seizures, ataxia, movement abnormalities (dystonia and choreoathetosis), microcephaly, distinctive facial features (narrow face, thick eyebrows, epicanthus, upslanting palpebral fissures, long eyelashes, synophrys, broad nasal bridge and tip, saddle nose, long and smooth philtrum, malformed ears, protruding ears, low-set ears, and everted lower lip vermilion), gastroesophageal reflux, hypospadias, arrhythmias, neuroimaging anomalies (delayed myelination, thin corpus callosum, leukodystrophy, cerebral atrophy, white matter lesions in the brainstem and basal ganglia, and arachnoid cysts), and metabolic derangements (increased serum lactate, alanine, and ammonia) [72]. Other features can include plagiocephaly, nystagmus, cataracts, neutropenia, recurrent infections, renal tubular acidosis, hypertrophic cardiomyopathy, scoliosis, small feet, and abnormal liver enzymes. Mitochondrial hyperfragmentation can be observed in cultured fibroblasts. Decreased mtDNA content and reduced activity of multiple ETC complexes can be observed in skeletal muscle and fibroblasts [73]. Survival varies, with a median age of reported deaths of two years (range, 2 days to 75 months), although survival up to 36 years of age has been reported.
Treatment is supportive. Sodium pyruvate was shown to improve the muscle strength and the quality of life of an infant with myopathic mitochondrial DNA depletion syndrome [74]. Recent studies have shown that mitochondrial DNA depletion syndromes are often associated with secondary CoQ deficiency [75]. While some studies have advised assessment of muscle CoQ status in affected individuals in order to consider early CoQ supplementation as a candidate therapy [76], other studies have concluded that the use of CoQ therapy in mitochondrial DNA depletion disorders is not efficacious [76]. Further studies are required to resolve this controversy.
DNM1L-Related Disorders
Pathogenic variants in DNM1L (MIM*603850) are responsible for two distinct phenotypes: encephalopathy due to defective mitochondrial and peroxisomal fission 1 (MIM#614388) and optic atrophy 5 (MIM#610708). DNM1L-related encephalopathy due to defective mitochondrial and peroxisomal fission 1 is a lethal childhood encephalopathy characterized by delayed psychomotor development and hypotonia. It can be caused by monoallelic pathogenic variants in DNM1L leading to a dominant disease or biallelic pathogenic variants in DNM1L leading to a recessive disease. Variants in dominant disease occurred de novo [77]. To date, 11 cases have been identified: 7 with the autosomal dominant disease and 4 with the autosomal recessive disease. Optic atrophy 5 has been reported in three unrelated French families.
Affected individuals with encephalopathy due to defective mitochondrial and peroxisomal fission 1 present in early infancy or childhood with neurologic regression, severe developmental delay, and hypotonia. Inconsistent findings include refractory epilepsy (clonic, focal, generalized tonic-clonic, and status epilepticus), cognitive decline, insensitivity to pain, decreased visual tracking, difficulty walking, areflexia, absent response to light stimulation, myoclonus, hypertonia, dysphagia, failure to thrive, respiratory insufficiency, microcephaly, distinctive facial features (pointed chin and deep-seated eyes), cardiomyopathy, and skeletal anomalies (broad thumbs and big toes) [78-80]. Increased serum and CSF lactate can be seen in some affected individuals. Brain MRI findings range from normal to progressive cerebral atrophy, demyelination, delayed myelination, thinning of the corpus callosum, T2-weighted hyperintense lesions in the cortex, and abnormal gyral pattern in the frontal lobes [81]. Cultured fibroblasts show decreased elongated peroxisomes and tubular mitochondria with defects in mitochondrial and peroxisomal fission. Postmortem evaluation of one patient revealed mitochondrial cardiomyopathy characterized by abnormal cardiac myocytes with enlarged mitochondria [82].
DNM1L-related optic atrophy 5 is characterized by slowly progressive visual loss with variable onset from the first to third decades. Additional ocular abnormalities may include dyschromatopsia (blue-yellow), central scotoma, a slow decrease in visual acuity, and optic nerve atrophy. Mitochondria in mutant cells showed a highly elongated, tubulated, and hyperfilamentous network, supporting an impairment of mitochondrial fission [83].
Treatment is supportive. Bezafibrate has shown promising results in improving mitochondrial fission and function in DNM1L-deficient cells [84]. It is an agonist of peroxisome-proliferator-activated receptor alpha, which is a ligand-activated transcription factor that increases the expression of multiple genes, including nuclear-encoded respiratory chain genes [85]. Bezafibrate normalized growth, ATP production, and oxygen consumption in fibroblasts from affected individuals [84].
MFF-Related Encephalopathy
Encephalopathy due to defective mitochondrial and peroxisomal fission 2 (MIM#617086) is an autosomal recessive disease caused by biallelic pathogenic variants in MFF (MIM*614785). To date, six affected individuals have been reported. Affected individuals present with severe hypotonia, delayed psychomotor development, microcephaly, and abnormal signals in the basal ganglia. Inconsistent features include early-onset seizures, including hypsarrhythmia, optic atrophy, peripheral neuropathy, hyperreflexia, spasticity, swallowing difficulties, ocular anomalies (vision loss, pale optic discs, absent visual fixation, and high refractive errors), hearing loss, short stature, failure to thrive, and diffuse cerebellar atrophy [86,87]. Serum lactate may be normal or increased. Cultured fibroblasts can show elongated peroxisomes and mitochondria, suggesting a fission defect [86,87].
Summary
Mitochondrial fusion and fission are crucial for the maintenance of normal mitochondrial morphology and functions. Mitochondrial fusion enables the exchange of intramitochondrial material between mitochondria. This process is mediated by several nDNA-encoded proteins that act on tethering the MIM and MOM, and defects in mitochondrial fusion have been associated with pathogenic variants in MFN2, MSTO1, OPA1, YME1L1, and FBXL4. Mitochondrial fission creates new mitochondria during cell division, allows the redistribution of mitochondria, and facilitates the segregation of damaged mitochondria. Mitochondrial scission is mediated by DNM1L, and defects in mitochondrial fission have been associated with pathogenic variants in the DNM1L and MFF genes. Defects in mitochondrial fusion and fission result in diseases with variable neurological defects, which accentuates the importance of balanced mitochondrial fission and fusion in neuronal function [88]. | 2022-09-21T15:07:17.046Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "fc7914660dacebba2f040f878f2734bddd9e887c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "cc0f648a5c1150b2c8e481366c26c215a7e14265",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253148647 | pes2o/s2orc | v3-fos-license | Dynamic congestion management system for cloud service broker
The cloud computing model offers a shared pool of resources and services with diverse models presented to the clients through the internet by an on-demand, scalable, and dynamic pay-per-use model. Developers have identified the need for an automated system (cloud service broker (CSB)) that can contribute to exploiting the cloud's capability, enhancing its functionality, and improving its performance. This research presents a dynamic congestion management (DCM) system that can manage the massive number of cloud requests while considering the required quality for the clients' requirements as regulated by the service-level policy. In addition, this research introduces a forwarding policy that can be utilized to choose high-priority calls coming from the cloud service requesters and pass them via the broker to the suitable cloud resources. The policy makes use of one of the mechanisms used by Cisco to assist in managing the congestion that might take place on the broker side. Furthermore, the DCM system is used to help in provisioning and monitoring the work of the cloud providers during job execution. The proposed DCM system was implemented and evaluated using the CloudSim tool.
INTRODUCTION
Cloud computing, cluster computing, and grid computing models are designed to give access to a vast amount of computing power by sharing information technology (IT) resources or services through a single system interface or a group of them. In service computing, IT resources (hardware or software) are established by service providers and delivered on demand [1], [2]. Service requesters pay the service providers in a pay-per-use charging mode. It is essentially like a public utility (such as gas, water, power, or telephone) in which the clients are charged on a daily or monthly basis according to their use of the given unit [3], [4].
Cloud computing is an example of a model for offering on-demand network access to configure and share computing resources. By now, cloud computing has spread widely across several applications and usages. It has become a significant part of the future generation of services and computing infrastructure at a reasonable cost (Figure 2).
Figure 2. Cloud service broker [9]
In addition, the cloud broker can have the right to negotiate arrangements with the cloud providers on behalf of the cloud requester. In this situation, the CSB is given the power to split the job between many service providers to reduce the cost as much as possible. Furthermore, the cloud broker can provide the cloud users with an interface that hides some complexity and lets the cloud requesters deal with different cloud providers as if the services were being purchased from one provider. This type of broker is named a cloud aggregator [10]. The architecture of the CSB is shown in Figure 3 and its components are defined by Buyya et al. [11]. In this architecture, the CSB is divided into four layers:
− User interface: Offers the access connection between the broker and a user application interface. The application interpreter defines what is to be executed for the user, the task descriptions, and the desired quality of service. The service interpreter identifies the service prerequisites required for the execution; these requirements include service type, service location, and some other necessary services. The credential interpreter inspects the credentials for retrieving the required services.
− Core services: This is where the main function of the broker takes place. The service negotiator receives the service requirements from the user interface. The scheduler decides the most suitable cloud providers for the user-requested services based on their service and application requirements. The service monitor continually checks the condition of cloud services through periodic checking of the availability of recognized cloud services and searching for available new services.
− Execution interface: Offers execution support for the user request. The dispatcher generates the required broker-agent and attaches the data files along with the user job to be posted to the cloud resources for execution. The job monitor tracks the execution of the task so that the results of the task are delivered to the user upon job completion.
− Persistence: This layer is very important in the case of broker failure. It keeps the state of the core services, execution interface, and user interface in a database.
Figure 3. Cloud service broker architecture [11]
However, one situation that can occur is that the broker rejects some requests when it reaches its full capacity of cloud requests. In this case, the broker may not be able to deal with such a huge number of requests and may in the end reject some of them to prevent the CSB from being overwhelmed. In this paper, a new dynamic congestion management (DCM) system is created, which is derived from Cisco queuing algorithms. This system can handle the huge number of cloud requesters while considering the customers' quality-of-service conditions as regulated by the service-level agreement (SLA). Also, it is used to monitor and provision the work of the cloud providers while jobs are running. There are many cloud-computing models that have methods to provision and monitor their resources and define the cloud service broker's job. The following are the most standard models in the field of cloud computing and brokering systems.
The dynamic resources provisioning and monitoring (DRPM) system proposed by Al-Ayyoub et al. [12] is a multi-agent system designed to handle the resources in the cloud provider whilst taking into consideration the required quality for the clients' requirements. These requirements are controlled by the SLA. The DRPM system also comprises a new virtual machine selection algorithm named the host fault detection (HFD) algorithm, which selects the virtual machines to migrate when the hosting physical machine becomes overloaded, depending on the source of the overload. The DRPM system was tested and evaluated using the CloudSim tool. The results showed that the DRPM system increased resource utilization and, at the same time, decreased power consumption while avoiding SLA violations.
Vecchiola et al. [13] proposed the Aneka system, which is a .NET-based platform (PaaS) for cloud computing. This system offers a group of runtime environment applications and APIs for several programming models and implements them on public and private cloud computing platforms such as GoGrid and Amazon EC2. The Aneka system also offers SLA-oriented resource monitoring methods. This method runs whenever the system accepts new tasks from cloud customers: it estimates the time required to finish these tasks with the existing resources and matches it against the SLA deadline time. If the estimated time to finish the new tasks meets the deadline, the system keeps working; otherwise, the system increases the cloud resources and continues running to avoid any SLA violation.
Siddiqui et al. [14] proposed the Elastic-JADE system, which has three elements (a local machine, the Amazon EC2 cloud, and the cloud user). Using those elements, the system automatically balances Amazon EC2's resources (by scaling the resources up or down) via the Java agent development framework (JADE) platform once heavy loads take place on the local platform. The agent at the client machine is in control of provisioning the whole system on both sides. Also, it communicates with the administrative agent in Amazon EC2 and then forwards directive orders for increasing or decreasing resources according to the system load. Bonvin et al. [15] introduced the scattered autonomic resources (Scarce) model, which is a multi-agent platform to dynamically administrate the resources using an economic-based method. The agents work on the server side and are responsible for directing and scaling the resources and constantly checking the condition of the systems.
Venticinque et al. [16] proposed the open cloud computing interface (OCCI) framework, which supports monitoring, provisioning, and auto-configuration of cloud resources to fulfill the application requirements at the infrastructure level (IaaS). The OCCI also contains a collection of APIs and protocols with self-contained, supplier-neutral platforms, which resolve several problems in the administration of usual tasks while fulfilling portability, interoperability, and integration requirements, plus autonomic scaling, monitoring, and deployment.
Morrison [17] considered three distinct types of CSB: i) customer-centric, created for customers' service requirements and the quality of service (QoS) provided to the customer; ii) solution-centric, created for assembling service offerings from the wide range of technical services presented in the cloud (one of the main jobs of this type is to ensure the integrity and security of services); and iii) resource-centric, which works as a service assembler. The author proposed that a service broker comprises self-service entitlement, application catalogs, role-based access control, billing, SLA monitoring, metering, auditing, and reporting. He identified the CSB as a business model that helps the customers to choose, manage, organize, and coordinate the different services they require.
Praveen et al. [18] introduced a new task scheduling and resource allocation schema for the cloud environment. The proposed load balancing schema uses the social group optimization (SGO) algorithm and consists of two autonomous phases: optimal resource allocation using SGO and effective scheduling of tasks using shortest-job-first (SJF). Their experimental results have shown that the SGO-based SJF scheduler reduces the makespan time and the number of active servers when compared to first-come-first-serve (FCFS) and genetic algorithm (GA) based schedulers, and thus reduces the associated cost of using cloud services.
Guo et al. [19] introduced a load balancing schema for edge cloud environments. The challenges that edge cloud environments face are amplified by the fine granularity of the required resource billing and allocation processes. They proposed the use of autoregressive integrated moving average (ARIMA) and back propagation (BP) neural networks to estimate the required load. Accurate estimations allow user data to be migrated to a less congested working node to ensure service continuity. When compared to load estimation using ARIMA models and GA models, the proposed model outperformed the other models in terms of load estimation accuracy. They also showed that using the proposed model could reduce the associated service expenditures effectively.
In this research, the authors focus on identifying the need for an automated system (CSB) that can help in utilizing the cloud's power, enhancing its functionality, and improving its performance. The contributions of this work can be summarized in the following points: i) it presents a DCM system that can manage the massive number of cloud requesters while considering the required quality for the clients' requirements as regulated by the service-level agreement; ii) it introduces a forwarding policy that can be utilized to choose high-priority requests coming from the cloud service requesters and pass them via the broker to the suitable cloud resources (the policy makes use of one of the mechanisms used by Cisco to ease the administration of the congestion that might take place on the broker side); and iii) the DCM system is also used to help in provisioning and monitoring the work of the cloud providers during job execution.
The following are some backgrounds about cloud service models, the SLA, and the congestion management algorithms. Figure 4 shows the cloud service models and the relations between them [20]. These models can be explained as:
− Infrastructure as a service: In this model, the provider is the holder of the equipment and tools. This business model presents the virtualization of resources on demand [21]. The providers give the cloud users the ability to use those tools and equipment virtually instead of buying them. In this case, the customer pays per use on demand [3]. This model also allows the customer to self-provision when using the provider's services.
− Software as a service: This is the most common model in cloud services. In this model, the client can access the applications and use them without the need to download or buy them. It is also a storage and supply model where the user rents the storage area from the supplier [22]. Users can access and use SaaS services through a web browser.
− Platform as a service: This is the service for operating applications over the internet by hiring software and hardware structures from cloud providers [23]. It is mainly for application developers who can develop and test their applications.
Clients under the cloud computing model do not have to worry about possessing and running the physical base needed for their jobs. In other words, they do not have to be concerned about programming teams and developers. Furthermore, the customers need not be concerned about how their task will be performed or where it will be executed. The only things they need to be concerned about are the charge for using the resources, the quality of service promised, and the type of services they can acquire from the cloud providers. The one that worries about achieving efficient utilization and monitoring of resources is the CSB. By using smart ways and methods, the broker should obtain the finest quality-of-service assurances without creating any breach of the SLA [12]. A cloud SLA is an agreement between the cloud providers and a cloud customer that ensures the lowest level of service is preserved. It guarantees levels of availability, reliability, and responsiveness of applications and systems, while determining who will have control when there is a service interruption [24].
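To make the SLA terms above concrete, the following is a minimal sketch of how a broker might represent such an agreement; the Java record form and all field names are assumptions introduced for illustration, not part of the original system.

/**
 * Illustrative representation of the SLA terms named above; every field
 * name here is an assumption chosen for this sketch.
 */
public record ServiceLevelAgreement(
        String customerId,
        double guaranteedAvailability,  // e.g., 0.999 uptime
        long maxResponseTimeMillis,     // responsiveness bound
        boolean priorityServicePaid,    // the extra paid priority tier
        String controllerOnOutage) {    // who takes control on interruption

    /** True when a measured availability would breach the agreement. */
    public boolean isViolated(double measuredAvailability) {
        return measuredAvailability < guaranteedAvailability;
    }
}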
Because of the growing demands on cloud services, applications and cloud service requesters nowadays ask to reduce the completion time of their jobs while reducing the associated cost. At the same time, they ask for the maximum QoS during their jobs' execution. Most requests are quite sensitive to the delays faced when running and transferring over the internet. That is why it is required to support different types of traffic with different qualities of service. The most significant aspect of this is how to distribute the available resources while facing congestion. To achieve this, several methods and tools are required to help distinguish between the types of traffic coming to the cloud, which can be done through prioritizing [25]. This research introduces a forwarding policy that can be employed to choose high-priority calls coming from the cloud service requesters (as an extra service) and deliver them via the cloud service broker to the suitable cloud resources. There is therefore a need to queue the low-priority calls and hold them in the broker's memory until the high-priority calls are handled by the cloud. This paper has made use of one of the mechanisms used by Cisco to ease the management of the congestion that may take place on the cloud service broker side. Cisco has many mechanisms that support queuing on Cisco router interfaces using hardware and software modules [25]. The chosen mechanism is weighted fair queuing (WFQ). This mechanism does not permit classification options to be configured. Instead, WFQ categorizes packets automatically, with every flow being placed into a distinct queue according to its priority, and each of these queues has a specific weight [26]. The process starts as a round-robin (RR) mechanism, where time slices are allocated to each process in equal portions and circular order, but the processor allows the flow of traffic to get through based on the weight of that queue in the round. In this way, WFQ can solve the starvation problem that could occur if the plain RR mechanism were used. This paper is organized as follows: section 2 presents the proposed system model; the experiment and results are discussed in section 3; finally, the conclusions are presented in section 4.
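To illustrate the WFQ idea described above, the sketch below drains three per-priority queues in weighted round-robin fashion. The 4:2:1 weights, the class names, and the use of strings for requests are assumptions of this example, not Cisco's actual implementation.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Weighted fair dequeuing over three priority classes (illustrative weights). */
public class WeightedFairQueue {
    public enum Priority { HIGH, MEDIUM, LOW }

    private final Deque<String> high = new ArrayDeque<>();
    private final Deque<String> medium = new ArrayDeque<>();
    private final Deque<String> low = new ArrayDeque<>();

    public void enqueue(String request, Priority p) {
        switch (p) {
            case HIGH -> high.addLast(request);
            case MEDIUM -> medium.addLast(request);
            case LOW -> low.addLast(request);
        }
    }

    /**
     * One round-robin cycle: every queue gets a turn, but the number of
     * requests released per turn follows the queue's weight, so the LOW
     * queue still progresses every round and starvation is avoided.
     */
    public List<String> nextRound() {
        List<String> released = new ArrayList<>();
        drain(high, 4, released);   // weight 4
        drain(medium, 2, released); // weight 2
        drain(low, 1, released);    // weight 1
        return released;
    }

    private static void drain(Deque<String> q, int weight, List<String> out) {
        for (int i = 0; i < weight && !q.isEmpty(); i++) {
            out.add(q.pollFirst());
        }
    }
}

Because the low-priority queue is guaranteed one slot per round, no class can be starved indefinitely, which is the property that distinguishes this scheme from strict priority queuing.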
SYSTEM MODEL
The details of the DCM system are explained in this part. The DCM system is a multi-agent model that takes into consideration various elements when forming the decision, for example, the number and the shortcomings of cloud resources, the customers' fulfilment, and the customers' QoS requirements. This system is created by adapting and combining both the DRPM system introduced in [12] and the cloud broker architecture introduced in [11]. Figure 5 shows the architecture of the DCM system. The model is split into three parts: the cloud service users part, the CSB part, and the cloud service provider part.
The cloud service users part
In this part, a local agent is allocated to every customer. This agent is responsible for marking the customers' requests based on their priority. This is considered an extra paid service that users can pay for in order to guarantee the priority of their jobs over the cloud among other users. If this service is not paid for, the local agent considers the request a low priority. The local agent then sends the marked requests to the CSB, precisely to the classifier inside the broker. In the user's request, the job specifications are determined, such as the type of virtual machine (VM) and the hardware and software required. Once a new call for a VM with particular features is received from the client, the local agent marks the request as one of three levels: high, medium, or low. This helps the user to process some urgent jobs where needed. The local agent can utilize the history of the user's requests to help in marking new ones. The output of the marking step is sent to the classifier in the CSB for the next step.
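A minimal sketch of this marking step follows; the paid-tier lookup table and the method names are assumptions introduced purely for illustration.

import java.util.Map;

/** Sketch of the local agent's marking of outgoing VM requests. */
public final class LocalAgent {
    public enum Priority { HIGH, MEDIUM, LOW }

    private final Map<String, Priority> paidTierByUser; // e.g., from billing records

    public LocalAgent(Map<String, Priority> paidTierByUser) {
        this.paidTierByUser = paidTierByUser;
    }

    /** Users who did not pay for the extra service default to LOW priority. */
    public Priority mark(String userId) {
        return paidTierByUser.getOrDefault(userId, Priority.LOW);
    }
}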
The cloud service broker part
This part is the core of the cloud system and the DCM system. The element that is accountable for receiving the users' jobs in the CSB is the classifier. The main job of the classifier is to recognize which job should be processed first in the case of job congestion. As the CSB continues to receive jobs from different cloud users, it may eventually have to reject some requests due to the huge number of requests. The DCM system plays a very important role in this case. Instead of rejecting requests randomly, the classifier can make use of the job marking that has been done by the local agent on the cloud service users' side to organize the requests in a way that takes into account the QoS requirements as regulated by the SLA. If the CSB is not congested, the classifier applies first-in first-out (FIFO) order to the incoming requests, but if the CSB gets congested, the classifier depends on the WFQ mechanism to resolve the congestion. WFQ categorizes packets automatically, with every flow being placed into a distinct queue according to its priority, and each of these queues has a specific weight. The process starts as an RR mechanism, where time slices are allocated to each process in equal portions and circular order, but the processor allows the flow of traffic to get through based on the weight of that queue in the round. The advantage of using the WFQ mechanism is that it can solve the starvation problem that could occur if the plain RR mechanism were used. Figure 6 shows how the classifier deals with the incoming requests from cloud users. The classifier can make use of the broker buffer in case some requests need to be put on hold for further classification.
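The classifier's decision can be sketched as follows: FIFO while the broker is below capacity, and a switch to WFQ rounds once congestion is detected. The capacity check, the method names, and the reuse of the WeightedFairQueue sketch shown earlier are assumptions of this illustration.

import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Sketch of the classifier: FIFO when uncongested, WFQ under congestion. */
public class Classifier {
    private final int brokerCapacity; // e.g., 250 requests, as in the experiments
    private final WeightedFairQueue wfq = new WeightedFairQueue();

    public Classifier(int brokerCapacity) {
        this.brokerCapacity = brokerCapacity;
    }

    /** Orders arrivals for processing; marks.get(i) is the i-th request's priority. */
    public List<String> order(Deque<String> arrivals,
                              List<WeightedFairQueue.Priority> marks) {
        if (arrivals.size() <= brokerCapacity) {
            return new ArrayList<>(arrivals); // no congestion: plain FIFO
        }
        // Congestion: place each request into its priority queue, then
        // release requests in weighted rounds until capacity is reached.
        int i = 0;
        for (String r : arrivals) {
            wfq.enqueue(r, marks.get(i++));
        }
        List<String> accepted = new ArrayList<>();
        while (accepted.size() < brokerCapacity) {
            List<String> round = wfq.nextRound();
            if (round.isEmpty()) {
                break; // nothing left to release
            }
            accepted.addAll(round); // may overshoot by part of a round in this sketch
        }
        return accepted; // the remainder waits in the broker buffer
    }
}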
After the classifier decides which users' requests should be processed first, it hands them to the scheduler for further processing. The scheduler decides the most suitable cloud services for the client-requested services based on their service and application requirements. The service monitor continually monitors the status of cloud services through periodic checking of the availability of recognized cloud services and searching for available new services. Later, the job dispatcher receives the user application (the user's job) and attaches the data files with the user application to be posted to the chosen cloud resources for execution. The job monitor tracks the execution level of the job so that the results can be sent back to the user as soon as the job is finished. This is done by checking the status of the job with the agent attached to the cloud resources. If the broker crashes for any reason, each element in the CSB maintains its state in a database for later recovery. Finally, when the job is accomplished, the agent attached to the cloud resource sends the results back to the job dispatcher, which in turn notifies the job monitor about the completion of the job and sends the results back to the local agent of the cloud user.
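The dispatch-and-monitor loop just described can be sketched as below; the ResourceAgent interface and the status strings are assumptions for illustration, and the 300-second polling interval simply mirrors the scheduling interval used later in the experiments.

/** Status reports from the agent attached to a cloud resource (assumed API). */
interface ResourceAgent {
    String jobStatus(int jobId); // e.g., "RUNNING" or "DONE"
    String fetchResults(int jobId);
}

/** Sketch of the job monitor's periodic status checks. */
final class JobMonitor {
    void watch(ResourceAgent agent, int jobId) throws InterruptedException {
        while (!"DONE".equals(agent.jobStatus(jobId))) {
            Thread.sleep(300_000); // poll every 300 s
        }
        String results = agent.fetchResults(jobId);
        // Hand the results to the dispatcher, which notifies the job monitor
        // and returns them to the user's local agent (omitted in this sketch).
        System.out.println("Job " + jobId + " completed: " + results);
    }
}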
The cloud service provider's part
This part contains the physical machine (central processing unit (CPU), random access memory (RAM), storage, and bandwidth) and the virtual machines. Each physical machine may contain one or more VMs. For each resource, there is an agent that is responsible for receiving the job description from the CSB and notifying the job monitor with periodic messages about the status of the job. It is used to provision and monitor the work of the cloud provider while the job is running.
EXPERIMENTS AND RESULTS
In this part, two experiments were created and evaluated using CloudSim [27], which is employed to evaluate the performance of the suggested DCM system. CloudSim is one of the primary simulation environments for the cloud computing model. Both experiments have the same settings. The first experiment was divided into two parts. The settings include 300 cloudlets (jobs) from 3 cloud users (100 each), with one machine (one physical machine and three virtual machines) as the cloud provider. The service broker is designed to accept up to 250 cloudlets and reject all requests above that. The simulation period is 24 hours with a scheduling interval of 300 seconds. Tables 1, 2, and 3 show the features of the physical machine, the VMs, and the cloudlets, respectively.
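A minimal sketch of how such a run is set up with the CloudSim 3.x API is shown below; the numeric values are placeholders rather than the exact figures from Tables 1-3, and the datacenter-creation step is elided.

import java.util.Calendar;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.core.CloudSim;

public class DcmExperimentSketch {
    public static void main(String[] args) throws Exception {
        CloudSim.init(3, Calendar.getInstance(), false); // 3 cloud users

        DatacenterBroker broker = new DatacenterBroker("DCM-Broker");

        // One of the three VMs on the single physical host (values illustrative).
        Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10000,
                "Xen", new CloudletSchedulerTimeShared());

        // One of the 300 cloudlets; length and file sizes are placeholders.
        UtilizationModel um = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 40000, 1, 300, 300, um, um, um);
        cloudlet.setUserId(broker.getId());

        // A datacenter hosting the physical machine from Table 1 would be
        // created here, the VM and cloudlet lists submitted to the broker,
        // and then: CloudSim.startSimulation();
    }
}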
The difference between the two parts is that in the first one, no paid services for medium or high priority were used. In this situation, the local agents assigned to each of the three requesters count the priority of each job as equal (low priority for all three users). In the next part, the local agent at one of the requesters marked the cloudlets as high or medium priority, whereas the other two requesters had no priority marking. Both parts were intended to direct all the cloudlets to the cloud broker simultaneously to examine the time needed to process all the jobs together, as well as the number of jobs rejected due to the huge number of requests. The two parts of the simulation were compared and evaluated on several aspects, such as the time needed to finish the cloudlets and the number of cloudlets rejected for every user. First part: 300 cloudlets were sent to the service broker from three users. There is no priority at this stage. The broker received the cloudlets and processed them as first in first out jobs. The first 100 cloudlets, coming from the first user, were processed without any rejected jobs; the same holds for the second user. But for the third user, the broker rejected the last 50 jobs as they exceeded the limit of accepted requests (as per its policy). The third user had to resend the rest of the jobs later for processing. Figures 7 and 8 show the processing time and the rejected jobs, respectively, for the first part of the experiment.
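As a toy reproduction of this FIFO admission outcome (a sketch under the stated assumptions of 3 users sending 100 cloudlets each and a 250-cloudlet capacity; the user labels are illustrative):

```python
from collections import Counter

# FIFO admission with a fixed capacity, as described for the first part.
CAPACITY = 250
arrivals = [("user1", i) for i in range(100)] \
         + [("user2", i) for i in range(100)] \
         + [("user3", i) for i in range(100)]

accepted, rejected = arrivals[:CAPACITY], arrivals[CAPACITY:]
print(Counter(user for user, _ in rejected))  # Counter({'user3': 50})
```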
Second part: again, 300 cloudlets were directed to the broker, but at this stage the local agent marked the jobs of user number 3 as a paid, high-priority service. The other users remained at low (normal) priority. Figures 7 and 8 show the processing time and the rejected jobs, respectively, for the second part of the test. The results show that user 3 took advantage of the paid service: all of user 3's tasks were processed, the time needed for processing was very short compared to the first part, and no tasks of this user were rejected. Thus, this approach guarantees the completion of all the jobs in a short time. The results for users 1 and 2 show that user 1 still has no rejected jobs while user 2 has 45 rejected jobs, as the broker processed their cloudlets according to the first in first out mechanism. The overall processing time for the 300 cloudlets is slightly less than in the first part, and the number of rejected jobs in the second part is also slightly less than in the first one.
To validate the previous results, another scenario was applied. The settings this time include 300 cloudlets (jobs) from 4 cloud users (75 each), with three virtual machines and one physical machine as the cloud provider. The broker is designed to handle up to 200 cloudlets and rejects all requests above that. The physical machine, the VMs, and the cloudlets have the same characteristics as in the first experiment, as shown in Tables 1, 2, and 3, respectively. Again, the experiment was divided into two parts, which were compared and evaluated on several aspects, such as the time needed to finish the cloudlets and the number of cloudlets rejected for each user. First part: 300 cloudlets were sent to the broker from 4 users. No priority was applied at this stage. The broker received the cloudlets and processed them in FIFO order. The first 75 cloudlets, coming from the first user, were processed without any rejected jobs; the same holds for the second user. But for the third user, the broker rejected the last 25 jobs as they exceeded the limit of accepted jobs (as per its policy), and all cloudlets of the last user were rejected. The third and fourth users had to resend the rest of their jobs later for processing. Figures 9 and 10 show the processing time and the rejected jobs, respectively, for the first part of the experiment.
Second part: again, 300 cloudlets were transmitted to the broker. In this round, the local agent marked the jobs of user number 3 as medium priority (paid service), and the local agent of user 4 marked its jobs as high priority. The rest of the users remained at low priority. Figures 9 and 10 show the processing time and the rejected jobs, respectively, for the second part of the experiment.
The results show that users 3 and 4 took advantage of the priority service: all of user 3's and user 4's jobs were processed, the time needed for processing was very short compared to the first part, and no jobs of these users were rejected. The results for users 1 and 2 show that user 1 still has no rejected jobs whilst user 2 has 73 rejected tasks, as the broker processed their cloudlets according to the first in first out mechanism. The overall processing time for the 300 cloudlets is slightly greater than in the first part. Also, it is noted that the rejected jobs are fewer than in part 1 of the second experiment.
CONCLUSION
This paper presents a DCM system which can handle a huge number of cloud requesters while considering the quality required by the clients as regulated by the SLA. This system is applied by the CSB and by local agents attached to the cloud service requesters and providers. The proposed DCM system is assessed using the CloudSim tool. The results indicate that using the DCM system improves the degree to which customers' QoS conditions are met during resource employment and, at the same time, avoids cloud SLA violations. Several directions remain for future work in this field, such as user-responsive policy encoding, since cloud user policies are usually very complex and difficult to encode and enforce within the broker's decisions. | 2022-10-27T15:04:24.282Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "c16b18bb3a503c9916618c14be93393eb5cec323",
"oa_license": "CCBYSA",
"oa_url": "https://ijece.iaescore.com/index.php/IJECE/article/download/27587/16251",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2f1fe7f3a86ea73e3992a9943cdee1ddff3b2c3b",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": []
} |
268910644 | pes2o/s2orc | v3-fos-license | Ensemble Learning-based Algorithms for Traffic Flow Prediction in Smart Traffic Systems
Due to the tremendous growth of road traffic accidents
INTRODUCTION
Road traffic accidents are dramatically augmented each year due to the massively increasing number of vehicles on the roads. This problem is considered a serious risk, a major source of trouble for individuals worldwide, and a significant global concern [1]. Collecting and analyzing comprehensive data is essential for any initiative aiming to improve traffic safety [2]. With the rising number of vehicles on the roads and the resulting congestion issues, optimizing traffic flow has become a pressing challenge in modern cities. Intelligent Transportation Systems (ITSs) have emerged as a promising solution to alleviate traffic congestion and enhance overall transportation efficiency [3][4]. The Vehicle Ad-Hoc Network (VANET) serves as a fundamental infrastructure for ITSs, enabling wireless connectivity among vehicles [5][6]. Additionally, intelligent transport systems are increasingly focused on addressing traffic congestion. Researchers have employed machine learning algorithms to predict traffic flow and reduce congestion at intersections. These models were evaluated using
the national road traffic dataset for the UK. An adaptive traffic light system was implemented, which adjusts green and red lights based on road width, traffic density, and vehicle categories. Simulations demonstrated a 30.8% decrease in traffic congestion [7].
Accurate traffic prediction is crucial for ameliorating the effectiveness of traffic systems and reducing energy consumption. Machine learning-based methods have become commonplace, but they often rely on historical data [8][9]. Furthermore, ML-based models are gaining popularity due to their ability to accurately forecast traffic conditions, thereby improving safety and infotainment applications. However, the efficacy of these models in predicting real-time traffic remains a subject of investigation [10].
Several research studies have focused on developing methods and models for traffic flow prediction and management. In [11], a framework is presented that utilizes Vector Auto Regression (VAR) and a CNN-LSTM hybrid neural network to predict short-term traffic flow. The CNN-LSTM model outperforms other models in forecasting short-term traffic flow and demonstrates predictive accuracy associated with spatial correlation in traffic flow. In [12], three proposed solutions are discussed to address the issue of missing data in traffic management. These solutions include a live-traffic simulation, a neural network traffic prediction and rerouting system based on pheromone principles, as well as a Weighted Missing Data Imputation (WEMDI) approach. The integration of WEMDI into the systems yields notable improvements in various traffic factors and demonstrates efficient routing to alternative destinations. ML and neural networks play a significant role in solving traffic congestion issues. In this context, the authors in [13] propose ML and DL algorithms for predicting intersection traffic flow. The models were trained, validated, and tested using public datasets, and the Multilayer Perceptron Neural Network (MLP-NN) produced the best results. Gradient Boosting, Recurrent Neural Networks, RF, LR, and Stochastic also showed promising performance.
ITSs require traffic flow monitoring for effective management and optimization. Conventional methods of data collection and analysis are being augmented with AI techniques, such as ensemble learning [14]. The IAROEL-TFMS methodology utilizes feature subset selection and optimal ensemble learning to predict traffic flow, outperforming other approaches with its low RMSE. The authors in [14] used Hybrid-LSSVM, AST2FP-OHDBN, and IAROEL-TFMS models for evaluation purposes, considering their respective performance indicators. Among the several models evaluated, IAROEL-TFMS had the most superior predictive performance. In close succession, the AST2FP-OHDBN model exhibited robust performance, whereas the Hybrid-LSSVM model demonstrated a somewhat reduced level of prediction accuracy. Regarding predictive performance, the IAROEL-TFMS model had the best precision and accuracy in forecasting the target variable, closely followed by the AST2FP-OHDBN model, while the Hybrid-LSSVM model exhibited slightly inferior predictive skill. This paper utilizes four Machine Learning (ML) and Deep Learning (DL) models: Random Forest (RF), Linear Regression (LR), Long Short-Term Memory (LSTM), and ensemble bagging (RF). The objective is to utilize these predictions to enhance the efficiency of traffic light controllers in the context of traffic flow prediction at intersections. Experimental results demonstrate that all models exhibit a strong predictive capacity for estimating vehicular flow, highlighting their potential utility in smart traffic systems.
II. THE PROPOSED MODEL
This study has developed a model for monitoring traffic flow, the primary objective of which is to predict traffic movement. To achieve this objective, the model operates in three distinct stages. Firstly, data collection is accomplished using cameras or sensors. Secondly, ML and DL technologies are applied. Thirdly, the outcomes are evaluated using MAE, RMSE, and the coefficient of determination (R-squared). The workflow of the suggested approach is illustrated in Figure 1.
A. Data Collection
The dataset used for traffic prediction was obtained from various traffic sensors provided by the Huawei Munich Research Center. The dataset plays a crucial role in predicting traffic patterns and making necessary adjustments to stop-light control settings, including cycle length, offset, and split timings. The dataset consists of recorded data from six intersections located within an urban area, collected over a period of 56 days (Table I). The data are presented as a flow time series, which indicates the number of vehicles passing through each intersection every 5 minutes, spanning 24 hours. This results in 12 readings per hour, 288 readings per day, and a total of 16,128 readings over the course of the 56 days. For this study, 4 out of the 6 intersections were selected to replicate a 4-lane intersection scenario [15].
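To make the dataset's shape concrete, the following is a minimal loading sketch; the file name and column names are assumptions, since the paper does not specify a file format — only the 5-minute cadence and the 56-day span come from the text.

```python
import pandas as pd

# Hypothetical file/column names, for illustration only.
flows = pd.read_csv("intersection_flows.csv",
                    parse_dates=["timestamp"], index_col="timestamp")

# 12 readings/hour * 24 hours * 56 days = 16,128 readings per intersection
assert len(flows) == 12 * 24 * 56

hourly = flows["vehicle_count"].resample("1h").sum()  # optional aggregation
```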
B. Data Preparation
Data cleaning is a critical step in the preprocessing phase, where incorrect, incomplete, duplicate, or erroneous data within a dataset are rectified. Fortunately, the data collected for this study do not contain any missing values. The dataset has been divided into two parts: 70% for training the model and the remaining portion for testing. To ensure consistency and optimal performance during training, the data were scaled using the MinMaxScaler from the scikit-learn library. This scaler transforms the data so that they range between zero and one [16].
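A minimal sketch of this preparation step follows. The paper does not state whether the 70/30 split was chronological; a chronological split is shown here since it is the usual choice for time series, and the scaler is fitted on the training portion only to avoid leakage. The variable `flows` is assumed from the loading sketch above.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

series = np.asarray(flows["vehicle_count"], dtype=float)  # assumed 1-D flow counts

split = int(0.7 * len(series))                 # 70% train / 30% test
train, test = series[:split], series[split:]

scaler = MinMaxScaler()                        # maps values into [0, 1]
train_scaled = scaler.fit_transform(train.reshape(-1, 1))
test_scaled = scaler.transform(test.reshape(-1, 1))
```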
C. Proposed Techniques
In this study, four regression models from the scikit-learn module in the Python programming language are employed. The scikit-learn module is a comprehensive Python library that offers a wide range of state-of-the-art machine learning algorithms designed to tackle various supervised and unsupervised learning challenges [16]. The authors applied four ML/DL techniques to the dataset: RF, LSTM, LR, and an ensemble method (bagging). The following section provides an overview of the traditional ML and ensemble methods utilized in the experiment.
III. OVERVIEW OF TRADITIONAL MACHINE LEARNING AND ENSEMBLE METHODS
A. Random Forest
RF is a learning method that combines multiple tree predictors. Each tree in the forest is constructed based on the values of a random vector, sampled independently from the same distribution for all trees. Tree-based models form the core components of the random forest algorithm. A tree-based model involves iteratively dividing a given dataset into two distinct groups, guided by a specific criterion, until a predetermined stopping condition is met. The terminal nodes of decision trees are commonly known as leaf nodes or leaves [17].
B. Long Short-Term Memory (LSTM)
LSTM networks have found extensive applications in various domains, including image processing, speech recognition, manufacturing, autonomous systems, communication, and energy consumption, for dynamic system modelling purposes. LSTM has gained significant attention in recent years due to its effectiveness in modeling and predicting the dynamics of nonlinear time-variant systems. It incorporates the characteristics of short-term and long-term memory, the ability to make predictions several steps ahead, and the propagation of errors. Sequence-to-sequence networks with partial conditioning have been shown to outperform other techniques such as bidirectional or associative networks, making them well-suited for achieving the specified objectives [18].
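As a hedged illustration only (the paper does not report the LSTM architecture, framework, or window length), a minimal one-step flow forecaster could look like this:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 12  # assumed lookback: one hour of 5-minute readings

model = Sequential([
    LSTM(64, input_shape=(WINDOW, 1)),  # 64 units is an illustrative choice
    Dense(1),                           # predicts the next flow value
])
model.compile(optimizer="adam", loss="mae")
# X: sliding windows of shape (samples, WINDOW, 1) built from train_scaled
# y: the value immediately following each window
```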
C. Linear Regression
LR is a widely used and straightforward ML algorithm. It is a mathematical methodology employed for predictive analysis, enabling the prediction of continuous or numerical variables, and it is used to assess and quantify the association between the variables under consideration [19].
D. Ensemble Method (Bagging)
Bagging, short for bootstrap aggregating, is a technique that involves creating multiple iterations of a predictor and combining them to form an aggregated predictor. In the aggregation process, the mean is calculated across the iterations when predicting a numerical outcome, while a majority vote is used when predicting a class. To generate multiple versions, bootstrap copies of the original learning set are created, and these replicates are then used as new learning sets [20].
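A sketch of the bagging setup with RF as the base model, matching the paper's "Bagging (RF)" configuration, follows; all hyperparameter values are illustrative, as the paper does not report them (older scikit-learn versions use `base_estimator=` in place of `estimator=`).

```python
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor

model = BaggingRegressor(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    n_estimators=10,   # number of bootstrap replicates to aggregate
    random_state=0,
)
model.fit(X_train, y_train)   # windows/targets assumed from the preparation step
y_pred = model.predict(X_test)
```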
IV. EVALUATION MEASURES
In model evaluation, the coefficient of determination (R-squared), RMSE, and MAE are standard metrics [21].
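These three metrics can be computed directly with scikit-learn; `y_test` and `y_pred` are assumed to come from the fitting sketch above.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

mae = mean_absolute_error(y_test, y_pred)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))  # root mean squared error
r2 = r2_score(y_test, y_pred)                       # coefficient of determination
```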
V. EXPERIMENTAL RESULTS AND DISCUSSION
Table II presents the results of the model using various ML and DL algorithms. It can be observed that RF achieved an MAE of 13.76, while LSTM and LR achieved 14.74 and 17.80, respectively. When Bagging (RF) was applied, the minimum MAE obtained was 13.69. In terms of RMSE, the models achieved values of 22.39, 23.50, and 27.04, while the Bagging model achieved a lower RMSE of 22.21. In terms of R², the experimental results for the models were 0.9341, 0.9275, and 0.9040, respectively. The best R² value was obtained by the Bagging model, which achieved a value of 0.9352. The results show that the RF model and the Bagging model (using RF as the base model) outperformed the LSTM and LR models in terms of both MAE and RMSE. Additionally, the Bagging model showed the highest R² value, suggesting a better fit to the data. Overall, these findings demonstrate the effectiveness of the RF algorithm and the potential benefits of using ensemble methods like Bagging for traffic flow prediction. Figure 2 illustrates the numerical values of the MAE measurements of the considered models. It can be observed that the LSTM model has a slightly higher MAE (14.74) compared to the RF (13.76) and Bagging (RF) (13.69) models. This suggests that the LSTM model may not perform optimally in this particular scenario. On the other hand, the LR model has the highest MAE score (17.80). This indicates that it may not excel at accurately predicting the target variable. These results suggest that the RF and Bagging models (using RF as base) perform better than the LSTM and LR models in terms of MAE. It is important to consider these findings when selecting the most suitable model for traffic prediction in this context. The figures presented in Figure 3 illustrate the RMSE values of the considered models. It can be observed that the Bagging (RF) model has the lowest RMSE score (22.21), indicating that, on average, its predicted values deviate the least from the actual values. This suggests that the model exhibits strong predictive accuracy. The RF model also performs well, although it has a slightly higher RMSE (22.39) compared to the Bagging model. On the other hand, the LSTM model shows a larger RMSE (23.50), indicating potentially inferior performance in terms of predictive accuracy. The LR model has the largest RMSE value (27.04), suggesting a potentially lower level of accuracy in predicting the target variable. These findings again suggest that both the Bagging (RF) and RF models perform well in terms of RMSE, indicating their ability to provide accurate predictions. However, the LSTM and LR models may have limitations in accurately predicting the target variable based on their higher RMSE values.
Figure 4 shows the R² values of the considered models. R² ranges from 0 to 1, with a value of 1 indicating a perfect fit. Among the models presented, it is evident that the Bagging (RF) model shows the highest R² value (0.9352), indicating its superior ability to fit the data accurately. The RF (0.9341) and LSTM (0.9275) models also demonstrate high R² values, suggesting their effectiveness in explaining a significant proportion of the observed variability in the dependent variable; while LR performs satisfactorily (0.9040), its R² value is slightly lower compared to the alternative models. Overall, all of the models exhibit strong performance in elucidating the variability in the dependent variable; however, Bagging (RF) emerges as the most prominent performer among them. Compared to [13], this research has improved the results by more than 0.5%. In [13], the researchers used the same dataset and applied 5 ML methods; gradient boosting was the most successful, with 93.05%. The proposed model reaches 93.41% by utilizing an RF model, and Bagging (RF) has the highest result at 93.52%. In the future, researchers in this field could use a combination of ML and DL models to improve model performance [22].
VI. CONCLUSION
In this article, we presented a new model to enhance intelligent traffic systems. The main purpose of this method is to predict traffic flow (vehicle movement) at intersections by applying an ensemble learning technique. The proposed framework consisted of three primary phases: data collection through cameras and sensors, implementation of ML and DL techniques, and evaluation of the outcomes using MAE, RMSE, and R².
Fig. 1. The workflow of the proposed model. | 2024-04-05T15:42:11.014Z | 2024-04-02T00:00:00.000 | {
"year": 2024,
"sha1": "d717c7dcd995b19fb310e43abbf2b22330a080b8",
"oa_license": "CCBY",
"oa_url": "https://www.etasr.com/index.php/ETASR/article/download/6767/3482",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e88748fad690646a33ae2ec4d0f171d03afc5f12",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
} |
237611726 | pes2o/s2orc | v3-fos-license | Human adenoviruses in paediatric patients with respiratory tract infections in Beijing, China
Background Human adenovirus (HAdV) is a major pathogen of paediatric respiratory tract infections (RTIs). Mutation or recombination of HAdV genes may cause changes in its pathogenicity and transmission. We described the epidemiology and genotypic diversity of HAdV in hospitalized children with RTIs in Beijing, China. Methods Nasopharyngeal aspirates were collected from hospitalized children with RTIs from April 2018 to March 2019. HAdVs were detected by a quantitative real-time PCR, and the hexon gene was used for phylogenetic analysis. Results Among 1572 samples, 90 (5.72%) were HAdV-positive. The HAdV detection rate was highest in November and July. Among HAdV-positive children, 61.11% (55/90) were co-infected with other respiratory viruses, the most common of which were human respiratory syncytial virus and human rhinovirus. The main diagnosis was bronchopneumonia, and most patients had cough and fever. Children with a high viral load were more likely to have a high fever (P = 0.041) and elevated WBC count (P = 0.000). Of 55 HAdV-positive specimens, HAdV-B (63.64%), HAdV-C (27.27%), and HAdV-E (9.09%) were the main epidemic species. Phylogenetic analysis indicated that the hexon sequences of three samples were on the same branch as the recombinant HAdV strain (CBJ113), which has been circulating in Beijing since 2016. Conclusion HAdV-B3 and HAdV-B7 are the main epidemic strains in Beijing, and the recombinant HAdV-C strain CBJ113 has formed an epidemic trend. Supplementary Information The online version contains supplementary material available at 10.1186/s12985-021-01661-6.
Background
Both well-known and emerging viruses affect human health by causing various diseases, and sometimes they even have a devastating impact on the entire society, such as the newly emerged human coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Adenoviruses (AdVs), members of the family Adenoviridae, are non-enveloped double-stranded DNA viruses, found widely in the biosphere. Since they were first discovered by Rowe et al. in 1953 [1], AdVs have been the focus of intense research. AdVs can infect various tissues and organs, sometimes with serious consequences, especially in children. The infectivity and cell entry mechanism of AdVs make them suitable for drug delivery, vaccination and gene therapy for many diseases including cancer [2]. Research on adenoviruses has greatly contributed to the fields of life sciences and medicine over the past decades.
HAdV-G is a novel species that has been typed and named by using whole genome sequencing and phylogenetics rather than by applying traditional serology. At present, the specific and frequently mutated hexon gene of HAdV has been widely used in the molecular diagnosis and genotyping of this virus. Owing to the frequent recombination of HAdVs, as exemplified by HAdV-85/89 in Japan, HAdV-D56 in France, and the HAdV-55 and CBJ113 strains in China [15,[24][25][26][27][28], whole-genome sequencing remains the gold standard for proper classification of HAdVs [28]. Among them, HAdV-55, reconstituted from HAdV-B11 and HAdV-B14, has repeatedly caused outbreaks in densely populated areas such as schools and the military in China [15,16]. The purpose of this study was to evaluate the epidemiological, clinical, and molecular characteristics of HAdV infections occurring among hospitalized children with respiratory tract infections (RTIs) in Beijing Friendship Hospital in China from April 2018-March 2019. In addition, this work explored the relationship between HAdV infection and RTI symptoms to provide information for the control and prevention of HAdV infection in China.
Patient specimens
The 1572 nasopharyngeal aspirate (NPA) samples used in this study were collected from hospitalized children (aged < 14 years) with RTIs at Beijing Friendship Hospital during the period from April 2018-March 2019. Informed consent was received from the parents or guardians of the children enrolled in the study. An RTI was defined as an illness that presented during the previous week with at least two of the following clinical manifestations: fever, cough, nasal obstruction, expectoration, sneeze and dyspnoea. Patients who were diagnosed with pneumonia by chest radiography were also included in the study, even if they did not show the clinical features described above [29]. The collected samples were stored in virus preservation solution (1640 medium with 2.5 mg/mL Bovine Serum Albumin, 25 µg/mL amphotericin B and 1% Penicillin-Streptomycin Solution), transported to the laboratory on ice, and stored at − 80 °C until further processing. The clinical data were collected and sorted out from the hospital database.
Detection of HAdVs and other common respiratory viruses
For molecular detection, total viral nucleic acid was extracted from 200 µL of each clinical NPA specimen by using the QIAamp MinElute Kit (Qiagen, Germany) in accordance with the manufacturer's instructions. HAdV detection was performed by using a quantitative real-time polymerase chain reaction (qPCR) assay targeting the highly conserved 132-bp region of the HAdV hexon gene, as previously described [30]. TaqMan Universal PCR Master Mix (Applied Biosystems, USA) was used to amplify HAdV hexon DNA with specific primers (Forward: 5′-GCC CCA GTG GTC TTA CAT GCA CAT C-3′; Reverse: 5′-GCC ACG GTG GGG TTT CTA AACTT-3′) and probe (5′-FAM-TGC ACC AGA CCC GGG CTC AGG TAC TCCGA-3′-TAMRA); qPCR was performed using the Mx3005P qPCR System (Agilent Stratagene, USA). Samples with a cycle threshold (CT) of < 38 were retested with qPCR to confirm their classification as positive samples. The positive samples were quantified by applying a standard curve, as described previously [31].
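For illustration of the quantification step, a minimal sketch of standard-curve conversion from Ct to copy number follows; the slope and intercept are placeholders (a slope of about −3.32 corresponds to 100% amplification efficiency), not values from this study.

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Invert a linear standard curve: Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

copies_from_ct(25.0)  # estimated copies per reaction for a sample with Ct = 25
```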
HAdV genotyping
Nested PCR targeting the hypervariable region of the HAdV hexon gene was employed for genotyping as previously described [31]. The outer primers used were forward 5′-GCC ACC TTC TTC CCC ATG GC-3′ and reverse 5′-GTA GCG TTG CCG GCC GAG AA-3′, and the internal primers were forward 5′-TTC CCC ATG GCC CAC AAC AC-3′ and reverse 5′-GCC TCG ATG ACG CCG CGG TG-3′. Specimens that failed to be amplified were classified as untyped. Nested-PCR products were confirmed by sequencing, and a phylogenetic tree was constructed by applying the Maximum Likelihood (ML) method with MEGA 7.0 using 1000 bootstrap replicates. Reference HAdV strains (Additional file 1: Table S2) were selected based on the HAdV reference strain recommended by International Committee on Taxonomy of Viruses (ICTV) and also included CBJ113 strain (KR699642). Homology between sequences on the same evolutionary branch with CBJ113 was analyzed using BioEdit.
Statistical analysis
Data analysis was performed using SAS 9.4 software, and the significance of the difference in rates among categorical data was tested by chi-squared and Fisher's exact tests. Wilcoxon's test and independent-samples t-test were used to analyze continuous variables. Two-sided P-values < 0.05 were considered indicative of statistical significance.
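Although the analysis was performed in SAS, the named tests have direct equivalents in Python's SciPy; the sketch below uses placeholder data, not values from the study.

```python
from scipy import stats

table = [[30, 60], [25, 70]]                   # placeholder 2x2 contingency table
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
_, p_fisher = stats.fisher_exact(table)

group_a = [2.1, 2.9, 3.4, 2.2, 3.8]            # placeholder continuous samples
group_b = [1.4, 2.0, 1.8, 2.6, 1.9]
_, p_t = stats.ttest_ind(group_a, group_b)     # independent-samples t-test
_, p_w = stats.ranksums(group_a, group_b)      # Wilcoxon rank-sum test
```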
Detection of viral co-infection in HAdV-positive specimens
Among the 90 HAdV-positive specimens, single-infection samples accounted for 38.89% (35/90); among these patients (23 males and 12 females, a male-to-female ratio of 1.92:1), the difference between the numbers of male and female patients was not statistically significant (P = 0.878). Of the 55 HAdV-positive children who were co-infected with other respiratory viruses, the most common co-infecting viruses were HRSV and HRV (Table 2). The viral load of the 90 HAdV-positive samples ranged from 17 to 12.8 × 10⁶ copies/mL NPA. The log numbers of HAdV genome copies were 2.85 ± 1.45 and 2.63 ± 1.32 in the NPAs of children infected with HAdV only and those co-infected with HAdV and another respiratory virus, respectively; however, there was no statistically significant difference in viral load between HAdV mono- and co-infections (P = 0.061).
Clinical characteristics of HAdV infections
Among the 90 HAdV-positive children, the main diagnosis was bronchopneumonia (68.89%, 62/90), followed by Mycoplasma pneumoniae pneumonia (8.89%, 8/90); only 3 cases were diagnosed with adenovirus pneumonia. The average duration of hospitalization among these patients was 5.85 days. Eighty (88.89%) of the 90 children had an abnormal chest radiograph, and 48 (53.33%) of the 90 exhibited an elevated WBC count (> 10 × 10⁹ cells/L). The main clinical features of the HAdV infections included cough (83.33%, 75/90) and fever (temperature ≥ 38 °C; 90%, 81/90). Five cases experienced convulsions as a symptom. A small number of children presented with gastrointestinal symptoms, such as vomiting (14.44%, 13/90) and diarrhoea (4.44%, 4/90). There were no significant differences in the clinical characteristics of HAdV infection between HAdV-positive patients with or without a viral co-infection (Table 3). Among children with an exclusive HAdV infection (no viral co-infection), the relationships between viral load and patient age, sex, disease duration, and body temperature were not statistically significant (Table 4), but children with a high viral load were more likely to have a high fever (P = 0.041) and an elevated WBC count (P = 0.000).
Discussion
The respiratory tract-related clinical symptoms caused by HAdV infections are similar to those caused by infection with IFA, HRSV and other respiratory pathogens. Consequently, correct diagnosis of HAdV infection is often difficult. In this study, qPCR and Sanger sequencing were used to analyse the phylogenetic sequence of the hexon gene, and the epidemiological characteristics and genotypic diversity of HAdVs in children hospitalized during the period from April 2018-March 2019 in Beijing, China were investigated. Of the 1572 collected specimens, 90 (5.73%) were positive for HAdV; this HAdV detection rate is very similar to that of the previous year in this hospital (5.64%) [31] and is also consistent with those reported from China and other countries (3.71%-35.5%) [31,[38][39][40][41][42]. The HAdV detection rate was 3.71% in Hebei Province, China. However, it was slightly higher among hospitalized children with RTIs in southern China; the HAdV detection rate in hospitalized children with RTIs during the period from 2009 to 2012 in Chongqing was 8.55%, and that in Hunan Province was 9.4%. It should be noted that different HAdV detection rates may be the result of differences in detection method, sample collection site, collection time and other factors. Thus, it is necessary to establish a unified and continuous epidemiological surveillance over a wider area. The HAdV detection rate was not significantly affected by patient sex, but it was significantly affected by patient age (P = 0.008). The main age group affected by HAdV was children aged ≤ 5 years (73.3%, 66/90); specifically, the group of patients aged 3-5 years had the highest HAdV detection rate (10.11%, 27/267), whereas the group aged 2-3 years had the lowest (3.35%, 11/328). The reason for this difference remains to be determined.
Previous studies have shown that the HAdV detection rate is positively correlated with monthly average temperature, sunshine hours, and air temperature [39]. The number of HAdV infections in southern China reaches its peak during summer. In this study, the HAdV detection rate in Beijing showed an obvious seasonal difference in distribution (P = 0.001), peaking in autumn (9.33%), which is consistent with a previous report by Duan et al. [43].
The clinical symptoms caused by HAdV infection in our patients were similar to those commonly caused by infections with other respiratory viruses, such as HRSV and IFV; their most common clinical symptoms and signs were fever and cough, and a few HAdV cases also experienced other symptoms, such as vomiting and nasal obstruction. The most common diagnosis among our HAdV-positive subjects was bronchopneumonia (68.89%, 62/90). The duration of hospital stay was generally less than 7 days, which is consistent with the results of previous studies. The co-infection of HAdVs with other respiratory viruses has been reported many times [39,40]. In this study, the co-infection rate was 61.11% (55/90), and the viruses with the highest frequency of mixed infection were HRSV and HRV. No significant difference was observed in clinical symptoms or duration of hospitalization between the mono- and co-infections. The severity of HAdV infection is affected by many factors, including the patient's age, immune status, and socioeconomic status. Although some studies have shown that HAdV-7 may cause more severe infection [44], others have found that the HAdV type has no obvious influence on the severity of respiratory tract infection in children. Additionally, HAdV-infected patients with a long-lasting fever often experience more serious disease [40]. In agreement with previous work, this study found that there was no significant association between the HAdV genotype and disease severity (Additional file 1: Table S3), and only one child, who was co-infected with HRSV, had dyspnea. In the analysis of exclusive HAdV infection, the viral load in NPAs had no significant statistical association with patient age, sex or hospital stay duration, but children with a high viral load were more likely to have a high fever and an elevated WBC count.
HAdV-55, which often causes outbreaks, is also detected at high rates in cases of adult respiratory tract infection [4,6]. The prevalent HAdVs in China are mainly genotypes HAdV-2, -3 and -7. The dominant genotypes in northern China are HAdV-3 and -7, while those in southern China were HAdV-2 and -3 [43]. In this study, hexon gene sequencing and phylogenetic analysis were performed on 55 samples. The results show that HAdV species B and C were the most common species, accounting for 63.64% (35/55) and 29.10% (16/55) of HAdV cases, respectively. HAdV-B3 was the most common genotype (43.64%, 24/55), followed by HAdV-B7 (20.00%, 11/55), HAdV-C1 (10.91%, 6/55), and HAdV-E4 (9.09%, 9/55), which is consistent with other reports. HAdVs are prone to gene mutation and recombination [7,25,45,47], and the CBJ113 strain isolated in Beijing in 2016 contained HAdV-C2, HAdV-C6, HAdV-C1, HAdV-C5 and HAdV-C57 sequences, which were recombined in several genes, including the hexon and fiber genes. Notably, three of the hexon sequences detected here were on the same branch as strain CBJ113, with which they showed maximum homology. This study demonstrates that there are at least eight different HAdV genotypes circulating in Beijing, and that the HAdV species C strain CBJ113 has been prevalent in China for a long time. The hexon gene is commonly used for typing and is common in many molecular epidemiological studies of HAdV [48,49]. However, because adenoviruses are prone to mutating and recombining, whole-genome typing is more accurate; the reliance on the hexon gene alone is a limitation of this study.
Our data allow the CDC and health officials to understand the importance of adenovirus infection more deeply, so that the government may give more attention and financial support to adenovirus research. In addition, our study can also provide clinicians with more information on adenovirus infection, so that patients can receive an accurate diagnosis and better treatment for viral infection.
Conclusion
This study described the epidemiological, clinical, and molecular characteristics of HAdV infections occurring among children with RTIs in a Chinese tertiary hospital during the period April 2018-March 2019. Our results show the latest trends of HAdV epidemic genotypes in Beijing, China. Notably, the HAdV-C strain CBJ113 has formed an epidemic trend in Beijing; it is therefore necessary to establish a nationwide epidemiological surveillance program for adenovirus infection, because the epidemic data from a single region are not necessarily representative, and the detection of HAdV should be carried out across multiple regions. | 2021-09-24T13:43:14.825Z | 2021-07-30T00:00:00.000 | {
"year": 2021,
"sha1": "70db6a93b10687c2d541a7bd4eb29bec5642a279",
"oa_license": "CCBY",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/s12985-021-01661-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "329fc716d8783f7ecf93a10aaac50efa8cf780a9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52987386 | pes2o/s2orc | v3-fos-license | ATP-binding cassette transporter A7 accelerates epithelial-to-mesenchymal transition in ovarian cancer cells by upregulating the transforming growth factor-β signaling pathway
Ovarian cancer (OC) has the highest fatality rates of all gynecological malignancies worldwide. The epithelial-to-mesenchymal transition (EMT) serves an essential role in the progression of OC. An improved understanding of the molecular mechanism underlying EMT in OC may increase the survival rate. ATP-binding cassette transporter A7 (ABCA7) is a candidate regulator of OC progression. However, the role of ABCA7 in OC is unclear. Using the PROGgeneV2 platform, the present study revealed that increased expression of ABCA7 is associated with poor outcomes in OC. The expression of ABCA7 was higher in OC tissues than in adjacent noncancerous tissues. ABCA7-knockdown decreased the migration of OC cells and the activation of mothers against decapentaplegic homolog 4 (SMAD4). Notably, downregulation of ABCA7 also increased the expression of an epithelial marker (E-cadherin) and decreased that of a mesenchymal marker (N-cadherin). In addition, the decreased expression of SMAD4 and EMT markers induced by ABCA7 depletion could be rescued by transforming growth factor β1 (TGF-β1) stimulation. Overall, these findings suggested that ABCA7 accelerates EMT in OC by upregulating the TGF-β signaling pathway.
Introduction
Ovarian cancer (OC) is a leading cause of gynecological malignancy-associated mortality worldwide (1). The vast majority of patients with OC are diagnosed at a late stage with peritoneal dissemination, resulting in a 30% survival rate (2). The epithelial-to-mesenchymal transition (EMT) is a reversible and dynamic process hypothesized to occur during invasion and metastasis of several types of carcinoma (3).
The ATP-binding cassette (ABC) transporter superfamily includes seven subfamilies (ABCA to ABCG) comprising 48 transmembrane proteins. ABC transporters undertake the transport of various inflammatory mediators and lipids directly relevant to tumor progression in OC (4). Elsnerova et al (5) reported that the expression of ABCA7 was significantly higher in OC than in control ovarian tissue, and ABCA7 was upregulated in metastatic tumor tissue compared with primary OC. Additionally, increased expression of ABCA7 was significantly associated with poor outcomes in patients with OC (5,6). ABCA7 expression was also associated with poor disease-free survival and an elevated risk of colorectal carcinoma progression (6). Therefore, ABCA7 may be involved in the regulation of OC progression.
Transforming growth factor-β (TGF-β) is a key regulator of EMT; extracellular TGF-β signal is transduced through the activation of TGF-β receptors and subsequent phosphorylation of receptor-activated mothers against decapentaplegic homolog (SMAD), which form a heterotrimeric complex with SMAD4. Therefore, SMAD4 is a central transcription factor in TGF-β signaling (7). The TGF-β signaling pathway is reportedly involved in EMT in OC (8,9).
Patients and methods
Bioinformatics analysis. The ProgeneV2 prognostic database (http://www.abren.net/PrognoScan/) was used to collect information for analysis of the effect of ABCA7 on survival in patients with OC (10,11). Kaplan-Meier curves were applied to analyze the survival rate of patients with OC.
Patients. This study was approved by the Medical Ethics Committee of the Jining No. 1 People's Hospital (Shandong, China). Written informed consent was obtained from all participants. A total of 11 females with an average age of 45.7 years (range, 38-58 years) were enrolled in this study from May 2013 to June 2017. Peritoneal cytology was positive in six participants. Cancer tissues and corresponding adjacent ovarian non-cancerous tissues were obtained during oophorosalpingectomy or surgical debulking. Cancerous and adjacent ovarian non-cancerous tissues were confirmed histologically by hematoxylin and eosin staining as described in previous studies (12,13).
Immunohistochemistry (IHC). IHC staining was performed by pathologists who were blind to the original hypothesis. IHC staining was performed manually using an IHC kit (cat. no. 25229-1; Wuhan Sanying Biotechnology Co., Ltd., Wuhan, China) according to the manufacturer's protocol. Specimens were fixed in 10% formalin for 48 h at room temperature. Paraffin-embedded tumor specimens were sliced into serial sections of 5-µm thickness. ABCA7 expression was detected by IHC in paraffin-embedded specimens. All slides were dewaxed in xylene and dehydrated in an alcohol gradient (50, 75, 90 and 100%) (included in the IHC kit), and then endogenous peroxidase activity was quenched with 3% hydrogen peroxide for 10 min at 37˚C. Antigen retrieval was achieved by heating slides covered with citrate buffer (cat. no. 25229-1; Wuhan Sanying Biotechnology, Wuhan, China; pH 6.0) at 95˚C for 10 min. Following this, 10% goat serum albumin (cat. no. 253441; Wuhan Sanying Biotechnology, Wuhan, China) was used to block nonspecific binding by incubating sections for 2 h at room temperature. Subsequently, the slides were incubated overnight with rabbit anti-ABCA7 monoclonal antibody (1:50; cat. no. 25339-1-AP; Wuhan Sanying Biotechnology) at 4˚C. Slides were then incubated with a secondary antibody (1:200; cat. no. BA1039; Boster Biological Technology, Pleasanton, CA, USA) for 30 min at 37˚C. For hematoxylin and eosin staining, tissue sections were deparaffinized and rehydrated with 50% dimethylbenzene (cat. no. 253441; Wuhan Sanying Biotechnology) as previously stated, stained with 0.1% hematoxylin for 30 sec at 37˚C, rinsed in water for 1 min, stained with 0.1% eosin for 10-30 sec at 37˚C, and dehydrated with 75% alcohol (cat. no. 197543; Wuhan Sanying Biotechnology) at 37˚C. All sections were observed under a light microscope (magnification, x100 and x200; Olympus Corporation, Tokyo, Japan). These expression levels were confirmed by semi-quantitative analyses using ImageJ software 1.46r (National Institutes of Health, Bethesda, MD, USA).
Cell culture and stimulation. The immortalized human ovarian surface epithelial HOSE 6-3 cell line and the ovarian cancer SKOV-3, Caov-3, A2780, OVCA433 and OVC429 cell lines were purchased commercially from the American Type Culture Collection (Manassas, VA, USA). The cell lines were cultured in RPMI-1640 medium (Sigma-Aldrich; Merck KGaA, Darmstadt, Germany) supplemented with 10% fetal bovine serum (FBS; Gibco; Thermo Fisher Scientific, Inc., Waltham, MA, USA) at 37˚C and 5% CO2. TGF-β1 (Abcam, Cambridge, MA, USA) was dissolved in PBS to make a 10 mg/ml stock solution and then added to the medium to a final concentration of 10 ng/ml.
Lentiviral infection.
A lentiviral short hairpin RNA (shRNA) construct targeting ABCA7 (cat. no. SHCLNV-NM_019112; shRNA Product kit) was purchased commercially from Sigma-Aldrich; Merck KGaA. Two shRNA sequences targeting ABCA7 were designed (Table I). The oligonucleotides were phosphorylated, annealed and cloned into the pLKO.1 vector (Sigma-Aldrich; Merck KGaA). Lentiviral infection was performed according to the manufacturer's protocols. The concentration of lentiviral particles was 2x10⁸/ml. Briefly, the cells were seeded at 2x10⁵ cells/well in a 6-well plate prior to lentiviral particle infection and incubated with 2 ml RPMI-1640 medium supplemented with 10% FBS for 24 h. Subsequently, cells were infected with lentiviral particles (2x10⁸/ml), and after 12 h, the virus-containing medium of infected cells was replaced with RPMI-1640 medium supplemented with 10% FBS, and infected cells were incubated with 2 µg/ml puromycin for 48 h at 37˚C and 5% CO2. Empty lentiviral vectors were used as a control. Following screening for 48 h, the infected cells were used in subsequent experiments.
Wound healing assay. SKOV-3 cells were seeded into 6-well plates and cultured to 100% confluence. A pipette tip was used to scratch a straight line in the cell layer to create a wound. Then, the cells were washed with PBS and treated with RPMI-1640 medium without FBS. Wound images were observed under a light microscope (magnification, x200), and the wound gap widths were measured using ImageJ software 1.46r.
Transwell migration assay. Cell culture inserts (24-well, 8-µm pore size; Sigma-Aldrich; Merck KGaA) were seeded with 1x10⁵ cells in 200 µl RPMI-1640 medium without FBS in the upper chamber. RPMI-1640 medium with 5% FBS (500 µl) was added to the lower chamber and served as a chemotactic agent. Following incubation for 24 h, non-migrating cells were removed from the upper side of the membrane, and the cells on the lower side of the membrane were fixed with 4% paraformaldehyde for 15 min at 37˚C. The cells were stained with crystal violet for 15 min at 37˚C, and cell numbers were counted under a light microscope (magnification, x200). Each individual experiment was performed with triplicate inserts, and five microscopic fields were counted per insert.
Reverse transcription-quantitative polymerase chain reaction (RT-qPCR). Total RNA, isolated from all cell lines using TRIzol® reagent (Takara Biotechnology Co., Ltd., Dalian, China), was reverse-transcribed into cDNA in a reaction volume of 20 µl using the Double-Strand cDNA Synthesis kit (Takara Biotechnology Co., Ltd.) at 37˚C for 15 min. The generated cDNA was used as the template for the RT-qPCR reaction. All gene transcripts were quantified by RT-qPCR using the Power SYBR Green PCR Master mix on the ABI StepOnePlus system. The levels of mRNAs were determined using a StepOnePlus Real-Time PCR system (Applied Biosystems; Thermo Fisher Scientific, Inc.) and the SYBR Premix Ex Taq (Takara Biotechnology Co., Ltd.) under the following conditions: 95˚C for 30 sec, followed by 40 cycles of 95˚C for 5 sec and 60˚C for 30 sec. The primer sequences were as follows: ABCA7 forward, 5'-GTG CTA TGT GGA CGA CGT GTT-3' and reverse, 5'-TGT CAC GGA GTA GAT CCA GGC-3'; and β-actin (internal control) forward, 5'-GAA GGT GAA GGT CGG AGT-3' and reverse, 5'-GAA GAT GGT GAT GGG ATT T-3'. The 2^−ΔΔCq method was used to calculate relative gene expression (14).
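For illustration, the 2^−ΔΔCq calculation (the Livak method cited above) can be written as a small function; the Cq values below are placeholders, with β-actin as the internal control as in the text.

```python
def relative_expression(cq_target, cq_ref, cq_target_ctrl, cq_ref_ctrl):
    """2^-ddCq fold change of a target gene vs an untreated control sample."""
    dcq_sample = cq_target - cq_ref               # dCq, treated sample
    dcq_control = cq_target_ctrl - cq_ref_ctrl    # dCq, control sample
    return 2 ** -(dcq_sample - dcq_control)       # 2^-ddCq

relative_expression(24.1, 18.0, 22.5, 18.1)       # placeholder Cq values
```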
Statistical analysis. Statistical analysis was performed using SPSS 19.0 software (IBM Corp., Armonk, NY, USA). All experiments were performed in a minimum of triplicate, and the data are presented as the mean ± standard deviation. Statistical significance was determined using one-way analysis of variance followed by Bonferroni's post hoc test when comparing more than two groups, and a two-tailed Student's t test when comparing two groups. P<0.05 was considered to indicate a statistically significant difference.
Results
High ABCA7 mRNA levels in OC tissues are associated with poor overall survival. Increased expression of ABCA7 mRNA in OC tissue was associated with a poor 5-year overall survival (high ABCA7 expression, n=40; low ABCA7 expression, n=39; hazard ratio =11.58, P=0.019; Fig. 1).
ABCA7 expression is increased in OC tissues compared with adjacent noncancerous tissues. Immunohistochemistry revealed that ABCA7 expression levels were significantly higher in OC tissues than in adjacent non-cancerous tissues (Fig. 2A and B; P<0.05).
ABCA7 mRNA levels in OC and adjacent non-cancerous tissues were determined by PCR. Adjacent non-cancerous tissue showed significantly lower ABCA7 mRNA levels than OC tissue (Fig. 2C; P<0.05). Additionally, ABCA7 mRNA levels were higher in OC cell lines (SKOV-3, Caov-3, A2780, OVCA433 and OVC429) than in the normal HOSE 6-3 cell line (Fig. 2D; P<0.05). The ABCA7 mRNA level in SKOV-3 cells was moderate and representative; therefore, to avoid ceiling and floor effects (8,9), SKOV-3 cells were selected for subsequent experiments.
Downregulation of ABCA7 in SKOV-3 cells using shRNAs.
shRNAs were used to downregulate ABCA7 expression in SKOV-3 cells; the effect on protein expression was confirmed by western blotting (Fig. 3).
ABCA7-knockdown decreases the migration of SKOV-3 cells and alters the expression of E-cadherin and N-cadherin.
EMT serves an important role in cancer migration and metastasis. During EMT, epithelial cells lose their cell-adhesive properties, repress the expression of epithelial markers and increase the expression of mesenchymal markers. Therefore, the present study examined the expression levels of an epithelial marker (E-cadherin) and a mesenchymal marker (N-cadherin). Western blot analysis revealed that the expression levels of E-cadherin and N-cadherin were increased and decreased, respectively, by ABCA7 depletion (Fig. 4A).
Furthermore, a Transwell migration assay revealed that migration of OC cells was markedly decreased by suppression of ABCA7 (Fig. 4B).
A wound migration assay was performed to evaluate the effect of ABCA7 on the migration of OC cells. ABCA7 depletion markedly reduced the wound-closure capacity of OC cells at 24 h (Fig. 4C).
ABCA7 depletion inhibits activation of the TGF-β signaling pathway and TGF-β1 increases the expression of EMT markers.
To investigate the underlying molecular mechanism, the levels of proteins of the TGF-β signaling pathway, a key regulator of EMT, were evaluated. As previously mentioned, ABCA7-knockdown significantly decreased the level of SMAD4, a TGF-β-activated transcription factor (Fig. 4A).
Discussion
Ovarian cancer (OC) is a leading cause of gynecological malignancy-associated mortality worldwide (1). Approximately 20% of OC cases are preventable through population-based testing for genes associated with susceptibility to OC (16). In the present study, it was revealed that higher expression of ABCA7 was associated with a lower survival rate in patients with OC. In addition, ABCA7 levels were revealed to be higher in OC tissues than in adjacent non-cancerous tissues. ABCA7-knockdown decreased the migration of OC cells. These results are consistent with those of previous reports (5,6).
EMT serves an important role in the progression of OC. At the molecular level, EMT underlies the dynamic cellular heterogeneity during metastasis (14). E-cadherin is a cell-to-cell adhesion molecule expressed predominantly by epithelial cells. E-cadherin is an important suppressor of metastasis. Downregulation of E-cadherin has several important consequences that are of direct relevance to EMT, and initiates a series of signaling events and a major reorganization of the cytoskeleton (17). Therefore, loss of E-cadherin is a marker of EMT (18). In the present study, it was demonstrated that ABCA7 depletion increased the expression of E-cadherin. Furthermore, decreased expression of E-cadherin during EMT is accompanied by increased expression of the mesenchymal marker N-cadherin, which renders the cell more motile and invasive (11). Increased E-cadherin and decreased N-cadherin were identified following ABCA7 depletion in the present study, suggesting that ABCA7 is associated with EMT in OC cells.
The TGF-β signaling pathway promotes metastasis of OC cells as a moderator of EMT (12). In the present study, ABCA7-knockdown decreased the expression of SMAD4, a transcription factor important in TGF-β signaling (12). These data suggested that ABCA7 activates the TGF-β signaling pathway in OC cells. The reduction in SMAD4 expression and the changes in EMT markers induced by ABCA7 depletion could be rescued by TGF-β1 stimulation (5 ng/ml for 48 h). Therefore, the data from the present study suggested that ABCA7 accelerates EMT in OC cells via the TGF-β signaling pathway. Similar results have been previously reported; Chen et al (15) revealed that SIRT1 downregulated EMT in metastasis of oral squamous cell carcinoma by suppressing the TGF-β signaling pathway, and Shirakihara et al (19) reported differential regulation of epithelial and mesenchymal markers by δEF1 proteins in EMT induced by TGF-β.

Figure 4. The effects of ATP-binding cassette transporter A7-knockdown on the migration of SKOV-3 cells. The data are presented as the mean ± standard deviation. (A) Images are representative of three independent experiments. The protein levels of E-cadherin, N-cadherin and SMAD4 were assessed by western blot analysis. All data are expressed as the mean ± standard deviation (*P<0.05). (B) A Transwell assay was performed to assess migration. Cell numbers were counted and five microscopic fields were counted per insert (magnification, x200). Relative cell numbers were analyzed. All data are expressed as the mean ± standard deviation. *P<0.05 compared with the sh-ctrl group. (C) A wound-healing assay was performed to assess migration (magnification, x200). Images are representative of three independent experiments. Relative widths were analyzed. All data are expressed as the mean ± standard deviation (*P<0.05). sh, short hairpin RNA; ctrl, control; SMAD4, mothers against decapentaplegic homolog 4.
The in vitro findings of the present study require verification in other OC cell lines and in vivo. Furthermore, the involvement of other signaling pathways is unclear; therefore, further studies are warranted.
Taken together, the data from the present study suggested that ABCA7 accelerates EMT in OC by activating the TGF-β signaling pathway. ABCA7 may be a promising therapeutic target for OC metastasis to reduce mortality. | 2018-11-01T20:39:02.959Z | 2018-08-24T00:00:00.000 | {
"year": 2018,
"sha1": "eaf7e568848932fac17e25b59043f04f0eb949a8",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ol.2018.9366/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eaf7e568848932fac17e25b59043f04f0eb949a8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16296191 | pes2o/s2orc | v3-fos-license | Ethyl 2-hydroxy-2-phenyl-2-(thiazol-2-yl)acetate
This short note describes the synthesis of the title compound through spontaneous aerobic oxidation of ethyl 2-phenyl-2-(thiazol-2-yl)acetate. Due to the prevalence of such functional motifs in biologically active substances, we believe the oxidation encountered highlights an important degradation pathway worthy of note.
Introduction
The synthesis of thiazole-containing compounds has been the focus of much research due to their importance in both pharmaceuticals [1] and agrochemicals [2].
Recently, we have reported on the synthesis of 2-substituted thiazoles through a modified Gewald reaction [3]. Serendipitously, the natural air oxidation of one of the 2-substituted thiazoles led to an interesting hydroxylated thiazole bearing a glycolate moiety. This previously unreported compound is important because of its implications regarding metabolic and environmental degradation pathways for related compounds.
The air oxidation of 1 slowly gives rise to the corresponding glycolate 3 when the material is simply left standing open to the atmosphere; the parent compound is stable if preserved under an inert atmosphere. The resultant glycolate 3 can be easily isolated through simple column chromatography purification.
Some related oxygenations have been previously described; however, these processes have employed either a palladium catalyst [4] or strong bases such as Cs2CO3 [5] in the presence of oxygen.
We hypothesise that the two oxidation derivatives (2 and 3) are generated through initial enolisation and reactive trapping of oxygen. Even though no base is present for the deprotonation, the natural enolisation is enough for the reactive trapping of oxygen, albeit rather slowly. The resultant peroxide intermediate 1b could then potentially cyclise onto the adjacent ester moiety forming a dioxetane which, after extrusion of CO2, would furnish product 2 (Scheme 1) [7]. Alternatively, the peroxide intermediate 1c could undergo homolytic cleavage to form the oxygen-centred radical that abstracts a hydrogen atom to form the glycolate 3 [8]. It is also possible that compound 2 is the result of ester hydrolysis (water generated in the formation of 3), followed by decarboxylation to yield the simple 2-benzylthiazole. Such compounds are known to oxidise to their corresponding ketones [9] or undergo a 1,2-rearrangement to form an α-hydroperoxy α-alkoxy ketone which would form 2 after spontaneous decomposition [10].
An alternative mechanism, not involving the initial enolisation, would be one including an initial homolytic cleavage of the C-H bond, forming a carbon-centred radical 1e which can react with oxygen to form the peroxo-radical 1f (Scheme 2). The peroxo-radical can either abstract a hydrogen atom to form 1c as part of the formation of 3, or form the dioxetane intermediate to yield 2. Similar to mechanism A, there is nothing that induces the initial homolytic cleavage to initiate the reaction; however, we are convinced that, considering the long reaction time needed for the transformation, small amounts of 1a or 1e are naturally formed due to the acidic C-H bond present in 1.
"year": 2015,
"sha1": "5ccb9b64a3dbb2531b87e45e262aa569dee72184",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-8599/2015/2/M857/pdf?version=1430916128",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "53284b64aac18afe0cde4be968d4166a7760c28f",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
FLUID INCLUSIONS AND RARE EARTH ELEMENTS (REE) ANALYSIS IN CALCITE VEINS: TECTONIC-DIAGENESIS INTERACTION IN THE ROSABLANCA FORMATION, MESA DE LOS SANTOS SECTOR, EASTERN CORDILLERA, COLOMBIA
a Servicio Geológico Colombiano, Diagonal 53, Bogotá D.C., Colombia. b Ecopetrol Instituto Colombiano del Petróleo, km 7 vía Bucaramanga-Piedecuesta, C.P. 681011, Piedecuesta, Colombia. c Escuela de Geología, Universidad Industrial de Santander, carrera 27 calle 9, C.P. 680002, Bucaramanga, Colombia. *email: jconde@sgc.gov.co

ABSTRACT

Studies conducted by means of petrography, cathodoluminescence, SEM, fluid inclusions and REE geochemistry in core samples from the Rosablanca Formation in the Mesa de Los Santos sector identified two types of material: the host rock, classified as packstones and grainstones, and veins that texturally expose three types of filling (blocky texture, blocky elongate texture, fibrous texture). Diagenesis is characterized by dissolution, carbonate cement precipitation, compaction, fracturing and fluid circulation through fractures during at least three episodes; these diagenetic processes were contemporaneous with the distensive and compressive tectonic regimes regionally dominant during the Cretaceous, Paleogene and Neogene in the study area. The fluids that generated the different types of texture inside the veins were brines belonging to the H2O-NaCl-CaCl2 system, with salinities between 0.03 and 12.96 wt% eq NaCl, derived from the Rosablanca Formation, which was deposited under oxic conditions, retaining its marine character and implying an autochthonous origin for the REE present in the veins. The conditions of entrapment for fluid inclusions during the early event were heterogeneous, arising from an immiscible mixture of brines and hydrocarbons, while in the second they were homogeneous, with later post-entrapment processes.
The movement of fluids through different geological formations is of great importance to the diagenetic processes in sedimentary basins because it allows, for example, the migration of hydrocarbons from source rocks to reservoir rocks through faults, fractures, or interconnected porosity [1]. The study of fracture opening events, as well as the minerals with which they are filled, is of great help in understanding the conditions and compositions of the fluids that circulated during diagenesis through a stratigraphic unit [2,3].
The study of the compositions, textures and growth directions of the minerals that fill the fractures can help to increase knowledge regarding the number of fluid migration events, and they can also be associated with the deformation tectonic context in which the filling events occurred [4]-[9].
GEOGRAPHIC LOCATION
The Mesa de Los Santos sector is located in the western part of Colombia's Eastern Cordillera, in the department of Santander, approximately 60 km southeast of the city of Bucaramanga. Geographically, this region is bounded to the east by the Santander Massif and to the west by the piedmont that ends in the Middle Magdalena Valley (Figure 1).
STRATIGRAPHY
In the study area, there are outcrops of sedimentary rocks that regionally belong to the stratigraphic sequence of the Middle Magdalena Valley (Figure 2), and crystalline rocks that make up the basement of such sequence. The oldest rocks relate to low- and medium-grade metapelites belonging to the Silgará Formation of pre-Devonian age, which was intruded in the Mesozoic by plutonic rocks of the Granito de Pescadero. An unconformity separates the rocks of the Silgará Formation from the Jurassic and Early Cretaceous sediments deposited in continental fluvial, transitional and shallow marine environments, associated with the Jordán, Los Santos, Rosablanca, Paja and Tablazo formations [11].
The Rosablanca Formation studied herein is one of the basal units of the Lower Cretaceous of Colombia's Eastern Cordillera and of the study area. Towards its base it is in contact with the sandstones of the Tambor Formation and towards its top, with the mudstones of the Paja Formation. According to Julivert [12], who carried out his study in the area of the Sogamoso river canyon located west of Mesa de Los Santos, this unit is approximately 318 m thick and comprises a set of massive limestones with interbedded marls and shales, with a sandy level towards the upper part. The massive limestones are more abundant towards the base and the top of the Formation, while towards the middle part the marls and shales become more abundant.
Paleo-environmentally, the Rosablanca Formation was deposited in a shallow platform environment [14], with energies that permitted the development of grainstone, packstone and mudstone carbonates, as well as certain evaporitic levels [15].
In this article we discuss the results of the petrography, inorganic geochemistry (REE), cathodoluminescence and fluid inclusion analyses performed on fracture filling material belonging to the Lower Cretaceous Rosablanca Formation. The samples were taken in outcrops in Mesa de Los Santos (Eastern Cordillera, Colombia).
THEORETICAL FRAME
Julivert [12], based on the petrographic study carried out in this unit, proposed that the deposit conditions were not constant: the basal part of the unit was characterized by evaporite facies, implying hypersalinity and stillness in the deposit, while the rest of the succession was deposited in an open and shallow environment in which stillness (micrite, fossiliferous micrite and biomicrite deposits) and agitation (intrasparite, oosparite, intramicrite and oomicrite deposits) conditions alternated. The age of the Rosablanca Formation has been estimated as being from Hauterivian to Barremian [16].
JURASSIC AND CRETACEOUS
The tectonic evolution of the study area (i.e., the Mesa de Los Santos sector) is regionally framed within the tectonic evolution of the Middle Magdalena Valley basin and the Eastern Cordillera, especially the latter because it forms part of it. Taking into account the models proposed by Mojica & Franco [13], Cooper et al [17] and Sarmiento [18], in the Late Triassic to Upper Cretaceous interval distensive tectonics prevailed, in which an intracontinental rift was formed, bordered by normal paleo-faults, with subsidence due to block tectonics [19] that allowed the accumulation of the continental sediments relating to the Bocas, Girón, Jordán and Los Santos formations.
At the beginning of the Cretaceous, and through the same mechanism of distension and normal faulting, a transgression took place, generating shallow marine platform environments under which the Rosablanca, Paja, Tablazo, Simití, El Salto, La Luna and Umir formations were deposited. In the Maastrichtian, at the end of the Cretaceous, the accretion of the Western Cordillera and the rise of the Central Cordillera caused a regional change in the area's tectonic regime, changing from an extensional to a compressional context [20]. In the sedimentary sequence of the Middle Magdalena Valley and in the Eastern Cordillera, this change is marked by a transition from the neritic marine conditions present in the Umir Formation to the paralic and terrestrial conditions in which the Lisama Formation was deposited [13].
Geo-tectonically, in the Middle Magdalena Valley and the Eastern Cordillera, until the Lower Cretaceous the distension was associated with an intracontinental rifting phase related globally with the separation of Gondwana and Laurasia, and the opening of the Paleo-Caribbean ocean [17,18]. In the Upper Cretaceous, this phase evolved into a retroarc basin in which the distension extended and reached its maximum extent with the deposition of La Luna Formation [20], and ended at the conclusion of the Cretaceous.
PALEOGENE AND NEOGENE
At the beginning of the Paleocene, and as a consequence of the deformative advance towards the east that raised the Central Cordillera, the Middle Magdalena Valley and the Eastern Cordillera constituted a foreland basin that received sediments from the Guiana Shield and the active orogen of the Central Cordillera, with the Lisama Formation being deposited in continental environments. Already at this time, the elevation of certain sectors of the Eastern Cordillera began taking place locally and heterogeneously [20]. In the Middle Paleocene, the Santander and Floresta massifs rose during the phase that culminated in the Paleoandean orogeny of the Early-Middle Eocene and, in the anticlinal zones formed, erosion removed a large part of the Cretaceous sequence, while sedimentation and subsidence continued in the syncline zones more or less continuously [13], [21]-[22], [20], [23]-[24], with certain sectors that locally began to rise from the Late Eocene-Early Oligocene [25,26]. During the Middle Miocene-Pliocene, the Andean Orogeny occurred, in which the old foreland basin was fractionated into the Eastern Cordillera and the Middle Magdalena Valley, Llanos and Catatumbo basins [20].

Regionally, the authors of [29] conducted mineralogical and geochemical studies into fracture filling materials. These authors obtained data on the origin, chemical nature and paleo-temperatures of the fluids. These data were used to interpret their relationship with the genesis of emerald deposits and also to identify hydrocarbon migration events within the Rosablanca Formation.
In order to estimate the deformation events and the history of exhumation for the Macanal Formation (of Berriasian age, on the eastern flank of the Eastern Cordillera), Mora et al [26] integrated data on fluid inclusions, vitrinite reflectance (Ro), Apatite Fission Track (AFTA) and structural field data. With the results obtained, a model was built that integrates paleotemperatures, the exhumation of the Cretaceous units in the area, the compressional events, the migration of paleofluids and the time period for these events.
In the study area, Julivert [12] conducted petrography studies in the Rosablanca Formation in order to produce a petrological characterization of the unit, examine the correlation with the stratigraphic levels in the field and determine the paleoenvironmental conditions of the deposit and how they varied throughout the deposition of the Formation. His work focused mainly on the textural, compositional and paleontological aspects.
Conde [5] and Conde, Mantilla, Naranjo & Sanchez [7] conducted a regional study on calcite veins belonging to the Rosablanca Formation, integrating samples obtained in the Mesa de Los Santos sector and in the Middle Magdalena Valley, and through the use of petrography, cathodoluminescence, fluid inclusions and rare earth geochemistry they determined regionally at least three events relating to the opening and filling of calcite veins and two hydrocarbon loading events that used fractures as migration routes.
Through chemostratigraphy, stratigraphy and petrography, Bedoya & Nomesqui [30] analyzed carbonates from the Rosablanca Formation in Mesa de los Santos and Zapatoca. The data obtained suggest that the unit was deposited in the Valanginian to lower Aptian interval, in a sedimentation environment associated with a shallow platform affected by strong subsidence. Similarly, they identified diagenetic processes such as silicification, compaction and carbonate cement precipitation, proposing that the carbonate sequences experienced eodiagenesis, mesodiagenesis and telodiagenesis. In addition, the petrography suggests that porosity is low, secondary in type, and fracture-related.
3. MATERIALS AND METHODS
Four (4) core samples obtained from outcrops were analyzed, coded as LHR2-01, LHR2-02, LHR2-03 and LHR2-04, using petrography, fluid inclusion, SEM, cathodoluminescence and rare earth element (REE) techniques. The analyses were focused on the limestone that constitutes the wall rock, and on the carbonates that form the filling material of the veins that cross the wall rock discordantly.
The exact location of the samples is not provided due to confidentiality of the information. The analyses were carried out in the laboratories of the Colombian Petroleum Institute and Universidad Industrial de Santander. For purposes of petrography and cathodoluminescence, a Nikon Eclipse E-200 transmitted light petrographic microscope and a Clmk3A / Clmk4 cathodoluminescence plate (300 -500 μA and 12 -15 kV) were used in order to identify minerals, cements, textures and filling events relating to the fractures or veins.
SEM analyses in the veins were performed using a Leo 1450VP electron microscope equipped with an energy-dispersive X-ray system (Oxford INCA).
For the comparison between the composition of the wall rock of the Rosablanca Formation and the filling of the fractures through rare earth elements, data were gathered by inductively coupled plasma mass spectrometry (ICP-MS) using a Perkin Elmer ELAN 6000 device.
The homogenization temperatures, salinity and chemical system of the fluids were analyzed in fluid inclusions using a Linkam THMS 600 stage. The petrography was performed using a Carl Zeiss AXIOLAB transmitted light microscope, and a Nikon Eclipse LV 100 transmitted light microscope coupled to a UV light system for the detection of fluid inclusions with hydrocarbons.
Structurally, these limestones are massive, with no stratification, lamination or sedimentary microstructures observed; texturally, they are grain-supported rocks with a framework formed by elongate and rounded particles comprising intraclasts and bioclasts (identified as echinoderms, brachiopods and bivalves). The rock exhibits good sorting, and the contacts between the sedimentary particles are longitudinal and concavo-convex due to compaction, also evidenced by the presence of stylolites (Figure 3a).
Orthochemicals such as pseudosparite and sparite appear occupying the space between the particles, and the micrite manifests itself forming envelopes around the bioclasts (Figure 3b) and exhibits replacement by pseudosparite (Figure 3c). In addition, the sparite also appears as crystals partially or fully replacing the bioclasts (Figure 3b).
The presence of oxides in the form of pseudomorphs associated with the sparite (Figure 3d) was identified in the host rock, as well as within the veins associated with calcite crystals (Figure 3e).
CARBONATE VEINS
The fractures inside the wall rock (previously classified as packstones and grainstones according to Dunham [31]) have thicknesses ranging between 2 mm and 2 cm, and at the textural level they are filled by the following types of crystalline aggregates (Figure 3f): GRANULAR AGGREGATES (Blocky Texture - BT), formed by inequigranular aggregates of euhedral oxides (pyrite pseudomorphs) associated with euhedral to anhedral calcite crystals that developed syntaxially (Figure 4a and Figure 4b).
The calcite appears twinned and with undulatory extinction; it is located adjacent to the rock-fracture contact, with some of these crystals containing fragments of the host rock. FIBROUS AGGREGATES (Fibrous Texture - FT), formed by calcite crystals containing S and Mg (Figure 4d). They appear as individuals with an acicular habit, forming fibrous aggregates arranged perpendicularly with respect to the previously described aggregates. Visually, the calcite in these aggregates is colorless, with the exception of certain fibrous aggregates that exhibit a pale brown tone under parallel nicols, and it shows low to medium relief, undulating extinction, and third-order green-pink interference colors.
PETROGRAPHY
Petrographic and microthermometric analyses were carried out on calcite crystals belonging to granular aggregates (BT) and granular aggregates of elongate crystals (BET) because they contain fluid inclusions of the appropriate size to be studied. The petrographic results are illustrated in Table 1.
From a petrographic point of view, the primary aqueous fluid inclusions present in the granular aggregates were grouped in fluid inclusion associations (FIA) 1 to 4 (Figure 5a, Figure 5b, Figure 5c). Morphologically they are regular, irregular, tabular and ovoid; they are monophasic (constituted by a liquid or gaseous phase) or biphasic (formed by liquid and gas phases). Their degree of filling (volume occupied by the bubble within the fluid inclusion) is variable, and the gas bubble occupies a volume ranging between 0 and 100% of the fluid inclusion.
The primary aqueous fluid inclusions in the granular aggregates of elongate crystals were petrographically represented by fluid inclusion association (FIA) 6 (Figure 5d, Figure 5e). These are of different sizes, with irregular and ovoid shapes, and at room temperature they are monophasic (formed by a liquid phase) or biphasic (liquid and vapor phases). Their degree of filling is less variable than in the granular aggregates, varying from 0.7 to 1 (the gas bubble does not occupy a volume greater than 30% with respect to the fluid inclusion's total volume). In fibrous aggregates, the primary aqueous fluid inclusions were grouped in fluid inclusion association (FIA) 7 (Figure 5f), composed of inclusions of a very small size (<4 μm in length); due to this factor, it was not possible to observe the phase relationships.
MICROTHERMOMETRY
The microthermometric analyses were carried out on fluid inclusions belonging to fluid inclusion associations (FIA) 1, 2, 3 (granular aggregates) and 6 (granular aggregates of elongate crystals), which showed a tendency to decrepitate during heating or cooling.
The fluid inclusions relating to FIA 1, 2, 3 were first frozen to -150 °C and then heated. During this process, eutectic temperatures (Te) between -51.8 °C and -50. °C were recorded; the results are summarized in Table 2.
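The text does not state which calibration was used to convert the final ice melting temperatures (Tfi) into the salinities quoted in the abstract, so the following is only a sketch under that assumption: the widely used Bodnar (1993) equation for the H2O-NaCl system reads

$$
\text{Salinity (wt\% NaCl eq)} = 1.78\,\theta - 0.0442\,\theta^{2} + 0.000557\,\theta^{3},
\qquad \theta = -T_{\mathrm{fi}}\ (^{\circ}\mathrm{C}).
$$

As a worked check, Tfi = -9.1 °C gives θ = 9.1 and 16.20 - 3.66 + 0.42 ≈ 12.96 wt% eq NaCl, which matches the upper bound of the salinity range reported for these brines; the lower bound (0.03 wt%) would correspond to a Tfi of almost 0 °C, i.e., nearly pure water.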
RARE EARTH ELEMENT (REE) GEOCHEMISTRY
The analyses conducted by means of rare earth element geochemistry were performed on sample LHR2-01 (host rock, and calcite crystals belonging to the granular aggregates located inside the veins). The results in terms of the concentration of elements (ppm) and the normalization values with respect to PAAS (Post-Archean Australian Shale) are shown in Table 3, and the normalization diagram is set out in Figure 6. The values for the cerium and europium anomalies were calculated in accordance with Rollinson [33].
According to Figure 6, the normalization values are higher in the host rock (packstone) than in the granular aggregates (fracture filling material). In addition, the trend for the LREE is similar in both graphs, and for the HREE the trend is similar except for the Gd and Ho elements.
By taking the values for LaN, SmN, GdN and YbN as a reference (Table 3), the negative cerium anomaly [CeN/√(LaN × PrN) = 0.534] indicates the influence of seawater, whose REE distribution is similar to that of the modern sea [34,35]. In this context, the negative cerium anomaly is caused by the oxidation of Ce3+ to the more insoluble Ce4+ under specific pH and Eh conditions [36,37,38].
In addition, this negative anomaly indicates the incorporation of REE directly from seawater or pore water under oxic conditions [40]. The positive europium anomaly [EuN/√(SmN × GdN) = 1.068] is not typical of seawater, and it could be caused by processes such as hydrothermal discharges in mid-oceanic ridge areas [41,42], river discharges to the sea [43] and diagenesis [44].
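The two anomaly indices above are simple ratios of shale-normalized concentrations. The sketch below shows the calculation; the PAAS normalizing values are representative figures from Taylor & McLennan (1985), since the exact normalizing set used in the paper is not given, and the sample concentrations are purely hypothetical.

```python
import math

# Representative PAAS (Post-Archean Australian Shale) values in ppm
# (Taylor & McLennan, 1985); an assumption, not the paper's exact set.
PAAS = {"La": 38.2, "Ce": 79.6, "Pr": 8.83, "Sm": 5.55, "Eu": 1.08, "Gd": 4.66}

def anomalies(sample_ppm):
    """Return (Ce/Ce*, Eu/Eu*) for a sample normalized to PAAS."""
    n = {el: sample_ppm[el] / PAAS[el] for el in PAAS}   # shale-normalized values
    ce = n["Ce"] / math.sqrt(n["La"] * n["Pr"])          # CeN / sqrt(LaN * PrN)
    eu = n["Eu"] / math.sqrt(n["Sm"] * n["Gd"])          # EuN / sqrt(SmN * GdN)
    return ce, eu

# Hypothetical host-rock concentrations (ppm), for illustration only
ce, eu = anomalies({"La": 4.0, "Ce": 5.5, "Pr": 1.0, "Sm": 0.6, "Eu": 0.13, "Gd": 0.55})
print(f"Ce/Ce* = {ce:.3f}  (<1: negative Ce anomaly, oxic seawater signal)")
print(f"Eu/Eu* = {eu:.3f}  (>1: positive Eu anomaly)")
```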
Considering the above, it can be suggested, in accordance with the REE diagram (Figure 6) and the values for the cerium and europium anomalies, that the host rock (Rosablanca Formation) was deposited under oxic conditions, with the REE retaining their marine nature and implying an autochthonous origin for them [28], in a palaeogeographic and paleotectonic context relating to the early Cretaceous, characterized regionally by the formation of the proto-Caribbean ocean arising from the break-up of Pangaea [45]; certain expansion centers might have been relatively close to the physiographic site where the Rosablanca Formation was deposited, thus explaining the positive europium anomaly found in the host rock.

Table 2. Results of microthermometric analyses in primary fluid inclusions for granular aggregates and granular aggregates of elongate crystals. Te = eutectic temperature, Tfi = final ice melting temperature, Th = homogenization temperature, Td = decrepitation temperature.

The REEs of the granular aggregates are of the same nature as the REE present in the host rock, implying their extraction from such rock. In accordance with this point of view, it is possible that the fluids that circulated through the fractures and later gave rise to the granular aggregates, granular aggregates of elongate crystals and fibrous aggregates came from the Rosablanca Formation, meaning that intra-formational fluids were involved in the diagenesis of this unit [7,28].
4. DISCUSSION
Using the various studies conducted at both a regional and a local level on the Rosablanca Formation [5,7,12,29,30] as a reference, and considering that four samples were studied, the results obtained in this study suggest that locally (i.e., in the Mesa de Los Santos sector), during the post-depositional history of the Rosablanca Formation, diagenesis was characterized by events of dissolution, precipitation, compaction, fracturing and fluid circulation, which were simultaneous with the tectonic regimes to which the unit was subjected on a local and regional scale after its deposition.
DIAGENESIS
The petrography indicates that the earliest diagenetic event was the precipitation of micrite around the bioclasts, followed by the dissolution of unstable particles and bioclasts (probably constituted by aragonite) and the subsequent formation of authigenic calcite (sparite - microsparite - pseudosparite) and pyrite within fragments of shells, intraclasts and as filling material in the porous space.
As the burial progressed, the compaction caused by the weight of the overlying sedimentary units generated the concavo-convex and longitudinal contacts, the reduction of primary porosity and the formation of stylolites.
The pressure exerted by the fluids of diagenetic origin gave rise to fracturing [46] and therefore provided an escape route for these fluids, generating a system of fractures through which the fluids circulated and, fundamentally, carbonate minerals were precipitated and grew syntaxially, forming the veins. At least three filling events relating to these veins are documented in this study.
First, calcite and pyrite were precipitated forming the granular aggregates (Blocky Texture) and then calcite and quartz generating the granular aggregates of elongate crystals (Blocky Elongate Texture), and during the third event, calcite, giving rise to the fibrous aggregates (Fibrous Texture). This means that the fractures acted as escape channels suggesting that, in Mesa de los Santos, the Rosablanca Formation is able to behave like a fractured reservoir [5].
It is likely that pyrite oxidation occurred after the formation of the granular aggregates due to the circulation of oxidizing fluids, probably of meteoric origin.
RELATIONSHIP BETWEEN THE FORMATION OF VEIN FILLS AND TECTONICS
From a textural point of view, the fractures filled by various types of mineral precipitates in the Rosablanca Formation correspond to crack-seal veins formed by repeated fracturing and mineral precipitation, developing granular, elongate and fibrous mineral fillings [47,48].
In addition, considering the model proposed by Mügge [49], granular aggregates are formed in contexts of rapid opening, where the opening rate is greater than the crystal growth rate, while elongate and fibrous textures are formed in a slow opening environment, where the opening rate is slow compared to the crystal growth rate.
Moreover, granular aggregates precipitate from fluids in contexts with zero or insignificant deformation at the time of crystallization, generating free-face growth, while elongate and fibrous aggregates crystallize in a context with the presence of deformation, generating contact growth [50].
Taking into account the above, one could consider that, in the Mesa de Los Santos sector, the diagenesis processes (through to the formation of granular aggregates, Figure 7) occurred when the unit was deposited locally and buried under the distensive geotectonic context dominant during the Cretaceous, not only in Mesa de Los Santos but also in the Middle Magdalena Valley and in the Eastern Cordillera, where this unit is also located [17,18].
The precipitation of granular aggregates of elongate crystals and fibrous aggregates inside the Rosablanca Formation was able to occur when the sedimentary sequence deposited in the Mesa de Los Santos sector began to experience compressive forces, likely related to the beginning of the basin tectonic inversion since the Paleocene (Figure 8).
Considering that Mesa de Los Santos is relatively close to the Santander Massif and that, according to Mojica & Franco [13], it was raised in the Middle Paleocene, and that in the study area only the Lower Cretaceous sedimentary sequence is preserved, it can be suggested as a hypothesis that the precipitation of the granular aggregates of elongate crystals and the precipitation of the fibrous aggregates within the Rosablanca Formation in Mesa de Los Santos could be related to the progressive elevation of the Santander Massif, at least since the Middle Paleocene, and that it experienced its greatest pulses during the paleo-Andean orogeny in the Middle Eocene and during the Andean Orogeny in the Middle Miocene [20].
GRANULAR AGGREGATES:
It is interpreted that these intraformational fluids circulated at minimum temperatures of between 100 °C and 150 °C (Figure 9). Considering the variability in the degree of filling observed in the primary fluid inclusions for granular aggregates and the significant range of homogenization temperatures obtained, one can consider that, in accordance with Goldstein & Reynolds [57] and Goldstein [58], these data represent heterogeneous conditions of entrapment for a system possibly composed of an immiscible mixture of hydrocarbons and brines.
Although no hydrocarbon fluid inclusions were detected in the granular aggregates, Conde [5] documented hydrocarbon migration events through fractures in this unit at a regional scale (see above). In addition, Mantilla et al [59] report the existence of hydrocarbons in the spaces between fluorite crystals and within microfractures associated with the Pescadero Granite, which is located near the study area, and they propose that these hydrocarbons came from the Rosablanca Formation, which reached thermal maturity conditions between 60 °C and 100 °C.
This makes it possible to suggest that the granular aggregates precipitated from an immiscible mixture of hydrocarbons and brines, implying the existence of an event involving the generation and migration of hydrocarbons derived from the Rosablanca Formation in the Mesa de Los Santos area, as a result of which the unit reached thermal maturity conditions due to burial.
GRANULAR AGGREGATES OF ELONGATE CRYSTALS: the data obtained by microthermometry seem to indicate minimum trapping temperatures between 190 °C and 230 °C (Figure 9) for the fluids that generated this type of filling. Additionally, no associated hydrocarbon fluid inclusions were detected. However, these data should be regarded with caution, as the following must be considered: a) The temperatures are very high for a sedimentary system, and the petrographic analysis showed no evidence of deep diagenesis or even features such as the development of incipient foliation.
b) It was difficult to find biphasic fluid inclusions because most are monophasic, and those that were measured by means of microthermometry showed a tendency towards decrepitation. c) Petrographically, biphasic primary fluid inclusions (L + V) were observed for this type of filling, and the degree of filling exhibited little variability, associated with monophasic fluid inclusions (L) with frequently irregular shapes and evidence of necking down. If we consider the criteria proposed by Goldstein & Reynolds [57] and Goldstein [58], the petrographic and microthermometric data would represent homogeneous entrapment conditions from a system formed by low-temperature brines in a liquid state, and after formation these fluid inclusions experienced post-trapping processes.
CONCLUSIONS

In the Mesa de Los Santos sector, the petrographic study of core samples belonging to outcrops from the Rosablanca Formation shows that the wall rock is classified as packstones and grainstones, and the veins are texturally formed (mainly) by carbonates that constitute three types of filling: granular aggregates, granular aggregates of elongate crystals, and fibrous aggregates.
In the study area, the diagenesis of the Rosablanca Formation involved dissolution events, cement precipitation, compaction, fracturing, opening and fluids migration during at least three events in which the following precipitated consecutively: granular aggregates, granular aggregates of elongate crystals, and fibrous aggregates.
The diagenetic events relating to dissolution, cement precipitation, compaction and fracturing in the wall rock, and the formation of the granular aggregates inside the veins, all happened locally and regionally in a distensive geotectonic context that was dominant during the Cretaceous in the area of Mesa de Los Santos.
The formation of granular aggregates of elongate crystals and fibrous aggregates occurred under a compressive tectonic regime linked to the initial stages of tectonic inversion, in the study area. The elevation of the Santander Massif could have influenced the formation of fillings of this type (at least since the Middle Paleocene).
The fractionation of REE and HREE in the host rock is greater than in the fracture filling material; regarding the fractionation of LREE, the opposite trend is observed. In addition, the values for the cerium and europium anomalies suggest that the Rosablanca Formation was deposited under oxic conditions, retaining its marine nature and implying an autochthonous origin for the REE.
For the fracture filling material, the similarity in the trends for the cerium and europium anomalies and in the standardized REE diagrams indicates that the granular aggregates and probably the granular aggregates of elongate crystals (together with the fibrous aggregates) precipitated from fluids that came from the Rosablanca Formation, entailing the circulation (through fractures) of fluids of intraformational origin.
The fluids responsible for the formation of granular aggregates and granular aggregates of elongate crystals were intraformational brines from the Rosablanca Formation, belonging to the H2O-NaCl-CaCl2 system, with salinities ranging between 0.03 and 12.96 wt% eq NaCl.
During the formation of the granular aggregates, there was an event involving the migration and loading of hydrocarbons generated by the Rosablanca Formation, as a result of which it entered conditions of thermal maturity due to burial. The granular aggregates precipitated from heterogeneous conditions in a system formed by an immiscible mixture of brines and hydrocarbons.
The granular aggregates of elongate crystals precipitated from low temperature fluids formed by brines in a liquid state. After entrapment the fluid inclusions experienced post-trapping processes.
Disclosing the actual efficiency of G-quadruplex-DNA-disrupting small molecules
The quest for small molecules that avidly bind to G-quadruplex-DNA (G4-DNA, or G4), so-called G4-ligands, has invigorated the G4 research field from its very inception. Massive efforts have been invested to i- screen or design G4-ligands, ii- evaluate their G4-interacting properties in vitro through a series of now widely accepted and routinely implemented assays, and iii- use them as unique chemical biology tools to interrogate cellular networks that might involve G4s. In sharp contrast, only uncoordinated efforts at developing small molecules aimed at destabilizing G4s have been invested to date, even though it is now recognized that such molecular tools would have tremendous application to neurobiology as many genetic and age-related diseases are caused by an over-representation of G4s, itself caused by a deficiency of G4-resolving enzymes, the G4-helicases. Herein, we report on our double effort to i- develop a reliable in vitro assay to identify molecules able to destabilize G4s, the G4-unfold assay, and ii- fully characterize the first prototype of G4-disrupting small molecule, a phenylpyrrolcytosine (PhpC)-based G-clamp analog.
in which the FAM fluorescence is quenched by the proximal dabcyl. The helicase assay per se is triggered by the addition of Pif1 (0.5 mol. equiv.) and an excess of ATP (4.5 mM); Pif1 then unfolds the system in a 5'-to-3' manner. The strand separation is monitored through the enhancement of the FAM fluorescence. The reverse reaction is suppressed by the addition of a 15-nt ODN named Trap (5 mol. equiv.), complementary to the FAM-labelled 15-nt ODN, and the process is driven to completion by the addition of a 49-nt ODN named C-htelo (5 mol. equiv.), fully complementary to the dabcyl-labelled strand. This assay was originally developed to quantify Pif1 activity and its inhibition by G4-stabilizing agents (BRACO-19,[38] pyridostatin (PDS),[39] PhenDC3[40] and TrisQ,[41] 25 mol. equiv.).
This assay is efficient but cannot be conveniently used as an HTS test for screening G4-disrupting molecules because of the limited access to the Pif1 helicase, which is not commercially available and must be expressed and purified. We reasoned that a modified, simplified version of this assay might be suited to assess the G4-disrupting activity of small molecules: indeed, the kinetics of the final DNA system opening upon addition of C-htelo could be affected by the presence of chemicals, being either slowed down by G4-stabilizing compounds or accelerated by G4-destabilizing compounds. This approach would greatly simplify the protocol, making it a one-step assay in which the initial FAM/dabcyl duplex is incubated with putative candidates whose effect on G4 stability is directly monitored upon addition of C-htelo alone (no Pif1, no ATP, no Trap). This assay, named G4-unfold (Figure 1B), is thus practically convenient, being performed at room temperature for 1 h in a 96-well plate format.
G4-unfold evaluations.
First, the kinetics of the FAM/dabcyl duplex opening was decreased by using a lower concentration of C-htelo (2 mol. equiv. versus 5 mol. equiv. in the initial setup, with V0 = 51.5 versus 70.5 s-1, respectively), and the effect of chemicals was assessed throughout a wider range of concentrations (1, 5, 10 and 20 mol. equiv.). As seen in Figure 3 and in the Supporting Information (Figures S1-6), the presence of the small molecules affects both the kinetics (represented by the slope of the curve after C-htelo addition) and the thermodynamics of the hybridization (represented by the final fluorescence level). This is particularly obvious for the experiments performed with TMPyP4 (Figure 3E), which might originate in several factors (e.g., screen effect) that cannot be easily disentangled. A way to circumvent this issue would be to normalize the curves obtained, as seen in the Supporting Information.

Figure 3. Results (initial velocity V0, expressed in s-1) of the G4-unfold assay performed with the FAM/dabcyl duplex construct (40 nM) and increasing amounts of 14 compounds (from 1 to 20 mol. equiv.); diamonds are the experimental data (n > 4); bars represent the averaged V0; error bars represent standard deviation (S.D.); the diagonally hashed grey zone represents the exclusion zone (calculated as 2xS.D. of the control, Ctrl). E-F. Examples of experimental curves (n = 2) obtained with TMPyP4 and PhpC. G. Averaged V0 values obtained with the G4-unfold assay; values in dark cyan are those below that of the control (V0 = 51.5 s-1).
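The analysis above reduces each kinetic trace to an initial velocity V0 and compares it against a 2xS.D. exclusion zone built from the control. The paper does not spell out how V0 is extracted from the traces, so the sketch below (a linear fit over the earliest points after C-htelo addition; all numbers hypothetical) is only a plausible reconstruction.

```python
import numpy as np

def initial_velocity(time_s, fluorescence, n_points=10):
    """Estimate V0 as the slope of a linear fit over the first n_points
    acquired after C-htelo addition (the published procedure is not
    detailed in the text, so this is only a sketch)."""
    slope, _intercept = np.polyfit(time_s[:n_points], fluorescence[:n_points], 1)
    return slope

# Hypothetical control replicates (V0, s^-1) defining the 2xS.D. exclusion zone
ctrl_v0 = np.array([50.2, 52.4, 51.0, 52.5])
lo = ctrl_v0.mean() - 2 * ctrl_v0.std()
hi = ctrl_v0.mean() + 2 * ctrl_v0.std()

def classify(v0):
    """Hits fall outside the control zone: slower hybridization suggests a
    stabilizer, faster hybridization suggests a G4 destabilizer."""
    if v0 < lo:
        return "stabilizer-like (slowed hybridization)"
    if v0 > hi:
        return "destabilizer-like (accelerated hybridization)"
    return "inactive (within exclusion zone)"

print(classify(62.0))  # a PhpC-like accelerating candidate in this toy example
```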
CD titrations, UV-Vis spectroscopy, PAGE and FRET-melting investigations.
We therefore decided to further investigate the properties of a panel of selected compounds, i.e., TMPyP4 and PhenDC3 (likely G4-stabilizers) and TPPS, 1,5-BisNPO, 2,7-BisNPN and PhpC (likely G4-destabilizers), via a series of in vitro assays previously conducted to characterize possible G4-unwinding agents, i.e., CD and PAGE. CD titrations were undertaken using the human telomeric G4-forming sequence (hTelo) and increasing amounts of candidates (1 to 10 mol. equiv.). Importantly, CD titrations were systematically paralleled with UV-Vis measurements, to investigate the spectroscopic behavior of both the small molecule and its complex with hTelo in solution. As seen in Figures 4A-C,G and S9-17, we first confirmed the previous observations according to which TMPyP4 triggers a strong decrease (68.7%, at 10 mol. equiv.) of the CD signal of the G4 (collected at its maximum, 293 nm). However, the UV-Vis contribution of TMPyP4 alone (Figure 4B, blue dotted line, and Figures 4C,G) where the G4 absorbs light (collected at its maximum, 257 nm) is important and dose-dependent, which also leads to an increase in the UV-Vis contribution of the TMPyP4/hTelo complex (from 3.5 to 36.5% variation, Figure 4B, blue line, and Figures 4C,G), implying a possible induced CD (iCD) contribution to the CD signatures of the TMPyP4/hTelo complex. PhenDC3 does not disrupt the G4 structure (2.5% variation) while its UV-Vis signatures are comparable to those of TMPyP4 (from -1.7 to 22.1% variation), implying again a possible iCD contribution. The UV-Vis contributions of both TPPS and PhpC, alone or in complex with the G4, are comparatively low (-4.1 to 13.0% for TPPS/hTelo, -3.4 to 7.7% for PhpC/hTelo) while they trigger significant CD decreases (down to -20.5 and -17.5%, respectively). The two azacyclophanes are found to trigger both CD (27.3 and 52.1% for 1,5-BisNPO and 2,7-BisNPN, respectively) and UV-Vis decreases (-26.7 and -20.5%, respectively), with a minimal UV-Vis contribution alone in solution. Collectively, these results highlight first and foremost that great caution must be exercised when relying only on CD titrations to study DNA/small molecule interactions, due to possible iCD contributions and other possibilities (e.g., aggregation, vide infra) that cannot be easily unraveled. Further investigations are thus required to better decipher the G4-interacting properties of these candidates. To do so, PAGE investigations were performed with this series of 6 compounds. In these conditions, the partial unfolding of hTelo G4 is expected to result in smeared PAGE bands (originating in an unstructured shape, a bigger molecular volume and a modified charge) rather than in loss of the signal. As above, TMPyP4 triggers a strong decrease of the band corresponding to hTelo (-76.1%, at 20 mol. equiv., Figures 4D,G), which is not in line with the UV-Vis titration (36.5% increase at 10 mol. equiv., Figures 4B,G) and might originate in possible aggregation/precipitation events. PhenDC3 leads to band disappearance to an even greater extent (-89.4%, at 20 mol. equiv.), again suggestive of possible aggregation/precipitation of the ligand/hTelo complex. Indeed, a ligand-mediated formation of multimeric G4s, or multimerization,[52] is possible as it has been demonstrated for some G4-ligands (e.g., N-methyl-indoloquinolinium[53] and porphyrin)[54] and characterized both experimentally[55] and theoretically,[56] which can lead to supramolecular assemblies too large to migrate within the gel lattice.
In these conditions, TPPS is found rather inactive (from 1.4 to -4.7% variation) while the two azacyclophanes and PhpC provide dose-dependent responses (2.2 to -49.3% for 1,5-BisNPO, 3.1 to -41.7% for 2,7-BisNPN, -0.6 to -13.1% for PhpC), in line with the CD/UV-Vis results. Therefore, and again, PAGE provides interesting insights into the G4-interacting properties of these candidates but cannot be used as a standalone technique since it is not devoid of experimental pitfalls.
Finally, we decided to evaluate the apparent affinity of these candidates for hTelo using the classical FRET-melting assay (with the doubly labelled hTelo, F21T). As seen in Figures 4F,G and S18-20, this stabilization is quite high and dose-dependent for PhenDC3 and 2,7-BisNPN (ΔT1/2 up to 30.9 and 13.8 °C, respectively, Figure 4G) while saturation is obtained for 5 mol. equiv. of TMPyP4 (ΔT1/2 = 19.9 °C). Conversely, TPPS, 1,5-BisNPO and PhpC do not display any affinity for F21T and are even able to lower its melting temperature by 1.6, 0.2 and 1.4 °C, respectively. These results thus show that 3 candidates display high affinity for folded G4s (TMPyP4, PhenDC3 and 2,7-BisNPN) while TPPS, 1,5-BisNPO and PhpC do not interact with folded G4s. Collectively, the data gathered through this in vitro workflow indicate that only PhpC responded positively, TPPS failing at the PAGE step, 1,5-BisNPO at the CD/UV-Vis step and 2,7-BisNPN at the FRET-melting step. We thus decided to further investigate the G4-disrupting properties of PhpC through additional experiments.
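For the FRET-melting readout, ΔT1/2 is the shift in the mid-transition temperature of F21T caused by a ligand. A minimal way to extract it from normalized melting curves is shown below; the curves here are synthetic sigmoids, not the actual data.

```python
import numpy as np

def t_half(temps, fluo):
    """Temperature at which the normalized melting curve crosses 0.5.
    temps: increasing temperatures (degC); fluo: FAM emission values."""
    norm = (fluo - fluo.min()) / (fluo.max() - fluo.min())  # normalize to 0..1
    return float(np.interp(0.5, norm, temps))  # assumes a monotonic transition

# Hypothetical F21T melting curves (illustrative values only)
temps = np.arange(25, 96, 1.0)
f21t_alone  = 1 / (1 + np.exp(-(temps - 55.0) / 2.5))  # T1/2 ~ 55 degC
with_ligand = 1 / (1 + np.exp(-(temps - 75.9) / 2.5))  # stabilized curve

dT = t_half(temps, with_ligand) - t_half(temps, f21t_alone)
print(f"dT1/2 = {dT:.1f} degC")  # ~20.9 degC for this synthetic stabilizer
```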
PhpC favors helicase processivity, presumably via G4 disruption. First, we tried to gain direct insights into the way PhpC interacts with G4s, but NMR investigations were poorly conclusive (Figure S21) owing to the overall decrease of the NMR signals of hTelo rather than a clear NMR signal redistribution. We thus exploited the fluorescence properties of the PhpC analogues, which are sensitive to the proximity of nucleobases. Initially embedded in a PNA strand, PhpC allowed for monitoring its association with the targeted DNA strand through fluorescence quenching.[51] When titrated against increasing concentrations of guanosine monophosphate (GMP, 1 to 5 mol. equiv., to mimic one flipping G per G4), the PhpC fluorescence is marginally affected (-7.0% at best), indicating that the formation of the PhpC:GMP base pair does not influence the spectroscopic properties of the cytosine derivative, whatever the ionic content of the buffer (from 1 to 100 mM K+, Figure 5A). When titrated against hTelo, the K+ content of the buffer matters: decreasing the G4 stability by decreasing the K+ concentration of the buffer (from 100 to 1 mM K+) triggers a notable decrease of the PhpC fluorescence (-25.1, -30.5 and -36.4% for 100, 10 and 1 mM K+, respectively). The relationship between G4 stability (quantified as T1/2 values determined by the FRET-melting assay) and fluorescence quenching as a function of the K+ content is almost linear (R2 = 0.96, see inset in Figure 5A). The decrease of the PhpC fluorescence might be attributed to the transient opening of the external G-quartet (the external G-quartet breathes more easily in a less stable G4), enabling PhpC to trap a flipping G (schematically represented in Figure 5A, left), thus lying in close proximity to the remaining G-triad, which can affect its fluorescence by contact quenching.
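The near-linear relationship (R2 = 0.96) between hTelo stability and PhpC quenching can be checked with an ordinary least-squares fit. The quenching percentages below are those quoted above; the T1/2 values are placeholders, since the actual FRET-melting numbers at each K+ concentration are not given in this excerpt.

```python
import numpy as np

# Quenching of PhpC fluorescence vs hTelo stability at 1, 10 and 100 mM K+.
# Quenching values are from the text; T1/2 values are hypothetical.
t12    = np.array([48.0, 58.0, 66.0])     # placeholder T1/2 (degC)
quench = np.array([-36.4, -30.5, -25.1])  # % fluorescence change (from the text)

slope, intercept = np.polyfit(t12, quench, 1)  # least-squares line
pred = slope * t12 + intercept
r2 = 1 - np.sum((quench - pred) ** 2) / np.sum((quench - quench.mean()) ** 2)
print(f"quench = {slope:.3f} * T1/2 + {intercept:.1f};  R^2 = {r2:.2f}")
```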
We thus reasoned that this interaction might favor the G4 unfolding by Pif1, as a result of both an increase of the G4 instability by the transient stabilization of a partially open G4 (schematically represented in Figure 5B, right) and a weak and reversible interaction between PhpC and a wobbling G, which might be easily disrupted during Pif1 translocation.
To investigate this, the complete Pif1 helicase assay (described in Figure 1A) was implemented with 2 enzyme concentrations (160 and 170 nM) in the absence or presence of 10 mol. equiv. PhpC (Figures 5B, right, and S22). Quite satisfyingly, the presence of PhpC enhances the Pif1-mediated G4 unfolding (between 2.4- and 4.0-fold, Figures 5B, left, and S22), while all other small molecules evaluated so far were reported to impede it.[37] These results thus open new horizons for chemical biology, as they show that small molecules can facilitate G4 unwinding by G4-helicases in a nature-inspired manner, given that only proteins such as RPA (replication protein A)[57] have been reported to date to stimulate Pif1 activity.
They also provide the first example of a small molecule able to do so, thus offering new perspectives for the field.
Conclusion
The wealth of data collected here highlights the issues faced when exploring the ability of small molecules to disrupt G4s, as their behavior is found to be strongly dependent on the technique and the concentration used, as previously evoked.[35] This originates in the fact that small molecules can interact with G4s in many different ways, as confirmed here with TMPyP4, certainly the most representative example of a compound whose G4-stabilization/disruption properties are complicated to unravel. These results keep on demonstrating also the versatility of the porphyrins as DNA-interacting scaffolds, as modification of their chemical core (here, their charges and side-arms; previously, their side-arms [35] and the presence of a metal in their central cavity) [34] can reverse their binding properties. They also cast a bright light on the promising G-clamp analog scaffold PhpC to efficiently disrupt G4 structures and facilitate G4-helicase activity in vitro.
Beyond this, they lend credence to the reliability of the G4-unfold assay described here to detect putative G4 unwinders and, above all, to the step-by-step methodology relying on a combination of techniques (G4-unfold, CD, PAGE, FRET-melting, Pif1 helicase assay) to assess the actual efficiency of putative G4-unwinding candidates in the most reliable possible way.
Applying this workflow to wider chemical libraries will undoubtedly lead to the identification of ever more efficient G4-unwinders, which will soon find applications as promising chemical biology tools in the field of genetic diseases.
Materials and Methods
Oligonucleotides. All oligonucleotides used here were purchased from Eurogentec (Seraing, Belgium) and stored at -20 °C as 500 µM stock solutions in deionized water.
"year": 2020,
"sha1": "5b78164c3ac3d23db8544b92bf2657890aa1e34c",
"oa_license": "CCBYNC",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2020/11/16/2020.11.16.382176.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "557c77eb78a4c08e18e47856b4577bf7c4678c4d",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Chemistry"
]
} |
Diagnostic value of quantification of cell-free DNA for suspected gallbladder cancer
Abstract Background and Aim An accurate preoperative diagnosis as the basis for deciding the most appropriate surgical procedure is essential for patients with suspected gallbladder cancer (GBC). The aim of this study was to investigate the usefulness of cell-free DNA (cfDNA) for the preoperative detection of ≥T2 invasion in patients with suspected GBC. Methods Twenty-four patients who underwent resection for suspected GBC were enrolled. The concentration of cfDNA obtained from blood samples preoperatively was measured and evaluated in two distributions. The first peak (less than 200 base pairs) of the cfDNA distribution was defined as the shorter fragment cfDNA, considered to originate mainly from apoptosis; the second peak (200 base pairs or more) was defined as the longer fragment cfDNA, originating mainly from necrosis. Results Pathological analysis identified benign disease in 12 patients and GBC in 12 patients, of whom 6 patients had ≥pT2 GBC. Carcinoembryonic antigen (CEA) and carbohydrate antigen (CA)19-9 were significantly higher in the ≥pT2 GBC group than in the benign/<pT2 groups (2.1 [0.7–11.0] vs 4.5 [1.7–13.0], P = 0.033 and 14.0 [<2.0–401] vs 37.0 [26.0–141.0], P = 0.007, respectively). When limited to patients in the GBC group (n = 12), only cfDNA of longer fragments was significantly lower in the ≥pT2 group than in the <pT2 group (2.98 [1.88–4.61] vs 1.98 [1.42–2.42], P = 0.026), whereas cfDNA of shorter fragments showed no significant difference in either of the above comparisons. Conclusion cfDNA might have potential use as a diagnostic factor for patients with suspected GBC.
Introduction
Gallbladder cancer (GBC) is a relatively rare cancer that has high malignant potential and poor prognosis.1 Radical resection is the only curative treatment, but as the disease is commonly in an advanced stage by the time of diagnosis, few patients qualify for resection.1,2 Although recent advances in imaging modalities have enabled early detection of GBC,3 an accurate preoperative diagnosis remains challenging.3 Furthermore, in those deemed suitable for resection, the choice of procedure varies according to the degree of progression.2,4,5 Thus, the ability to diagnose T2 or more preoperatively is important for patients with suspected GBC. Although recent advances in minimally invasive surgery for GBC can solve this dilemma even for advanced GBC,6 this approach has not been accepted worldwide. Therefore, an accurate preoperative diagnosis as the basis for deciding the most appropriate surgical procedure is essential for patients with suspected GBC. Furthermore, gallbladder wall thickening in chronic cholecystitis or xanthogranulomatous cholecystitis (XGC), which often mimics advanced GBC, is often included in suspected cases of GBC and can confuse the selection of appropriate surgical management. For these reasons, it is important to obtain an accurate diagnosis in patients with suspected GBC, including the extent of tumor progression (distinguishing progression to T2 or higher from less than T2 or benign lesions), to enable appropriate management of GBC.1,3 Various methods have been reported for assessing the depth of invasion preoperatively,1,3 but no consensus has been reached so far.
Cell-free DNA (cfDNA) comprises extracellular nucleic acids found in human serum. Its level can vary with disease progression, and it may have potential as a prognostic biomarker that is minimally invasive for patients. In a healthy individual, cfDNA originates from the apoptosis of nucleated cells, whereas in cancer patients the origin is generally tumor cells.7 Hence, cfDNA levels are useful for differentiating cancer patients from healthy individuals.8-13 However, an association between cfDNA level and GBC has been reported only by Kumari et al.14,15 In addition, cfDNA is useful for predicting the prognosis as well as being an indicator of several malignant diseases.16 In other words, cfDNA may be a useful predictor of the clinical stage of a malignant disease. Indeed, Kumari et al. have demonstrated the utility of cfDNA for preoperative diagnosis in advanced GBC.14,15 However, few studies have reported the usefulness of cfDNA for obtaining a precise preoperative diagnosis in terms of distinguishing lesions that have progressed to T2 or higher from benign lesions and from those less than T2. Thus, the aim of this study was to investigate the usefulness of cfDNA for the preoperative detection of ≥T2 invasion in patients with suspected GBC.
Methods
Patients. This was a prospective study initiated in January 2020. Enrolled in the study were 24 patients with suspected GBC who underwent surgical resection between January 2020 and February 2023. None of the patients underwent preoperative chemotherapy.
Those who were diagnosed with suspected GBC in an initially resectable condition were included in the study. Suspected GBC was defined as a broad-based mass, including focal or diffuse thickening of the wall, or a tumor ≥1 cm with a tendency to increase in size. Patients with suspected GBC who did not undergo resection due to locally advanced or metastatic disease considered initially unresectable, or those who were inoperable due to general condition or co-morbidity, were excluded. Patients who declined to participate in the present study were also excluded.
We evaluated correlations of preoperative findings such as blood biochemistry data, radiographic findings, and fluorodeoxyglucose positron emission tomography (FDG-PET) with the cfDNA and pathological findings. The depth of tumor invasion was defined using the TNM Classification of Malignant Tumors published by the Union for International Cancer Control (UICC), eighth edition.17

cfDNA. Blood samples (10 ml) were obtained on the day before surgery in all patients and collected in Streck BCT tubes (Streck, Omaha, NE), then stored at 4 °C and processed by centrifugation at 2000 g for 10 min at room temperature. The plasma layer was transferred to a new conical tube without removing the buffy coat and stored at -80 °C.
The concentration of cfDNA was measured at Nihon Gene Research Laboratories Inc. (Sendai, Japan), as follows. After thawing the frozen plasma at 4 °C, large debris was sedimented by centrifugation (2000 g, 10 min, 20 °C), and 1.5 ml of the supernatant was collected. High-speed centrifugation (16 000 g, 10 min, 20 °C) was performed to completely sediment the debris, and 1 ml of the supernatant was transferred to a new tube.
cfDNA was extracted from 1 ml of the pretreated plasma using the MagMAX Cell-Free DNA Isolation Kit (Thermo Fisher Scientific) according to the manufacturer's specified protocol. The DNA elution volume was set to 15 μl.
Electrophoresis of DNA extracts was performed using the TapeStation2200 and High Sensitivity D5000 reagent kit (Agilent Technologies), and the DNA content was determined by the TapeStation software.
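The electropherograms produced at this step are subsequently split at a 200 bp boundary into the shorter-fragment (mainly apoptotic) and longer-fragment (mainly necrotic) cfDNA pools defined below. A minimal sketch of such a split, with a purely hypothetical peak format, might look as follows.

```python
def split_cfdna(peaks, cutoff_bp=200):
    """Partition cfDNA electropherogram peaks into shorter- and longer-fragment
    pools at the 200 bp boundary used in this study.
    peaks: list of (length_bp, concentration) tuples -- a hypothetical format,
    not the actual TapeStation export schema."""
    shorter = sum(c for bp, c in peaks if bp < cutoff_bp)   # mainly apoptotic origin
    longer  = sum(c for bp, c in peaks if bp >= cutoff_bp)  # mainly necrotic origin
    return shorter, longer

# Illustrative peaks: a mononucleosomal peak (~170 bp) and a longer-fragment peak
shorter, longer = split_cfdna([(170, 3.1), (350, 2.0)])
print(f"shorter-fragment cfDNA: {shorter}; longer-fragment cfDNA: {longer}")
```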
Short-fragment cfDNA generally refers to that with <200 base pairs (bp).16,19,20

Ethical considerations. The institutional review board approved this study (Approval No. 1910008), which was conducted in accordance with the ethical standards established in the Declaration of Helsinki in 1995 (revised, Brazil 2013). Written informed consent was obtained from all patients.
Statistical analysis. Continuous variables were compared using Mann-Whitney U tests and are presented as medians with ranges. Categorical variables were compared using chi-squared or Fisher exact tests and are presented as numbers with ratios (%). Statistical significance was defined as P < 0.05. All data were statistically analyzed using SPSS statistical software version 24.0 (IBM Corp., Armonk, NY, USA).
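The group comparisons reported in the Abstract (e.g., longer-fragment cfDNA in <pT2 vs ≥pT2 GBC) follow this procedure. A minimal sketch is shown below; the patient values are hypothetical, loosely consistent with the reported medians and ranges rather than the study's actual data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical longer-fragment cfDNA concentrations, for illustration only
lt_pT2 = [2.98, 3.40, 2.75, 4.61, 1.88, 3.10]   # <pT2 GBC group
ge_pT2 = [1.98, 1.42, 2.42, 1.80, 2.10, 1.60]   # >=pT2 GBC group

u, p = mannwhitneyu(lt_pT2, ge_pT2, alternative="two-sided")
print(f"Mann-Whitney U = {u}, P = {p:.3f}")  # per the study, P < 0.05 is significant
```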
Discussion
The present study revealed that cfDNA concentration cannot be used to identify pT2 or higher invasion among patients with suspected GBC, which was the primary endpoint of the present study, but that it might be useful for identifying pT2 or higher disease and LNM among GBC patients. Although the usefulness of cfDNA evaluation in GBC was previously reported [14,15], no reports have aimed to identify pT2 or higher disease among suspected GBC patients. Tumor-related cfDNA has been reported as a candidate prognosticator and biomarker for the detection of several malignant tumors [21]. Generally, a high concentration of short-fragment cfDNA indicates advanced malignancy and poor prognosis in cancer patients [16]. However, the early detection of malignant diseases remains difficult even using circulating tumor DNA [22]. In colorectal, gastroesophageal, pancreatic, and breast cancer, the frequency of detectable circulating tumor DNA at Stage I or II was less than 60% [22]. In the present study, the range of variation in both the shorter- and longer-fragment groups was too wide to confirm statistical significance when comparing GBC and benign samples. However, the present results suggest that, when limited to patients with GBC, quantification of cfDNA concentration might be a useful predictor of advanced-stage disease.
Regarding the evaluation of cfDNA, for simplicity we quantified only the cfDNA value and did not evaluate gene mutations, which are too costly and difficult to adopt in the clinical setting [14,15]. If cfDNA were shown to be significant in preoperative diagnosis, it could be easily adopted.
Although the present study showed a significant difference in cfDNA concentration between the ≥pT2 and LNM (+) groups and the other groups of GBC patients, the results were not what we expected. Generally speaking, we expected that the cfDNA level would be higher in GBC patients than in those with benign disease [14,15]. However, the concentration of the longer-fragment cfDNA was significantly lower in the ≥pT2 and LNM (+) groups than in the other groups of GBC patients. [Figure 1(b): representative patient with gallbladder cancer (case No. 12 in Table 1); ¶ and ¶¶ indicate the peak lengths (bp) of the first (shorter) and second (longer) fragments, respectively.] We consider that the results disagree because previous evaluations of cfDNA in GBC patients included more advanced tumors, whereas the present cohort included only patients with relatively early-stage disease. Therefore, the diagnostic value of cfDNA might differ in a cohort of relatively early GBC such as the present one. Another possible reason for this paradoxical result is that the present cohort included only patients with suspected GBC, and no healthy persons were included. The majority of the patients with no malignancy in the present cohort were diagnosed with cholecystitis. Although the presence of gallstones and the preoperative C-reactive protein value showed no significance in any of the present patient groups, values of longer-fragment cfDNA might be higher in <pT2 and LNM (−) GBC owing to necrosis caused by inflammatory reaction [23]. The cfDNA value might reflect small differences in inflammation between patients with <pT2 and LNM (−) GBC. Furthermore, a previous study reported that circulating tumor DNA values were significantly lower in patients with lung-only or peritoneum-only colorectal metastases than in those with liver-only metastases; in other words, the diagnostic power of the cfDNA value may vary according to the origin and metastatic site of the tumor [24]. Other than those hypotheses, it is possible that the diagnostic value of cfDNA evaluation for precise preoperative diagnosis in patients with suspected GBC is simply not significant. Nevertheless, evaluation of cfDNA or circulating tumor DNA for tumor discrimination in patients with suspected GBC requires further study.
The present study has some limitations. It was conducted with a small number of patients of a single ethnicity. Although we narrowed the target cases to patients with suspected GBC, the sample was not large enough to demonstrate statistical significance. Even though we could show the significance of cfDNA in such a small number of patients, a larger-scale study is warranted. Furthermore, cfDNA size alone cannot be considered a precise indicator of advanced cancer or a diagnostic marker. Therefore, the present observations should be verified by microarray analysis, hybridizing the two categories of fragments separately to an array of cancer-specific target genes, or by a polymerase chain reaction-based analysis. However, no previous report has compared cfDNA levels among patients with suspected GBC. A further study may reveal the usefulness of cfDNA for preoperative diagnosis in these patients and contribute to selecting the most appropriate surgical procedure.
In conclusion, the concentration of longer cfDNA fragments was significantly lower in patients with ≥pT2 or LNM (+) than in GBC patients without these attributes, and it might thus have potential as a diagnostic factor. Further study in a larger number of patients is required.
Figure 2
Figure 2 Cell-free DNA values in patients with gallbladder cancer. (a) Comparison between <pT2 and ≥pT2. Shorter-fragment and longer-fragment values are shown on the left and right, respectively. (b) Comparison between positive and negative lymph node metastasis. Shorter-fragment and longer-fragment values are shown on the left and right, respectively. LNM, lymph node metastasis; N.S., not significant; *P < 0.05 (P = 0.026 and P = 0.036, respectively).
Table 1
Patient characteristics
Table 2
Comparison of benign versus malignant lesions, and of benign/<pT2 versus ≥pT2, in suspected GBC patients.
Table 3
Comparison of <pT2 versus ≥pT2, and of lymph node-positive versus -negative, in GBC patients.
Continuous variables were compared using Mann-Whitney U tests and are presented as medians with ranges. Categorical variables compared using chi-squared or Fisher exact tests are presented as numbers with ratios (%). BMI, body mass index; CA19-9, carbohydrate antigen 19-9; CEA, carcinoembryonic antigen; cfDNA, cell-free DNA; GBC, gallbladder cancer; LNM, lymph node metastasis; SUVmax, maximum standardized uptake value on fluorodeoxyglucose-positron emission tomography; T, tumor invasion status. | 2023-10-01T15:05:56.152Z | 2023-09-29T00:00:00.000 | {
"year": 2023,
"sha1": "0996e4f8af091074966a565fc7e17d3e2bb2f17d",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jgh3.12977",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f271290059e36c621034db59a0bb950de107522f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
216247410 | pes2o/s2orc | v3-fos-license | Stability Study on Acetaminophen Removal from Aqueous Solution using TOA via Emulsion Liquid Membrane
The aim of this study is to develop a stable emulsion liquid membrane (ELM) for acetaminophen removal, with stability assessed through membrane breakage. In this work, trioctylamine (TOA), Span 80, and kerosene were used as carrier, surfactant, and diluent, respectively, in the membrane phase, while ammonia solution was used as the stripping agent in the internal phase. Various parameters were investigated, including stripping agent concentration, agitation speed, extraction time, and treat ratio. The stripping agent concentration was varied from 0.05 M to 0.2 M, while agitation speed was investigated from 200 rpm to 500 rpm. The best conditions achieved for acetaminophen removal from aqueous solution via emulsion liquid membrane were 0.1 M stripping agent, 300 rpm agitation speed, 3 minutes of extraction time, and a treat ratio of 3:1. Investigation of membrane breakage revealed that the lowest breakage achieved was 0.17%.
Introduction
The generation of large amounts of wastewater from various uses is one of the critical pollution problems of this era. In recent years there has been increasing awareness of pharmaceutical contaminants in the environment. Pharmaceutical contamination in rivers is widespread, with hundreds of drugs found at low concentrations. One of the most abundantly used pharmaceuticals is acetaminophen (ACTP), also known as paracetamol. It is primarily used as an analgesic and antipyretic: a drug used to relieve pain and suppress inflammation in a way similar to steroids but without their side effects. Although its anti-inflammatory effect is weak, its impact on the environment is no different from that of other pharmaceuticals. The water solubility of ACTP is high, resulting in its easy accumulation in the aquatic environment [1]. As reported by Kim, Choi [2], ACTP is one of the most frequently detected pharmaceuticals in sewage treatment plant effluents, drinking water, and surface water.
A major portion of pharmaceutical products is removed by conventional wastewater treatment processes. ACTP wastewater is mainly treated by chemical oxidation processes such as electrochemical oxidation, ozonation, H2O2/UV oxidation, TiO2 photocatalysis, and solar photoelectro-Fenton oxidation [3]. However, conventional treatment processes in wastewater treatment plants are unable to completely remove the residues. Thus, among the existing methods, one of the most promising for ACTP removal is the emulsion liquid membrane (ELM). The ELM process involves four main steps: emulsion preparation, solute extraction, emulsion separation, and demulsification. An ELM system is created by forming a primary emulsion consisting of an organic and an aqueous phase stabilized by a surfactant. The separation concept of ELM is that a solute-carrier complex forms when the carrier selectively combines with solute ions at the external membrane phase. Therefore, ELM can work properly even at low solute concentrations.
ELM is relatively cheap, with a high flux rate, high extraction efficiency, and environmental friendliness [4], but coalescence and emulsion swelling, which result in low emulsion stability, are considered its disadvantages. The major drawback of ELM is its instability, and this phenomenon has impeded the widespread application of ELM at larger scale. The stability of an emulsion is defined as how resistant the liquid membrane is to the high shear stress imposed during solute extraction in the ELM process. An unstable liquid membrane tends to rupture or break apart, which diminishes some of the solute separation that has been achieved [5]. Emulsion instability occurs through various physical mechanisms such as swelling, breakage, and coalescence.
Therefore, this research is expected to develop a stable ELM system to be dispersed for extracting the targeted solute from aqueous solution, through parameter optimization. Several factors affecting emulsion stability were examined: stripping agent concentration, agitation speed, extraction time, and treat ratio. These parameters were investigated in order to obtain the most stable formulation for removal of ACTP. This research contributes to the knowledge and technology of wastewater treatment, where its findings may support practical application.
Materials
In the present work, acetaminophen (ACTP) was used as the external feed phase, trioctylamine as carrier, sorbitan monooleate (Span 80) as surfactant, kerosene as diluent, and ammonia as stripping agent. All chemicals used to produce the emulsion liquid membrane were of analytical grade and were purchased from Sigma-Aldrich (Merck).
Analytical Procedures
The analytical procedures in this experiment consisted of pH measurement. The pH of every sample was measured using a Fisher Scientific accumet AB15 pH meter. The pH meter was calibrated with a three-point calibration using standard buffer solutions of pH 4.00, 7.00, and 10.00. The electrode of the pH meter was immersed at an appropriate depth in the solution, and the pH reading was taken once stable, at room temperature (25 ± 1 °C).
Production of Emulsion
The emulsion was prepared via the emulsification method before being dispersed into the external feed solution. The membrane phase was prepared by mixing trioctylamine (TOA) and Span 80 in kerosene. The internal aqueous phase of ammonia solution was then added to the organic membrane phase at a volume ratio of internal aqueous phase to membrane phase of 1:3. These phases were then emulsified using an ultrasonic probe (USG-150). A 10 ppm ACTP feed solution was prepared by dissolving the desired amount of ACTP in HCl solution to serve as the external feed phase.
Stability Study
Stable emulsions are defined as those that persist without phase separation over a period of time. The stability of the emulsion in an emulsion liquid membrane is governed by several parameters. In order to optimize emulsion stability, several parameters were investigated: stripping agent concentration, agitation speed, extraction time, and treat ratio. The membrane breakage under the various conditions was also determined.
Membrane Breakage
Membrane breakage, ε (%), was calculated from the change in H+ ion concentration in the external phase, determined with a pH meter, according to the following equation [6]:

ε (%) = (V_leak / V_i,0) × 100 (1)

where V_i,0 is the initial volume of the internal phase and V_leak is the volume of the internal phase leaked into the external phase, which can be calculated by the mass balance shown in the equation below:

V_leak = V_e,0 (10^(−pH_0) − 10^(−pH_t)) / C_OH (2)

where V_e,0 is the initial volume of the external phase, pH_0 and pH_t are the initial pH of the external phase and the pH of the external phase after contact with the emulsion following a certain time of stirring, respectively, and C_OH is the initial concentration of OH− in the internal phase.
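A minimal Python sketch of Eqs. (1)-(2), as reconstructed above, is given below; the volumes, pH readings, and OH− concentration in the example call are hypothetical placeholders, not measurements from this study.

```python
def membrane_breakage(v_int0, v_ext0, ph_0, ph_t, c_oh_int):
    """Membrane breakage eps (%) from the external-phase pH change.

    v_int0   -- initial internal (stripping) phase volume, L
    v_ext0   -- initial external feed phase volume, L
    ph_0     -- initial pH of the external phase
    ph_t     -- external-phase pH after stirring for time t
    c_oh_int -- initial OH- concentration in the internal phase, mol/L
    """
    delta_h = 10.0 ** (-ph_0) - 10.0 ** (-ph_t)   # H+ consumed by leaked OH-
    v_leak = v_ext0 * delta_h / c_oh_int          # Eq. (2): leaked internal volume
    return 100.0 * v_leak / v_int0                # Eq. (1)

# Example with invented values: 300 mL feed, 100 mL internal phase, 0.1 M NH3
print(membrane_breakage(0.100, 0.300, 2.00, 2.05, 0.1))
```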
Effect of Stripping Agent Concentration
The effect of stripping agent concentration was investigated by varying the concentration at 0.05, 0.1, 0.15, and 0.2 M, as presented in Figure 1. As the concentration of ammonia increases, the membrane breakage decreases. At low stripping agent concentration, there was insufficient stripping agent to strip acetaminophen from the membrane phase, whereas as the concentration increases, more solute is stripped and more carrier molecules are regenerated. However, a further increase of the concentration up to 0.2 M causes the membrane breakage to increase. This is due to the high pH gradient between the internal and external phases, where the large difference in ionic strength promotes the transport of water into the internal phase, causing emulsion swelling [7]. Consequently, this triggers emulsion breakage. Thus, 0.1 M ammonia was selected as the best stripping agent concentration in this study.
Effect of Agitation Speed
Agitation speed plays an important role in ELM stability, and an appropriate speed must be selected. The effect of agitation speed was investigated at 200 rpm, 300 rpm, 400 rpm, and 500 rpm, as shown in Figure 2. At an agitation speed of 200 rpm, the percentage of membrane breakage was highest. This is due to insufficient shear energy to disperse the emulsion in the external feed phase, so larger globules were formed and emulsion breakage occurred. Similar results were found by Kumbasar [8], who stated that at low agitation speed the ELM globules cannot be well dispersed and large globules form. As the agitation speed increases up to 300 rpm, the membrane breakage decreases. A higher agitation speed is preferable to produce fine droplets with larger surface area and improved membrane stability. A further increase of agitation speed up to 500 rpm, however, is detrimental to membrane stability and results in increased emulsion breakage: the higher agitation speed destabilizes the primary emulsion and favors leakage of the internal dispersed phase into the external continuous aqueous phase.
Valenzuela, Araneda [9] found that an excessively high agitation speed could induce coalescence and breakdown of emulsion globules. Therefore, an agitation speed of 300 rpm was chosen to obtain a stable emulsion in this study.
Effect of Extraction Time
The effect of extraction time was investigated by varying it at 1, 3, 5, and 7 minutes, as presented in Figure 3. An emulsion breakage of 22% was observed in the first minute. This may be due to insufficient contact time, which leads to the formation of easily ruptured emulsion globules and thus leakage of stripping agent into the external feed phase. However, the emulsion stability improved when the extraction time was increased from 1 to 3 minutes. Complete emulsion dispersion to form the W/O/W interface occurs with increasing contact time [10]; thus, 3 minutes of extraction is believed to be adequate for a satisfactorily stable emulsion. Beyond this point, as the extraction time increases, the emulsion breakage increases again: membrane breakage of 19% and 39% was observed at 5 and 7 minutes, respectively. A longer extraction time causes more water transport into the internal phase, leading to membrane swelling followed by emulsion breakage. Ahmad, Kusumastuti [11] also reported that prolonged extraction time caused emulsion instability. Hence, 3 minutes of extraction time was selected as the best condition to produce a stable emulsion for acetaminophen removal.
Effect of Treat Ratio
The treat ratio was varied by changing the volume of the external feed phase while keeping the volume of the W/O emulsion constant. The treat ratio was varied at 3:1, 5:1, and 9:1, as shown in Figure 4. The results show that the lowest membrane breakage was achieved at a treat ratio of 3:1. This is due to the low volume of emulsion, which allows the system to be dispersed properly, resulting in a stable emulsion. Further increases in treat ratio to 5:1 and 9:1 resulted in higher membrane breakage. Membrane breakage occurs at high treat ratio because of the difference in osmotic pressure between the emulsion and the external feed phase, causing rupture of the emulsion globules [12]. Globule interactions are enhanced at higher emulsion volumes, leading to coalescence of globules and membrane rupture [13]. Hence, the best treat ratio, with the lowest membrane breakage, is 3:1.
Conclusion
This study succeeded in selecting the best parameters and operating conditions for the stability of an emulsion liquid membrane with respect to membrane breakage. The effects of several parameters on emulsion stability and membrane breakage were reported: stripping agent concentration, agitation speed, extraction time, and treat ratio. Throughout this study, the best conditions were found to be a stripping agent concentration of 0.1 M, an agitation speed of 300 rpm, an extraction time of 3 minutes, and a treat ratio of 3:1, with a membrane breakage of 0.17%. | 2020-03-05T10:14:28.462Z | 2020-03-05T00:00:00.000 | {
"year": 2020,
"sha1": "6f0ebc238a50e09187eeb33a391ecde0947a488a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/736/2/022081",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "5f9a1ca231da70f37176c7705d874ca593b6806e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
25666553 | pes2o/s2orc | v3-fos-license | P-Type Tunnel FETs With Triple Heterojunctions
A triple-heterojunction (3HJ) design is employed to improve p-type InAs/GaSb heterojunction (HJ) tunnel FETs. The added two HJs (AlInAsSb/InAs in the source and GaSb/AlSb in the channel) significantly shorten the tunnel distance and create two resonant states, greatly improving the ON-state tunneling probability. Moreover, the source Fermi degeneracy is reduced by the increased source (AlInAsSb) density of states, and the OFF-state leakage is reduced by the heavier channel (AlSb) hole effective masses. Quantum ballistic transport simulations show that, with V_DD = 0.3 V and I_OFF = 10^-3 A/m, an I_ON of 582 A/m (488 A/m) is obtained at 30 nm (15 nm) channel length, which is comparable to the n-type 3HJ counterpart and significantly exceeds that of p-type silicon MOSFETs. Simultaneously, the nonlinear turn-on and delayed saturation in the output characteristics are also greatly improved.
I. INTRODUCTION
Steep subthreshold swing (SS) devices, such as tunnel field-effect transistors (TFETs), offer great potential for building future low-power integrated circuits. One problem of TFETs is the low tunneling probability and hence low ON-state current (I_ON). To achieve large I_ON, III-V TFET designs have been intensively studied [1]. In particular, InAs/GaSb HJ TFETs can considerably boost I_ON due to their broken/staggered band alignments [2]. However, under the strong confinement required for good electrostatic control, the effective band gap and transport effective masses both increase, seriously limiting the tunneling probability. Methods to improve InAs/GaSb HJ n-type TFETs (nTFETs) include strain and doping engineering [3], [4], resonant enhancement [5]-[7], and source/channel heterojunctions [8]-[12]. For p-type TFETs (pTFETs), the problem is more severe, as the optimal source doping density is limited by the small conduction band density of states (DOS) [13]. This leads to a large depletion region in the source and, thus, smaller I_ON than in nTFETs [14]-[16]. Doping and heterojunction engineering in the source [17] have been proposed to mitigate this problem. Another problem of TFETs is the superlinear onset and delayed saturation of the output characteristics. It has been shown that a large channel DOS degrades the output characteristics through large channel inversion charge [18], [19]. This is particularly relevant for pTFETs, since the valence band DOS of most III-V materials is very large. These two issues make it very challenging to build complementary III-V TFET logic, which requires both high-performance nTFETs and pTFETs. Wu et al. [19] note that the required source and channel materials for HJ nTFETs and pTFETs differ greatly.
For HJ nTFETs, it has been previously shown that a better ON/OFF ratio is achieved by adopting (110)/[110] as the confinement/transport crystal orientation, because a smaller tunnel barrier energy and smaller transport effective masses are found in this orientation [11]. It has further been shown that the ballistic I_ON can be greatly increased by adding two more HJs, one in the channel [11] and one in the source, so as to form a 3HJ design [12]. In this paper, we show that, by crystal orientation engineering using the 3HJ design, we can also solve the above-mentioned problems of pTFETs, achieving very large ballistic I_ON as well as improved output I-V characteristics.
II. HETEROJUNCTION (HJ) PTFET
The ultra-thin-body (UTB) HJ pTFET consists of an InAs source and a GaSb channel/drain (Fig. 1(a)), with the device parameters listed in Table I. The NEMO5 tool [20] is used to simulate the device by solving the Poisson equation and the open-boundary Schrödinger equation [21] self-consistently. The device Hamiltonian is described by a transferable full-band tight-binding (TB) scheme (sp3d5s* basis including spin-orbit coupling) [22], whose parameters at 300 K are taken from [23]. The improvements can be understood from the band diagrams (Fig. 2(c)) and transmission probabilities (Fig. 2(d)). Compared with the (001)/[100] orientation, the (110)/[110] orientation has larger transmission below the channel valence band edge (Ev), leading to larger I_ON. However, its transmission above the channel Ev is also larger and the slope is less steep, leading to larger source-to-drain leakage and larger SS. As seen in the band structures plotted in Fig. 3, the (110)/[110] InAs/GaSb UTB has a smaller tunnel barrier energy and smaller transport effective masses than the (001)/[100] InAs/GaSb UTB. Moreover, the source Fermi degeneracy, i.e., the energy separation between the source Fermi level and the conduction band edge (Ec), is larger, and the channel valence band DOS is smaller (Fig. 4(b)); these changes improve the superlinear onset and reduce the delayed saturation [18], [19], [24].
III. TRIPLE-HETEROJUNCTION (3HJ) PTFET
The 3HJ pTFET adds AlInAsSb/InAs and GaSb/AlSb heterojunctions in the source and channel, respectively, and is aligned in the (110)/[110] orientation (Fig. 1(b)). The mole fractions x1, x2, y1, y2 and the region lengths L4 to L7 are the design parameters, which are optimized for the largest I_ON (Table I). Fig. 2(c) and (d) show that the 3HJ design has a much thinner tunnel barrier and thus a much larger tunneling probability (approaching unity) when turned on. Further, the 3HJ design shows a much steeper variation of transmission vs. energy above the channel Ev, implying less source-to-drain leakage and steeper turn-off characteristics.
From Fig. 3(c) and (e) it is observed that a (110) AlInAsSb UTB has a higher conduction band edge energy than a (110) InAs UTB. This conduction band offset forms a quantum well in the source, which shortens the source depletion length and creates a resonant state above the well, both effects enhancing the tunneling probability. Further, the (110) AlInAsSb UTB has larger electron effective masses (in both transport and transverse directions) than the (110) InAs UTB, and thus a larger conduction band DOS (Fig. 4(a)) and reduced source Fermi degeneracy (Fig. 2(c)). From Fig. 3(d) and (f) it is found that the (110) AlSb UTB has a lower valence band edge than the (110) GaSb UTB. This valence band offset forms a quantum well in the channel, which also shortens the tunnel barrier thickness and creates another resonant state below the well, both further enhancing the tunneling probability. Moreover, the AlSb UTB channel has larger hole effective masses than the GaSb UTB channel, leading to smaller source-to-drain leakage. Grading of the source HJ and channel HJ makes further improvements by further increasing the electric field at the tunnel junction and by tuning the positions of the resonant states. Note that, although the source Fermi degeneracy is reduced and the channel DOS is increased (Fig. 4(b)), the output characteristic is not degraded. This is due to the much higher transmission transparency enabled by the 3HJ design. In the ON state, the two resonant states created by the two quantum wells both fall in the Fermi conduction window, enhancing the current. In the OFF state, there are no quasi-bound states inside the quantum wells, reducing the thermal-emission-induced leakage. However, because the tunnel barrier is so thin, evanescent states incident from the source (channel) could still couple to the propagating states of the channel (source) through interaction with phonons, forming a leakage current path that is not modeled here.
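The barrier-thinning argument can be illustrated with a toy one-dimensional WKB estimate, T ≈ exp(−2κd) with κ = √(2m*E_b)/ħ for a rectangular barrier. This is only a back-of-the-envelope stand-in for the full-band quantum ballistic simulations used in this work, and the barrier height, widths, and effective mass below are illustrative numbers, not parameters extracted from the device.

```python
import numpy as np

HBAR = 1.054e-34   # J s
M0 = 9.109e-31     # electron rest mass, kg
EV = 1.602e-19     # J per eV

def wkb_transmission(barrier_ev, width_nm, m_rel):
    """T = exp(-2*kappa*d) for a rectangular barrier (toy WKB model)."""
    kappa = np.sqrt(2.0 * m_rel * M0 * barrier_ev * EV) / HBAR
    return np.exp(-2.0 * kappa * width_nm * 1e-9)

# Illustrative numbers: halving the tunnel distance raises T by orders of magnitude
print(wkb_transmission(0.3, 6.0, 0.05))  # wider barrier        -> ~5e-4
print(wkb_transmission(0.3, 3.0, 0.05))  # thinner (3HJ-like)   -> ~2e-2
```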
Finally, we compare the 3HJ pTFETs with the corresponding 3HJ nTFETs (using the same materials and orientations) [12] and Si pMOSFETs (Fig. 6). For a 30 nm (15 nm) channel length, the I_ON of the 3HJ pTFET is 582 A/m (488 A/m), comparable to the 3HJ nTFET and much larger than the Si pMOSFET. For the 15 nm channel length, the 3HJ pTFET has better SS and thus slightly larger I_ON than the 3HJ nTFET, owing to the larger channel band gap and channel effective mass of the 3HJ pTFET.
IV. CONCLUSION
Design of III-V pTFETs is very challenging because of the small source and large channel density of states. By engineering crystal orientations and employing triple heterojunctions, very large ballistic ON currents are simulated for pTFETs, comparable to their n-type counterparts and significantly exceeding Si pMOSFETs. Improved linear and saturation regions are also observed in the output I-V characteristics. However, the large ballistic current may be degraded by phonon-assisted tunneling, a topic for future study. | 2016-05-23T19:59:46.000Z | 2016-05-23T00:00:00.000 | {
"year": 2016,
"sha1": "10078a7e57de36f1f9723b0b0f98f45ad08daf62",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1109/jeds.2016.2614915",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "80be6355f3acc47fb743fcc00e95a557f56712dd",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244128268 | pes2o/s2orc | v3-fos-license | Research on relative orientation method of oblique aerial photography based on basic matrix
After stereo images are collected with oblique aerial photography, a relative orientation method must be used to check the image parameters; once the rectification process is complete, 3D software is used to build the 3D model to meet subsequent application requirements. This paper analyzes the difficulties of matching oblique aerial images, including the failure of affine-transformation approximations, occlusion effects, and difficult feature sorting, and combines the fundamental matrix with a relative orientation method for oblique aerial photography. By studying the fundamental matrix of oblique aerial images together with precision control points, the aim is to continuously optimize the content of oblique aerial imagery and improve the practical value of the results.
1. Kyungil University, Gyeongsan City, Gyeongsangbuk-do 38428; 2. Dezhou University, Decheng District, Dezhou City, Shandong Province 253000. *Wang2018@dlvtc.edu.cn
1. Introduction
With the continuous improvement of digital photogrammetry technology, the ways of acquiring regional imagery are also expanding. At the same time, factors such as the quality of the photographic platform, flight height, air flow, and aircraft attitude cause the attitude angles of the acquired images to fluctuate widely, which affects the application quality of the final imagery. Processing the imagery with a reliable relative orientation method based on the fundamental matrix can improve the quality of the collected images in their initial state and thereby meet the requirements of subsequent operations.
2.1. Affine transformation cannot be smoothly converted
Based on previous practical experience, in conventional aerial photography the affine transformation can be approximated as a similarity transformation, since the attitude of the aerial platform is relatively stable and vertical photography is maintained. However, the affine transformation between oblique aerial images cannot be approximated as a similarity transformation. In an oblique aerial photography system integrating multiple lenses, there are large differences in photographic angle between the side-looking and down-looking cameras, as shown in Figure 1. When a gray-scale-based image matching method is used without pre-correcting the image matching window, the similarity measure between corresponding image points is an uncertain value. On the other hand, most current feature point extraction operators can only guarantee similarity invariance. Therefore, when the transformation relation between adjacent images is affine, reliable feature matching cannot be achieved.
2.2. Occlusion effects
At present, oblique aerial photography is applied mainly in urban areas, where there are many high-rise buildings and obvious occlusion interference. Meanwhile, the imaged areas consist mainly of artificial features, which directly affects the progress of image feature extraction. Moreover, because of systematic occlusion during the application of the technology, the resulting imagery contains numerous occluded areas; even when affine-invariant feature extraction operators are used, 30-40% of the feature points remain ambiguous and cannot be uniformly distributed. This directly degrades the final image processing quality and hinders the smooth progress of subsequent operations.
2.3. Feature sorting is difficult
After feature data are acquired by oblique aerial photography, they are applied directly to 3D city modeling to complement and improve the model. In actual modeling, feature points on natural features show a stably high repetition rate during image processing, while artificial features have regular shapes and strong high-frequency edge components but poor stability in application. In addition, as mentioned above, occlusion is a prominent problem in urban surveys, and the geometric characteristics of oblique aerial photography themselves aggravate the difficulty of image matching, thus affecting the final mapping quality and the smooth progress of subsequent operations.
3.1.1. Geometric analysis of double images
For two stereo images of the same scene obtained from different locations, the coplanarity condition should be satisfied between corresponding image points. Writing a and a′ for the corresponding image-point vectors and A for the baseline vector, and starting from the coplanarity condition, the constraint can be abbreviated as a · (A × a′) = aᵀ[A]ₓa′ = 0, where [A]ₓ is the skew-symmetric matrix of A. The fundamental matrix expresses the geometric constraint relation between two 2D image planes; it is the matrix expression of the epipolar-line geometry and reflects the mathematical relation between corresponding image points. In epipolar geometry, to simplify the model, the camera is generally considered ideal, that is, the principal point lies at the center of the image, while the principal distance is variable (especially for close-range images). Therefore, the fundamental matrix has 7 independent parameters: 5 relative position parameters and 2 principal distances. The fundamental matrix is usually used to describe the correspondence between two images with unknown interior orientation parameters and is an important concept in two-view geometry in computer vision.
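For illustration, the epipolar constraint can be estimated from point correspondences with the eight-point algorithm; the sketch below uses OpenCV with randomly generated placeholder points (real matched coordinates would be used in practice).

```python
import cv2
import numpy as np

# Placeholder correspondences; in practice these come from feature matching
pts1 = (np.random.rand(20, 2) * 1000).astype(np.float32)
pts2 = (np.random.rand(20, 2) * 1000).astype(np.float32)

# Eight-point estimate of the fundamental matrix F
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

# Check the epipolar constraint a'^T F a = 0 for one pair (homogeneous coords)
a = np.append(pts1[0], 1.0)
a_prime = np.append(pts2[0], 1.0)
print(a_prime @ F @ a)  # near zero for a true correspondence
```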
3.1.2. Solving the relative orientation parameters
After formulating the above expression, the stage of solving the relative orientation parameters begins. The solution process can be divided into two stages. First, the relative orientation elements are derived. In practice, because the length of the photographic baseline does not directly influence the construction of the three-dimensional model, A{x} can be assigned arbitrarily, leaving 2 degrees of freedom, while the rotation matrix R contributes 3 degrees of freedom; as mentioned above, the formula requires eight elements to be solved. At the same time, necessary correlations exist between the elements, which easily causes structural instability and degrades the solution. Given this situation, a relative orientation model must be built in the application, so that once the rank-deficiency constraint is applied, the obtained essential matrix can serve as the initial value, improving the accuracy of the solved relative orientation elements. Second, the relative orientation model is formulated and the corresponding parameters in the model are refined. Theoretically, the vertical parallax of corresponding image points on a stereo pair after relative orientation is 0; the specific formula is Q = NY − N′Y′ − Ay, where N and N′ represent the projection coefficients of the corresponding image points on the left and right images, respectively. The error equation is then used to adjust the applied parameters, improving the accuracy of the analysis results.
3.1.3. Processing of relative orientation accuracy
In processing the relative orientation accuracy, the following should also be noted. First, determine the specific value of the model baseline Ax: when building the three-dimensional model, its size can be assigned arbitrarily according to the actual situation. From the values above, the vertical parallax of the model keeps a proportional relationship with the baseline, and, to simplify the computation, the formula for image point measurement accuracy is also simplified, improving the stability and rationality of the numerical results. Second, for vertical parallax weighting, the measurement accuracy of the image points can be determined from the relevant image parameters; on this basis, the unit-weight error is computed, the initial values of the relative orientation elements are taken as the basic conditions, and the error propagation law is used to keep the computational accuracy of the vertical parallax above 99.2%. Third, the unit weight and the vertical parallax weights are updated, and least squares adjustment is applied iteratively; after each iteration the weight error is recalculated, and accurate results are obtained from the behavior of the pixel points after the iteration converges [1].
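The reweighting loop described above can be sketched generically as iteratively reweighted least squares; the design matrix and observations are placeholders here, and the simple inverse-residual weighting is one common choice, not necessarily the weighting scheme of the cited adjustment.

```python
import numpy as np

def irls(A, y, n_iter=10, eps=1e-8):
    """Generic iteratively reweighted least squares adjustment:
    solve the weighted normal equations, recompute the unit-weight
    error from the residuals, and update the observation weights."""
    w = np.ones(len(y))                          # start from unit weights
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        r = y - A @ x                            # residual vertical parallaxes
        dof = max(len(y) - A.shape[1], 1)
        s0 = np.sqrt((w * r ** 2).sum() / dof)   # unit-weight error
        w = 1.0 / (r ** 2 + eps)                 # down-weight large residuals
    return x, s0
```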
3.2.2. Rotation matrix solution processing
When solving the rotation matrix, the commonly used methods are as follows. First, the direct linear method: in this approach, the elements of the rotation matrix are treated as relatively independent, a linear solution method is used to organize the value corresponding to each element, and three application equations can be listed on the basis of the point equations. The least squares method is used to solve for the parameters, so that the content can be solved quickly and initial values of the unknown parameters obtained. Second, the singular value decomposition (SVD) constraint method, which can offset the accuracy loss due to errors to a certain extent, improving the accuracy of the solution to more than 99%. Third, the unit quaternion method: the rotation of a three-dimensional vector can be viewed as expanding the vector into a quaternion and taking the mixed product between the four elements of the unit quaternion; by measuring several points as required, the optimal solution formula can be obtained and the contents adjusted accordingly, improving the precision of the processing result [3].
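As a concrete sketch of the SVD route, an essential matrix can be decomposed into the two candidate rotations and the baseline direction in a few lines (the standard Hartley-Zisserman construction); this is a generic illustration rather than the exact procedure of the cited method.

```python
import numpy as np

def decompose_essential(E):
    """Return the two candidate rotations and the translation
    direction (up to sign) encoded in an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:    # enforce proper rotations (det = +1)
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                 # baseline direction
    return R1, R2, t
```

In practice, the physically correct (R, t) pair among the four sign/rotation combinations is selected by requiring triangulated points to lie in front of both cameras.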
3.2.3. Absolute orientation processing
After completing the above processing, the absolute orientation stage begins. The commonly used processing methods in this stage are as follows. First, spatial similarity transformation: in the specific orientation calculation, the space auxiliary coordinates of the model can be calculated as required, and, in order to calculate the corresponding absolute coordinates, the space coordinates are optimized in the application and the corresponding transformation obtained, increasing the value of the analysis results. Second, based on the requirements for the absolute orientation elements, in the absolute orientation of small-rotation-angle images, in order to further simplify the calculation, the centroid ("center of gravity") of the three-dimensional space coordinates is used for the similarity transformation, and empirical formulas are used for the specific parameters, so as to obtain the absolute orientation parameters with an accuracy controlled above 99% [4].
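The centroid-based spatial similarity transformation can be written in closed form (the Horn/Umeyama construction); the sketch below is a generic implementation under that standard formulation, not the paper's own code.

```python
import numpy as np

def similarity_transform(model, ground):
    """Estimate scale s, rotation R, translation t with
    ground ~= s * R @ model + t, using centroid reduction and SVD."""
    cm, cg = model.mean(axis=0), ground.mean(axis=0)
    M, G = model - cm, ground - cg            # reduce to the centroids
    U, S, Vt = np.linalg.svd(G.T @ M)         # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))  # avoid a reflection
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (M ** 2).sum()
    t = cg - s * R @ cm
    return s, R, t
```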
4.1.1. Theoretical accuracy
Combined with the relevant requirements of the fundamental matrix, the reference content for analyzing the theoretical accuracy is obtained from the covariance matrix of the unknowns given by the adjustment, which can be written as a_i = s_0·√(Q_ii), where Q_ii represents the i-th diagonal element of the matrix Q_x and s_0 represents the standard error of unit weight of the observations. It can be verified against the actual situation and calculated using the variance value, improving the accuracy of the numerical results. Moreover, in the calculation of theoretical accuracy, the covariance propagation law maintains a good application relationship with the regional network, and error propagation within the regional network is directly proportional to the pixel measurement accuracy. It can be seen that, in numerical calculation, the theoretical accuracy directly represents the internal accuracy of the regional net adjustment [5].
4.1.2. Actual accuracy
Correspondingly, after the theoretical accuracy is determined, the actual accuracy must also be calculated. As a direct reflection of the accidental error distribution, the theoretical accuracy also correlates well with the point-position distribution. From the perspective of practical application, the whole process is rather complex, and the adjustment model also contains some systematic errors; the combined effect of these and the accidental errors increases the difference between theoretical and actual accuracy. In general, the actual accuracy of the adjustment is reflected in the differences between the true and adjusted coordinates of the redundant control points in the region. The specific formulas are: ① a_x = √(Σ(x_true − x_adjusted)²/n); ② a_y = √(Σ(y_true − y_adjusted)²/n); ③ a_z = √(Σ(z_true − z_adjusted)²/n) [6].
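The per-axis checkpoint accuracy in formulas ① to ③ is a root-mean-square error over the n redundant control points; a one-line sketch:

```python
import numpy as np

def checkpoint_accuracy(true_coords, adjusted_coords):
    """RMSE per axis over the redundant control points (formulas 1-3)."""
    d = np.asarray(true_coords) - np.asarray(adjusted_coords)
    return np.sqrt((d ** 2).mean(axis=0))   # returns (a_x, a_y, a_z)
```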
4.2.1. Conventional image positioning accuracy
Based on previously obtained data, in numerical calculation the unit-weight error of the adjustment is relatively close to the estimated measuring accuracy of the image points, confirming that the estimated image-point measuring accuracy is of strong practical value. Meanwhile, analysis of the actual adjustment accuracy shows that the average accuracy of the measured data is close to 0.1 m, which is twice the GSD when converted to image scale. In addition, in image positioning, the accuracy in flat areas is relatively high, with an average of 0.090 m, while that in hilly areas is lower, averaging 0.116 m. The elevation accuracy follows the same pattern, so error elimination can be completed on this basis in the calculation to improve the accuracy of the analysis results [7].
4.2.2. Unconventional image positioning accuracy
Based on previous data, in numerical calculation, because the measurement accuracy of corresponding image-point coordinates in unconventional aerial photography is low and the overall adjustment accuracy is correspondingly low, the unit-weight error of the adjustment is close to 0.3 pixels, which is close to the estimated image-point coordinate measurement accuracy and can meet the basic requirements of routine data analysis. In the application of unconventional aerial photography, both the planimetric and elevation positioning errors are relatively large; the key cause of this problem is that the geometric relationships between unconventional aerial images reduce the image-point coordinate measurement accuracy and keep the overall adjustment accuracy low, so the estimated value of the accuracy estimation result is raised [8].
5. Conclusion
In summary, analyzing the relative orientation method for oblique aerial photography on the basis of the characteristics of the fundamental matrix can not only improve the accuracy of the image processing results but also lay a foundation for the smooth advancement of subsequent operations. | 2021-11-16T20:04:15.718Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "469dfbdabea3d74934e908bcf6d52b5ca61e21d8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/2093/1/012025",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "469dfbdabea3d74934e908bcf6d52b5ca61e21d8",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
268502489 | pes2o/s2orc | v3-fos-license | Concurrent acute sensorimotor axonal neuropathy and disseminated encephalitis associated with Chlamydia pneumoniae in an adult patient with anti-MOG and anti-sulfatide antibodies: a case report
Acute disseminated encephalomyelitis and Guillain–Barré syndrome refer to post-infectious or post-vaccination inflammatory demyelinating disorders of the central and peripheral nervous system, respectively. We report the case of a 60-year-old male patient presenting with irritability, gait difficulty, asymmetric quadriparesis (mostly in his left extremities), distal sensory loss for pain and temperature in the left limbs, and tendon reflexes that were reduced in his upper limbs and absent in his lower limbs, following an upper respiratory tract infection 3 weeks earlier. Brain magnetic resonance imaging revealed abnormal T2 signal and peripherally enhancing lesions in the hemispheres, brainstem, and cerebellum. Nerve conduction studies were compatible with acute motor and sensory axonal neuropathy. Serology revealed positive IgM and IgG antibodies for Chlamydia pneumoniae, and he also tested positive for myelin oligodendrocyte glycoprotein (MOG) and sulfatide antibodies. Treatment with intravenous immunoglobulin and methylprednisolone led to clinical and radiological recovery within weeks. Even though several cases of combined central and peripheral demyelination have been reported before, this is the first case report of seropositive anti-sulfatide and anti-MOG acute sensorimotor axonal neuropathy and disseminated encephalitis associated with C. pneumoniae.
Background
Acute disseminated encephalomyelitis (ADEM) and Guillain–Barré syndrome (GBS) are immune-mediated syndromes of the nervous system [1,2]. ADEM is a demyelinating disorder of the central nervous system (CNS) causing encephalopathy and multiple white matter lesions in the brain, brainstem, and/or spinal cord [1]. GBS is an inflammatory demyelinating disorder of the peripheral nervous system (PNS), targeting peripheral nerves and their spinal roots and causing progressive symmetrical motor weakness of more than one limb with hyporeflexia or areflexia; it presents in several variants, such as the acute axonal neuropathies [2]. Even though both entities share an acute post-infectious or post-vaccination inflammatory demyelinating pathogenesis, they represent distinct neurological disorders, and simultaneous co-occurrence of both disorders as an immune response to the same stimulus is very uncommon [3][4][5][6][7][8][9][10][11][12][13][14][15][16]. Herein, we present a case of concurrent disseminated encephalitis and acute motor and sensory axonal neuropathy (AMSAN) in an adult patient with anti-myelin oligodendrocyte glycoprotein (MOG) and anti-sulfatide antibodies after an upper respiratory infection, and enrich the existing literature regarding combined central and peripheral demyelination (CCPD) syndrome.
Case presentation
A 60-year-old man presented to the Emergency Department of our hospital with irritability, gait difficulty, and generalized muscle weakness and numbness, mostly in his left extremities, for 2 days. He was not on medications and had no medical history (non-smoker, no alcohol or psychotropic substance abuse). He only reported an admission to another hospital for 7 days due to an upper respiratory tract infection 3 weeks earlier. A brain magnetic resonance imaging (MRI) scan with gadolinium and a neurological examination had been performed during that admission because of reported headache and were provided to us. His brain MRI (Figure 1) and neurological examination were reported normal [normal orientation, mental status, muscle strength (muscle power scale 5/5 in all extremities), coordination, gait/posture, and tendon reflexes; no meningeal signs; modified Rankin Scale (mRS) 0]. No pathogen isolation was reported during his former hospitalization, and the patient was treated with 500 mg of intravenous (IV) azithromycin once daily for 5 days.
Two days before his admission to our hospital, he noticed numbness and weakness in his left limbs, gradually expanding to all extremities. On the day his symptoms began, he visited a neurologist, who suggested a new brain MRI and re-evaluation. The brain MRI was performed 2 days later and revealed abnormal T2 and fluid-attenuated inversion recovery (FLAIR) signal lesions in both hemispheres and the cerebellum, as well as peripherally enhancing lesions in the hemispheres and cerebellum on T1 sequences with gadolinium (Figure 2); thus, the patient was admitted to our hospital for further evaluation and treatment. Under the suspicion of a post-infectious syndrome involving the CNS and PNS, given his history, a serum specimen was tested for antibodies against MOG using a fixed cell-based assay (anti-aquaporin-4 antibodies were not tested) and against myelin-associated glycoprotein (MAG) and gangliosides (GM1, GM2, GM3, GM4, GD1a, GD1b, GQ1b, GD2, GT1a, GT1b, and sulfatide) using Western blot, and was found positive for anti-MOG IgG and anti-sulfatide antibodies. After treatment with intravenous immunoglobulin and methylprednisolone, the patient entered an intensive rehabilitation program for 3 months. At 3-month follow-up, the patient was re-evaluated, and remarkable clinical and radiological improvement was noted (Figure 3). His neurological examination revealed only reduced Achilles tendon reflexes and mild weakness (MRC scale 4+/5) in dorsiflexion of the left foot (mRS 1). NCS were repeated that day and showed an increase in all previously reduced amplitudes of SNAPs and CMAPs. He was also retested for serum anti-MOG and anti-sulfatide antibodies and was found negative. No further immunomodulatory therapy was provided to the patient.
Conclusion
CCPD syndrome refers to a rare neurological disorder with simultaneous occurrence of central and peripheral demyelination [3]. Brain and spinal MRI scans, CSF tests, and NCS studies are performed to confirm CCPD, while other laboratory tests are used to exclude other possible diagnoses, as performed in our patient [3,4,18][19][20][21]. As regards PNS involvement, chronic inflammatory demyelinating polyradiculoneuropathy (CIDP) has been most commonly associated with CCPD, while GBS variants [acute inflammatory demyelinating polyradiculoneuropathy (AIDP), acute motor axonal neuropathy (AMAN), AMSAN, and conduction blocks] have also been described, even though the AMAN and AMSAN variants refer to axonal damage and not demyelination [3,10,12,13]. Anti-MOG disease is an autoimmune disorder in which the immune system mistakenly attacks MOG, a protein located on the surface of myelin, the insulating layer that surrounds nerve cell axons and enhances signal conduction between them [22]. The spectrum of demyelinating disorders with IgG antibodies to MOG, known as MOGAD, includes many core clinical phenotypes, such as optic neuritis, transverse myelitis, brainstem and/or cerebellar deficits, cerebral monofocal or polyfocal deficits, ADEM, and cerebral cortical encephalitis, often with seizures [23]. Our patient reported signs, symptoms, and hospitalization for an upper respiratory tract infection 3 weeks prior to his admission to our hospital. Moreover, even though no lesions were detected in the spinal cord, the brain MRI findings showed bilateral lesions with ill-defined borders and peripheral enhancement, which typically appear in ADEM-MOG encephalomyelitis [24]. Our patient was found seropositive for anti-MOG IgG antibodies and had a clinical course compatible with ADEM, fulfilling the newly proposed international MOGAD criteria [25], while not fulfilling the diagnostic criteria for MS and NMOSD [26,27]. Sulfatide (galactosylceramide-3-O-sulfate) represents the major acidic glycosphingolipid in the central and peripheral nerve myelin sheath membrane, interacting with GalCer in the presence of Ca2+, an interaction highly dependent on the ceramide composition of both GalCer and sulfatide [28]. Antibodies against the sulfatide antigen have been reported in a variety of systemic disorders, such as diabetes, acquired immunodeficiency syndrome, idiopathic thrombocytopenic purpura, autoimmune chronic active hepatitis, and MS [29,30], and have also been linked to different forms of peripheral neuropathy [34][35][36]. Our patient had a concurrent PNS disease course with quadriparesis, distal sensory loss, and abolished osteotendinous reflexes, while repeated NCS showed ascending sensory-motor impairment, all responding to immunosuppressive treatment at his follow-up [47]. Chlamydia pneumoniae is a bacterium that causes respiratory infections in humans and represents one of the most common causes of community-acquired pneumonia. Serology is considered the preferred method for confirmatory laboratory diagnosis, and empirical antibiotic therapy with a macrolide or a fluoroquinolone is considered sufficient to cure the illness [48]. In recent years, an increasing number of publications have reported the detection of C. pneumoniae in chronic extrarespiratory diseases, such as common neurological disorders (MS, stroke, Alzheimer's disease), which according to the authors could be an irrelevant finding [49,50]. A few case reports have associated C. pneumoniae with such demyelinating disorders [51][52][53][54]. In one of these cases [54], anti-MOG antibodies were found positive in a child with acute and multiphasic disseminated encephalomyelitis and subclinical C. pneumoniae infection, and in a second one [53], anti-ganglioside GM1 antibodies were found positive in a young woman with GBS following C. pneumoniae infection. In both cases, the authors suggest the possible induction of these neurological disorders by C. pneumoniae and a rather underestimated association.
Molecular mimicry and a cross-reactive autoimmune response to myelin protein antigens are considered the most likely pathogenesis of both GBS (classic and variants) and ADEM [1,2]. However, another hypothesis proposes that an antibody-mediated post-infectious syndrome results in a continuous clinical spectrum involving both the PNS and CNS, suggesting that the responsible pathogen shares an antigen of both peripheral and central myelin, as in Fisher-Bickerstaff syndrome with anti-GQ1b antibodies [55]. Furthermore, a possible common pathogenic mechanism, suggesting that the immune response against a component of CNS myelin may show cross-antigenicity with the peripheral system, with a significant increase in activated and helper-inducer T-cells in both GBS variants and ADEM, has been proposed by other authors [11]. Even though sulfatide antibodies are often associated with concomitant reactivity to MAG, and the rare selective reactivities to sulfatides are associated with different forms of neuropathy, no strong association with MOG cross-antigenicity is well founded [56,57]. Nonetheless, our hypothesis was that C. pneumoniae infection induced the concurrent AMSAN and ADEM in our case, acting as a common antigenic target for antibody production. Since C. pneumoniae infection can be associated with anti-MOG-positive ADEM and anti-ganglioside-positive axonal GBS variants, as discussed above, it is quite possible that it could trigger simultaneous anti-sulfatide AMSAN and anti-MOG ADEM, as seen in our patient, but the exact mechanism remains unknown.
Herein, we present a rare case of concurrent AMSAN and ADEM in an adult patient with a favorable outcome after immunosuppressive therapy. Even though several cases of CCPD have been reported before, to our knowledge this is the first case report of concurrent AMSAN and ADEM with both anti-sulfatide and anti-MOG antibodies associated with C. pneumoniae. Further research needs to be carried out to clarify the pathogenesis and possible correlation of these entities.
Figure 1.
Figure 1. Axial brain magnetic resonance imaging, T2 FLAIR sequences, 2 weeks prior to the patient's admission to our hospital, showing no abnormal findings. FLAIR, fluid-attenuated inversion recovery.
Figure 2.
Figure 2. Axial brain magnetic resonance imaging on the day of the patient's admission to our hospital. (a) T2 FLAIR sequence showing abnormal T2 signal lesions in the hemispheres and cerebellum. (b) T1 + gadolinium sequences showing peripherally enhancing lesions in the hemispheres and cerebellum. FLAIR, fluid-attenuated inversion recovery.
Figure 3. Axial brain magnetic resonance imaging, T1 + gadolinium sequences, 3 months after the patient's discharge from our hospital, showing radiological improvement.
Table 1. Nerve conduction studies performed 2 weeks after the patient's admission. | 2024-03-18T15:16:15.944Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "2a735f7752bea3c4535deed5740eadfe72fe93cb",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fb5d26b16a4ff980aee25d5c3bb2cc119fc94a4e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10362879 | pes2o/s2orc | v3-fos-license | Effect of a daily dose of Lactobacillus brevis CD2 lozenges in high caries risk schoolchildren
Objectives A double-blind, randomised, placebo-controlled clinical trial was performed to validate the hypothesis that the use of lozenges containing Lactobacillus brevis CD2 (Inersan®, CD Investments srl) may reduce plaque pH, salivary mutans streptococci (ms) and bleeding on probing, during a 6-week period, in a sample of high caries risk schoolchildren. Methods A total of 191 children (aged 6–8 years), presenting two to three carious lesions and a salivary ms concentration of ≥10^5 CFU/ml, were enrolled and divided into two groups, an L. brevis CD2 lozenge group and a no L. brevis lozenge group, and examined at baseline (t0), after 3 weeks (t1), after 6 weeks of lozenge use (t2) and 2 weeks after the cessation of lozenge use (t3). Plaque pH was assessed using the microtouch technique following a sucrose challenge. The area under the curve (AUC5.7 and AUC6.2) was recorded. Salivary ms were counted, and bleeding on probing was assessed. Results At t0, the plaque-pH and ms concentration values were similar in both groups. Mean areas (AUC5.7 and AUC6.2) were significantly greater in the control group at t1, t2 and t3. L. brevis CD2 lozenges significantly reduced salivary ms concentrations and bleeding. The subjects from the test group showed a statistically significant decrease (p = 0.01) in salivary ms concentration. At t2, a statistically significantly lower bleeding value was recorded in the test group compared with the control group (p = 0.02). Conclusions Six weeks’ use of lozenges containing L. brevis CD2 had a beneficial effect on some important variables related to oral health, including a reduction in plaque acidogenicity, salivary ms and bleeding on probing. (Trial Registration Number NCT01601145 08/21/2012)
Introduction
Dental caries results from the interaction over time between cariogenic microflora, a diet rich in fermentable carbohydrates and host factors, including saliva secretion rate and buffering capacity [1]. Caries is still one of the most common diseases among children, although a declining prevalence trend has been recorded in western countries [2][3][4][5][6][7]. Preventive programmes to control caries risk factors, focusing on dietary modification and enhancing host resistance through the use of fluoride and sealants, are recommended [8]. However, complete eradication of caries-associated micro-organisms has proved difficult, if not impossible, and would arguably be unwise [9].
Recent studies have shed new light on the potential of probiotics for the prevention of oral diseases, i.e. for counteracting the shift in the microbial population towards a pathogen-associated community that is central to the development of the major oral diseases (caries and periodontal disease). Preliminary data on probiotics obtained by various research groups [10,11] have been encouraging, but further randomised clinical studies are still required to clearly establish their potential for preventing and treating oral infections. These studies will enable the identification of the probiotics that are best suited for oral use, as well as the most appropriate vehicles for administration: food products (cheese, milk, yogurt) or supplements (chewing gum, lozenges).
The main goal for the use of probiotics in caries prevention is the inhibition of the proliferation of cariogenic bacteria (mainly Streptococcus mutans) and the reduction of their adherence to the tooth surfaces. In vitro studies [12,13] have demonstrated the capacity of probiotics to inhibit Streptococcus sobrinus. The inhibition of S. mutans growth and biofilm formation by several probiotic strains, such as Lactobacillus plantarum DSM 9843, Lactobacillus reuteri PTA 5289 and L. reuteri ATCC 55730, has also been reported [11,14,15]. A significant decrease in salivary S. mutans concentration was observed following the consumption of probiotic ice cream containing Bifidobacterium lactis Bb-12 or chewing gum containing L. reuteri [10,11].
The hypothesis of this study is that the use of lozenges containing Lactobacillus brevis CD2 may reduce plaque pH, salivary mutans streptococci (ms) concentration and bleeding on probing in a sample of high caries risk schoolchildren. To validate this hypothesis, a randomised clinical trial was designed and performed.
Study design and registration of the study
The study was designed as a randomised clinical trial, approved by the Ethics Committee at the University of Sassari, Italy (no. 108SS/2012), and registered at www.clinicaltrials.gov (NCT01601145). The study was carried out in Sassari, where the total number of children aged 6-8 years in 2011 was 3,258. Power analysis was performed with G*Power 3.1.3 for Apple computers using repeated-measures ANOVA; taking account of a disease prevalence of 0.51, an effect size of 0.06 and an error probability of 0.05, the number of subjects in each group was set at 78, with an actual power of 0.95. The sample size was calculated on the basis of previous studies of caries prevalence [3,4]. It was increased by 15 % to safeguard the estimates at an optimal level of precision (5 %) against the possible effect of disease reduction compared with previous studies and the number of non-responders. The total theoretical sample size was set at 180.
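The sample-size step can be approximated in code. The following is a minimal sketch, assuming Python with statsmodels as a stand-in for G*Power (the tool actually used); statsmodels' FTestAnovaPower models a one-way between-groups ANOVA, whereas G*Power's repeated-measures design further reduces the required n according to the number of repeated measurements and their correlation (not reported above), which is why the published figure of 78 per group is much smaller than this between-groups estimate.

# Minimal sketch of the power calculation, assuming statsmodels is available.
# This is a between-groups approximation, not the repeated-measures design
# actually computed in G*Power 3.1.3.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.06,  # Cohen's f, as reported in the study
    alpha=0.05,        # type I error probability
    power=0.95,        # target power
    k_groups=2,        # test vs. control lozenges
)
print(f"total n, between-groups approximation: {n_total:.0f}")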
Preliminary screening
Screening was carried out from November 2011 to January 2012 to select children who presented two to three manifest carious lesions in the permanent and/or primary dentition and a salivary mutans streptococci (ms) concentration of ≥10^5 CFU/ml. Carious lesions were diagnosed when there was a cavity at dentinal level (D3). Subjects with a history of systemic antibiotic, topical fluoride (except for toothpaste) or chlorhexidine treatment within 30 days before baseline were excluded.
Schoolchildren were recruited using systematic cluster sampling; each class was identified as a cluster and compiled into a list. The first cluster was randomly chosen, while the others were selected at the systematic interval of three classes. In all, 564 children were recruited for the preliminary screening.
An information leaflet, explaining the aim of the study and requesting their child's participation with signed consent, was given to parents or guardians. Only children with parents' signed consent were called for examination (534 subjects). The clinical examination and the saliva sampling were performed during the school day; 526 children showed up at the time scheduled for the examination, 208 met the inclusion criteria and were enrolled in the study. The flow chart of the study design is shown in Fig. 1.
A second leaflet explaining the aim of the clinical trial and requiring the child's participation was mailed to all parents/guardians of the 208 children. The investigation had a randomised, placebo-controlled study design, including a plaque acidogenicity evaluation, a microbiological evaluation and bleeding on probing, with an experimental period of 8 weeks.
The clinical trial was carried out from January to July 2012. The study design included a first examination (saliva sample, plaque-pH evaluation and periodontal probing) at baseline (t0), a second after 3 weeks (t1), a third after 6 weeks (t2) of lozenge use and a final examination 2 weeks after the end of lozenge use (t3; Fig. 1). One week before the start of the experiment, all the subjects began to use a 1,400-μg/g AmF toothpaste (Gaba-Colgate, Rome) for daily oral hygiene. A soft toothbrush was likewise provided, and they were asked to avoid any other oral hygiene adjuvant.
Moreover, the children received a patient diary, which informed them that the use of any fluoride-containing oral hygiene products (other than the toothpaste handed to the participants), any fluoride-containing mineral water or black tea, fish meals and commercial probiotic products was not allowed during the study.
Two groups of children were created: (1) a test group, using non-sucrose lozenges containing L. brevis CD2, and (2) a control group, using non-sucrose lozenges with no active ingredient. Randomisation was carried out on an individual basis by GC and FC using Excel® 2010 for Mac. Ten subjects were absent at the start of the experiment, so the final study sample was 198. At the t1 interim evaluation, five children were excluded as they did not return the empty lozenge bottle (three from the test group and two from the control group); at t2, eight more children were excluded, three from the test group and five from the control group (four had received systemic antibiotic therapy and four did not return the empty lozenge bottle); and, at t3, four children refused to complete the experiment (two from the test group and two from the control group). As a result, 181 children concluded the 8-week experimental period: 91 in the test group and 90 in the control group.
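For illustration, the individual-level 1:1 allocation described above can be expressed in a few lines of Python (the study itself used Excel 2010 for Mac; the subject IDs and random seed below are hypothetical):

# Minimal sketch of 1:1 individual randomisation; IDs and seed are illustrative.
import random

random.seed(2012)                                # fixed seed for a reproducible list
subjects = [f"S{i:03d}" for i in range(1, 199)]  # the 198 children at baseline
random.shuffle(subjects)
half = len(subjects) // 2
allocation = {sid: "test" for sid in subjects[:half]}
allocation.update({sid: "control" for sid in subjects[half:]})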
Treatment and sample collection
The L. brevis CD2 lozenges (Inersan®, CD Investments srl, Rome, Italy) contain 2,000,000,000 colonies of L. brevis CD2, sweeteners (mannitol, aspartame, fructose), anti-caking agents (talc, silicon dioxide, magnesium stearate) and banana flavouring. The lozenges for the control group contained exactly the same ingredients, except for the L. brevis CD2. The two lozenges were identical in weight (1 g), form, colour and packing. They were produced and supplied by CD Investments Srl (Rome, Italy) and coded as either "green" or "red". The code was sealed by an independent monitor and was not broken until the statistical analysis was finalised. Each subject took two lozenges a day, one in the morning and one in the evening, for the whole experimental period.
The parents/guardians were asked to make no changes to the dietary and oral hygiene habits of their children. Toothbrushing was not allowed for at least 1 h after the use of the lozenges.
In order to evaluate the administration of the products at school and home, teachers and parents were given lozenges necessary for 2 weeks at a time and were asked to return the empty bottles when receiving those necessary for the following period. This procedure was repeated throughout the whole experimental period. The compliance and any observed side effects of the administration of the products were assessed by means of a questionnaire administered to the participants' parents at t2.
Plaque-pH evaluation
The children refrained from eating/drinking 1 h before plaque-pH evaluation. No toothbrushing or other tooth-cleaning methods were allowed on the morning of the measurement day. Plaque acidogenicity was assessed using the microtouch technique after a sucrose challenge. Evaluations of pH were carried out at two proximal sites (between the deciduous molars) on the left and right sides of the upper jaw. The pH of each site was measured in triplicate at six different time points: before the sucrose rinse and 2, 5, 10, 15 and 20 min after a 1-min rinse with 10 ml of 10 % sucrose, using active movements. An iridium touch microelectrode, diameter 0.1 mm (Beetrode NMPH-1, World Precision Instruments, Sarasota, FL, USA) [16], with a porous glass reference electrode, was used (MERE 1, WPI, Sarasota, FL, USA). Before each session of pH evaluation, the electrode was calibrated using buffer solution at pH 7.00 and 4.00 [17].
Microbiological analyses
Non-stimulated whole saliva was collected for 150 s in sterile vials (Nunc, Kamstrup, Denmark). The samples were processed within 45 min of collection at the Department of Microbiology (University of Sassari). The samples were serially diluted in sterile PBS (Sigma Chemicals, St. Louis, MO, USA). Aliquots of 5 μl were inoculated on Mitis-Salivarius Bacitracin Agar [18] for the evaluation of ms. The plates were incubated in a 5 % CO2 environment at 37 °C for 72 h, and the colony forming units (CFU) were identified by morphology, size and colour and counted under a stereomicroscope; the ms concentration in saliva was expressed as log10 CFU per milliliter.
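For concreteness, the conversion from plate counts to the reported log10 CFU/ml can be sketched as follows; this is a minimal Python example, and the colony count and dilution are illustrative values, not study data:

# CFU/ml = colonies / (volume plated in ml * dilution of the plated aliquot)
import math

def cfu_per_ml(colonies, dilution, plated_volume_ml=0.005):  # 5-ul aliquots
    return colonies / (plated_volume_ml * dilution)

count = cfu_per_ml(colonies=42, dilution=1e-4)  # e.g. 42 colonies on the 10^-4 plate
print(f"{count:.2e} CFU/ml -> log10 = {math.log10(count):.1f}")  # ~8.4e7 -> 7.9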
Bleeding on probing
Bleeding on probing was recorded dichotomously as bleeding or not, 30 s after the gentle manipulation of the tissue at the depth of the gingival sulcus by a probe. Bleeding on probing was checked at t0, t2 and t3 by one calibrated examiner (GC) before the plaque-pH evaluation. Intraexaminer reliability was assessed before the start of the study on 20 subjects. Cohen's kappa value for the bleeding score was 0.81. Examinations were carried out under standardised conditions, using optimal artificial lighting, a plain mirror and a WHO-CPI probe.
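The intra-examiner reliability statistic can be reproduced in a few lines. This is a minimal sketch, assuming Python with scikit-learn; the two scoring vectors are hypothetical stand-ins for the examiner's repeated dichotomous readings of the same sites:

# Cohen's kappa between two repeated bleeding-on-probing scorings (1 = bleeding).
from sklearn.metrics import cohen_kappa_score

first_pass  = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
second_pass = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]
print(f"kappa = {cohen_kappa_score(first_pass, second_pass):.2f}")  # study reports 0.81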
Statistical methods
The mean pH of the pH readings for each individual site was calculated. The mean for the two sites at the different time points was calculated, as well as minimum pH and maximum pH fall. The area between reference pH (AUC5.7 and AUC6.2) and the pH curve was calculated using "Plaque-pH" software [19]. The salivary ms concentrations were transformed to log10 values to normalise the data, and the mean and standard error (SE) were calculated for each group and time point. Data were analysed for statistically significant differences using one-way and two-way repeated-measures analysis of variance (ANOVA). At t2, the subjects were divided according to salivary ms concentration (<10^5 and ≥10^5), and the areas under the plaque-pH curve (AUC5.7 and AUC6.2) for the two groups were compared in order to evaluate the possible relationship between salivary ms concentration and plaque pH. The difference between t0 and t2 in the AUC and in the salivary concentration of ms was also calculated for each subject, and a linear regression analysis was computed.
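The AUC statistic itself is straightforward to compute. Below is a minimal Python sketch of the calculation performed by the "Plaque-pH" software, assuming trapezoidal integration of the pH deficit below the reference line; the pH readings are illustrative, not study data:

# Area between a reference pH (5.7 or 6.2) and the pH-time curve, counted only
# where the curve dips below the reference.
import numpy as np

def auc_below(times_min, ph_values, reference):
    deficit = np.clip(reference - np.asarray(ph_values, float), 0.0, None)
    return np.trapz(deficit, np.asarray(times_min, float))

times = [0, 2, 5, 10, 15, 20]           # minutes relative to the sucrose rinse
ph    = [6.8, 5.4, 5.2, 5.6, 6.0, 6.4]  # hypothetical mean plaque pH
print(f"AUC5.7 = {auc_below(times, ph, 5.7):.2f}")  # 3.25 pH*min
print(f"AUC6.2 = {auc_below(times, ph, 6.2):.2f}")  # 10.00 pH*min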
The number of bleeding sites was added together, and the result was expressed as a percentage of the total number of surfaces. Adjustment of type I error due to multiple testing is not considered necessary due to the a priori ordering of the hypotheses. Hypothesis testing has to be stopped if a null hypothesis cannot be rejected at the 5 % error level α. This procedure maintains a constant global level of α=5 %. All the analyses were carried out using Stata SE software 10.0. A p value of <0.05 was considered statistically significant.
Results
No adverse effects were reported by the children in either group. A total of 181 children completed the experimental period (Fig. 1): 91 subjects in the test group and 90 in the control group. There were 16 drop-out subjects (8.1 %) during the 2-month period. The mean number of D3 lesions in all dentition (deciduous/permanent) was 2.61 in the test group and 2.54 in the control group (p=0.36). At t0, plaque pH and the salivary ms concentration were similar in both groups, with no statistically significant differences for the two variables observed (p=0.18 and p=0.48, respectively). Table 1 shows the mean±SE for AUC5.7 and AUC6.2 during the trial. At baseline, the values for AUC5.7 and AUC6.2 were similar in the two groups (p=0.60 and p=0.15, respectively). Mean areas (AUC5.7 and AUC6.2) were statistically significantly different at the following time points (t1, t2 and t3), with the largest difference at t2. Within each group, the differences across time intervals were also statistically significant (p<0.01 in the L. brevis CD2 group and p=0.03 in the control group for AUC6.2). The minimum pH showed small variations among the four time points for the test group, whereas in the control group, a decrease was detected between t0 and t2; p values were statistically significant for the t1 and t2 inter-group comparisons. The inter-group comparison of the maximum pH fall was statistically significant at t1, t2 and t3 (p=0.01, p<0.01 and p=0.04, respectively; data not shown).
Similar results were noted for the salivary ms concentration (Table 2). The children from the test group showed a statistically significant decrease (p=0.01) in the salivary ms concentration (5.4 at t0, 4.9 at t2). In children from the control group, the salivary ms concentration remained at the same level during the whole experimental period. When subjects were stratified by salivary ms concentration at t2 (Table 3), the areas under the curve were statistically significantly smaller in the test group than in the control group (p=0.01 for AUC5.7 and p=0.03 for AUC6.2) at both ms concentrations (>10^5 and ≤10^5).
The bleeding score at t0 was similar in the two groups (32.8 % in the control group and 33.4 % in the test group), while, at t2, a statistically significantly lower value was recorded in the test group compared with the control group (p=0.02). At t3, the difference in bleeding score was still statistically significant (p=0.03), with a value of 24.4 [95 % confidence interval (CI)=19.0-30.1] in the test group and 28.6 (95 % CI, 23.5-33.8) in the control group (Fig. 2).
Discussion
The effect of L. brevis CD2, administered through lozenges twice daily, on plaque acidogenicity, ms concentration and gingival bleeding in a group of high caries risk children was evaluated. The main finding from this randomised clinical trial is that L. brevis CD2 lozenges were effective in reducing plaque acidogenicity and salivary ms concentrations. A reduction in the bleeding score was also observed in the test group. It has been documented that probiotics can act locally in the oral cavity through the biofilm and systemically by modifying the immune response [20].
A significant reduction in plaque acidogenicity was found after 6 weeks' use of the probiotic lozenges. The short-term consumption (2 weeks) of probiotic lactobacilli has not been found to influence plaque acidogenicity or ms levels in plaque [15]. One interesting observation was that the effect on plaque acidogenicity appeared to be reduced during the last 2 weeks of the experimental period, when no product was used. This indicates the importance of the continuous administration of probiotic products in order to achieve a lasting effect on different oral variables, as has previously been suggested [12]. Probiotic bacteria can survive and grow in the oral environment, although permanent colonisation is unlikely in an established oral biofilm [21,22].

Table 2. Concentration of mutans streptococci (log10 CFU/ml saliva; mean±SE) at t0, t1, t2 and t3 in the two groups of children using lozenges with L. brevis CD2 (test group) and lozenges without probiotic bacteria (control group).

Previous studies have suggested that the intake of various strains of the lactobacilli species may reduce the counts of salivary mutans streptococci in children, even if only for the observational period [10,23,24]. In the present study, a statistically significant reduction in the salivary ms counts was observed. One explanatory hypothesis is linked to the release of arginine deiminase by probiotics in general [25] and by L. brevis CD2 in particular [26]. Arginine deiminase is an enzyme that is normally confined to the prokaryotic kingdom [27]. This enzyme catalyses the conversion of arginine to citrulline and ammonia; as many bacteria can use arginine as their only source of energy for growth, an arginine deficiency can reduce bacterial proliferation [28].
The change in the concentration of ms in saliva may also reflect a change in the plaque microflora. Some other results, however, require further investigation. For instance, the very low maximum pH at t1 for the test group is not logical, and the same holds true for the 10-20 min interval at t3 in the test group, which is why the AUC increases. For these reasons, the "minimum pH" and "maximum pH fall" values are not entirely consistent when comparing the test and control groups. One hypothesis could be that subjects in the test group respond differently to L. brevis CD2. High variability in the response to different probiotics has been described and, in a recent review, it was concluded that the same strain does not appear to be ideal for everyone [20].
A significant reduction in gingival bleeding was found after 6 weeks' use of the probiotic lozenges and 2 weeks after the cessation of use. The anti-inflammatory effects of L. brevis CD2 administered through lozenges in a group of patients with chronic periodontitis were studied [29]. One possible explanation for the beneficial anti-inflammatory effects of L. brevis CD2 could be its capacity to prevent the production of nitric oxide and, consequently, the release of PGE2 and the activation of MMPs induced by the nitric oxide.
To the authors' knowledge, this study is the first randomised clinical trial of the efficacy of probiotics in relation to plaque acidogenicity. Furthermore, no previous clinical trials have used L. brevis CD2 as the probiotic strain. The findings are novel and interesting, as lozenge administration was well accepted by the children and the treatment proved to be effective and simple. Despite these positive aspects, it is important to underline that these results were obtained over a relatively short period and should probably be regarded as temporary benefits. Further investigations are needed to evaluate the long-term effect on caries-related variables, as well as on caries incidence, which was not an outcome of this study. It is well known from the literature [30] that long-term administration of probiotics is needed to gain health benefits. A strength of the present study is that gingival health was also monitored, which is an important feature of children's oral health.
In conclusion, the 6-week administration of probiotic bacteria (L. brevis CD2) through lozenges is able to reduce some important variables related to oral health. A reduction of plaque pH, salivary mutans streptococci concentration and bleeding on probing in a sample of high caries risk schoolchildren can contribute to reducing caries risk and improving gingival health. This study provides evidence in favour of the potential use of L. brevis CD2 as a new functional food.

Figure 2. Bleeding score (percentage) recorded at t0, t2 and t3 in the two groups of children using lozenges with L. brevis CD2 (test group) and lozenges without probiotic bacteria (control group); one-way ANOVA.

Table 3. Areas under the curve (AUC5.7 and AUC6.2) in relation to salivary ms concentration (≤10^5 and >10^5) at t2 for the two groups of children using lozenges with L. brevis CD2 (test group) and lozenges without probiotic bacteria (control group); values are mean±SD (n), compared by one-way ANOVA. | 2016-05-12T22:15:10.714Z | 2013-05-05T00:00:00.000 | {
"year": 2013,
"sha1": "ccdc6ef1e33524b13d53033d83673b9a44c1afad",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00784-013-0980-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "82a583818584916e13707bf5e7dd5953ac2fddfb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
246750753 | pes2o/s2orc | v3-fos-license | Jasmonic acid biosynthetic genes TgLOX4 and TgLOX5 are involved in daughter bulb development in tulip (Tulipa gesneriana)
Abstract Tulip bulbs are modified underground stems that originate from axillary meristems of mother bulb scales. Hormones, including jasmonic acids (JAs), play key roles in the regulation of tulip bulb development. Here, we compared variations in daughter bulb development through transcriptomic profiling analysis and characterized the functions of JA biosynthesis-related genes during daughter bulb enlargement. The results showed that tulip cultivars exhibited contrasting bulb size variations. Transcriptomic analyses revealed that genes involved in plant hormones and development, including the two lipoxygenase genes TgLOX4 and TgLOX5, showed significant changes in expression following tulip bulb growth. Ectopic overexpression of TgLOX4 and TgLOX5 in Arabidopsis enhanced endogenous JA content, improved plant growth, and increased lateral root numbers. Silencing of these two genes in tulip repressed the growth of daughter bulbs. Furthermore, exogenous JA treatment promoted tulip bulb growth, whereas the JA biosynthesis inhibitor sodium diethyldithiocarbamate (DIECA) inhibited this process. This study offers supporting evidence for the involvement of tulip TgLOX4 and TgLOX5 in the regulation of daughter bulb growth and development.
Introduction
Tulip (Tulipa gesneriana L.) is an ornamental bulbous plant that is widely used for landscaping and cut flowers [1,2]. The Tulipa genus is distributed in the Mediterranean, Central Asia, Europe and Northern Africa [3][4][5]. Tulips have a long juvenile phase of up to 3-7 years [6]. Therefore, tulips are mainly propagated vegetatively through bulb proliferation. Seed propagation is used only for the breeding of new cultivars because of the long adult vegetative phase and high heterozygosity [7]. Tulip bulbs are modified underground stems that consist of a brown, dry tunic outside, several layers of modified leaves called scales, and an abnormally short stem called the basal plate. In tulip, floral meristem initiation and differentiation occur inside the expanded bulbs during the summer season [8,9]. A mature flowering tulip bulb, referred to as a mother bulb, contains one apical meristem and six axillary meristems [8,10]. The aboveground stems, leaves, and floral organs of tulip plants are developed from the apical meristem, and the axillary meristems expand as bulblets (daughter bulbs) [11].
Plant hormones are key regulators of plant growth and development [12,13]. Jasmonates, represented by jasmonic acid (JA) and its volatile methyl ester (methyl jasmonate, MeJA), are pivotal plant growth regulators that control plant stress responses, flowering, and development [13,14]. The first isolated JA compound was MeJA, which was initially identified as an odorant in Jasminum grandiflorum flowers [15]. The signaling perception pathway of JA has been well characterized. Binding of JA to the F-box protein CORONATINE INSENSITIVE1 (COI1) leads to the degradation of JASMONATE ZIM (JAZ) and the activation of the basic-helix-loop-helix (bHLH) transcription factor MYC2. The biosynthetic pathway of JA from α-linolenic acid through the octadecanoid pathway was established in the 1980s. α-Linolenic acid is subsequently converted into the intermediate 12-oxophytodienoic acid (OPDA) by a lipoxygenase (LOX), an allene oxide synthase (AOS), and an allene oxide cyclase (AOC) [16,17].
LOXs are widely distributed in plants, animals, yeast, fungi, corals, algae, and mushrooms [18]. Plant LOXs are classified into two major subfamilies, 9-LOXs and 13-LOXs. 9-LOXs are predominantly involved in plant defense responses against various pathogens, whereas 13-LOXs play key roles in the biosynthesis of JA and volatiles [19]. Arabidopsis 13-LOXs, especially LOX2, LOX3, LOX4, and LOX6, can produce JA precursors in leaves [20,21]. In tomato, 14 LOX gene family members were identified and exhibited differential associations with growth, development, and fruit ripening [22]. In potato, JA has been shown to be associated with the induction of radial cell expansion in tubers and tuber buds [23,24]. The potato LOX1 gene is highly expressed in newly formed tubers. Suppression of LOX1 class activity resulted in reduced tuber yields and disruption of normal tuber morphology [19]. Legume LOX mRNAs and proteins were detected in nodules, mainly at the developing stage, but their expression and activity levels decreased in fully grown nodules [25]. All these results indicate that the modulation of LOX genes and changes in JA content contribute to the promotion of plant growth and development.
In tulip, MeJA treatment increased fatty acid and sterol concentrations in stems [26]. Exogenous application of polyamines (PAs) and MeJA significantly improved tulip bulb formation [27]. To date, the mechanism by which JAs regulate tulip bulb growth remains to be investigated, and the functions of JA pathway-related genes involved in this process are elusive. In this study, we aimed to determine the effects of exogenous JAs and a JA biosynthesis inhibitor on tulip bulb formation and swelling. The functions of tulip LOX genes in the JA pathway were also dissected through ectopic expression in Arabidopsis plants and repressed expression in tulip by virus-induced gene silencing (VIGS). The results provide new clues for understanding the mechanisms of tulip bulb development.
Variation in daughter bulb size among tulip cultivars
In this study, natural variations in tulip bulb size were first investigated. The daughter bulb perimeters of 66 tulip cultivars varied from 6.7 cm to 13.4 cm (Fig. S1). The perimeters of daughter bulbs of the majority of cultivars were 8.0-11.0 cm when plants were cultivated in Wuhan, China (113°41′-115°05′ E, 29°58′-31°22′ N) (Fig. 1a). Based on this preliminary investigation, the two tulip cultivars "Ad Rem" and "Red Power", with red flowers and contrasting daughter bulb sizes, were selected for further study (Fig. S1). We observed that there was significant enlargement of daughter bulbs from stage 1 to stage 2 in both cultivars. "Ad Rem" exhibited slightly smaller daughter bulbs compared with "Red Power" at stage 1 before planting, but "Ad Rem" bulbs were significantly larger than those of "Red Power" at stage 4 (Fig. 1b, c). Consequently, the fresh weight of "Ad Rem" bulbs was significantly higher than that of "Red Power" bulbs from S2 to S4 (Fig. 1d).

Figure 2. (a) Differentially expressed unigenes in S2_vs_S1, S3_vs_S1 and S4_vs_S1 for the two cultivars; the original gene expression data are provided in Table S2. (b,c) Overlapping analysis of changed genes at four developmental stages in the two cultivars. (d,e) Overlapped genes between the two tulip cultivars in S2_vs_S1, S3_vs_S1, and S4_vs_S1. S1, bulblets inside the mother bulbs after dormancy release in late January; S2, bulblets from tulip plants with green buds 4-5 cm in length in early March; S3, bulbs one week after full bloom in middle ("Red Power") and late ("Ad Rem") March; S4, bulbs from plants that were senescent in April.
Transcriptomic changes during tulip daughter bulb development
Daughter bulbs of the two tulip cultivars at four developmental stages were collected for RNA sequencing analysis. In cultivar "Ad Rem", a total of 5845, 7728, and 11 105 unigenes showed significant expression changes in S2_vs_S1, S3_vs_S1, and S4_vs_S1, respectively. In cultivar "Red Power", the numbers of changed unigenes were 8815, 8957, and 16 786 for S2_vs_S1, S3_vs_S1, and S4_vs_S1, respectively (Fig. 2a; Table S1). Overlapping analysis showed that 2416 (1218 upregulated and 1198 downregulated) and 3752 (1846 upregulated and 1906 downregulated) unigenes were commonly regulated in three stages in "Ad Rem" and "Red Power", respectively (Fig. 2b, c). In total, 23.9%, 20.8%, and 23.4% of unigenes were co-regulated in the two cultivars in S2_vs_S1, S3_vs_S1, and S4_vs_S1, respectively (Fig. 2d-f). GO term enrichment analysis indicated that GO terms including regulation of cell proliferation, meiotic cell cycle, cell wall, macromolecule, and metabolic process were over-represented only in "Ad Rem" (Fig. S2a). Other GO terms related to development (cell cycle, cell proliferation, cell division, cell growth, cell differentiation, etc.) and hormone pathways (hormone transport, regulation of hormone levels, and response to hormone stimulus) were enriched in both tulip cultivars (Fig. S2b). These results indicated that hormone- and development-related pathways were extensively changed during tulip bulb growth.
Pathway enrichment analysis was performed using MapMan software. The results indicated that twelve pathways were overrepresented in S2 to S4 relative to S1 in the two tulip cultivars: fermentation, major CHO metabolism, biodegradation of xenobiotics, amino acid metabolism, minor CHO metabolism, TCA/org transformation, cell wall, transport, secondary metabolism, lipid metabolism, cell, and hormone metabolism (Table S2). Another eight pathways were enriched in most of the four developmental stages in both cultivars: gluconeogenesis/glyoxylate cycle, oxidative phosphorylation, glycolysis, polyamine metabolism, N-metabolism, nucleotide metabolism, redox, and C1-metabolism (Table S2).
Changes in the JA pathway during bulb development
Transcriptomic data showed that 142 unigenes involved in the JA pathway exhibited significant changes in expression level following tulip bulb growth in both cultivars, including 100 unigenes encoding JA biosynthesis enzymes, 6 unigenes encoding JA co-receptors, 24 unigenes encoding JA signaling activators, 7 unigenes encoding JA signaling repressors, and 5 unigenes involved in JA catabolism pathways (Fig. 3a; Table S3). The results revealed that the majority of unigenes encoding JA co-receptors, JA signaling activators, and JA signaling repressors exhibited similar expression changes in the two tulip cultivars (Fig. 3a). Among the unigenes involved in the JA biosynthesis pathway, 52 encoding LOX4 and 34 encoding LOX5 showed expression changes in "Ad Rem" and "Red Power" (Table S3). RNA sequencing results showed that TgLOX4 (F01.PB33674) and TgLOX5 (F01.PB63464) had higher FPKM values in "Ad Rem" than in "Red Power". We then verified the expression level changes in TgLOX4 and TgLOX5 by real-time qRT-PCR. The results showed that both genes had significantly higher expression in "Ad Rem" than in "Red Power" at the four developmental stages, except for TgLOX4 at the S4 stage (Fig. 3b-e).

Figure 3 (caption fragment). The original data are provided in Table S3. Letters indicate statistically significant differences determined by Duncan's multiple range test at the P ≤ 0.05 level. S1, bulblets inside the mother bulbs after dormancy release in late January; S2, bulblets from tulip plants with green buds 4-5 cm in length in early March; S3, bulbs one week after full bloom in middle ("Red Power") and late ("Ad Rem") March; S4, bulbs from plants that were senescent in April. The qPCR primer sequences for TgLOX4 and TgLOX5 are provided in Fig. S3B and S4B, respectively.
Contrasting expression level changes in TgLOX4 and TgLOX5 prompted us to investigate the JA contents of the two tulip cultivars. The results indicated that JA contents increased from S1 to S2 in both cultivars and then decreased from S2 to S4 (Fig. 3f). Although JA content was higher in "Red Power" than in "Ad Rem" at S2, it showed the opposite trend at the other three stages, with 4.9-fold higher JA content in "Ad Rem" than in "Red Power" at S4 (Fig. 3f). There was a continuous increase in MeJA content in "Ad Rem" from S1 to S4, but a decreasing pattern was observed in "Red Power" over the same period (Fig. 3g). These results indicated that the two tulip cultivars exhibited significant changes in JA biosynthesis-related genes and JA accumulation following daughter bulb enlargement.
Ectopic overexpression of TgLOX4 and TgLOX5 promoted lateral root growth in Arabidopsis
TgLOX4 and TgLOX5 were then cloned from "Ad Rem" and "Red Power". The TgLOX4 and TgLOX5 sequences were deposited at NCBI GenBank under accession numbers MW582299, MW582300, MW582301 and MW582302. TgLOX4 from "Ad Rem" had 2595 nucleotides and encoded 864 amino acids. TgLOX4 from "Red Power" had 2583 nucleotides and encoded 860 amino acids (Fig. S3). TgLOX5 from "Ad Rem" had 2574 nucleotides and encoded 857 amino acids. TgLOX5 from "Red Power" had 2598 nucleotides and encoded 865 amino acids (Fig. S4). Amino acid sequence alignment revealed that both TgLOX4 and TgLOX5 showed high similarity between the two tulip cultivars (Fig. S3, S4). Phylogenetic tree analysis showed that TgLOX4 had high homology with the LOX4 genes from Cocos nucifera, Elaeis guineensis, and Musa balbisiana, and TgLOX5 was highly homologous to LOX5 genes from Phoenix dactylifera, C. nucifera, and E. guineensis (Fig. S5). Tissue-specific expression analysis showed that TgLOX4 was highly expressed in roots and bulb scales, whereas TgLOX5 was highly expressed in leaves, roots, and bulb scales (Fig. S6).
Arabidopsis lines with ectopic expression of TgLOX4 and TgLOX5 were generated. The expression levels of the TgLOX4 and TgLOX5 transgenes were detected in the transgenic Arabidopsis lines by qRT-PCR (Fig. S7). Our results showed that there were no significant growth differences between transgenic lines overexpressing TgLOX4 genes from "Ad Rem" or "Red Power" (Fig. S8). Similar phenotypes were obtained from "Ad Rem" and "Red Power" TgLOX5 overexpression lines (Fig. S8). Therefore, we selected the transgenic lines expressing TgLOX4 and TgLOX5 from "Ad Rem" for further analysis.
Interestingly, we observed that 35::TgLOX4 and 35::TgLOX5 transgenic plants displayed significantly more lateral roots compared with the wild type (WT) (Fig. 4a-d). However, there were no significant differences in primary root length between the WT and TgLOX4 or TgLOX5 transgenic lines (Fig. 4e,f). We then detected JA content in the TgLOX4 and TgLOX5 transgenic plants. The results showed that both TgLOX4 and TgLOX5 transgenic lines had significantly higher JA content than the WT (Fig. 4g,h). Expression of the JA co-receptor CORONATINE INSENSITIVE1 (AtCOI1) increased 2.6- to 42-fold in transgenic Arabidopsis compared with the WT (Fig. 4i,j). The basic-helix-loop-helix (bHLH) transcription factor (TF) MYC2 functions as a master regulator to activate downstream JA-responsive genes. Overexpression of TgLOX4 resulted in slightly increased expression of AtMYC2, whereas overexpression of the TgLOX5 transgene significantly enhanced AtMYC2 expression (Fig. 4k,l). Moreover, expression of Arabidopsis LATERAL ORGAN BOUNDARIES (LOB) DOMAIN-CONTAINING PROTEIN genes (LBDs), including AtLBD13, AtLBD14, AtLBD16, AtLBD18 and AtLBD29, was upregulated in TgLOX4 and TgLOX5 transgenic Arabidopsis (Fig. 4m,n; Fig. S9). These results indicated that ectopic overexpression of TgLOX4 and TgLOX5 in Arabidopsis activated JA signaling pathways and lateral organ development-related genes.
Ectopic overexpression of TgLOX4 and TgLOX5 promoted leaf growth and branching in Arabidopsis
After growth in soil for three weeks, there were no significant differences in rosette diameter between 35::TgLOX4 and WT plants, but 35::TgLOX5 transgenic plants showed significantly larger rosette diameters than the WT (Fig. 5a,b). 35::TgLOX5 transgenic plants displayed significantly longer leaf lengths than the WT, whereas the leaf lengths of 35::TgLOX4 transgenic plants were slightly but not significantly longer than those of the WT (Fig. 5c). Two 35::TgLOX4 transgenic lines and all three 35::TgLOX5 transgenic lines exhibited significantly greater leaf widths than the WT (Fig. 5d).
In addition, both 35::TgLOX4 and 35::TgLOX5 transgenic plants had significantly more second and third branches (Fig. 5e,g). 35::TgLOX5 transgenic lines also showed significantly higher plant heights than the WT (Fig. 5h). There were no significant differences in silique length between transgenic plants and the WT (Fig. 5f,i). These data showed that 35::TgLOX4 and 35::TgLOX5 transgenes promoted leaf growth and branching in Arabidopsis.
Silencing of TgLOX4 and TgLOX5 inhibited tulip daughter bulb growth
To further characterize the functions of TgLOX4 and TgLOX5 in tulip, we set up a VIGS system using TRV2-TgLOX4 and TRV2-TgLOX5 recombinant vectors. Tulip bulbs used for VIGS infection were of uniform size (Fig. 6a). The presence of TRV was verified by genomic PCR (Fig. S10). At 14 days after recombinant vector infection, TRV2-TgLOX4 and TRV2-TgLOX5 tulip plants exhibited slower growth compared with the TRV2 controls (Fig. 6b). Daughter bulbs were photographed 14 d and 60 d after VIGS treatments (Fig. 6c,d). Expression analysis of TgLOX4 and TgLOX5 in tulip bulbs showed that TgLOX4 and TgLOX5 gene expression decreased by 72% and 68% at 14 d and by 8% and 44% at 60 d after infection, respectively (Fig. 6e,f). Fresh weights and perimeters of TRV2-TgLOX4 and TRV2-TgLOX5 infected bulbs were significantly lower than those of TRV2 controls 14 d after infection (Fig. 6g,i). TRV2-TgLOX5 infected bulbs also exhibited significantly lower fresh weights and perimeters than the TRV2 controls at 60 d after infection, but there were no significant differences between TRV2-TgLOX4 and the TRV2 controls at 60 d (Fig. 6h,j). Therefore, silencing of TgLOX4 and TgLOX5 inhibited tulip daughter bulb growth.
JA promoted tulip bulb growth in in vitro cultivation
The effects of JA and the JA biosynthesis inhibitor DIECA on growth of tulip daughter bulbs were investigated. Daughter bulbs with identical sizes were separated from mother bulbs after storage at 5 °C for 3 months (Fig. 7a). These conditions are used commercially to break tulip dormancy. We observed that JA at 10^-5 and 10^-7 M promoted the growth of tulip daughter bulbs, producing significantly higher fresh weights and bulb diameters (Fig. 7b-d). By contrast, DIECA at 100 μM and 300 μM inhibited daughter bulb growth. The fresh weights and bulb diameters of DIECA-treated bulbs were significantly lower than those of control bulbs (Fig. 7b-d). These data indicated that exogenous JA potentially promoted tulip bulb growth under tissue culture conditions. To further investigate the effect of exogenous JA on the growth of tulip, we planted tulip bulbs in a glasshouse and applied JA as a foliar spray at the S2 and S3 stages, three times per stage. Plant height was measured at 10 days after bloom. The results showed that JA at a higher concentration (10^-4 M) inhibited tulip plant growth, as evidenced by reduced plant height (Fig. 8a-c). JA at 10^-5 M did not affect plant height, whereas treatment with 10^-7 M JA significantly promoted tulip plant growth, with significantly greater plant heights compared with the water control (Fig. 8a-c). Interestingly, JA treatments at all three concentrations facilitated the growth of tulip bulbs (Fig. 8d). Perimeters and fresh weights of daughter bulbs were significantly higher than those of control bulbs at the harvest period, but still lower than those of the mother bulbs (Fig. 8e,f). Similar results were also obtained with the cultivar "Red Power": JA treatments increased bulb perimeter and fresh weight (Fig. S11). These results showed that JA promoted the growth of tulip bulbs in soil.
Discussion
Tulips, native to the Tien Shan and Pamir-Alay mountains, are one of the most economically important bulbous plants and have been among the top species produced for cut flowers and bedding for many years. The natural propagation rate of tulips is very low [11,28]. Moreover, the bulb size of tulips is significantly reduced after flowering and generally cannot meet the requirements for flowering in the next season. In the Netherlands and other countries, tulip petals are cut off for tulip bulb production to interrupt reproductive growth, promote the transport of photosynthetic products belowground, and supply the nutrients needed for bulb expansion. The growth and development of tulip bulbs is a very complex biological process. Bulblet formation is quite similar to axillary bud outgrowth, which is controlled by several hormones in model plants. Hormone signaling is an important factor in the regulation of bulb growth and regeneration [29][30][31]. JA and MeJA are considered to play important roles in the morphogenesis of storage organs. In Lycoris radiata, soluble sugars derived from starch degradation were proposed to be transported from the outer scales to the inner scales, thereby promoting bulblet growth. This process is accompanied by changes in a variety of plant hormones and hormone-responsive genes [29]. The biosynthesis of JA generally occurs in developing and expanding organs. The content of endogenous JA in expanded bulbs is more than three times that in non-expanded bulbs [32]. In this study, we showed that JA content differed significantly among tulip bulbs at four developmental stages (Fig. 3f). It should be pointed out that the JA content was not closely consistent with the changes in TgLOX4 and TgLOX5 expression levels (Fig. 3). One possibility is that gene expression changes occur earlier than changes in metabolite contents. Another possibility is that other TgLOX genes and JA catabolism-related genes encoding allene oxide synthase are involved in endogenous JA biosynthesis. In addition, "Ad Rem" exhibited higher JA content than "Red Power" at the S3 and S4 stages, which may have contributed to the larger bulb size of "Ad Rem". JA has been shown to induce and promote bulb formation in onion [33,34], tulip [27] and Narcissus [35] in vitro and to significantly increase endogenous methyl jasmonate content during storage organ formation [36]. In tulip, exogenous JA treatment promoted daughter bulb growth under field and tissue-culture conditions (Figs. 7 and 8). Accordingly, the JA biosynthesis inhibitor DIECA inhibited tulip bulb enlargement (Fig. 7). JA and MeJA were proposed to be involved mainly in potato tuber development rather than in tuber induction [37,38]. These data showed that JA is one of the key hormones that control bulb formation and development.
Plant LOXs are involved in diverse functions, including growth and development, stress response, senescence, seed germination, fruit ripening, and synthesis of JA and ABA [39]. LOX genes were significantly upregulated by external environmental cues and resulted in the accumulation of JA and MeJA [20,40]. During the development of potato tubers, JA is metabolized to tuberonic acid (TA) and finally to tuberonic acid glucoside (TAG). TAG is recognized as an endogenous inducer of potato tuber formation [41]. Therefore, LOX derivatives are considered to be key compounds in tuber organogenesis [36]. LOX activity has been found to vary with growth temperature, and the highest LOX activity was found at 15-20 °C, when tuber growth was most active [42,43]. Through transcriptomic profiling analysis, we further observed that the majority of JA biosynthesis-related unigenes, including TgLOX4 and TgLOX5, displayed contrasting changes in the daughter bulbs of two tulip cultivars (Fig. 3). In tomato, LOX genes are involved in growth, development and fruit ripening [22]. In tulip, ectopic overexpression of TgLOX4 and TgLOX5 improved the growth of underground roots and aboveground stems and leaves (Figs. 4, 5). Silencing of TgLOX4 and TgLOX5 in tulip repressed growth of tulip plants and bulbs (Fig. 6), indicating that these two LOX genes are involved in tulip bulb enlargement. These results are consistent with data from potato in which reduced transcript levels of potato LOX1 inhibited LOX activity, resulting in reduced tuber yield, decreased average tuber size, and disruption of tuber formation [19]. Persimmon (Diospyros kaki) LOX3 transgenic Arabidopsis exhibited faster root growth under osmotic stress conditions compared with the WT [44], whereas mutation of maize lox3 reduced root length and plant height [45]. In this study, Arabidopsis TgLOX4 and TgLOX5 overexpression lines exhibited more lateral roots and branches and greater plant height compared with the WT, indicating that TgLOX genes are involved in stem and root development, at least in Arabidopsis.
In addition, VIGS data showed that silencing of TgLOX4 and TgLOX5 repressed tulip bulb growth. Therefore, data from Arabidopsis and tulip were consistent, partially illuminating the roles played by TgLOX4 and TgLOX5 in tulip bulb growth. However, we cannot rule out the possibility that several other LOXs function as JA biosynthetic enzymes. The functions of TgLOX4 and TgLOX5 in the JA biosynthesis pathway are worthy of further discussion. Taken together, transcriptomic analyses showed that hormone-related pathways were extensively changed during tulip bulb growth and development. Ectopic overexpression of the tulip lipoxygenase genes TgLOX4 and TgLOX5 in Arabidopsis increased endogenous JA content and improved plant and root growth, whereas silencing of these genes inhibited tulip bulb development. We propose that TgLOX4 and TgLOX5 enhance JA biosynthesis, activate JA signaling pathways, and possibly promote tulip bulb growth and development by upregulating the expression of LBD and JA-responsive genes (Fig. S10). In addition, the effect of MeJA on bulb development may be combined with those of other hormones [26]. External application of MeJA can reduce the content of hormones that are not conducive to bulb expansion, such as GA1, GA3 and ABA, and increase the content of IAA, which is conducive to bulb expansion, thus better promoting the formation and expansion of renewed bulbs [46]. Moreover, the observation that LOX regulates tuber formation by directly interacting with light and growing temperature [42,43] suggests that it may be an important downstream signaling molecule in photoperiod-controlled signaling pathway(s). The regulatory networks among different hormones during the development of storage organs are worthy of further investigation.
Plant materials and growth conditions
Sixty-six tulip cultivars were used in this study. All mother bulbs were imported from the Netherlands and planted in Wuhan, China for investigation of bulb perimeter size. In total, 50 daughter bulbs were investigated and three replicates were used. Two tulip cultivars with contrasting bulb sizes, "Ad Rem" and "Red Power", were used for further study. The mother bulbs were imported from the Netherlands by Shangu Horticultural Company (Beijing) and stored at 5 °C for 12 weeks. The tulip bulbs were planted in a glasshouse at the Ornamental Plants Research Farm of Huazhong Agricultural University (Wuhan, China). The conditions in the greenhouse were maintained at 20 °C day/15 °C night, with a relative humidity of 60-70% and a 16-h light/8-h dark photoperiod.
Moreno-Pachon et al. classified the developmental stages of tulip plants and bulblets from the storage period (October to December) through the growing season (February to July) under field conditions in the Netherlands [10]. In this study, a cold forcing treatment was performed in a cold room maintained at 5 °C for 12 weeks. All cold-treated tulip bulbs were planted in February, bloomed in March, and senesced in April; the bulbs were harvested in May under greenhouse conditions in Wuhan, China. Bulblet samples from the following four developmental stages were collected for further analysis: S1, bulblets inside the mother bulbs after dormancy release in early February; S2, bulblets from tulip plants with green buds 4-5 cm in length in early March; S3, bulbs one week after full bloom in middle ("Red Power") or later ("Ad Rem") March; and S4, bulbs from plants that were senescent in April. These four stages are nearly equivalent to the stages of Dec, Mar, Apr, and Jun described by Moreno-Pachon et al. [10].
The Arabidopsis thaliana Columbia-0 (Col-0) ecotype was used in this study to generate transgenic plants. Arabidopsis seeds were sterilized for 4 min using 2% (v/v) sodium hypochlorite (NaClO) containing 0.1% (v/v) Triton X-100. The seeds were then washed five times with sterile water. Seeds were stored at 4 °C for 5 days under dark conditions for vernalization. After planting on MS plates, all seeds were cultured in a growth chamber maintained at 22 ± 1 °C with 60% relative humidity and a 16-h light/8-h dark photoperiod. The chamber was supplemented with 100 μmol photons m^-2 s^-1 light intensity.
Determination of JA content
Plant tissues were frozen with liquid nitrogen and ground into fine powder in a mortar. Approximately 100-mg samples were transferred to 2-mL tubes containing 1 mL extraction solvent (2-propanol:H2O:concentrated HCl = 2:1:0.002, v/v/v). Then 100 μL of the 2H-JA working solution was added as an internal standard. The samples were mixed well and centrifuged at 4 °C and 13 000 g for 5 min. After centrifugation, 900 μL of the solvent from the lower phase was transferred and concentrated using a nitrogen evaporator with nitrogen flow. The samples were re-dissolved in 0.1 mL methanol, and 50 μL of sample solution was injected into a reverse-phase C18 Gemini HPLC column for HPLC-ESI-MS/MS analysis.
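Quantification against the labelled internal standard reduces to a response-ratio calculation. The following is a minimal Python sketch assuming single-point isotope-dilution quantification; the peak areas and spike amount are illustrative values, not study data:

# ng JA per g fresh weight, scaled by recovery of the spiked 2H-JA standard.
def ja_ng_per_g(area_ja, area_istd, istd_ng, sample_mass_g):
    return (area_ja / area_istd) * istd_ng / sample_mass_g

# e.g. analyte/standard peak-area ratio of 0.6, 10 ng spike, 100 mg tissue
print(ja_ng_per_g(area_ja=1.8e5, area_istd=3.0e5, istd_ng=10.0,
                  sample_mass_g=0.1))  # -> 60.0 ng/g fresh weight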
RNA sequencing analysis of tulip bulbs at four developmental stages
Tulip bulbs of two cultivars at four developmental stages were harvested for RNA isolation using a plant RNA purification kit (Tiangen, Beijing, China). A NanoDrop 2000 spectrophotometer (Thermo, USA) and a Bioanalyzer 2100 system (Agilent Technologies, USA) were used to assess RNA purity and integrity, respectively. A total amount of 1 μg RNA per sample was used for cDNA library construction with the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB, USA) following the manufacturer's recommendations. The libraries were sequenced on an Illumina HiSeq platform, and 150-bp paired-end reads were generated. Clean data (clean reads) were obtained by removing low-quality reads and reads that contained adapters and poly-N from the raw data. Trinity was used for transcriptome assembly based on the left.fq and right.fq files [47]. Gene functions of the tulip unigenes were annotated using the following databases: NR (NCBI non-redundant protein sequences), Swiss-Prot (a manually annotated and reviewed protein sequence database), KOG/COG/eggNOG (Clusters of Orthologous Groups of proteins), Pfam (Protein family), KEGG (Kyoto Encyclopedia of Genes and Genomes) and GO (Gene Ontology). HTSeq v0.6.1 was used to count the read numbers mapped to each gene. The FPKM value (Fragments Per Kilobase of transcript sequence per Million base pairs sequenced) for each unigene was calculated based on its length and mapped read count. Differential gene expression between combinations of cultivars and developmental stages was analyzed using the DESeq R package (1.18.0). The raw data have been deposited in the NCBI Gene Expression Omnibus (GEO) with the accession number GSE167530.
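The FPKM normalisation mentioned above reduces to a single formula. Here is a minimal Python sketch of it (the counts, unigene length and library size are illustrative; differential expression itself was done with the DESeq R package and is not reproduced here):

# Fragments per kilobase of transcript per million mapped fragments.
def fpkm(fragments, length_bp, total_mapped):
    return fragments * 1e9 / (length_bp * total_mapped)

# e.g. 500 fragments on a 2.6-kb unigene in a library of 30 M mapped fragments
print(f"{fpkm(500, 2600, 30_000_000):.2f} FPKM")  # 6.41 FPKM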
MapMan pathway enrichment analysis
Differentially expressed tulip unigenes were annotated based on their Arabidopsis homologs. Corresponding Arabidopsis Genome Initiative (AGI) locus codes for differentially expressed unigenes were used as input to the Classification SuperViewer Tool (http://bar.utoronto.ca/ntools/cgi-bin/ntools_classification_superviewer.cgi) for pathway enrichment analyses [50]. MapMan (http://mapman.gabipd.org/home) was selected as a classification source. The normalized frequency (NF) was calculated as described previously [49]: NF = sample frequency of each category in each sample/background frequency of each category.
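The NF statistic is a simple ratio of proportions. A minimal Python sketch, with hypothetical counts for one MapMan category:

# NF = category frequency among changed genes / category frequency in background.
def normalized_frequency(hits_sample, n_sample, hits_background, n_background):
    return (hits_sample / n_sample) / (hits_background / n_background)

# e.g. 120 hormone-metabolism genes among 5,845 changed unigenes, against
# 450 among a 40,000-gene background (all values hypothetical)
print(f"NF = {normalized_frequency(120, 5845, 450, 40000):.2f}")  # 1.83, i.e. enriched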
Gene cloning, plasmid construction and gene transformation
Based on tulip RNA sequencing data (GEO database accession number GSE167530), the full-length sequences of F01.PB33674 (TgLOX4) and F01.PB63464 (TgLOX5) were identified. The coding regions of TgLOX4 and TgLOX5 were amplified using the specific primers listed in Table S4. The open reading frames (ORFs) of TgLOX4 and TgLOX5 were cloned into the pCAMBIA1300 vector using the XbaI and KpnI restriction sites, and the resulting plasmids were introduced into Agrobacterium tumefaciens strain GV3101. Transgenic Arabidopsis lines were generated by the floral-dip method [51]. TgLOX4 and TgLOX5 transgenic plants were screened and verified by qPCR analysis with the primers listed in Table S4.
Phylogenetic analysis and sequence alignment
Amino acid sequence alignment of TgLOX4 and TgLOX5 and their closest orthologs was performed using BioEdit (Tom Hall, North Carolina State University, USA). Multiple protein sequence alignments and phylogenetic tree construction for TgLOX4 and TgLOX5 were performed using MEGA 7 with the maximum likelihood method.
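For readers who want to script the tree-building step: MEGA's maximum-likelihood workflow has no direct equivalent in a short Python snippet, so the sketch below uses Biopython's neighbour-joining constructor as a distance-based stand-in rather than the method actually used; "lox_aln.fasta" is a hypothetical pre-computed alignment of the TgLOX4/TgLOX5 orthologs:

# Neighbour-joining tree from a protein alignment (illustrative stand-in only;
# the study used maximum likelihood in MEGA 7).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("lox_aln.fasta", "fasta")  # hypothetical input file
dm = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree)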
Measurement of root length and lateral root number
Transgenic (T2 generation) and wild-type Arabidopsis were planted directly in soil for leaf and stem measurements. Each pot contained only one seedling at 1 week after planting. Leaf length, leaf width and seedling diameter were measured at 21 days after planting, and branch number, plant height and silique length were measured at 42 days after planting.
To investigate root length and lateral root number, wild-type and transgenic Arabidopsis seeds were sown on MS plates. One-week-old seedlings of identical size were transferred to fresh MS plates. Primary root length was measured and lateral root numbers were counted after 7 d of growth. For each genotype, three replicates of at least 30 seedlings each were measured, and the whole experiment was repeated three times.
Silencing of target genes in tulip
The expression of the TgLOX4 and TgLOX5 genes was silenced through virus-induced gene silencing (VIGS) as described by Zhong et al. [52] and Wang et al. [53]. A 394-bp fragment of TgLOX4 and a 394-bp fragment of TgLOX5 were amplified by PCR and inserted into the pTRV2 vector to generate the pTRV2-TgLOX4 and pTRV2-TgLOX5 constructs, respectively. A. tumefaciens strain GV3101 was used for construct transformation. Tulip bulbs were immersed in infiltration buffer containing Agrobacterium cells transformed with equal amounts of pTRV1 and pTRV2 or pTRV2-target genes. Tulip bulbs submerged in the bacterial suspension were infiltrated under a vacuum at 0.8 MPa for 30 min to promote infection efficiency. After infiltration, the bulbs were kept in the dark at 22 °C for 48 h, then planted in a greenhouse at 22 °C with a relative humidity of 60-70% and a 16-h light/8-h dark cycle.
Real-time qRT-PCR analysis
Tulip bulbs at four developmental stages and whole seedlings of 2-week-old transgenic TgLOX4, TgLOX5 and WT Arabidopsis were collected for qPCR analysis. Total RNA was extracted from tulip bulbs and Arabidopsis using the EASYspin Plus Complex Plant RNA Kit (Vazyme, Nanjing, China). Equal amounts (1 μg) of total RNA were used for reverse transcription with the HiScript II 1st Strand cDNA Synthesis Kit (Vazyme, Nanjing, China) following the manufacturer's instructions. AtACT2 (AT3G18780) was used as the reference gene for Arabidopsis, and tulip TgACTIN (unigene ID PB13161) was used as the reference gene for tulip. The web tool GenScript (https://www.genscript.com/ssl-bin/app/primer) was used to design real-time qRT-PCR primers. The relative transcription levels were calculated using the 2^−ΔΔCt method [54]. Primer sequences are listed in Table S4.
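For clarity, the relative expression computed by the 2^−ΔΔCt method takes the standard form below (a restatement, not quoted from [54]):

```latex
\Delta C_t = C_t^{\text{target}} - C_t^{\text{reference}}, \qquad
\Delta\Delta C_t = \Delta C_t^{\text{treated}} - \Delta C_t^{\text{control}}, \qquad
\text{relative expression} = 2^{-\Delta\Delta C_t}
```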
Effects of JA and the JA biosynthesis inhibitor DIECA on growth of bulblets in vitro
"Ad Rem" daughter bulbs (axillary buds) were separated from mother bulbs after three months of storage at 5 °C. The bulblets were sterilized in 70% ethanol for 1 min, then soaked in 20% (v/v) sodium hypochlorite (NaClO) for 20 min and finally washed with sterilized water 5 times. The sterile buds were cultured on solid MS medium containing 60 g l−1 sucrose, 1.0 g l−1 casein hydrolysate, 1.0 mg l−1 thiamine and 200 mg l−1 L-Gln. Different concentrations of JA and the JA biosynthesis inhibitor DIECA were added to the MS medium. After incubation at 22 °C with a 16-h light/8-h dark cycle for three months, the fresh weights and perimeters of the daughter bulbs were measured.
For exogenous JA treatment, JAs at the indicated concentrations and water (control) were sprayed on both sides of the plant leaves. Three concentrations (10−4 M, 10−5 M and 10−7 M) of JA solution were used based on preliminary results. Both the water control and the JA solutions were applied as a foliar spray at the S2 and S3 stages once every two days, three times in total at each stage. For each replicate (30 plants), 2 L of solution containing the indicated concentration of JA or water was used.
Statistical analysis
All experiments in this study were performed three times, and the results shown are means ± SEs (n = 3) of each replicate. At least 50 bulbs or plants were used for each treatment. Duncan's multiple range test (DMRT) was used to assess differences between the means. Different letters above the columns in each figure indicate significant differences at P < 0.05.
Chemical extraction and its effect on the properties of cordleaf burbark (Triumfetta cordifolia A. Rich.) fibres for the manufacture of textile yarns
Tropical Triumfetta cordifolia (TC) fibre extracted from the equatorial region of Cameroon has been characterized as a potential fibre for textiles. An investigation of extraction parameters to soften this fibre is crucial for its use as a biobased material in the spinning process. To obtain textile-quality fibres, 34 sodium hydroxide extraction tests were carried out to study the effect of extraction conditions on fibre characteristics. Three levels of concentration (0.5, 1.0 and 1.5 wt%), temperature (80, 100 and 120 °C) and duration (120, 180 and 240 min) were used for extraction by cooking, and at room temperature, durations of 120, 150 or 180 min with three concentrations (2.5, 3.0 and 3.5 wt%) were considered. Only 6 combinations produced fibres that were clean and soft to the touch, without defects (corrugations, stuck fibres) and without residual bark epidermis at the macroscopic scale. For these fibres, the dissolution of non-cellulosic substances and the morphological, physical, thermal and mechanical properties depended on the severity of the alkaline retting. Under mild conditions, the SEM surfaces of the fibres showed large residues of the middle lamella, which made the lignin content (10 wt%) and the hydrophilic character higher. Under medium conditions, the fibre surfaces were clean and slightly wrinkled (at 80 °C, 120 min). Under severe conditions, heterogeneous transverse shrinkage and wrinkling were observed, accompanied by cellulose degradation (39 wt%) and a significant reduction in tenacity to 16 cN/tex. The medium extraction conditions were considered more effective, and their fibres showed a cellulose content up to 49 wt%, density up to 1.39 g cm−3, "Fickian" moisture absorption kinetics with saturation up to 11 wt%, thermal stability up to 237 °C, Young's modulus up to 3.7 GPa, tensile strength up to 113 MPa and tenacity up to 40 cN/tex. These new results were compared with lignocellulosic textile fibres in the literature, showing similarity with banana, sisal and jute fibres.
Introduction
Yarns made from plant fibres are increasingly used to weave and knit a wide range of objects in many fields such as clothing, decoration, health, the packaging industry, and sports and leisure. These yarns combine softness, stretchability, tenacity and lightness, but their performance depends mainly on the performance of the fibres [1,2]. Plant fibres are a biosourced raw material that is very abundant in nature. They are renewable, biodegradable, antistatic, porous and moisture-regulating, making them an attractive alternative to synthetic polyester and polyamide textile fibres. They generally have good specific properties, although these are still rather dispersed, difficult to control and generally inferior to those of synthetic fibres, which limits their use [3,4]. However, the diversification of low environmental impact textile fibres is a key issue in replacing synthetic fibres. The most used vegetable textile fibres (VTF) are cotton, flax, hemp and jute [5][6][7]. Some banana, sisal, pineapple and coconut fibres are also used. The low availability of these fibres due to a very low extraction yield (5-15 wt%) [8] is prompting researchers to investigate the possibility of using other lignocellulosic fibres. In addition, offering other types of lignocellulosic fibres such as Triumfetta cordifolia, Triumfetta pentandra, okra and Urena lobata would reduce costs and the greenhouse gas emissions due to the transportation of raw materials. However, the use of new VTF for the development of value-added textiles requires a better understanding and control of the extraction process and the properties induced by this process. In the case of chemical extraction, the effect of several parameters on fibre performance needs to be well controlled and understood [8][9][10][11].
Alkaline extraction with sodium hydroxide (NaOH) is the most used chemical extraction method [12,13]. In contrast to mechanical and biological extraction, chemical extraction allows rapid extraction of the fibres, which represents a considerable energy saving. Specifically, the NaOH process allows the simultaneous dissolution of non-cellulosic components such as hemicellulose, pectin and lignin, as well as other substances present in the epidermis of the raw material (bark or leaves). The dissolution of lignin is even more efficient when the NaOH solution is heated to a temperature above 75 °C [12,14,15]. This seems to be an interesting route to produce spinnable and much less stiff textile fibres [5]. Among other things, extraction is considered efficient when significant removal of non-cellulosic material occurs without significant degradation of cellulose. To avoid cellulose degradation during the extraction process, the NaOH concentration, temperature and time should be defined according to the type of plant [4,5]. Elseify et al. [7] showed that the alkaline extraction process effectively extracts long textile fibres with a low proportion of impurities from the date palm vein. The extraction was performed with NaOH concentrations of 1, 3 and 5%, with three temperature levels (25, 75 and 100 °C) and three time levels (1, 2 or 3 h). The fibres were light (1.324 g cm−3), and could withstand temperatures up to 226 °C and tensile loads of about 453 MPa. In Hasan et al. [8], 3 NaOH concentration levels (4, 7 and 10% w/v), 4 temperature levels (70, 80, 90 and 95 °C) and 5 time levels (4, 6, 8, 10 and 12 h) were used to extract the fibres from Typha latifolia leaf. The authors showed that using a concentration of 7% NaOH, a temperature of 90 °C and a time of 10 h resulted in fibres with a higher strength (168 MPa) and a lower moisture absorption capacity (8% by weight). Vinod et al. [13] reported that increasing the concentration of NaOH between 3 and 5 wt% during the chemical extraction of fibres from the Yucca elephantipes plant resulted in a progressive decrease in cellulose content from 66 wt% to 57 wt% and in tenacity from 5.7 cN/dtex to 3.8 cN/dtex. This decrease shows that NaOH degraded the crystalline structure of the fibre and reduced its ability to withstand loads [16]. All these results on chemical extraction with NaOH are very instructive and show that fibre performance depends fundamentally on the extraction conditions and the nature of the plant.
Triumfetta cordifolia (TC) or cordleaf burbark fibre is a bast fibre that belongs to the Tiliaceae family. Congo is the largest producer of TC fibre with about 260 kg/ha, followed by Equatorial Guinea [17]. The TC plant is grown in savannahs, fallows and riverbanks in the humid regions of tropical Africa. It is grown from its seeds or from leafy stem cuttings. The plant is an erect, slightly branched shrub with fragrant leaves, reaching 2.5-5 m in height [18]. All parts of the plant are used in traditional African medicine to treat burns, muscle pain, lung and stomach infections [18]. Its leaves are eaten as a vegetable and the wood from its stem is used in house construction or as fuel [18,19]. The fibres extracted from its stem bark are used in handicrafts to make strong ropes and twine, bowstrings, fishing lines and belts used for climbing trees and palms. They are also used to make baskets, coffee bags, mats, hammocks, and traditional dance costumes [17]. Like all lignocellulosic fibres already used in textile development, TC fibre is composed of cellulose (44 wt%), hemicellulose (31 wt%), lignin (9-19 wt%), pectin (3.3 wt%), extractives (3 wt%), wax (0.5 wt%), minerals (2 wt%) and water (8 wt%). This high proportion of non-cellulosic substances is likely to make it difficult for TC fibre to be spun and for dyes to attach to its surface [20,21].
Studies have been conducted to extract fibres from TC bark to identify their potential as reinforcement in composites. Senwitz et al. [19] immersed TC stem bark for 3 and 6 weeks in standing water to extract fibre bundles. The 6-week retting increased the cellulose content of the fibres. Surprisingly, the tensile strength of these fibres was lower, and their stiffness was higher due to their higher lignin content. Grosser et al. [18] fabricated non-woven and unidirectional composites based on polyamide and TC bast fibres extracted after 3 weeks of water retting. It was reported that TC fibres have positive effects on the mechanical performance of composites in the same way as hemp fibres. In a previous study, Mewoli et al. [22] extracted TC fibres by immersing the stem bark in water for about 30 days. The extracted fibre bundles were processed by carding to individualise them for addition as reinforcing material in polyamide and polypropylene. It was observed that the cross-section of the fibre has an elliptical lumen, and its shape varies from circular to flat oval. In addition, the surface of the fibre is covered with a rough sheath composed of impurities, pectin and lignin, which makes the fibre rough, hard to the touch and stiff, thus seriously hindering its use in the design and development of yarns for clothing. In the literature, no studies have been carried out to extract the fibres from TC bark for use in common textiles.
In view of the above, this study proposes to carry out, for the first time, alkaline extraction tests at room temperature and under heat. Extraction parameters that effectively separate the fibres, making them soft to the touch without blackening them or agglomerating them into a paste, will be selected. The fibres associated with the selected parameters will be subjected to standard gravimetric tests to assess their density, water absorption and moisture absorption kinetics. In addition, their chemical composition will be assessed using the industrial pulp and paper analysis technique. SEM and FTIR analyses will be used to analyse morphological and structural changes. Thermal behaviour will be analysed by thermogravimetric analysis (TGA). Mechanical properties will be determined by standard tensile tests.
Plant material
Triumfetta cordifolia (TC) stems (Fig. 1a and b) were collected in November, in the Messock I forest, Mbankomo district, Centre region (Cameroon). In this locality, the relative humidity is 85% and the average temperature is 23 °C. Alkaline retting experiments by cooking and at room temperature were carried out in a 5000 ml digester equipped with a WRN-130K thermocouple (Bastor, SS304 M27, China).
Alkaline fibre extraction
For alkaline cooking, three temperature levels (80, 100 and 120 °C) were used, with three mass concentration levels (0.5, 1 and 1.5 wt%) of sodium hydroxide (NaOH) solution and three time levels (120, 180 and 240 min). Similarly, three time levels (120, 150 and 180 min) and three concentrations (2.5, 3 and 3.5 wt%) of NaOH solution were used for alkaline retting at room temperature (T = 25 ± 2 °C). It is interesting to note that concentrations below 2.5 wt% had no defibrillating effect on the bark during ambient alkaline retting. Temperature, time and concentration levels were chosen according to the processing conditions of textile fibres [1,23].
The TC barks were placed in a freezer for 30 days, then defrosted in a conditioned medium (T = 25 °C, 50% RH) and cut to approximately 10 cm in length. For each extraction, a 12.5 g dry mass of bark was immersed in demineralised water for 60 min at room temperature (25 ± 2 °C) to soften the skin. The softened barks were then wiped with a cotton cloth to remove surface water, and placed in a digester containing a concentrated NaOH solution (ratio of bark to alkaline solution of 1:40 w/v) previously heated to the target temperature. Cooking was carried out in a temperature-controlled electric water bath, while room-temperature alkaline retting was carried out in a room controlled for temperature (25 ± 2 °C) and relative humidity (50 ± 5%). The extraction was carried out for a fixed time t. The extracted fibres were washed in a 1 wt% acetic acid solution for 10 min and subjected to several washes with lukewarm demineralised water (40-50 °C) to neutralise the dissolved substances and residual NaOH. The resulting fibres were dried in a vacuum oven at 80 °C for 24 h [22] and stored in sealed polyethylene bags.
Fibre selection
The physical appearances of some fibres and the fineness (2.9-6.2 tex) of all fibres are shown in Fig. 2(a-d). More details on the methodology for the determination of fineness by gravimetry are presented by Betené et al. [21].
It can be seen that the fineness and texture of the fibre depend on the extraction parameters (type, temperature, concentration and time). The images show that the fibres are partially burnt (dark colour) and heaped (when C = 1.5 wt% at a cooking temperature of 120 °C (Fig. 2d), and at 100 °C for t = 240 min (Fig. 2c)), twisted, and partially composed of bark. In a textile application context, the fibres chosen (Table 1) are those that are individualised, visibly unidirectional, light in colour and relatively soft to the touch. Table 1 also gives the extraction parameters and linear densities of these fibres. The fibres selected from alkaline room-temperature retting have a fineness in the range of 3.0-5.7 tex, as do the fibres obtained by alkaline cooking. The average length of these fibres is of the same order as that of the base bark (about 10 cm).
SEM analysis
The morphology of the fibre surface was observed using a scanning electron microscope (HITACHI SEM, S-3500N). Prior to observation, the fibres were coated with a thin layer of gold/palladium by sputtering. The images were taken in longitudinal view at ×600 magnification.
Chemical composition
The contents of the TC fibre constituents were estimated using the analytical technique of the pulp and paper industry (TAPPI) [4,24,25]. It consists of successively isolating and quantifying (in mass percentage) the biochemical components of the fibre, namely extractives, pectin, lignin, hemicellulose and cellulose. In this analysis, 10 g per sample of ground fibres (average particle size 115 μm) were used, dried in a vacuum oven at 80 °C for 24 h. The method began with leaching in an ethanol-benzene solution (1:2 v/v) for 7 h to dissolve soluble extractives such as waxes, proteins and lipids. After drying the resulting powder at 50 °C (for 1 h), heating under reflux with distilled water was carried out for 7 h to dissolve the residual extractives. The defatted (~3 mg) and dried powder (105 °C, 12 h) was then heated under reflux (with magnetic stirring) at 80 °C in a detergent solution of hydrochloric acid (2 wt%) for 4 h, resulting in the dissolution of pectin. To isolate the lignin, a few milligrams (~500 mg) of the pectin-free residue (dehydrated at 105 °C for 12 h) were treated according to the Klason method by hydrolysis in a concentrated sulphuric acid solution (72 wt%). An amount (~3 mg) of the same residue after pectin dissolution was then subjected to a detergent solution of acetic acid (1:5 v/v) and sodium chlorite (15% v/v) to extract the holocellulose. This residue, consisting mainly of holocellulose, was then dissolved in an ethanol-nitric acid solution (1:4 v/v) by heating in a water bath for 1 h to remove the hemicelluloses. The residue obtained at each stage was weighed with a milligram balance to determine the mass fraction of the corresponding component. The hemicellulose content was calculated by subtracting the cellulose content from the holocellulose content.
Determination of fibre diameter
A Bresser optical microscope (model Biolux NV 20x-1280×, France) was used to record images of the longitudinal view of 10 mm long fibres. These images were used to measure the diameters of the fibres with the ImageJ software. Three measurements were made in the transverse direction for each fibre. This allowed the diameter distributions to be plotted and the mean diameter to be determined using the normal and Weibull distributions.
Determination of density
A gravimetric method using a pycnometer, a 0.1 mg precision balance and toluene (density ρT = 0.866 g cm−3 at 25 °C) as the immersion liquid was applied to estimate the density of the walls of each TC fibre. The fibres were first dried at 105 °C in a vacuum oven for 24 h [3,26], then cut into 5 mm long pieces and introduced into the pycnometer, which was finally filled with toluene. After 2 h of rest, microbubbles were no longer visible on the surface of the fibres. Denoting by m0 the mass of the empty pycnometer, m1 the mass of the pycnometer filled with the chopped fibres, m2 the mass of the pycnometer filled with toluene, and m3 the mass of the pycnometer filled with the chopped fibres and toluene, equation (1) [27] can be used to calculate the density of the fibres:
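The body of equation (1) was lost in extraction. Given the definitions above, the standard pycnometer relation it corresponds to is, in all likelihood:

```latex
\rho_f = \frac{(m_1 - m_0)\,\rho_T}{(m_2 - m_0) - (m_3 - m_1)} \qquad (1)
```

Here the numerator is the fibre mass multiplied by the toluene density, and the denominator is the mass of toluene displaced by the fibres; this is a reconstruction from the stated definitions, not a quotation of the original equation.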
Evaluation of water absorption
A gravimetric method based on NF EN ISO 1097-6 was used to determine the water absorption of TC fibres. Before testing, the fibres were cut to 15 mm length and dried in an oven at 105 °C for 6 h. Three bundles of fibres of initial mass m0 = 1 ± 0.1 g were prepared and placed in a pycnometer. The pycnometer was then filled with distilled water. After 24 h of immersion, the samples were wiped with a cotton cloth to remove the water from the surface of the fibres. The final mass m24h of each sample was measured, and the water absorption was determined using equation (2).
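The body of equation (2) did not survive extraction; from the quantities defined above, it is almost certainly the standard mass-gain ratio:

```latex
WA\,(\%) = \frac{m_{24h} - m_0}{m_0} \times 100 \qquad (2)
```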
Monitoring of moisture uptake
The monitoring of moisture uptake by TC fibres was carried out using a gravimetric method according to NF EN ISO 3344. The TC fibres, previously dried at 105 °C for 6 h, were used to make samples (bundles of fibres of length 15 mm) of initial mass m0 = 0.5 g. Three samples from each batch of fibres were placed in cups on a wire mesh inside a hygroscopic tray. A saturated ammonium nitrate solution was introduced into the tray 24 h prior to testing to create an atmosphere with a humidity of 65% at 23 °C [28]. In addition, a chemical solution based on thymol was placed directly in the hygroscopic tank to reduce the risk of alteration of the fibres by microbial attack. From the beginning of the conditioning process, weighing was carried out at increasing times with a milligram balance until a quasi-constant mass variation was obtained. The moisture content MC (equation (3)) and the moisture absorption ratio MR (equation (4)) were calculated for each measurement point to characterise the kinetics of fibre moisture uptake.
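Equations (3) and (4) were lost in extraction; consistent with the definitions in the surrounding text (and with the where-clause that follows), their standard forms are:

```latex
MC\,(\%) = \frac{m_t - m_0}{m_0} \times 100 \qquad (3)
\qquad
MR\,(\%) = \frac{m_t - m_0}{m_s - m_0} \times 100 \qquad (4)
```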
where mt is the mass of the wet sample at measurement time t, and ms is the mass of the moisture-saturated sample.
Fourier transform infrared (FTIR) spectroscopy analysis
The identification of the functional chemical groups in the TC fibres retted in alkali at room temperature and by cooking was carried out using a Bruker Alpha-P spectrometer equipped with an ATR module. The spectra were recorded, from a few milligrams of fibre sample milled to 110 μm, in transmittance mode in the spectral range 400-4000 cm−1 with a resolution of 4 cm−1.
Thermogravimetric analysis
Thermal degradation of TC fibres was studied in a TA Q50 thermal analyser with an open platinum crucible. For these thermogravimetric analyses, a fibre sample (m = 4 ± 0.2 mg), previously ground to a size of 115 μm, was heated at a constant rate of 10 °C min−1 over a temperature range of 25-600 °C, under a nitrogen atmosphere at 20 ml min−1.
Tensile tests on individual fibre bundles
The NF T25 501-2 standard was used to prepare the TC fibre bundle specimens. Fibre bundles randomly selected from each batch (F1, F2, F3, F4, F5 and F6) were glued with an adhesive to a paper frame with a window cut to give a gauge length of 10 mm. This length was chosen to limit the likelihood of defects (creases, bands) being present [29]. Prior to the tensile test, the gauge length of each specimen was observed under an optical microscope in order to discard specimens whose fibre bundles showed visible defects. In addition, three transverse measurements were taken every 3 mm along the gauge length of the specimen and used to calculate the average diameter of the fibre bundle under test. Each validated specimen was gently placed in the LDW-5 universal mechanical testing machine, equipped with a 100 N load cell. With the fibre bundle aligned with the axis of movement of the jaws, it was stretched to failure at a speed of 2 mm min−1 [30]. For each batch, 25 specimens were tested to determine the average mechanical properties, i.e., Young's modulus (in the 0.1%-0.25% strain range), tensile strength, elongation at break and tenacity (ratio of breaking force to linear density). Specimen preparation and testing were carried out in a room controlled at 25 °C and 65% relative humidity.
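For clarity, the tenacity quoted in cN/tex normalizes the breaking force by the linear density of the bundle (1 tex = 1 g km−1); the relation below restates this standard textile definition and is not quoted from the original:

```latex
T\;(\mathrm{cN/tex}) = \frac{F_{\mathrm{break}}\;(\mathrm{cN})}{\rho_{\ell}\;(\mathrm{tex})}
```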
Chemical composition of the fibres
The mass percentages of the chemical constituents of the studied TC fibres are given in Table 2. Cellulose, hemicellulose and lignin together represent at least 77.6 wt% and 61.7 wt% of the fibres extracted at room temperature and by cooking, respectively, which is reassuring. However, these values are lower than those obtained for cotton (>85 wt%), hemp (>80 wt%) and flax (>80 wt%), but close to the percentages of banana (~75 wt%) and jute (~78 wt%) fibres [1,4,11,26].
It is also observed that the fibres extracted at room temperature (F1, F2 and F3) have a higher content of extractives (1.8-3.0 wt%), pectin (up to 3.7 wt%), lignin (up to 10 wt%) and hemicellulose (23.2-26.6 wt%) than the fibres extracted by cooking (F4, F5 and F6). Furthermore, the percentages of these non-cellulosic materials, especially lignin and hemicellulose, tend to decrease with increasing temperature, time and applied NaOH concentration. This result shows that alkaline extraction by cooking is more effective in removing non-cellulosic materials from the surface of TC fibres. Furthermore, the hemicellulose content of the studied fibres is at least 17 wt%, which is about 3 and 8 times higher than that of jute and cotton (Table 2), respectively. This high hemicellulose content could make the fibres hydrophilic, which is interesting for the development of absorbent textile fabrics [23], but a major drawback for the interlocking of the fibres with polymer matrices [11,33]. Regarding the lignin content, which makes the fibres stiff, the values found in this study are interesting, as they are lower than those found for fibres already used for the manufacture of placemats, such as soda-degummed jute [1] and banana [11]. The lower cellulose contents of the F4, F5 and F6 fibres indicate that cellulose oxidation occurs partially during alkaline cooking. Such an effect could damage the fibre structure and consequently reduce its strength and stiffness [21,35,36]. The oxidation of cellulose could be avoided by adding a reducing agent such as sodium dithionite to the NaOH solution [37]. Despite this oxidation of cellulose, the values found are of the same order as those for banana fibres (Table 2) and fall within the wide range reported for kenaf fibres by Ramesh [11].
Compared to TC fibres extracted by water retting [22], F1, F2 and F3 fibres contain slightly more cellulose and slightly less lignin, pectin, and hemicellulose. This may improve the flexibility and probably the strength of the fibres [8,34]. Therefore, the extraction of TC fibres by alkaline retting at room temperature could be interesting for the manufacture of textile fabrics and yarns.
FTIR spectra of the fibres
In order to analyse the influence of the alkaline extraction conditions and parameters on the chemical functional groups of TC fibres, FTIR spectra were plotted between 400 and 4000 cm − 1 (Fig. 3). These spectra show characteristic peaks of the biochemical components of the plant fibres, such as cellulose, hemicellulose, pectin, and lignin [13,16,21].
The broad band visible between 3600 and 3200 cm−1, with a major peak at 3330 cm−1, corresponds to the stretching vibrations of the O-H and -OH bonds [12,15] of hemicelluloses and cellulose. The intensity of this broad band is clearly low for the F2 and F3 fibres, and lower still for all fibres extracted by cooking, indicating dissolution of hydroxyl (OH) groups. The peak at 2920 cm−1 signals the asymmetric stretching of C-H groups in cellulose, hemicellulose and lignin [38,39]. Given the lower intensity observed for the F6 fibre, it is possible that cellulose degradation occurred under the combined effect of the temperature (100 °C) and concentration (1.5 wt%) applied during its extraction. The absence of the peak at 1728 cm−1, which was observed in a previous study [22] for TC fibre extracted by water retting, shows that alkaline extraction is effective in removing the carboxyl and carbonyl (C=O) groups of pectin and the acetyl and uronic ester groups of hemicellulose.
However, the presence of the peak at 1625 cm−1 in the spectra indicates that water molecules are still trapped in the structure of the studied TC fibres. Similarly, the small peak at 1594 cm−1 indicates that small traces of fatty acids and wax remain in the different fibres, but much less so in the fibres extracted by cooking. These water molecules and waxy and fatty materials can be completely removed by treatment in a concentrated NaOH solution [26,40], or by a bleaching process with hydrogen peroxide [1] or sodium chlorite [4,21]. In addition, the peak at 1424 cm−1 is attributed to C-O bond stretching and C-H or O-H bond bending in hemicellulose [2,41]. The transmittance of this peak is significantly low for the F2 and F3 fibres and tends to disappear for fibres extracted by alkaline cooking due to temperatures above room temperature. The strongest peak in the spectra is at 1027 cm−1 and is associated with the stretching vibration of the -OH and C-O bonds of cellulose and lignin.
Due to the partial removal of lignin, the intensity of this peak decreased as the time and NaOH concentration decreased for the room-temperature extraction, but also as the temperature increased up to 100 °C for the alkaline cooking extraction. The small peak at 899 cm−1 is attributed to the β-glycosidic linkages between the saccharide units in the complex structure of cellulose [42]. The reduction of this peak is evident for the fibre extracted in a 1.5 wt% NaOH solution for 180 min at 100 °C, clearly indicating that the cellulose of the F6 fibre has been partially degraded. This result correlates with the mass percentage of cellulose determined for the F6 fibre in Table 2.
Morphology of the fibre surface
SEM micrographs reveal that F2 (Fig. 4b) and F3 (Fig. 4c) fibres have cleaner and smoother surfaces compared to F1 (Fig. 4a), F4 (Fig. 4d), F5 (Fig. 4e) and F6 (Fig. 4f) fibres. The differences between the SEM micrographs confirm those observed on the FTIR spectra in terms of dissolution of non-cellulosic substances and degradation of cellulose.
In a previous study [22], it was observed that the surface of TC fibres extracted by water retting is covered by a sheath composed of non-cellulosic materials such as hemicellulose, pectin, lignin and wax. The micrographs in this study show that alkaline retting (ambient and cooking) is effective in dissolving these non-cellulosic substances. Room-temperature extraction with a concentration of 3.5 wt% NaOH for 120 min (Fig. 4a) resulted in only partial degumming of the sheath. This can be explained by the high chemical resistance of the C-C bonds and aromatic groups present in lignin [42]. However, cooking in NaOH solution at 1 wt% (T = 100 °C, t = 240 min; Fig. 4e) and 1.5 wt% (T = 100 °C, t = 180 min; Fig. 4f) stripped the sheath and seemed to shrink the fibres, but produced rough, wrinkled fibres. This is very important for the adhesion between the fibre and polymer matrices. The shrinkage of the fibre may be explained by the removal of pectin from the middle lamella and the reduction of the empty spaces between the fibrils. This could increase connectivity between individual fibres within fibre bundles and lead to shear stresses when the fibre bundle is subjected to tension, due to deformation of the fibre-fibre interfaces [43,44]. Owing to their smooth surfaces, like those of Ananas comosus [21] and jute [1] fibres, the F2 and F3 fibres are expected to provide better dye fixation for the manufacture of dyed fabrics compared to the F1, F4, F5 and F6 fibres.
Fibre diameter distribution
The diameter distributions presented in Fig. 5a reveal a high variability in the diameters (coefficient of variation, CV, between 22 and 43%) of the TC fibres. This indicates irregularities along the length of the TC fibres due to their natural character and to the extraction defects visible on the SEM micrographs (Fig. 4). These distributions follow a normal distribution (Fig. 5b), which made it possible to calculate the mean values reported in Table 2. It can be noted that the cooking process shrinks the TC fibres more than the room-temperature extraction, which correlates well with the SEM observations (Fig. 4).
Furthermore, the experimental data correlate with the Weibull distribution (0.92 < R² < 0.98), as shown in Fig. 5b for the F2 fibre (R² = 0.97). The dispersion coefficients found are 3.5, 4.7, 4.1, 2.5, 2.9 and 4.8 for F1, F2, F3, F4, F5 and F6, respectively. The prevalence of size defects, which can cause early failure and variability of the mechanical properties of the fibres, increases as this coefficient decreases [30,45]. Thus, an interesting ranking that informs about the effect of the treatment on the appearance of extraction-induced diameter irregularity defects can be proposed: F6 < F2 < F3 < F1 < F5 < F4. However, the theoretical mean values generated by the Weibull law using the OriginPro software for fibres F1, F2, F3, F4, F5 and F6 are higher by 20%, 8%, 9%, 12%, 11% and 4%, respectively, than the experimental values. Compared to other plant fibres already used in clothing textiles (Table 2), TC fibres have average diameters in the range of flax, hemp and Ananas comosus (pineapple) fibres.
Fibre densities
The densities of the fibres are presented in Table 2. It can be noted that the density tends to increase when the extraction time is extended (from 120 to 180 min) at room temperature, but is almost constant for extraction by cooking. Compared to literature values for the same fibre, the values in this study are lower than the 1.48 g cm−3 found by Mewoli et al. [22], but very close to the 1.26 g cm−3 given by Senwitz et al. [19]. Furthermore, the use of TC fibres could contribute to the manufacture of lighter textile fabrics and composite structures compared to tropical Cola lepidota fibres and commercial fibres such as cotton, flax, kenaf and hemp.
Water absorption of fibres
Water absorption values after 24 h of immersion (Table 2) reveal that TC fibres extracted at room temperature can absorb between 217% and 285% of their dry mass. Similarly, fibres extracted by cooking absorb between 150% and 204% of their dry mass. An overall decrease in water absorption capacity was observed with increasing time, temperature and NaOH concentration. Fibres extracted at room temperature were more hydrophilic than fibres extracted by cooking due to their higher hemicellulose content. The maximum water uptake of 285 wt%, found for the F1 fibre, is similar to that of jute, while the lowest value, 150.3 wt%, obtained for F6, is similar to those of sisal, Ananas comosus and hemp (Table 2). Furthermore, all the values found are lower than those in the literature [22] for the same fibre. This result indicates that alkaline retting is effective in reducing the hydrophilicity of TC fibres compared to freshwater retting.
Fibre moisture uptake kinetics
The moisture uptake curves (Fig. 6a) of TC fibres, obtained by conditioning at 65% RH/23 °C for increasing times, show that they can absorb up to 11.9% of their dry mass following simple exponential kinetics (equation (5)) of the "Fickian" type.
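The body of equation (5) was lost in extraction. A single-exponential Fickian saturation consistent with the described behaviour, and with the fit constants y0, a and b mentioned for equation (6) below, would take a form such as:

```latex
MC(t) = MC_s\left(1 - e^{-kt}\right), \qquad \text{fitted as} \qquad MC(t) = y_0 + a\left(1 - e^{-bt}\right)
```

This is a plausible reconstruction from context, not the authors' exact expression.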
This two-phase moisture absorption behaviour is commonly reported for natural fibres [29,46,47]. The first phase is linear and corresponds to rapid absorption due to the porosity of the fibres and the many branches of the hemicelluloses. Its duration varies from 80 to 90 min, and its slope is between 0.013 and 0.015 s−1. These parameters are less pronounced for the fibres extracted by cooking, F5 and F6. The second absorption phase is non-linear and converges as the fibre becomes saturated with moisture. This convergence is due to the swelling of the fibre [40,45]. This ability to hold moisture without dripping leads to an interesting classification that provides information on the porosity and structural variations induced by the extraction conditions and parameters. The highest moisture content (11.9 wt% for F1) in this study is lower than that of fibres already used in textile yarn production, such as sisal (13.6 wt%), Typha latifolia (13.0 wt%) and jute (12.3 wt%) [8,48,49]. However, it is similar to that of flax (12.0 wt%). Moreover, this content is higher than that given for the same fibre by Mewoli et al. [22]. This discrepancy can be attributed to the higher extractive content [49] of that fibre in the literature. It is important to mention that, in this comparative study, the literature values were not obtained with the same conditioning parameters, which has an influence on the moisture uptake and the diffusion coefficient, as presented in Fig. 7.
The constants y0, a and b in equation (6) were calculated using the Levenberg-Marquardt iteration algorithm [21] implemented in the OriginPro software. It can be seen in Table 3 that the theoretical and experimental values agree, with a very satisfactory correlation coefficient R².
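The diffusion-coefficient expression to which the following where-clause refers was also lost in extraction. A standard short-time Fickian sorption form consistent with the listed symbols (r, MC, H1, H2, t1, t2) is:

```latex
D = \pi \left(\frac{r}{4\,MC}\right)^{2} \left(\frac{H_2 - H_1}{\sqrt{t_2} - \sqrt{t_1}}\right)^{2}
```

This should be read as a hedged reconstruction rather than the original equation.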
where r is the radius of the fibre, MC is the moisture content of the saturated fibre, and H2 and H1 are the moisture contents at instants t2 and t1 belonging to the linear region of the curves in Fig. 6a. The moisture diffusion coefficient values found are presented in Table 3. An overall correlation between the evolution of the diffusion coefficient and the moisture content is clearly observed. Diffusion and moisture retention are higher for fibres extracted at room temperature, and seem to decrease as the NaOH concentration, temperature and cooking time increase. This result confirms the previous ones (Table 2) and consolidates the observation that the shrinkage of the fibres induced by the cooking process decreased the volume of voids in their structure.
Thermal degradation of the fibres
The TG and DTG thermograms of the TC fibres in this study are presented between 30 and 600 °C in Fig. 8a and b. The evolution of these thermograms confirms the presence of several constituents in the fibres and is commonly described for other types of plant textile fibres modified with NaOH [1,8,32]. It is observed (Fig. 8a) that the first mass loss (of about 6%) occurs between 30 and 120 °C and is manifested by an initial peak on the DTG curve (blue arrow, Fig. 8b). This initial loss is related to the evaporation of water and low-molecular-weight constituents from the fibres. After this dehydration, the fibres are thermally stable up to 236.7 °C, indicating their level of temperature resistance. Thereafter, three phases of thermal decomposition (mass loss between 69 and 76%) are observed: (i) the first phase manifests itself as a small shoulder peak (green arrow, Fig. 8b) between 236 and 310 °C (mass loss up to 22%) and corresponds to the depolymerisation of hemicellulose and the cleavage of glycosidic bonds in amorphous cellulose; (ii) the second phase occurs from 310 to 427 °C (mass loss up to 50%) and is associated with the strongest peak (red arrow, Fig. 8b) and with damage to the α-cellulose; (iii) the last phase of mass loss is attributed to an oxidation reaction of the carbonised products or residues.
The highest residue content is 15.3% for the F1 fibre, which is 41%, 24%, 28%, 27% and 5% higher than for the F2, F3, F4, F5 and F6 fibres, respectively. It is interesting to note that lignin decomposition occurs slowly over the whole temperature range (30 °C-600 °C) due to its aromatic rings [4,26]. Table 4 presents a comparative study of the results. The dehydration of the fibres occurs up to 125 °C for F1, 118 °C for F2, 115 °C for F3 and F5, 110 °C for F4 and 118 °C for F6. This result indicates that drying is faster for the fibres extracted by cooking in general, and for the F4 fibre in particular. This thermoregulatory property makes it an ideal fibre for the manufacture of summer clothing [2,5]. In addition, this fibre showed the lowest mass loss, although its lignin content (Table 2) was 1.25 and 1.33 times higher than those of F5 and F6, respectively. Thermal stability indicated a service temperature of 231.7 ± 5.2 °C with a coefficient of variation (CV) of 2.2% for all fibres studied. For the batch of fibres extracted at room temperature, it was 235.6 ± 5.2 °C (CV = 0.9%), which is about 3.3% higher than that of the fibres extracted by cooking (CV = 1.7%). These results show a very small difference in the temperature resistance of the fibres. Furthermore, these thermal stability temperatures are very close to those of other fibres already used in the production of weaving and knitting yarns. For example, the fibres F1, F2, F3, F4, F5 and F6 can undergo heat bleaching treatments in the same way as hemp, cotton, linen, jute and banana (Table 4). Similarly, the maximum average temperature of thermal degradation was 316.7 ± 5.5 °C (CV = 1.5%), and the final average temperature was 414.2 ± 11.2 °C (CV = 2.7%). Mass losses at this stage were also similar (CV < 3%). These results clearly show that the thermal stability and the degradation of the main fibre components are not affected by the mode (ambient or hot) or the parameters (NaOH concentration, time and temperature) of extraction.
Mechanical properties of the individual fibre bundles
Typical stress-strain curves of TC fibre bundles extracted by alkaline retting at room temperature and by cooking are shown in Fig. 9. Three types of stress-strain curves can be distinguished in each batch of fibre bundles. This difference is probably due to the location of the fibres in the stem, as described by Duval et al. [55] for hemp fibres. The curve assigned to type I has a single domain of linear elastic behaviour (brittle failure), and its slope is the Young's modulus of the fibre. The type II behaviour consists of two linear parts separated by a very short zone of non-linear deformation due to a small slip of the microfibrils. This type of behaviour was observed for bleached Ananas comosus fibres [21] and raw Cola lepidota fibres [56]. The type III curve presents three distinct phases: the first phase is linear elastic and can extend up to 0.3% strain; the second phase (between 0.3% and 1.5% strain) is a non-linear transition zone due to significant slippage and progressive alignment of the microfibrils with the fibre axis [30]; the last phase characterises the weakening (loss of stiffness) of the fibre. This type III behaviour is similar to that identified for Furcraea foetida [42], Agave americana [57] and Rhecktophyllum camerunense [30] fibres. The overall evolution of the mechanical properties represented in Fig. 10 and the variations of the average values found (Table 5) show a sensitivity that appears significant when the conditions (room temperature or cooking) and the parameters (temperature, concentration and time) of extraction are modified. The tensile strength of fibres extracted at room temperature (F1, F2 and F3) varies from 60 to 112 MPa and increases by 27% and 47% when the retting time is increased from 120 to 150 min and from 120 to 180 min, respectively. For the same time interval, the Young's modulus, which provides information on stiffness, increases from 2.2 GPa (for F1) to 3.2 GPa (for F3). The elongation at break varies from 5.8 to 8.4%, and the tenacity varies between 23 and 40 cN/tex. For this extraction condition, the F3 fibre gave the highest strength (Fig. 10a), but the F2 fibre is the softest (Fig. 10b and c) and the most tenacious (Fig. 10d), due to its high cellulose content and rather low lignin and pectin contents. Similarly, fibres extracted by cooking (F4, F5 and F6) gave ranges of 69-112 MPa, 1.7-3.7 GPa, 2.3-4.7% and 21-26 cN/tex for tensile strength, Young's modulus, elongation at break and tenacity, respectively. For this extraction mode, the strongest (Fig. 10a) and most tenacious (Fig. 10d) fibres are those extracted with a concentration of 1 wt% NaOH, a temperature of 80 °C and a cooking time of 120 min. However, these same F4 fibres are stiffer and elongate 1.7 and 1.4 times less than F5 and F6. The high stiffness can be explained by the high lignin content, while the low elongation is probably due to a small microfibrillar angle [27,42]. Blending these F4 fibres with soft cotton fibres can be considered to make soft, strong and tenacious hybrid yarns [5,23]. The standard deviations range from 32 to 55% of the mean values, revealing a large variability in the mechanical properties. This variability is well illustrated in the box plots in Fig. 10, where the visible marks are outliers that contribute to the overestimation of the standard deviation. As shown in Fig. 11a and b, the wide dispersion of the mechanical properties of TC fibres is mainly caused by the variability of their diameter, which is inherent to natural fibres.
Among others, tensile strength (Fig. 11b) tends to decrease with increasing diameter, as observed for Cola lepidota [56], Neuropeltis acuminata [26] and Ananas comosus [21,30] fibres, but this is clearly not the case for tenacity (Fig. 11a). The greater dispersion in the tenacity and tensile strength of the F6 fibre can be explained by the defects induced (surface wrinkles and cellulose oxidation) by the combined effect of the extraction temperature and the applied NaOH concentration. In addition, the location of the fibres in the stem and the overestimation of the fibre cross-section can also explain the high variability of the tensile strength and Young's modulus [19,29,59]. Indeed, the cross-sectional area estimated from the microscopic longitudinal view is larger than the area that actually bears the tensile load, due to the voids [22] in the fibre. Furthermore, this lumen increases in bast fibres from the bottom to the top of the plant [3,4,19,30].
Tenacities (Fig. 11c) and tensile strengths (Fig. 11d) show random distributions as the Young's modulus increases. The distributions of the F1, F3 and F6 fibres are the most sensitive to changes in stiffness. It can also be noted that batches F2 and F4 combine low stiffness with an overall higher tenacity, indicating interesting mechanical performance for their spinning. Table 5 also compares the average mechanical properties of the TC fibres in this study with those of the same fibre and of other fibre types in the literature. Specific mechanical properties, which provide insight into the performance of the fibres, were also compared.
The highest average tensile strength of the TC fibres in this study is 112.7 MPa (for the F4 fibre), which is 78.6% and 46.1% lower than those of the combed TC fibres extracted after 30 days of retting in fresh water [22] and those extracted by manual hulling [19], respectively. Similarly, the highest average stiffness is 3.7 GPa (for the F4 fibre), which is 70% lower than that found by other authors [19,22] for the same fibre. In contrast, the TC fibres in this study show much higher elongations, indicating that the alkaline extraction increased the microfibrillar angle. In addition to the extraction method, these large deviations can be attributed to the combing operation, the method of determining the cross-sectional area, the initial test conditions (load cell sensitivity, device accuracy), the nature of the sample tested, the test parameters and conditions (gauge length, drawing speed, temperature and humidity) and the sample conditioning [11,26,27,29,39,59]. According to Dallel [23], a natural fibre is considered reliable if its tenacity is above 15 cN/tex. Thus, in addition to the low Young's moduli and high elongations, the acceptable tenacities (>15 cN/tex) of the six fibre batches show that alkaline retting is an interesting process for producing stretchy and tenacious TC fibres for textile applications (yarns for fabrics and knits). Furthermore, the comparison of the F2, F3 and F4 fibres shows that the specific properties of TC fibres are superior to those of esparto grass (Table 5). It is important to mention that, for these literature data, the fibres were not extracted in the same way and the mechanical properties were not estimated with the same gauge length, standard and speed, which plays an important role in the discrepancies observed, as reported by Baley et al. [29] and Ramesh [11]. In addition, it is found (Table 5) that the TC fibres in this study possess poor mechanical properties compared to other types of plant fibres, except for tenacity, which is in the same range as that of cotton, banana and sisal textile fibres.
In order to quantify the dispersion of the tensile strength and tenacity of TC fibres, a statistical analysis was performed using the two-parameter Weibull distribution function, F(Γ) = 1 − exp[−(Γ/Γ0)^m]. In this equation, Γ is the sampling variable, Γ0 is the characteristic value of the variable and m is the shape parameter that characterises the dispersion of the variable. The Weibull function, or failure probability F(Γ), can also be written as ln(−ln(1 − F(Γ))) = m ln(Γ) − m ln(Γ0), which allows the modulus m to be determined graphically as the slope of a linear fit [30,51,56]. Furthermore, F(Γ) = (i − 0.5)/n [26], where i is the rank of the ith data point and n is the number of data points, corresponding to the number of samples tested. Fig. 12 shows the Weibull distributions for the tensile strength (Fig. 12a,c) and tenacity (Fig. 12b,d) of TC fibres. The Weibull parameters m and Γ0, as well as the correlation coefficient R², are summarised in Table 6. It can be seen that the correlation coefficient R² is greater than 0.86, which means that the experimental data fit the two-parameter Weibull model well, as shown in Fig. 12a and b. Furthermore, the characteristic values found are of the same order as those obtained from the descriptive statistics. These results indicate that the two-parameter Weibull model can be used to analyse the mechanical properties of TC fibres, at least for tensile strength and tenacity. It should be noted that the dispersion of the mechanical properties increases as the modulus m decreases. Consequently, the dispersions of tensile strength (resp. tenacity) are higher for the fibres designated F6, F3 and F4 (resp. F5, F4, F3 and F6). These dispersions are due to the high variability of the cross-section and, to a lesser extent, to structural defects (curved walls, knots) in the fibres [29,30].
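As a sketch of how the graphical estimation described above can be carried out, the snippet below fits m and Γ0 using the median-rank linearization quoted in the text; it is an illustration, not the authors' script:

```python
import numpy as np

def weibull_fit(values):
    """Estimate Weibull shape m and scale gamma0 from the linearized CDF.

    Uses the empirical rank estimator F_i = (i - 0.5) / n described in the text.
    """
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    f = (np.arange(1, n + 1) - 0.5) / n      # empirical failure probability
    y = np.log(-np.log(1.0 - f))             # ln(-ln(1 - F))
    m, intercept = np.polyfit(np.log(x), y, 1)  # slope is the shape parameter m
    gamma0 = np.exp(-intercept / m)          # from intercept = -m ln(gamma0)
    return m, gamma0

# Example use: m, g0 = weibull_fit(tensile_strengths_of_one_batch)
```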
Conclusions
The objective of this study was to extract fibres from the stem bark of Triumfetta cordifolia (TC) with sodium hydroxide (NaOH) and to characterise them for the development of textile yarns. To this end, different NaOH concentrations (0.5, 1.0 and 1.5 wt%), temperatures (80, 100 and 120 °C) and times (120, 180 and 240 min) were used for alkaline extraction by cooking, while at room temperature, times of 120, 150 or 180 min with three concentrations (2.5, 3.0 and 3.5 wt%) were applied. A factorial experimental design was used to develop 34 combinations based on the extraction conditions, and only six combinations produced clean, soft-touch fibres. In addition, photographs confirmed the absence of residual bark epidermis and crimp on the fibres of these 6 batches. This study also aimed to understand the effect of the alkaline extraction conditions on the biochemical composition and the morphological, physical, thermal and mechanical properties of the fibres. The results revealed that the modification of the evaluated properties depended on the severity of the extraction conditions, except for the thermal stabilities, which showed little difference (231.7 ± 5.2 °C).
Under mild conditions (3.5 wt%/25 °C/120 min), the SEM surfaces of the fibres showed large residues of the middle lamella, which made the lignin content (10 wt%) and the hydrophilic character higher. Under medium conditions (2.5 wt%/25 °C/150 min; 3.5 wt%/25 °C/180 min; 1.0 wt%/80 °C/120 min), the fibre surfaces were clean and slightly wrinkled (at 80 °C, 120 min), with a cellulose content between 41 and 49 wt%, a density varying from 1.31 to 1.36 g cm−3 and a water absorption of 204-222 wt%. Under severe conditions (1.0 wt%/100 °C/240 min; 1.5 wt%/100 °C/180 min), heterogeneous transverse shrinkage and wrinkling were observed, accompanied by cellulose degradation (39 wt%), a significant reduction in tenacity to 16 cN/tex and a density of 1.32 g cm−3. The tensile properties of the TC fibres showed great variability, and a large influence of diameter was observed, indicating the need to study the influence of fibre location in the stem. However, the medium extraction conditions resulted in higher tensile strength (113 MPa), tenacity (40 cN/tex) and elongation at break (8.4%). In addition, these fibres can be used alone or mixed with other fibres such as cotton to produce light, soft, tough and stretchy yarns, as can Typha latifolia, jute and sisal.
Author contribution statement
A.G. Soppie: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. A.D.O. Betené: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Pierre Marcel Anicet Noah, Ateba Atangana: Conceived and designed the experiments; Analyzed and interpreted the data. A.E. Njom: Performed the experiments, Contributed reagents, materials, analysis tools or data. F. Betené Ebanda: Conceived and designed the experiments; Analyzed and interpreted the data, Contributed reagents, materials, analysis tools or data. A. Mewoli: Contributed reagents, materials, analysis tools or data. D. Nkemaja Efeze: Conceived and designed the experiments. R. Moukéné: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.
Data availability statement
Data will be made available on request.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. The authors thank Banyongen for their technical assistance in setting up the fibre extraction system.
CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee
In safe reinforcement learning (SRL) problems, an agent explores the environment to maximize an expected total reward and meanwhile avoids violation of certain constraints on a number of expected total costs. In general, such SRL problems have nonconvex objective functions subject to multiple nonconvex constraints, and hence are very challenging to solve, particularly to provide a globally optimal policy. Many popular SRL algorithms adopt a primal-dual structure which utilizes the updating of dual variables for satisfying the constraints. In contrast, we propose a primal approach, called constraint-rectified policy optimization (CRPO), which updates the policy alternatingly between objective improvement and constraint satisfaction. CRPO provides a primal-type algorithmic framework to solve SRL problems, where each policy update can take any variant of policy optimization step. To demonstrate the theoretical performance of CRPO, we adopt natural policy gradient (NPG) for each policy update step and show that CRPO achieves an $\mathcal{O}(1/\sqrt{T})$ convergence rate to the global optimal policy in the constrained policy set and an $\mathcal{O}(1/\sqrt{T})$ error bound on constraint satisfaction. This is the first finite-time analysis of primal SRL algorithms with global optimality guarantee. Our empirical results demonstrate that CRPO can outperform the existing primal-dual baseline algorithms significantly.
Introduction
Reinforcement learning (RL) has achieved great success in solving complex sequential decision-making and control problems such as Go (Silver et al., 2017), StarCraft (DeepMind, 2019) and recommendation systems (Zheng et al., 2018). In these settings, the agent is allowed to explore the entire state and action space to maximize the expected total reward. However, in safe RL (SRL), in addition to maximizing the reward, an agent needs to satisfy certain constraints. Examples include self-driving cars (Fisac et al., 2018), cellular networks (Julian et al., 2002), and robot control (Levine et al., 2016). The globally optimal policy in SRL is the one that maximizes the reward and at the same time satisfies the cost constraints.
The current safe RL algorithms can be generally categorized into primal and primal-dual approaches. The primal-dual approaches (Tessler et al., 2018;Ding et al., 2020a;Stooke et al., 2020;Yu et al., 2019;Achiam et al., 2017;Yang et al., 2019a;Altman, 1999;Borkar, 2005;Bhatnagar & Lakshmanan, 2012;Liang et al., 2018;Paternain et al., 2019a) are the most commonly used; they convert the constrained problem into an unconstrained one by augmenting the objective with a sum of constraints weighted by their corresponding Lagrange multipliers (i.e., dual variables). Generally, primal-dual algorithms apply a certain policy optimization update such as policy gradient alternately with a gradient descent type update for the dual variables. Theoretically, (Tessler et al., 2018) provided an asymptotic convergence analysis for the primal-dual method and established a local convergence guarantee. (Paternain et al., 2019b) showed that the primal-dual method achieves zero duality gap. Recently, (Ding et al., 2020a) proposed a primal-dual type proximal policy optimization (PPO) and established a regret bound for linear constrained MDPs. The convergence rate of a primal-dual method based on a natural policy gradient algorithm was characterized in (Ding et al., 2020b). Among primal approaches, for example, a safety layer has been added to the policy network to enforce constraints (Dalal et al., 2018a). None of the existing primal algorithms are shown to have a provable convergence guarantee to a globally optimal feasible policy.
Comparing the primal-dual and primal approaches, the primal-dual approach can be sensitive to the initialization of the Lagrange multipliers and the learning rate, and can thus incur extensive cost in hyperparameter tuning (Achiam et al., 2017;Chow et al., 2019). In contrast, the primal approach does not introduce additional dual variables to optimize and involves less hyperparameter tuning, and hence holds the potential to be much easier to implement than the primal-dual approach. However, the existing primal algorithms are not yet popular in practice, because they have no guaranteed global convergence and no strong demonstrations of performance competitive with the primal-dual algorithms. Thus, in order to take advantage of the primal approach, which is by nature easier to implement, we need to answer the following fundamental questions.
✄ Can we design a primal algorithm for SRL, and demonstrate that it achieves performance competitive with, or better than, the baseline primal-dual approach?
✄ If so, can we establish a global optimality guarantee and a finite-time convergence rate for the proposed primal algorithm?
In this paper, we will provide the affirmative answers to the above questions, thus establishing appealing advantages of the primal approach for SRL.
Main Contributions
A New Algorithm: We propose a novel primal approach called Constraint-Rectified Policy Optimization (CRPO) for SRL, where all updates are taken in the primal domain. CRPO applies an unconstrained policy maximization update w.r.t. the reward on the one hand, and, if any constraint is violated, momentarily rectifies the policy back towards the constraint set along the descent direction of the violated constraint, also by applying an unconstrained policy minimization update w.r.t. the constraint function. From the implementation perspective, CRPO can be implemented as easily as unconstrained policy optimization algorithms. Without introducing dual variables, it does not suffer from tuning the learning rates to which the dual variables are sensitive, nor does it require a feasible initialization. Further, CRPO involves only policy gradient updates for both the objective and the constraints, whereas the primal-dual approach typically requires projected gradient descent, where the projection adds complexity to the implementation as well as to hyperparameter tuning due to the projection thresholds.
To further explain the advantage of CRPO over the primal-dual approach, CRPO features immediate switches between optimizing the objective and reducing the constraints whenever constraints are violated. In contrast, the primal-dual approach can respond much more slowly because the control is based on dual variables. If a dual variable is nonzero, then the policy update will descend along the corresponding constraint function. As a result, even if a constraint is already satisfied, there can often be a significant delay before the dual variable iteratively reduces to zero and releases the constraint, which slows down the algorithm. Our experiments in Section 5 validate such a performance advantage of CRPO over the primal-dual approach.
Theoretical Guarantee: To provide the theoretical guarantee for CRPO, we adopt NPG as a representative policy optimizer and investigate the convergence of CRPO in two settings: tabular and function approximation, where in the function approximation setting the state space can be infinite. For both settings, we show that CRPO converges to a global optimum at a convergence rate of $\mathcal{O}(1/\sqrt{T})$. Furthermore, the constraint violation also converges to zero at a rate of $\mathcal{O}(1/\sqrt{T})$. To the best of our knowledge, this is the first provable global optimality guarantee for a primal SRL algorithm.
To compare with the primal-dual approach in the function approximation setting, the value function gap of CRPO achieves the same convergence rate as the primal-dual approach, but the constraint violation of CRPO decays at a rate of $\mathcal{O}(1/\sqrt{T})$, which is much faster than the rate $\mathcal{O}(1/T^{1/4})$ of the primal-dual approach (Ding et al., 2020b).
Technically, our analysis makes the following novel developments. (a) We develop a new technique to analyze a stochastic approximation (SA) process that randomly and dynamically switches between the target objectives of the reward and the constraints. Such an SA is by nature different from a typical policy optimization algorithm, which has a fixed target objective to optimize. Our analysis constructs novel concentration events to capture the impact of such a dynamic process on the updates of the reward and cost functions, in order to establish the high-probability convergence guarantee. (b) We also develop new tools to handle multiple constraints, which is particularly nontrivial for our algorithm because it involves the stochastic selection of a constraint when multiple constraints are violated.
Related Work
Safe RL: Algorithms based on primal-dual methods have been widely adopted for solving constrained RL problems, such as PDO (Chow et al., 2017), RCPO (Tessler et al., 2018), OPDOP (Ding et al., 2020a) and CPPO (Stooke et al., 2020). Constrained policy optimization (CPO) (Achiam et al., 2017) extends TRPO to handle constraints, and was later modified with a two-step projection method (Yang et al., 2019a). The effectiveness of primal-dual methods is justified in (Paternain et al., 2019b), in which zero duality gap is guaranteed under certain assumptions. A recent work (Ding et al., 2020b) established the convergence rate of the primal-dual method under Slater's condition. Other methods have also been proposed. For example, (Chow et al., 2018) leveraged Lyapunov functions to handle constraints. (Yu et al., 2019) proposed a constrained policy gradient algorithm with convergence guarantee by solving a sequence of sub-problems. (Dalal et al., 2018a) proposed to add a safety layer to the policy network so that constraints can be satisfied at each state. (Liu et al., 2019b) developed an interior point method for safe RL, which augments the objective with logarithmic barrier functions. Our work proposes the CRPO algorithm, which can be implemented as easily as unconstrained policy optimization methods and has a global optimality guarantee for general constrained MDPs. Our result is the first convergence rate characterization of primal-type algorithms for SRL.
Finite-Time Analysis of Policy Optimization: The finite-time analysis of various policy optimization algorithms for unconstrained MDPs has been well studied.
The convergence rates of policy gradient (PG) and actor-critic (AC) algorithms have been established in (Shen et al., 2019;Papini et al., 2017;2018;Xu et al., 2020a;2019a;Xiong et al., 2020;Zhang et al., 2019) and (Xu et al., 2020b;Wang et al., 2019;Yang et al., 2019b;Kumar et al., 2019;Qiu et al., 2019), respectively, in which the PG or AC algorithm is shown to converge to a local optimum. In some special settings such as tabular MDPs and LQR, PG and AC can be shown to converge to the global optimum (Agarwal et al., 2019;Yang et al., 2019b;Fazel et al., 2018;Malik et al., 2018;Tu & Recht, 2018;Bhandari & Russo, 2019;2020). Algorithms such as NPG, NAC, TRPO and PPO exploit second-order information and have achieved great success in practice. These algorithms have been shown to converge to a global optimum in various settings, where the convergence rate has been established in (Agarwal et al., 2019;Shani et al., 2019;Liu et al., 2019a;Wang et al., 2019;Cen et al., 2020;Xu et al., 2020c).
Markov Decision Process
A discounted Markov decision process (MDP) is a tuple $(\mathcal{S}, \mathcal{A}, c_0, \mathsf{P}, \xi, \gamma)$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces; $c_0: \mathcal{S}\times\mathcal{A}\times\mathcal{S}\to\mathbb{R}$ is the reward function; $\mathsf{P}: \mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]$ is the transition kernel, with $\mathsf{P}(s'|s,a)$ denoting the probability of transitioning to state $s'$ from previous state $s$ given action $a$; $\xi: \mathcal{S}\to[0,1]$ is the initial state distribution; and $\gamma\in(0,1)$ is the discount factor. A policy $\pi: \mathcal{S}\to\mathcal{P}(\mathcal{A})$ is a mapping from the state space to the space of probability distributions over the actions, with $\pi(a|s)$ denoting the probability of selecting action $a$ in state $s$. When the associated Markov chain $\mathsf{P}(s'|s) = \int_{\mathcal{A}}\mathsf{P}(s'|s,a)\,\pi(da|s)$ is ergodic, we denote by $\mu_\pi$ the stationary distribution of this MDP, i.e. $\int_{\mathcal{S}}\mathsf{P}(s'|s)\,\mu_\pi(ds) = \mu_\pi(s')$. Moreover, we define the visitation measure induced by the policy $\pi$ as $\nu_\pi(s,a) = (1-\gamma)\sum_{t=0}^{\infty}\gamma^t\,\mathsf{P}(s_t=s, a_t=a)$. For a given policy $\pi$, we define the state value function as $V^0_\pi(s) = \mathbb{E}[\sum_{t=0}^{\infty}\gamma^t c_0(s_t,a_t,s_{t+1})\,|\,s_0=s,\pi]$, the state-action value function as $Q^0_\pi(s,a) = \mathbb{E}[\sum_{t=0}^{\infty}\gamma^t c_0(s_t,a_t,s_{t+1})\,|\,s_0=s, a_0=a, \pi]$, and the advantage function as $A^0_\pi(s,a) = Q^0_\pi(s,a) - V^0_\pi(s)$. In reinforcement learning, we aim to find an optimal policy that maximizes the expected total reward function, defined as $J_0(\pi) = \mathbb{E}_{s_0\sim\xi}[V^0_\pi(s_0)]$.
Safe Reinforcement Learning (SRL) Problem
The SRL problem is formulated as an MDP with additional constraints that restrict the set of allowable policies. Specifically, when taking an action at some state, the agent can incur a number of costs denoted by $c_1,\dots,c_p$, where each cost function $c_i: \mathcal{S}\times\mathcal{A}\times\mathcal{S}\to\mathbb{R}$ maps a tuple $(s,a,s')$ to a cost value. Let $J_i(\pi)$ denote the expected total cost with respect to $c_i$, defined analogously to $J_0(\pi)$ as $J_i(\pi) = \mathbb{E}[\sum_{t=0}^{\infty}\gamma^t c_i(s_t,a_t,s_{t+1})]$. The goal of the agent in SRL is to solve the following constrained problem: $$\max_{\pi}\; J_0(\pi) \quad \text{subject to} \quad J_i(\pi) \le d_i, \quad i=1,\dots,p, \qquad (2)$$ where $d_i$ is a fixed limit for the $i$-th constraint. We denote the set of feasible policies as $\Omega_C \equiv \{\pi : \forall i,\; J_i(\pi)\le d_i\}$, and define the optimal policy for SRL as $\pi^* = \arg\max_{\pi\in\Omega_C} J_0(\pi)$. For each cost $c_i$, we define its corresponding state value function $V^i_\pi$, state-action value function $Q^i_\pi$, and advantage function $A^i_\pi$ analogously to $V^0_\pi$, $Q^0_\pi$, and $A^0_\pi$, with $c_i$ replacing $c_0$, respectively.
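Both the objective $J_0$ and each constraint $J_i$ are discounted expectations of the same shape, so a single rollout routine can estimate all of them at once. The following Monte Carlo sketch is a minimal illustration rather than the paper's estimator: the environment interface (`reset`/`step` returning a reward together with a length-$p$ cost vector), the horizon cutoff and the episode count are assumptions made for the example.

```python
import numpy as np

def estimate_returns(env, policy, gamma=0.99, horizon=1000, episodes=10):
    """Monte Carlo estimates of J_0 (reward) and J_1..J_p (costs).

    Assumes env.step(a) returns (next_state, reward, costs, done), where
    `costs` is a length-p array; this interface is hypothetical.
    """
    totals = []
    for _ in range(episodes):
        s = env.reset()
        disc, acc = 1.0, None
        for _ in range(horizon):
            a = policy(s)
            s, r, costs, done = env.step(a)
            vec = np.concatenate(([r], costs))      # [c_0, c_1, ..., c_p]
            acc = vec * disc if acc is None else acc + vec * disc
            disc *= gamma
            if done:
                break
        totals.append(acc)
    return np.mean(totals, axis=0)                  # [J_0, J_1, ..., J_p]
```

Truncating rollouts at horizon $H$ biases each estimate by at most $\mathcal{O}(\gamma^H)$ when rewards and costs are bounded, which is negligible for large $H$.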
Constraint-Rectified Policy Optimization (CRPO) Algorithm
In this section, we propose the CRPO approach (see Algorithm 1) for solving the SRL problem in eq. (2). The idea of CRPO lies in updating the policy to maximize the unconstrained objective function $J_0(\pi_{w_t})$ of the reward, alternating with rectifying the policy to reduce a constraint function $J_i(\pi_{w_t})$ ($i\ge 1$) (along the descent direction of this constraint) if it is violated. Each iteration of CRPO consists of the following three steps.
Policy Evaluation: At the beginning of each iteration, we estimate the state-action value function $\bar{Q}^i_t(s,a) \approx Q^i_{\pi_{w_t}}(s,a)$ for $i\in\{0,\dots,p\}$, i.e., for both the reward and the costs, under the current policy $\pi_{w_t}$.
Constraint Estimation: After obtaining $\bar{Q}^i_t$, the constraint function $J_i(\pi_{w_t})$ can then be approximated via a weighted sum of the approximated state-action value function: $\bar{J}_{i,B_t} = \sum_{j\in B_t}\rho_{j,t}\,\bar{Q}^i_t(s_j,a_j)$. Note that this step incurs no additional sampling cost, as generating the samples $(s_j,a_j)\in B_t$ from the distribution $\xi\cdot\pi_{w_t}$ does not require the agent to interact with the environment.
Policy Optimization: We then check whether there exists an $i_t\in\{1,\dots,p\}$ such that the approximated constraint $\bar{J}_{i_t,B_t}$ violates the condition $\bar{J}_{i_t,B_t} \le d_{i_t} + \eta$, where $\eta$ is the tolerance. If so, we take a one-step update of the policy towards minimizing the corresponding constraint function $J_{i_t}(\pi_{w_t})$ to enforce the constraint. If multiple constraints are violated, we can choose to minimize any one of them. If all constraints are satisfied, we take a one-step update of the policy towards maximizing the objective function $J_0(\pi_{w_t})$. To apply CRPO in practice, we can use any policy optimization update such as natural policy gradient (NPG) (Kakade, 2002), trust region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), ACKTR (Wu et al., 2017), DDPG (Lillicrap et al., 2015) or SAC (Haarnoja et al., 2018) in the policy optimization step (line 7 and line 10); a schematic of the overall loop is sketched below.
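For concreteness, the control flow of Algorithm 1 can be summarized in a short loop. The sketch below is a schematic under stated assumptions, not the paper's implementation: `evaluate_q` (policy evaluation), `estimate_J` (constraint estimation) and `policy_step` (any unconstrained policy-optimization update such as NPG, TRPO or PPO) are hypothetical placeholders, and a uniformly random choice among violated constraints instantiates "we can choose to minimize any one of them".

```python
import numpy as np

def crpo(policy, evaluate_q, estimate_J, policy_step, limits, eta, T):
    """Constraint-Rectified Policy Optimization (schematic sketch).

    limits: constraint limits d_1..d_p; eta: tolerance.
    evaluate_q(policy, i) -> approximate Q^i (i = 0 is the reward).
    estimate_J(policy, q) -> estimate of J_i from the Q-approximation.
    policy_step(policy, q, minimize) -> one unconstrained update step.
    All four callables are assumed stand-ins for the paper's components.
    """
    p = len(limits)
    for _ in range(T):
        qs = [evaluate_q(policy, i) for i in range(p + 1)]
        J_hat = [estimate_J(policy, qs[i + 1]) for i in range(p)]
        violated = [i for i in range(p) if J_hat[i] > limits[i] + eta]
        if violated:
            i_t = int(np.random.choice(violated))   # rectify one violated constraint
            policy = policy_step(policy, qs[i_t + 1], minimize=True)
        else:
            policy = policy_step(policy, qs[0], minimize=False)  # improve reward
    return policy
```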
The advantage of CRPO over the primal-dual approach can be readily seen from its design. CRPO features immediate switches between optimizing the objective and reducing the constraints whenever they are violated. In contrast, the primal-dual approach can respond much more slowly because the control is based on dual variables. If a dual variable is nonzero, then the policy update will descend along the corresponding constraint function. As a result, even if a constraint is already satisfied, there can still be a delay (sometimes a significant one) before the dual variable iteratively reduces to zero and releases the constraint, which incurs unnecessary sampling cost and slows down the algorithm. Our experiments in Section 5 validate such a performance advantage of CRPO over the primal-dual approach.
From the implementation perspective, CRPO can be implemented as easily as unconstrained policy optimization, such as unconstrained policy gradient algorithms, whereas the primal-dual approach typically requires projected gradient descent to update the dual variables, which is more complex to implement. Further, without the introduction of dual variables, CRPO does not suffer from tuning the learning rates and projection thresholds of the dual variables, to which the primal-dual approach can be very sensitive. Nor does CRPO require a feasible initialization, whereas the primal-dual approach can suffer significantly from a bad initialization. We also empirically verify that the performance of CRPO is robust to the value of $\eta$ over a wide range, so that it does not cause additional tuning effort compared to unconstrained algorithms. More discussion can be found in Section 5. The CRPO algorithm is inspired by, yet very different from, the cooperative stochastic approximation (CSA) method (Lan & Zhou, 2016) in the optimization literature. First, CSA is designed for convex optimization subject to a convex constraint, and is not readily capable of handling the more challenging SRL problem in eq. (2), which is nonconvex optimization subject to nonconvex constraints. Second, CSA is designed to handle only a single constraint, whereas CRPO can handle multiple constraints with guaranteed constraint satisfaction and global optimality. Thus, the finite-time analyses of CSA and CRPO follow different approaches due to the aforementioned differences in their designs.
Convergence Analysis of CRPO
In this section, we take NPG as a representative optimizer in CRPO, and establish the global convergence rate of CRPO in both the tabular and function approximation settings. Note that the TRPO and ACKTR updates can be viewed as the NPG approach with adaptive stepsizes. Thus, the convergence we establish for NPG implies similar results for CRPO with TRPO or ACKTR as the optimizer.
Tabular Setting
In the tabular setting, we consider the softmax parameterization. For any $w\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}$, the corresponding softmax policy $\pi_w$ is defined as $$\pi_w(a|s) = \frac{\exp(w_{s,a})}{\sum_{a'\in\mathcal{A}}\exp(w_{s,a'})}. \qquad (3)$$ Clearly, the policy class defined in eq. (3) is complete, as any stochastic policy in the tabular setting can be represented in this class.
Policy Evaluation: To perform the policy evaluation in Algorithm 1 (line 3), we adopt temporal difference (TD) learning, in which a vector $\theta^i\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}$ is used to estimate the state-action value function $Q^i_{\pi_w}$ for all $i=0,\dots,p$. Specifically, each iteration of TD learning takes the form $$\theta^i_{k+1}(s,a) = \theta^i_k(s,a) + \beta_k\big(c_i(s,a,s') + \gamma\,\theta^i_k(s',a') - \theta^i_k(s,a)\big), \qquad (4)$$ where $s\sim\mu_{\pi_w}$, $a\sim\pi_w(\cdot|s)$, $s'\sim\mathsf{P}(\cdot|s,a)$, $a'\sim\pi_w(\cdot|s')$, and $\beta_k$ is the learning rate. In line 3 of Algorithm 1, we perform the TD update in eq. (4) for $K_{\mathrm{in}}$ iterations. It has been shown in (Sutton, 1988;Bhandari et al., 2018;Dalal et al., 2018b) that the iteration in eq. (4) of TD learning converges to a fixed point $\theta^i_*(\pi_w)\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}$, where each component of the fixed point is the corresponding state-action value: $\theta^i_*(\pi_w)(s,a) = Q^i_{\pi_w}(s,a)$. After performing $K_{\mathrm{in}}$ iterations of TD learning as in eq. (4), we let $\bar{Q}^i_t(s,a) = \theta^i_{K_{\mathrm{in}}}(s,a)$ for all $(s,a)\in\mathcal{S}\times\mathcal{A}$ and all $i\in\{0,\dots,p\}$; a sketch of this iteration is given below.
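A minimal implementation of this tabular TD iteration is sketched next. The on-policy transition sampler and the cost callable are assumed helpers, and the stepsize schedule $\beta_k = \Theta(1/k^\sigma)$ mirrors the one used in the analysis (Lemma 2).

```python
import numpy as np

def tabular_td(sample_transition, cost, num_states, num_actions,
               gamma=0.99, K=10000, sigma=0.5):
    """TD(0) estimate of Q^i_pi under a fixed policy.

    sample_transition() -> (s, a, s2, a2), drawn on-policy from the
    stationary distribution; this sampler is an assumed helper.
    """
    theta = np.zeros((num_states, num_actions))
    for k in range(1, K + 1):
        s, a, s2, a2 = sample_transition()
        beta = 1.0 / k**sigma                       # beta_k = Theta(1/k^sigma)
        td_error = cost(s, a, s2) + gamma * theta[s2, a2] - theta[s, a]
        theta[s, a] += beta * td_error
    return theta                                    # theta(s, a) ~ Q^i_pi(s, a)
```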
Constraint Estimation:
In the tabular setting, we let the sample set $B_t$ include all state-action pairs, i.e., $B_t = \mathcal{S}\times\mathcal{A}$, and let the weight factor be $\rho_{j,t} = \xi(s_j)\pi_{w_t}(a_j|s_j)$ for all $t=0,\dots,T-1$. The estimation error of the constraints is then upper bounded by the estimation error of the value function; thus, our approximation of the constraints is accurate whenever the approximated value function $\bar{Q}^i_t(s,a)$ is accurate. Policy Optimization: In the tabular setting, it can be checked that the natural policy gradient of $J_i(\pi_w)$ is $[\Delta_i(w)]_{s,a} = (1-\gamma)^{-1} Q^i_{\pi_w}(s,a)$ (see Appendix B). Once we obtain an approximation $\bar{Q}^i_t(s,a) \approx Q^i_{\pi_w}(s,a)$, we can use it to update the policy in the upcoming policy optimization step: $$w_{t+1} = w_t + \alpha\,\bar{\Delta}_t \ \ \text{(line 7)} \qquad \text{or} \qquad w_{t+1} = w_t - \alpha\,\bar{\Delta}_t \ \ \text{(line 10)}, \qquad (5)$$ where $\alpha>0$ is the stepsize and $\bar{\Delta}_t(s,a) = (1-\gamma)^{-1}\bar{Q}^0_t(s,a)$ (line 7) or $(1-\gamma)^{-1}\bar{Q}^{i_t}_t(s,a)$ (line 10); a sketch of this update is given after this paragraph. Our main technical challenge lies in the analysis of policy optimization, which runs as a stochastic approximation (SA) process with random and dynamic switches between the optimization objectives of the reward and cost targets. Moreover, since the critics estimate the constraints and help the actor compute the policy update, the interaction error between the actor and critics affects how the algorithm switches between the objective and the constraints. The typical analysis technique for NPG (Agarwal et al., 2019) is not applicable here, because NPG has a fixed objective to optimize, and its analysis does not capture the overall convergence of an SA with a dynamically switching optimization objective. Furthermore, the updates with respect to the constraint functions involve the stochastic selection of a constraint when multiple constraints are violated, which further complicates the random events to be analyzed. To handle these issues, we develop a novel analysis approach, in which we focus on the event that the critic returns an almost accurate value function estimate. Such an event greatly facilitates capturing how CRPO switches between the objective and multiple constraints and establishing the convergence rate.
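Since the natural gradient under softmax parameterization is simply the scaled Q-estimate, one NPG step shifts the logits by $\alpha\bar{\Delta}_t$, which is equivalent to multiplying the policy by $\exp(\alpha\bar{\Delta}_t(s,a))$ and renormalizing. The sketch below assumes this reading of eq. (5); the array layout and the numerically stabilized softmax are implementation choices of the example.

```python
import numpy as np

def npg_softmax_step(w, q_hat, alpha, gamma, minimize=False):
    """One approximated NPG update for the tabular softmax policy.

    w: logits of shape (|S|, |A|); q_hat: estimated Q-table for the
    chosen objective. The natural gradient is q_hat / (1 - gamma).
    """
    delta = q_hat / (1.0 - gamma)
    return w - alpha * delta if minimize else w + alpha * delta

def softmax_policy(w):
    """pi_w(a|s) proportional to exp(w[s, a]), as in eq. (3)."""
    z = np.exp(w - w.max(axis=1, keepdims=True))    # stabilized softmax
    return z / z.sum(axis=1, keepdims=True)
```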
The following theorem characterizes the convergence rate of CRPO in terms of the objective function and constraint error bound.
Theorem 1. Consider Algorithm 1 in the tabular setting with the softmax policy parameterization defined in eq. (3) and any initialization $w_0\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}$. Suppose the policy evaluation update in eq. (4) takes sufficiently many iterations (specified in Theorem 3 in Appendix B). Then, with high probability, $J_0(\pi^*) - \mathbb{E}[J_0(w_{\mathrm{out}})] \le \mathcal{O}(1/\sqrt{T})$ and $\mathbb{E}[J_i(w_{\mathrm{out}})] - d_i \le \mathcal{O}(1/\sqrt{T})$ for all $i\in\{1,\dots,p\}$, where the expectation is taken with respect to selecting $w_{\mathrm{out}}$ from $\mathcal{N}_0$.
As shown in Theorem 1, starting from an arbitrary initialization, the CRPO algorithm is guaranteed to converge to the globally optimal policy $\pi^*$ in the feasible set $\Omega_C$ at a sublinear rate $\mathcal{O}(1/\sqrt{T})$, and the constraint violation of the output policy also converges to zero at a sublinear rate $\mathcal{O}(1/\sqrt{T})$. To attain such a $w_{\mathrm{out}}$, each policy evaluation step consists of approximately $K_{\mathrm{in}} = \mathcal{O}(T)$ iterations when $\sigma$ is close to 1. Theorem 1 provides the first global convergence guarantee for a primal-type algorithm, even under a nonconcave objective with nonconcave constraints.
Outline of Proof Idea. We briefly explain the idea of the proof of Theorem 1; the detailed proof can be found in Appendix B. The key challenge is to analyze an SA process that randomly and dynamically switches between the target objectives of the reward and the constraints. To this end, we construct novel concentration events that capture the impact of such a dynamic process on the updates of the reward and cost functions, in order to establish the high-probability convergence guarantee.
More specifically, we focus on the event in which every policy evaluation step returns an estimate with high accuracy. We then show that, under the parameter setting specified in Theorem 1, either the size of the approximated feasible policy set $\mathcal{N}_0$ is large, or the average of the policies in $\mathcal{N}_0$ is at least as good as $\pi^*$. In the first case we have enough candidate policies in the set $\mathcal{N}_0$, which guarantees the convergence of CRPO within $\mathcal{N}_0$. In the second case we can directly conclude that $J_0(w_{\mathrm{out}}) \ge J_0(\pi^*)$. To establish the convergence rate of the constraint violation, note that $w_{\mathrm{out}}$ is selected from the set $\mathcal{N}_0$, and thus the violation is no worse than the sum of the constraint estimation error and the tolerance.
Function Approximation Setting
In the function approximation setting, we parameterize the policy by a two-layer neural network composed with the softmax. We assign a feature vector $\psi(s,a)\in\mathbb{R}^d$ with $d\ge 2$ to each state-action pair $(s,a)$. Without loss of generality, we assume that $\|\psi(s,a)\|_2\le 1$ for all $(s,a)\in\mathcal{S}\times\mathcal{A}$. A two-layer neural network $f((s,a); W, b)$ with input $\psi(s,a)$ and width $m$ takes the form given in eq. (6), where $W\in\mathbb{R}^{md}$ and $b\in\mathbb{R}^m$ are the parameters. When training the two-layer neural network, we initialize the parameters via $b_r\sim\mathrm{Unif}(\{-1,+1\})$ and $[W_0]_r\sim\mathcal{D}_w$, with $d_1\le\|[W_0]_r\|_2\le d_2$ (where $d_1$ and $d_2$ are positive constants) for all $[W_0]_r$ in the support of $\mathcal{D}_w$. During training, we only update $W$ and keep $b$ fixed, which is widely adopted in the convergence analysis of neural networks (Cai et al., 2019;Du et al., 2018). For notational simplicity, we write $f((s,a); W, b)$ as $f((s,a); W)$ in the sequel. Using the neural network in eq. (6), we define the softmax policy $\pi^\tau_W$ for all $(s,a)\in\mathcal{S}\times\mathcal{A}$ as in eq. (7), where $\tau$ is the temperature parameter. We define the feature mapping $\phi_W(s,a) = \nabla_W f((s,a); W)$, whose $r$-th block corresponds to the $r$-th hidden unit, for all $(s,a)\in\mathcal{S}\times\mathcal{A}$ and all $r\in\{1,\dots,m\}$; a sketch of this parameterization is given below.
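The sketch below assumes the standard overparameterized two-layer ReLU form $f((s,a);W,b) = \frac{1}{\sqrt{m}}\sum_{r=1}^{m} b_r \max\{[W]_r^\top\psi(s,a), 0\}$ used in (Cai et al., 2019)-style analyses; this specific form, and the Gaussian stand-in for $\mathcal{D}_w$, are assumptions of the example rather than details taken from eq. (6).

```python
import numpy as np

def init_two_layer(m, d, rng):
    """b_r ~ Unif{-1, +1} (kept fixed during training); rows of W_0 i.i.d.

    The Gaussian row distribution below is an assumed stand-in for the
    paper's distribution D_w, which bounds the row norms.
    """
    b = rng.choice([-1.0, 1.0], size=m)
    W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(m, d))
    return W, b

def two_layer(psi, W, b):
    """f(x; W, b) = (1/sqrt(m)) * sum_r b_r * relu(W_r . psi(x))."""
    m = W.shape[0]
    pre = W @ psi                                   # (m,) pre-activations
    return float(b @ np.maximum(pre, 0.0)) / np.sqrt(m)
```

Under this assumed form, the ReLU is positively homogeneous in $W$ for fixed $b$, so rescaling $W$ and the temperature $\tau$ by a common positive factor leaves the induced softmax policy unchanged.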
Policy Evaluation: To estimate the state-action value function in Algorithm 1 (line 3), we adopt another neural network $f((s,a);\theta^i)$ as an approximator, where $f((s,a);\theta^i)$ has the same structure as $f((s,a);W)$ in eq. (6), with $W$ replaced by $\theta\in\mathbb{R}^{md}$. To perform the policy evaluation step, we adopt TD learning with neural network parametrization, which has also been used for the policy evaluation step in (Cai et al., 2019;Wang et al., 2019;Zhang et al., 2020). Specifically, we choose the same initialization as the policy network, i.e., $\theta^i_0 = W_0$, and perform the TD iteration $$\tilde{\theta}^i_{k+1} = \theta^i_k - \beta\,\delta^i_k\,\nabla_\theta f((s,a);\theta^i_k), \qquad (8)$$ $$\theta^i_{k+1} = \Pi_B\big(\tilde{\theta}^i_{k+1}\big), \qquad (9)$$ where $s\sim\mu_{\pi_W}$, $a\sim\pi_W(\cdot|s)$, $s'\sim\mathsf{P}(\cdot|s,a)$, $a'\sim\pi_W(\cdot|s')$, $\beta$ is the learning rate, and $B$ is a compact set defined as $B = \{\theta\in\mathbb{R}^{md}: \|\theta-\theta^i_0\|_2\le R\}$. For simplicity, we denote the state-action pairs as $x=(s,a)$ and $x'=(s',a')$ in the sequel, and define the temporal difference error as $\delta^i_k = f(x;\theta^i_k) - c_i(s,a,s') - \gamma f(x';\theta^i_k)$. We then describe the following regularity conditions on the stationary distribution $\mu_{\pi_W}$, the state-action value function $Q^i_{\pi_W}$, and the variance, which have been widely adopted in the analysis of TD learning with function approximation and stochastic approximation (SA) (Cai et al., 2019;Wang et al., 2019;Zhang et al., 2020;Fu et al., 2020).
Assumption 2. The state-action value functions lie in the function class $\mathcal{F}_{R,\infty}$, the infinite-width counterpart of the networks defined above. Assumption 1 implies that the distribution of $\psi(s,a)$ has a uniformly upper-bounded probability density over the unit sphere, which is satisfied by most ergodic Markov chains. Assumption 2 is a mild regularity condition on $Q^i_{\pi_W}$, as $\mathcal{F}_{R,\infty}$ is a function class of neural networks with infinite width, which captures a sufficiently general family of functions. Assumption 3 on the variance bound is standard and has been widely adopted in the stochastic optimization literature (Ghadimi & Lan, 2013;Nemirovski et al., 2009;Lan, 2012;Ghadimi & Lan, 2016).
In the following lemma, we characterize the convergence rate of neural TD in high probability, which is needed for our analysis. Such a result is stronger than the convergence in expectation provided in (Bhandari et al., 2018;Cai et al., 2019;Wang et al., 2019;Zhang et al., 2020;Srikant & Ying, 2019), which would not suffice for our purposes. Lemma 1 (Convergence rate of TD in high probability). Consider the TD iteration with neural network approximation defined in eq. (8).
Suppose Assumptions 1-3 hold, assume that the stationary distribution $\mu_{\pi_W}$ is not degenerate for all $W\in B$, and let the stepsize $\beta$ be chosen suitably small. Then, with high probability, the TD output approximates $Q^i_{\pi_W}$ with an error that vanishes as the network width $m$ grows. Lemma 1 implies that, after performing the neural TD learning in eq. (8)-eq. (9) for $\Theta(\sqrt{m})$ iterations, we can obtain an accurate estimate $\bar{Q}^i_t$.

Constraint Estimation: Since the state space is usually very large or even infinite in the function approximation setting, we cannot include all state-action pairs to estimate the constraints as in the tabular setting. Instead, we sample a batch of state-action pairs $(s_j,a_j)\in B_t$ from the distribution $\xi(\cdot)\pi_{W_t}(\cdot|\cdot)$ and let the weight factor be $\rho_j = 1/|B_t|$ for all $j$. In this case, the estimation error of the constraints is small when the policy evaluation $\bar{Q}^i_t$ is accurate and the batch size $|B_t|$ is large. We assume the following concentration property for the sampling process in the constraint estimation step; similar assumptions have been made in (Ghadimi & Lan, 2013;Nemirovski et al., 2009;Lan, 2012;Ghadimi & Lan, 2016).
Assumption 4. For any parameterized policy $\pi_W$, there exists a constant $C_f>0$ such that, for all $k\ge 0$, the sampled constraint estimates concentrate around their expectations at a rate governed by $C_f$. Policy Optimization: In the neural softmax approximation setting, at each iteration $t$, an approximation $\bar{\Delta}_t$ of the natural policy gradient can be obtained by solving the linear regression problem in eq. (10) (Agarwal et al., 2019;Wang et al., 2019;Xu et al., 2019b). Given the approximated natural policy gradient $\bar{\Delta}_t$, the policy update takes the form of eq. (11). Note that in eq. (11) we also update the temperature parameter by $\tau_{t+1} = \tau_t + \alpha$ simultaneously, which ensures $W_t\in B$ for all $t$; a least-squares sketch of this step is given below.
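One common way to realize such a regression is compatible function approximation: regress the scaled value estimates on the feature map $\phi_{W_t}$ over sampled state-action pairs and use the least-squares solution as $\bar{\Delta}_t$. The sketch below assumes this form (the exact objective in eq. (10) may differ), and the small ridge term is added purely for numerical stability.

```python
import numpy as np

def npg_direction(phi, q_hat, gamma, ridge=1e-6):
    """Least-squares estimate of the natural policy gradient direction.

    phi:   (n, md) matrix whose rows are feature maps phi_{W_t}(s_j, a_j)
    q_hat: (n,) estimated Q-values at the same state-action pairs
    Solves argmin_D ||phi @ D - q_hat / (1 - gamma)||^2; the ridge
    regularizer is an added assumption for numerical stability.
    """
    target = q_hat / (1.0 - gamma)
    A = phi.T @ phi + ridge * np.eye(phi.shape[1])
    return np.linalg.solve(A, phi.T @ target)
```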
The following theorem characterizes the convergence rate of Algorithm 1 in terms of both the objective function and the constraint violation. Theorem 2. Consider Algorithm 1 in the function approximation setting with the neural softmax policy parameterization defined in eq. (7). Suppose Assumptions 1-4 hold, suppose the setting of the policy evaluation step stated in Lemma 1 holds, and consider performing the neural TD updates in eq. (8) and eq. (9).
Then, with probability at least $1-\delta$, the optimality gap $J_0(\pi^*) - \mathbb{E}[J_0(W_{\mathrm{out}})]$ is bounded by $\mathcal{O}(1/\sqrt{T})$ up to a function approximation error, and for all $i=1,\dots,p$, the constraint violation $\mathbb{E}[J_i(W_{\mathrm{out}})] - d_i$ is bounded by $\mathcal{O}(1/\sqrt{T})$ up to a function approximation error, where the expectation is taken only with respect to the randomness of selecting $W_{\mathrm{out}}$ from $\mathcal{N}_0$.
Theorem 2 guarantees that CRPO converges to the globally optimal policy $\pi^*$ in the feasible set at a sublinear rate of $\mathcal{O}(1/\sqrt{T})$. To compare with the primal-dual approach in the function approximation setting, Theorem 2 shows that while the value function gap of CRPO achieves the same convergence rate as the primal-dual approach, the constraint violation of CRPO decays at a rate of $\mathcal{O}(1/\sqrt{T})$, which substantially outperforms the rate $\mathcal{O}(1/T^{1/4})$ of the primal-dual approach (Ding et al., 2020b). Such an advantage of CRPO is further validated by our experiments in Section 5, which show that the constraint violation of CRPO vanishes much faster than that of the primal-dual approach.
Remark 1. Our convergence analysis for Theorem 2 still goes through without Assumptions 3 and 4; in that case, the convergence rate of CRPO would have polynomial, rather than logarithmic, dependence on $1/\delta$.
Experiments
In this section, we conduct simulation experiments on different SRL tasks to compare CRPO with the primal-dual optimization (PDO) approach. We consider two tasks based on OpenAI Gym (Brockman et al., 2016), each with multiple constraints, given as follows. Cartpole: The agent is rewarded for keeping the pole upright, but is penalized with a cost if (1) it enters some specific areas, or (2) the angle of the pole is large.
Acrobot: The agent is rewarded for swinging the end-effector to a specific height, but is penalized with a cost if it applies torque on the joint (1) when the first link swings in a prohibited direction, or (2) when the second link swings in a prohibited direction with respect to the first link.
The detailed experimental setting is described in Appendix A. For both experiments, we use a neural softmax policy with two hidden layers of size (128, 128). For a fair comparison, we adopt TRPO as the optimizer for both CRPO and PDO. In CRPO, we set the tolerance $\eta = 0.5$ in both tasks. In PDO, we initialize the Lagrange multiplier as zero and select the best-tuned stepsize for the dual variable update in both tasks. We find that the performance of CRPO is robust to the value of $\eta$ over a wide range, while the convergence of the PDO method is very sensitive to the stepsize of the dual variable (see the additional hyperparameter comparison in Appendix A). Thus, in contrast to the difficulty of tuning PDO, CRPO is much less sensitive to hyperparameters and is hence much easier to tune.
The learning curves for CRPO and PDO are provided in Figure 1. At each step we evaluate performance with two metrics: the return reward and the constraint value of the output policy. We also show the learning curve of unconstrained TRPO (the green line), which, although it achieves the best reward, does not satisfy the constraints. In both tasks, CRPO tracks the constraint returns almost exactly to the limit, indicating that CRPO sufficiently explores the boundary of the feasible set, which results in an optimal return reward. In contrast, although PDO also outputs a constraint-satisfying policy in the end, it tends to over- or under-enforce the constraints, which results in a lower return reward and unstable constraint satisfaction. In terms of convergence, the constraints of CRPO drop below the thresholds (and are thus satisfied) much faster than those of PDO, corroborating our theoretical comparison that the constraint violation of CRPO (given in Theorem 2) converges much faster than that of PDO, given in (Ding et al., 2020b).
Conclusion
In this paper, we propose a novel CRPO approach for policy optimization in SRL, which is easy to implement and has a provable global optimality guarantee. We show that CRPO achieves an $\mathcal{O}(1/\sqrt{T})$ convergence rate to the global optimum and an $\mathcal{O}(1/\sqrt{T})$ rate of vanishing constraint error when the NPG update is adopted as the optimizer. This is the first primal SRL algorithm with a provable convergence guarantee to a global optimum. In the future, it would be interesting to incorporate various momentum schemes into CRPO to improve its convergence performance.

In our constrained Cartpole environment, the agent is penalized with a cost when (1) entering into some specific areas, or (2) having the angle of the pole larger than 6 degrees.
In our constrained Acrobot environment, each episode has length 500. During training, the agent receives a reward of +1 when the end-effector is at a height of 0.5, but is penalized with a cost of +1 when (1) a torque with value +1 is applied while the first pendulum swings in an anticlockwise direction; or (2) a torque with value +1 is applied while the second pendulum swings in an anticlockwise direction with respect to the first pendulum.
For details about the PDO update, please refer to Section 10.3.3 of (Achiam et al., 2017). The performance of PDO is very sensitive to the stepsize of the dual variable's update. If the stepsize is too small, the dual variable will not update quickly enough to enforce the constraints. If the stepsize is too large, the algorithm will behave conservatively and have a low return reward. To appropriately select the stepsize for the dual variable, we ran experiments with the learning rates {0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05} for both tasks. The learning rate 0.005 performed best in the first task, and 0.0005 performed best in the second task. Thus, our reported result for Cartpole uses the stepsize 0.005 and our reported result for Acrobot uses the stepsize 0.0005.
Next, we investigate the robustness of CRPO with respect to the tolerance parameter $\eta$. We conduct experiments with $\eta\in\{10, 5, 2, 1, 0.5\}$ for the Acrobot environment. It can be seen from Figure 2 that the learning curves of CRPO for the different values of the tolerance parameter $\eta$ are almost identical, which indicates that the convergence performance of CRPO is robust to the value of $\eta$ over a wide range. Thus, the tolerance parameter $\eta$ does not add much parameter tuning cost for CRPO.

The following lemma characterizes the convergence rate of TD learning in the tabular setting. Lemma 2 ((Dalal et al., 2019)). Consider the iteration given in eq. (4) with arbitrary initialization $\theta^i_0$. Assume that the stationary distribution $\mu_{\pi_w}$ is not degenerate for all $w\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}$. Let the stepsize be $\beta_k = \Theta(1/k^\sigma)$ with $0<\sigma<1$. Then, with probability at least $1-\delta$, the iterate converges to the fixed point $\theta^i_*(\pi_w)$ at the stated high-probability rate; note that $\sigma$ can be arbitrarily close to 1. Lemma 2 implies that we can obtain an approximation $\bar{Q}^i_t$ with arbitrarily small error. Lemma 3 (Performance difference lemma (Kakade & Langford, 2002)). For all policies $\pi, \pi'$ and initial distributions $\rho$, we have $J^\rho_i(\pi) - J^\rho_i(\pi') = \frac{1}{1-\gamma}\,\mathbb{E}_{(s,a)\sim\nu^\rho_{\pi}}\big[A^i_{\pi'}(s,a)\big]$, where $J^\rho_i(\pi)$ and $\nu^\rho$ denote the accumulated reward (cost) function and the visitation distribution under policy $\pi$ when the initial state distribution is $\rho$.
Lemma 4 (Lemma 5.6 of (Agarwal et al., 2019)). Consider the approximated NPG update in line 7 of Algorithm 1 in the tabular setting with $i=0$. The NPG update takes the form $$\pi_{w_{t+1}}(a|s) = \pi_{w_t}(a|s)\,\frac{\exp\big(\alpha\,\bar{\Delta}_t(s,a)\big)}{Z_t(s)},$$ where $Z_t(s)$ is the normalization factor. Note that if we follow the update in line 10 of Algorithm 1, we obtain similar results for the case $i\in\{1,\dots,p\}$ as stated in Lemma 4.
Lemma 5 (Policy gradient property of softmax parameterization). Consider the softmax policy in the tabular setting (eq. (3)). For any initial state distribution $\rho$, the policy gradient admits a closed form in terms of $\mathbb{1}_{as}$, where $\mathbb{1}_{as}$ is an $|\mathcal{S}|\times|\mathcal{A}|$-dimensional vector whose $(a,s)$-th element is one and whose remaining elements are zero.
Proof. The first result follows directly from Lemma C.1 in (Agarwal et al., 2019). We now proceed to prove the second result.
Lemma 6 (Performance improvement bound for approximated NPG). For the iterates $\pi_{w_t}$ generated by the approximated NPG updates in line 7 of Algorithm 1 in the tabular setting, the stated per-step improvement bound holds for all initial state distributions $\rho$ when $i=0$. Proof. We first provide the following lower bound.
Thus, we conclude the lower bound above. We then proceed to prove Lemma 6. The performance difference lemma (Lemma 3) implies the claimed bound, where (i) follows from the update rule in Lemma 4 and (ii) follows from the facts noted above. Note that if we follow the update in line 10 of Algorithm 1, we can obtain similar results for the case $i\in\{1,\dots,p\}$ as stated in Lemma 6.
Lemma 7 (Upper bound on the optimality gap for approximated NPG). Consider the approximated NPG updates in line 7 of Algorithm 1 in the tabular setting with $i=0$. Proof. By the performance difference lemma (Lemma 3), the bound follows, where (i) follows from Lemma 4, (ii) follows from Lemma 6 and (iii) follows from the Lipschitz property proved in Proposition 1 of (Xu et al., 2020b).
Note that if we follow the update in line 10 of Algorithm 1, we can obtain the corresponding result for the case $i\in\{1,\dots,p\}$ as stated in Lemma 7. Lemma 8. Consider CRPO in Algorithm 1 in the tabular setting. Let $K_{\mathrm{in}} = \Theta\big(T^{1/\sigma}\log^{2/\sigma}(|\mathcal{S}|^2|\mathcal{A}|^2 T^{1+2/\sigma}/\delta)\big)$. Define $\mathcal{N}_i$ as the set of steps at which the CRPO algorithm chooses to minimize the $i$-th constraint. Then, with probability at least $1-\delta$, the stated bound holds. Proof. If $t\in\mathcal{N}_0$, by Lemma 7 we obtain eq. (12). If $t\in\mathcal{N}_i$, we similarly obtain eq. (13). Taking the summation of eq. (12) and eq. (13) from $t=0$ to $T-1$ yields eq. (14). Note that when $t\in\mathcal{N}_i$ ($i\neq 0$), we have $\bar{J}_i(\theta^i_t) > d_i + \eta$ (line 9 in Algorithm 1), which implies eq. (15). Substituting eq. (15) into eq. (14) yields eq. (16). By Lemma 2, the bound in eq. (17) holds with probability at least $1-\delta$. Thus, with the choice of $K_{\mathrm{in}}$ above, eq. (17) holds with probability at least $1-\delta/T$. Applying the union bound to eq. (17) from $t=0$ to $T-1$, we have that with probability at least $1-\delta$ the stated inequality holds, which further implies the claim of the lemma and completes the proof.
Lemma 9. If the condition in eq. (18) holds, then with probability at least $1-\delta$ the following hold: 1. $\mathcal{N}_0 \neq \emptyset$, i.e., $w_{\mathrm{out}}$ is well-defined; 2. one of the following two statements must hold. Proof. We prove Lemma 9 on the event that eq. (18) holds, which happens with probability at least $1-\delta$. Under such an event, the following inequality holds, which is also the result of Lemma 8.
We first verify item 1.
B.2. Proof of Theorem 1
We restate Theorem 1 as follows to include the specifics of the parameters.
Theorem 3 (Restatement of Theorem 1). Consider Algorithm 1 in the tabular setting. Let $\alpha = (1-\gamma)^{1.5}/\sqrt{|\mathcal{S}||\mathcal{A}|T}$, let the tolerance $\eta$ be of order $\frac{1}{(1-\gamma)^{1.5}\sqrt{T}}\big(3 + \mathbb{E}_{s\sim\nu^*}[D_{\mathrm{KL}}(\pi^*\|\pi_{w_0})] + 3c_{\max} + c_{\max}^2\big)$, and suppose the same setting for policy evaluation as in Lemma 2 holds. Then, with probability at least $1-\delta$, the optimality gap bound holds, and for all $i\in\{1,\dots,p\}$ the constraint violation bound holds.

To prove Theorem 1 (or Theorem 3), we again consider the event given in eq. (18), which happens with probability at least $1-\delta$. We first consider the convergence rate of the objective function; under this event, the bound from Lemma 8 holds. If $\sum_{t\in\mathcal{N}_0}\big(J_0(\pi^*) - J_0(\pi_{w_t})\big) \le 0$, then $J_0(\pi^*) - J_0(\pi_{w_{\mathrm{out}}}) \le 0$. If $\sum_{t\in\mathcal{N}_0}\big(J_0(\pi^*) - J_0(\pi_{w_t})\big) \ge 0$, then $|\mathcal{N}_0| \ge T/2$, which implies the claimed convergence rate. We then proceed to bound the constraint violation: for any $i\in\{1,\dots,p\}$, under the event defined in eq. (18), the claimed bound follows.

C. Proof of Lemma 1 and Theorem 2: Function Approximation Setting

For notational simplicity, we denote the state-action pairs $(s,a)$ and $(s',a')$ by $x$ and $x'$, respectively. We define the weighted norm $\|f\|_D = \big(\int f(x)^2\,\mathrm{d}D(x)\big)^{1/2}$ for any distribution $D$ over $\mathcal{S}\times\mathcal{A}$, and write $\theta^i_k$ as $\theta_k$ whenever there is no confusion in this subsection. We define $f_0(x,\theta) = f(x,\theta_0) + \nabla_\theta f(x,\theta_0)^\top(\theta-\theta_0)$ as the local linearization of $f(x,\theta)$ at the initial point $\theta_0$, and denote the corresponding temporal difference by $\delta_0$. The approximated stationary point $\theta^*$ satisfies $\bar{g}_0(\theta)^\top(\theta-\theta^*) \ge 0$ for any $\theta\in B$. We define the function spaces $\mathcal{F}_{0,m}$ and $\mathcal{F}_{0,\infty}$ accordingly, and define $f_0(x,\theta^*_\pi)$ as the projection of $Q_\pi(x)$ onto the function space $\mathcal{F}_{0,m}$ in the $\|\cdot\|_{\mu_\pi}$ norm. Without loss of generality, we assume $0<\delta<1/e$ in the sequel.
C.1. Supporting Lemmas for Proof of Lemma 1
We provide the proofs of the supporting lemmas for Lemma 1.
For the following Lemma 11 and Lemma 12, we provide slightly different proofs from those in (Cai et al., 2019), which are included here for completeness.
Lemma 11. Suppose Assumption 1 holds. For any policy $\pi$ and all $k\ge 0$, the stated bound holds, which further implies the second bound. We can then derive the following upper bound, where (i) follows from Assumption 1 and (ii) follows from the fact that $\|\theta_{0,r}\|_2 \ge d_1$.
Lemma 12. Suppose Assumption 1 holds. For any policy $\pi$ and all $k\ge 0$, the stated bound holds. Proof. By definition, we have the identity below, where (i) follows from eq. (21). We can then obtain the following upper bound,
where (i) follows from Hölder's inequality, and (ii) follows from the derivation in Lemma 11 after eq. (22).
Lemma 13. Suppose Assumption 1 holds. For any policy $\pi$ and all $k\ge 0$, with probability at least $1-\delta$, the stated bound holds. Proof. By definition, we have the decomposition below, where (i) follows from the fact that $\|\nabla_\theta f(x,\theta_k)\|_2 \le 1$. Then, eq. (26) implies the next bound. We first upper bound the first term: by definition, we have the bound below, where (i) follows from Lemma 12. We then proceed to bound the second term. By definition, we have the expression below, where (i) follows because $|b_r|\le 1$ and $\|\psi(s,a)\|_2\le 1$, and (ii) follows from eq. (21). Further, eq. (29) implies the next bound, where (i) follows from the derivation in Lemma 11 after eq. (22).
We proceed as follows.
Since $\bar{\mathcal{F}}_{0,m} \subset \mathcal{F}_{0,m}$, Lemma 10 implies that with probability at least $1-\delta$, the stated bound holds. Thus, with probability at least $1-\delta$, we have the next bound. Combining eq. (28), eq. (30) and eq. (33), we obtain that, with probability at least $1-\delta$, the desired inequality holds, which completes the proof.
We consider the convergence of $\theta^i_k$ for a given $i$ under a fixed policy $\pi$. For the iteration of $\theta_k$, we proceed as follows.
We first consider the term $\sum_{k=0}^{K-1}\|\zeta_k(\theta_k)\|_2^2$ and proceed as follows,
where (i) follows from Markov's inequality and (ii) follows from Assumption 3. Then, eq. (36) implies that, with probability at least $1-\delta_1$, the stated bound holds. We then consider the remaining term, which implies item (1).
Proof. We define $\mathcal{N}_i$ as the set of steps at which the CRPO algorithm chooses to minimize the $i$-th constraint. If $t\in\mathcal{N}_0$, by Lemma 15 we obtain eq. (12). If $t\in\mathcal{N}_i$, we similarly obtain eq. (13). Taking the summation of eq. (12) and eq. (13) from $t=0$ to $T-1$ yields the aggregate bound. Note that when $t\in\mathcal{N}_i$ ($i\neq 0$), we have $\bar{J}_i(\theta^i_t) > d_i + \eta$ (line 9 in Algorithm 1), which implies eq. (57). To bound the term $\bar{J}_i(\theta^i_t) - J_i(\pi_{\tau_t W_t})$, we decompose it into the sampling error $\bar{J}_i(\theta^i_t) - \mathbb{E}_{\nu_{\pi_{\tau_t W_t}}}[f_i((s,a),\bar{\theta}_t)]$ and the approximation error $\mathbb{E}_{\nu_{\pi_{\tau_t W_t}}}[f_i((s,a),\bar{\theta}_t)] - J_i(\pi_{\tau_t W_t})$, the latter of which is controlled by the error $f_i((s,a),\bar{\theta}_t) - Q^i_{\pi_{\tau_t W_t}}(s,a)$; step (i) can be obtained by following steps similar to those in eq. (50). Substituting eq. (58) into eq. (57), and then eq. (59) into eq. (56), yields the combined bound. We then upper bound the term $\sum_{t=0}^{T-1}\|f_i((s,a),\bar{\theta}_t) - Q^i_{\pi_{\tau_t W_t}}(s,a)\|_{\mu_{\pi_{\tau_t W_t}}}$. Lemma 1 implies that if we let $K_{\mathrm{in}} = C_1\big((1-\gamma)^2\sqrt{m}\big)$, then with probability at least $1-\delta_1/T$, we have $\|f((s,a);\bar{\theta}_K) - Q_\pi(s,a)\|_{\mu_\pi} \le C_2\,(1-\gamma)^{-1.5}\,m^{-1/8}$ up to a logarithmic factor, where $C_1$ and $C_2$ are positive constants. Applying the union bound, this holds for all $t$ with probability at least $1-\delta_1$. We then bound the sampling-error term. For simplicity, we denote $J'_i(\bar{\theta}_t) = \mathbb{E}_{\xi\cdot\mu_{\pi_{\tau_t W_t}}}[f_i((s,a),\bar{\theta}_t)]$. Recall that $\bar{J}_i(\theta^i_t) = \frac{1}{N}\sum_{j=1}^N f_i((s_j,a_j),\bar{\theta}_t)$. For each $t\ge 0$, we bound the error $\bar{J}_i(\theta^i_t) - J'_i(\bar{\theta}_t)$ as follows, where (i) follows from Markov's inequality. Then, eq. (62) implies that, with probability at least $1-\delta_2/T$, the stated bound holds.
"year": 2020,
"sha1": "c28e09dea1ac353af7d721d49ffe39d2dae44b12",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "700e73bf63837c555c40a6d67d919cb8154c52a0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Variation in water contact behaviour and risk of Schistosoma mansoni (re)infection among Ugandan school-aged children in an area with persistent high endemicity
Background: Annual mass drug administration with praziquantel has reduced schistosomiasis transmission in some highly endemic areas, but areas with persistent high endemicity have been identified across sub-Saharan Africa, including Uganda. In these areas many children are rapidly reinfected post treatment, while some children remain uninfected or have low-intensity infections. The aim of this mixed-methods study was to better understand variation in water contact locations, behaviours and infection risk in school-aged children within an area with persistent high endemicity to inform additional control efforts.

Methods: Data were collected in Bugoto, Mayuge District, Uganda. Two risk groups were identified from a longitudinal cohort, and eight children with no/low-intensity infections and eight children with reinfections were recruited. Individual structured day-long observations with a focus on water contact were conducted over two periods in 2018. In all identified water contact sites, four snail surveys were conducted quarterly over 1 year. All observed Biomphalaria snails were collected, counted and monitored in the laboratory for Schistosoma mansoni cercarial shedding for 3 weeks.

Results: Children came into contact with water for a range of purposes, either directly at the water sources or by coming into contact with water collected previously. Although some water contact practices were similar between the risk groups, only children with reinfection were observed fetching water for commercial purposes and swimming in water sources; this latter group of children also came into contact with water at a larger variety and number of sites compared to children with no/low-intensity infection. Households with children with no/low-intensity infections collected rainwater more often. Water contact was observed at 10 sites throughout the study, and a total of 9457 Biomphalaria snails were collected from these sites over four sampling periods. Four lake sites had a significantly higher Biomphalaria choanomphala abundance, and reinfected children came into contact with water at these sites more often than children with no/low-intensity infections. While only six snails shed cercariae, four were from sites only contacted by reinfected children.

Conclusions: Children with reinfection have more high-risk water contact behaviours and accessed water sites with higher B. choanomphala abundance, demonstrating that specific water contact behaviours interact with environmental features to explain variation in risk within areas with persistent high endemicity. Targeted behaviour change, vector control and safe water supplies could reduce reinfection in school-aged children in these settings.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13071-021-05121-6.
Background
Schistosomiasis is a neglected tropical disease caused by a water-borne parasitic infection. The World Health Organization (WHO) recently launched the 2021-2030 roadmap for neglected tropical diseases in which the global goal to eliminate schistosomiasis as a public health problem by 2030 is outlined (defined as < 1% of infections classified as high intensity) [1]. The disease disproportionately affects school-aged children (SAC) [2], and in areas where prevalence of schistosomiasis is ≥ 50% in SAC, the WHO recommends community-wide annual mass drug administration (MDA) with praziquantel to kill adult worms and reduce egg production. The target is to treat > 75% of SAC and at-risk adults in these highly endemic areas to prevent morbidity from schistosomiasis and ultimately reduce transmission [3].
After years of MDA, areas with persistent high prevalence and high intensity of infection and associated high morbidity, often termed persistent hotspots, remain in sub-Saharan Africa [4]. In Uganda, where an estimated 29% of the population are SAC [5], the target coverage for MDA was reached in only 43% of endemic districts in 2019, with a reported 61% of SAC receiving MDA across the country [6]. Although treatment can be highly effective in reducing morbidity (primarily caused by the eggs), treatment does not prevent subsequent reinfection, which is a key challenge in areas with active transmission [7]. Even in settings where target coverage for SAC is almost reached [8], more than half of children can become reinfected 6 months after clearance of parasites with successful treatment, with the majority of reinfections detected only 9 weeks after treatment [10]. Understanding both the behaviours associated with exposure and the biological drivers of infection in these communities is crucial to achieving WHO global targets.
The lack of availability of and/or access to safe water is a main driver in perpetuating the burden of schistosomiasis [11]. Although three in four children living in rural areas of Uganda are reported to have access to safe drinking water [12], multiple water sources, including unsafe ones, are often accessed for additional uses, such as domestic, personal care and recreational purposes [13,14]. If no protective measures or preventative treatment of water are used [15], contact with this water can expose children to Schistosoma mansoni parasites that develop in the intermediate hosts, species of Biomphalaria snails. Infection risk can increase with exposure factors, such as the duration of water contact, the frequency of water contact and the level of submersion in water [16][17][18], and with snail presence and abundance [19]. Some environmental factors have been found to be associated with the abundance of Biomphalaria snails, such as dry seasons and little rainfall [20][21][22], as well as lake sites in comparison to inland habitats [23]. Studies on Biomphalaria abundance and physicochemical water factors have reported differing findings, including associations with high pH [24], low pH [25,26], low conductivity [24,25], high temperature [23] and low turbidity [27,28], whereas some studies did not find any associations at all [20,29].
Some children in areas with persistent high endemicity remain uninfected or have such low-intensity infections that they are undetectable, suggesting variation in host susceptibility, biological exposure and/or behavioural susceptibility. While research has focussed on individuals who are persistently infected, either through rapid reinfection or because they are never treated with MDA [8], there has been minimal focus on consistently uninfected children. Identifying the risk behaviours of this group of children and comparing their behaviour to that of children with high-intensity infections and/or rapid reinfections will provide important insights and enable recommendations on locally feasible ways to reduce infections and reinfections among SAC and across the wider community. We reason that it might be easier for people to act on recommendations to reduce water contact when these recommendations are based on water contact behaviours and locations already known to be performed and used within specific communities.
In this mixed-methods study we identified groups of SAC who are rarely infected and who are rapidly reinfected, respectively, within an area with persistently high schistosomiasis endemicity. We used ethnographic observations, parasitological surveys and snail surveys to better understand variations in behaviour and exposure risk to improve guidance for sustainable integrated control.
The specific objectives of this study were: (i) to identify water contact risk behaviours and locations of water contact among SAC; (ii) to compare water contact behaviours between children with rapid reinfection and children with no or low-intensity infections; (iii) to assess variation in environmental infection risk at all water contact sites.
Study site
All data for this study were collected in Bugoto, Mayuge District, south-eastern Uganda (Fig. 1a, b), a community situated on the shores of Lake Victoria. Persistent high prevalence of schistosomiasis has been reported in Bugoto [9,30]. The community comprises two villages: Bugoto A (densely populated, close to the lake, predominately a fishing village) and Bugoto B (sparsely populated, inland from the lake, predominately a farming village) (Fig. 1c). Christianity and Islam are the main religions in the community, and the majority of people are of the Basoga tribal group. We selected participants from a primary school (Bugoto Lake View) that is situated between Bugoto A and B. It has students from both villages and is the main primary school in the area and the only public one.
Cohort selection
To better understand water contact behaviour and differences in exposure, two groups of children with differing S. mansoni parasitological statuses were selected from a larger longitudinal study cohort of SAC (SCHISTO_PERSIST) (n = 274; [31]). In this larger cohort, the presence of S. mansoni eggs and mean infection intensities were calculated from 3 days of duplicate Kato-Katz thick smears at four time points: March 2017 (week 0), September 2017 (week 28), October 2017 (week 32) and December 2017 (week 38) (Fig. 2). After both the March and September sample collections, all children were treated with 40 mg/kg praziquantel, regardless of infection status. In December 2017, only children with ≥ 100 eggs per gram (epg) of stool were treated.
Children were selected and placed into two groups based on the results of this longitudinal parasitological survey. Eight children with rapid reinfection (CRI) were selected for one group; CRI were defined as those who were infected at weeks 0 and 28 and who, although they cleared infection following treatment (0 epg at week 32), were rapidly reinfected 9 weeks post treatment (week 38). An additional eight children with low-intensity infection (CLI) were chosen for the second group; CLI were defined as having no detectable or low-intensity infections (< 100 epg) at weeks 0 and 28, and as still having 0 epg 9 weeks after successful clearance of infection at week 32 (see Additional file 1: Table S1 for the cohort-specific parasitological data used for selection). We aimed for even representation of village residence, gender, age and both religions across the groups.
Data collection

Parasitological surveys
We carried out additional parasitological surveys in 2018 and 2019 for the participating CRI and CLI present at weeks 51, 70, 82 and 101 after initial SCHISTO_PERSIST cohort selection (Fig. 2). Stool samples were collected on 1 to 3 consecutive days and examined by the duplicate standard Kato-Katz thick smear method [32] to determine the presence of S. mansoni eggs and to quantify infection intensity. All children with detected eggs were treated with 40 mg/kg praziquantel at each time point.
Ethnographic observations
Ethnographic methods for this study have been described in detail elsewhere [9]. In short, a team comprising one researcher and one community member/translator conducted two individual day-long ethnographic observations per child, for a total of 32 days across all selected children, of the participants' daily activities at home, school and in the community, with a focus on water contact behaviours. The 2 days of observation for each child were split to incorporate differences between school days and weekends/holidays and different seasons (March [rainy] and October [light rainy] 2018; Fig. 2). GPS coordinates of locations where children were observed to contact a water body were recorded with a Garmin™ eTrex 10 (Garmin Ltd., Olathe, KS, USA), and these data informed the sites for the snail surveys. Non-structured observation transect walks were performed throughout the data collection periods, across all days of the week, to record any additional potential water contact sites.
Snail surveys
All water contact sites identified through the ethnographic observations were included in the snail surveys. We collected snails during four quarterly quantitative surveys over the period of 1 year, performed in March, July and October 2018 and February 2019 (Fig. 2), to assess potential differences in seasonality. Our aim was to complete two surveys per water contact site per time point; however, no surveys were carried out at sites where the water was too deep to be safely entered at the time of the survey. Snail surveys followed the standard WHO protocol [33]: in brief, a handheld snail scoop was used to scoop the water body floor for 30 min per site, and all scooped snails were collected. Physicochemical water factors, including temperature, total dissolved solids, pH and conductivity, were measured using a handheld water meter (model ProDSS; YSI Inc., Yellow Springs, OH, USA) at each site at each time point and recorded on a paper form. After collection, snails were sorted by genus based on shell morphology, counted and individually placed in wells of shedding plates in 3 ml of bottled water for microscopic checks for cercarial shedding.
After the initial check of the collected snails, non-shedding snails were housed in well-ventilated aquaria at the Ugandan Ministry of Health-Vector Control Division in Kampala, at room temperature, with frequent water changes; they were fed dried lettuce ad libitum and monitored for 3 consecutive weeks to maximise the likelihood of detecting an infected snail, as cercariae can take up to several weeks to be released [34]. Snails were checked weekly for cercarial shedding by placing them under a warm indirect light source to induce cercarial release.
All snail collections, identification of snail genera and cercariae species, and monitoring for shedding were performed by trained staff members (FB and others) of the Ugandan Ministry of Health-Vector Control Division.
Data analysis
Parasitological surveys
Data from the parasitological surveys were double entered into Microsoft Excel (Microsoft Corp., Redmond, WA, USA) and imported into R 4.0.4 [35] for merging, cleaning and analysis. The infection status of each participating child was determined at all sampling time points, and mean infection intensities were classified as per WHO guidelines, with a mean of 1-99 epg classified as low infection intensity, 100-399 epg as moderate and ≥ 400 epg as high infection intensity [36]. Medians and interquartile ranges (IQRs) of infection intensities by risk group and sampling week were calculated and compared using Mann-Whitney U-tests.
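A hedged sketch of the classification and comparison steps described above is shown below: mean epg is binned into the WHO intensity bands, and group intensities are compared with a two-sided Mann-Whitney U-test. The original analyses were run in R; this Python analogue and its example values are illustrative, not study data.

```python
# Bin mean epg into WHO intensity bands and compare two groups with a
# two-sided Mann-Whitney U-test (scipy). Values are hypothetical.
from scipy.stats import mannwhitneyu

def who_intensity(mean_epg):
    if mean_epg == 0:
        return "uninfected"
    if mean_epg < 100:
        return "low"       # 1-99 epg
    if mean_epg < 400:
        return "moderate"  # 100-399 epg
    return "high"          # >= 400 epg

cri = [240, 480, 96, 312, 72]  # hypothetical CRI intensities at one time point
cli = [0, 0, 24, 0, 12]        # hypothetical CLI intensities
u, p = mannwhitneyu(cri, cli, alternative="two-sided")
print([who_intensity(x) for x in cri], f"U={u}, p={p:.3f}")
```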
Ethnographic data
Detailed notes from daily observations were transcribed, coded, categorised and compared using NVivo 12 [37] and Microsoft Excel.
Snail surveys
Survey data were double entered into Microsoft Excel and imported into R. Snail abundance, distribution of species and occurrence of cercarial shedding were calculated by water contact site and survey time point, and snail abundances were compared using Kruskal-Wallis tests and Mann-Whitney U-tests. Generalised linear mixed models (GLMMs), implemented with the R package glmmTMB [38], were fitted to assess the effect of physicochemical water factors on snail abundance. Collinearity was assessed by the variance inflation factor (VIF) using the package car [39]. Based on the differences in water site attendance and frequency of contact with water sites between CLI and CRI, as well as differences in high-risk behaviours such as swimming, sites were assigned a low- or high-risk status. Maps were produced with QGIS 3.14 [40] using reference maps from Google Maps (satellite imagery) and the Uganda Bureau of Statistics (district boundaries) [41].
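A Python analogue of two parts of this workflow is sketched below: a Kruskal-Wallis test of snail abundance across sites, and VIFs for the physicochemical predictors. The original used R (glmmTMB for the GLMMs, car for VIFs); the libraries shown here are substitutes, and all numbers are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import kruskal
from statsmodels.stats.outliers_influence import variance_inflation_factor

site_a = [120, 95, 210, 80]    # snail counts per survey round, site A
site_b = [15, 30, 22, 40]      # site B
site_c = [300, 250, 410, 390]  # site C
h, p = kruskal(site_a, site_b, site_c)
print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "temperature": rng.normal(26, 2, 40),
    "pH": rng.normal(7.5, 0.4, 40),
    "conductivity": rng.normal(100, 15, 40),
})
Xc = sm.add_constant(X)  # compute VIFs against a model with an intercept
vifs = {col: variance_inflation_factor(Xc.values, i)
        for i, col in enumerate(Xc.columns) if col != "const"}
print({k: round(v, 2) for k, v in vifs.items()})
```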
Cohort
Sixteen children were initially selected for participation in the study. The group of CLI comprised four girls and four boys, equally distributed by village, ranging in age from 8 to 14 years (median: 11 years). The group of CRI comprised five boys and three girls, ranging in age from 6 to 13 years (median: 10 years), with five children living in Bugoto A and three in Bugoto B. Representation of religions was similar in both groups: three CLI and four CRI were Muslim; all others were Christian. In both groups, one child was not present at the time of the second set of ethnographic observations; both were replaced with children with similar characteristics, resulting in data collected from 18 children in total.
Parasitological surveys
The post-selection parasitological surveys (weeks 51-101) showed significant and sustained differences in infection status and intensity between CRI and CLI over time (Fig. 3). For the majority of CRI, infections were detected at each time point, despite repeated praziquantel treatment. In contrast, no or few schistosome eggs were found in CLI across the time points (Additional file 1: Table S1). Significantly higher infection intensities (in epg) were found in CRI compared to CLI for week 38
Ethnographic observations
Water sources
There are several types of water sources in Bugoto, both natural and man-made. The largest permanent source of water is Lake Victoria, with adjacent swamps (Fig. 1). Inland of Bugoto B there are some rice paddies, ditches and ponds, some of which are temporary and can dry up in the dry season. Rainfall provides an additional source of water during the rainy seasons. At the time of the study, there were no piped water systems in Bugoto A or B; however, the villages have a small number of boreholes with taps, some of which were out of order at the time of the study and some of which were in use, with a pay-per-quantity system. The nearest freely accessible borehole with a tap was a few kilometres outside of Bugoto in a different village. In addition to the distance, there were often long queues by the tap. Small plastic sachets of drinking water are also sold in the shops in Bugoto A and B.
Exposure behaviours
Children had skin contact with water during a wide range of activities (Table 1), both at the sources directly and later on with water collected and transported away from the collection source. Direct exposure was mainly through fetching water for the household, which 14 of the total 18 children recruited across both data collection periods carried out at least once during the observations. Children accessed the water barefoot, standing submerged at depths of between the ankles and the hips for a few minutes at a time to fill the jerrycans, and subsequently exiting the water. We did not observe noticeable differences in the frequency and duration of household water fetching or in submersion depth between the CLI and CRI children. However, in comparison to CLI, CRI had additional water contact behaviours, in particular fetching water for money and swimming in the lake, which were not observed among any of the CLI. Four children engaged in swimming or playing in the water up to 4 times per day, ranging from 8 to 50 min of water contact in total. These children were all boys in the CRI group, aged up to 10 years and from both villages and religions. Three CRI, including one who also engaged in the swimming activity, collected water for other community members in return for money, up to five times per day. This group comprised children of both genders and religions and of different ages, but all lived in Bugoto A, the village on the shores of the lake. The duration of this commercial fetching activity and the submersion depth were similar to those of household fetching, consisting of a few minutes each time and a depth ranging up to waist height. Only one child, a male CRI from Bugoto A, was observed to play with the collected water at home, pouring a layer of water from a jerrycan in a room with a concrete floor to use in a slip-and-slide activity.

Across all observation days, CLI accessed six different water contact sites and CRI accessed nine sites. Five CLI accessed one site, two accessed two water sites and two accessed three sites. CRI accessed a larger variety of sites than CLI, with one CRI accessing one site, three accessing two different sites, three accessing three different sites and two accessing four different sites. Both CLI and CRI had direct water contact most often between 1700 hours and 1900 hours (Fig. 4), with almost half of the total water contact episodes occurring during this time.
Although we recorded only two instances of direct water contact among CLI before 1700 hours (one between 0900 hours and 1100 hours, and one between 1500 hours and 1700 hours), CRI accessed the water several times throughout the day, including around midday; only one child accessed the water after 1900 hours, when it was already dark.
Children collected water in hard plastic jerrycans, mainly of 20 l; however, smaller jerrycans of 10 l and 5 l were also used, especially by young children who could not yet carry large jerrycans. Full jerrycans were mostly carried back by hand or on the head, but they were sometimes put on or tied to bicycles. In the case of missing caps, children sometimes used thick leaves to prevent spillage.
The children placed the jerrycans with collected water by the house for immediate or later household use; however, it was not recorded how long the water was stored before use. No households reported, or were observed, processing the water to eliminate potential pathogens before use. Some children heated bathing water for their parents on a fire, but this was reportedly done for warmth on cooler days, not for infection prevention.
Collected water was used for many purposes. Children used water for washing foods before cooking, for mopping the house floors (barefoot or in flipflops) and for washing their hands, feet or face. They added soap from soap bars to water for washing crockery and cutlery, washing jerrycans, washing clothes and footwear as well as for bathing and helping household members to bathe. We did not observe differences between CLI and CRI in the frequency or duration of these water contact activities in the household.
Half of the households collected rain water, an activity carried out more often by households of CLI (n = 6) than households of CRI (n = 3). People mainly collected rain by putting jerrycans out when it rained: one family had a larger open plastic tank next to their house and another family had constructed a round brick cistern on their compound. Rain water was used both for drinking and for household use. Other water for drinking was reported to come from boreholes, and no child or other household member was observed to drink water from lakes, swamps, ditches or ponds, either directly or after collection.
We carried out observations on an equal number of school days for CLI and CRI (9 school days vs. 7 non-school days for each group). Three children were found skipping a full day of school, all of whom were CRI. There was a higher frequency of all water contact activities on non-school days, and swimming and commercial fetching were observed only on non-school days. We observed more water contact on non-school days than on school days for several activities: children fetched water for the household on 71% of non-school days vs. 39% of school days; the corresponding figures were 36% and 6% for washing clothes, 71% and 50% for washing plates, and 86% and 72% for bathing. The only water contact children had at school was washing their hands after eating a snack or lunch, reflected in a minor difference in handwashing between non-school days (79%) and school days (72%).
Snail surveys
Through the ethnographic observations, 10 water contact sites were identified (A-J, Fig. 1). Most sites were by Lake Victoria (A-F), two were ponds (H, J), one was a swamp (G) and one was a ditch (I). Sites A-G were in the more densely populated area of Bugoto A; sites H, I and J were in rural areas of the surrounding villages. Site A was excluded from the surveys in March, July and October, and site B in July, because water levels at these sites were too high to scoop safely for snails.
In total, we collected 9457 Biomphalaria snails across the four time points and 10 sites. Of the collected snails whose species was identified (n = 6149), 64% were Biomphalaria choanomphala, 20% were Biomphalaria sudanica and 15% were Biomphalaria pfeifferi (Additional file 2: Table S2). At the water contact sites by the lake (Fig. 1, sites A-F), the majority of snails were B. choanomphala (90%), whereas in the ponds, swamp and ditch, B. sudanica (58%) and B. pfeifferi (42%) were more frequently present and abundant. In the analyses of physicochemical water factors (Additional file 4: Table S4; Additional file 5: Table S5), no variables were significantly different between the designated low- or high-risk sites.
Of the 9457 collected Biomphalaria snails, six (0.06%) shed S. mansoni cercariae during the 3 weeks of post-collection monitoring. Four shedding snails were B. choanomphala and two were B. sudanica. They were found during the March 2018 survey (n = 5) and the October 2018 survey (n = 1). Two were collected from site E, and one each from sites B, C, D and G. Of the six snails found shedding cercariae, four were from high-risk sites.
Discussion
Although progress has been made in recent years towards decreasing the burden of schistosomiasis in several endemic countries [42], areas of persistent high endemicity with rapid reinfection after treatment remain a major challenge for schistosomiasis control [43]. Despite very high prevalence in some of these communities, not everyone is infected, and understanding why some individuals do not become infected in these areas may help to guide novel and current interventions. Here we demonstrate consistent and sustained differences in infection risk among SAC within an area of persistent high S. mansoni endemicity in Uganda. We integrated parasitological, ethnographic and malacological methods to identify water contact behaviours and locations, and environmental factors at these locations, among SAC, and explored differences between children with rapid reinfection (CRI) and those with no/low-intensity infections (CLI). By using mixed methodologies, we were able to differentiate key risk behaviours and locations that have implications for differences in schistosomiasis infection within an area of persistent high endemicity.

SAC in this setting had frequent direct and indirect water contact for a range of purposes. The ethnographic observations highlighted several direct water contact risk behaviours by CRI that were not observed among CLI. Only CRI were observed to go swimming, an activity found to be a risk factor for schistosomiasis infection in previous studies among SAC [44,45]. Swimming could contribute to a higher infection risk due to the greater degree of body submersion in the water and, therefore, increased skin contact with water, as well as the observed additional and extended time of water contact. Integrated schistosomiasis control through the development of safe water supplies, although important, does not affect the risk of infection from swimming. Swimming often has a different purpose, such as recreation [46] or cooling down from the heat [47]. To address this identified risk in CRI, additional interventions, such as parental guidance or community supervision aimed at prohibiting children from going swimming [48,49], may have an effect, but alternative options for cooling down or recreation [50] should be considered.
To our knowledge, this study is the first to report the practice of commercial water fetching: some children, all of them CRI, fetched water for other community members in return for money. This practice not only adds to the frequency of water contact but could also pose an economically compelling alternative to attending school, especially in an area where the mean daily income is around $1/day [51]. We observed more water contact on days when children were not in school, and some CRI were even skipping school days, suggesting that school attendance may, aside from its educational benefits, reduce exposure and infection. In addition, where MDA is carried out in school settings, school non-attendance could mean that some high-risk children are not reached [52].
For most household chores involving water contact, we did not observe considerable differences between the two groups in the frequency and duration of water contact or in the depth of submersion in water. Collected water was not treated or processed (e.g. left in the sunshine) by either group, and although we did not record how long water was stored before use, a practice that can decrease the number of viable cercariae in water [15], we learned from discussions with community members, Ugandan ethnographers working in this village and Ministry of Health technicians that water is rarely stored long enough for natural cercarial death. Further research on the storage of water at the household level could provide additional understanding of indirect exposure and possible differences in infection risk. For some chores, such as handwashing, bathing and washing clothes and dishes, infection risks could also have been minimised with the use of soap, which has been reported to kill cercariae [53,54]. Although other possible protective measures, such as wearing gloves and boots, could be used for household chores with previously collected water, submersion when fetching water is often too deep for these measures to be useful at the water contact sites, and structural interventions such as jetties may be more beneficial.
Drinking water was retrieved from safe water sources, but the additional time, cost or effort this took was possibly too great to meet the larger water needs of household use. Households of CLI collected rain water more often than households of CRI. Until sufficient and accessible safe water is supplied, appropriate containers to collect rain could be used to increase safe water security in the rainy seasons.
Children from both groups had direct water contact most often in the after-school hours between 1700 hours and 1900 hours. CRI, however, were observed entering the water throughout the day, while no CLI were seen entering the water at midday, when cercarial shedding is thought to be highest [55,56]. The timing of direct water contact could therefore pose an additional risk for CRI. Children with rapid reinfection also accessed a larger variety of water contact sites than CLI. Multiple water source use is common in low- and middle-income countries, for reasons including seasonality, perception of quality, distance or cost [13]. This practice could affect infection risk in areas where infection risk between water contact sites is not homogeneous, as in our study. Sites annotated as high-risk sites, combining observed risk behaviour and site attendance, were mainly lake sites with high human activity and nearby latrines. These sites had a higher abundance of B. choanomphala snails in comparison to the low-risk, mostly non-lake, sites where B. pfeifferi and B. sudanica were mainly found. These results are in line with those of previous studies in which these latter species were also mostly found in shallow swampy waters [25,28,57].

The WHO urges snail control to become a more prominent part of control strategies [58], including in areas with persistent high endemicity [33]. Although the impact of chemical-based molluscicides on schistosomiasis transmission has been reported in several studies, challenges include cost, toxicity and the need for regular application [59]. We recorded 10 water contact sites for the children in our study, adding to the complexity of vector control required in this setting. Novel ecological solutions, such as introducing snail predators, could offer a lower-cost, more sustainable option [60]. In this setting, the sheer volume of Lake Victoria poses additional complexities and, therefore, treating the smaller non-lake water bodies would be more manageable; however, our findings suggest this would mainly target already lower-risk sites and low-risk children.
A larger proportion of the snails infected with S. mansoni cercariae were B. choanomphala. This species has been found to be more susceptible to S. mansoni than B. pfeifferi [22] or B. sudanica [61], but snail infection rates in our study were too low overall (0.06%) to explicitly show differences in susceptibility among snails in this community. Finding low numbers of snails shedding cercariae in an area with a high prevalence of human infections has been reported in several settings [23,24,62]. Although the shedding method could underestimate infections in snails [63], even a low prevalence of snails releasing cercariae may be sufficient to sustain transmission if exposure is frequent and prolonged, as found in this study. Additionally, we found very large populations of snails, indicating that even a very low infection prevalence would still result in high numbers of infected snails overall, especially as many of these sites were perennial snail sites capable of supporting year-round transmission.
In the study area, temperatures are also suitable for snail and parasite survival throughout the year. Temperature was positively associated with snail abundance, and the highest abundance of snails was found in the dry season, similar to results from other studies carried out around Lake Victoria [20,22]. Rainfall is suggested to increase pH and turbidity [20], which were also negatively associated with snail abundance in our study. Although snails were found year-round and infection potential is therefore continuously present, the occurrence of more snails during the dry season is of concern. Although water contact behaviour during the dry season was not recorded in this study, children have been observed to swim more during the dry season [49,64,65], as it is the hottest season and swimming in water is refreshing [66]. In addition, during the dry season, rain water is not available and some smaller water bodies dry up, possibly making people divert to permanent water bodies, as observed in Kenya [67] and Senegal [68], which in this study area is associated with a higher risk of water contact. Additional research on water contact behaviours across seasons could therefore provide greater insight. Furthermore, if exposures are highest in the dry season, MDA may be most effective if planned directly after the dry season, although further research would be needed to confirm this.
Our in-depth focus on these two villages provided the opportunity to combine several research methods to gain a more complete understanding of variations in infection risk. The study community is comparable to other communities in the district [51] and diverse in aspects such as density, livelihood and distance to water bodies. Even given the limited geography of the study area and the limited size of the study population, we found significant and sustained differences in infection status and infection intensity between the two risk groups. No heavy infections were found after repeated treatments with praziquantel, an encouraging finding towards the target of < 1% heavy infections by 2030 expressed in the recently published Roadmap for Neglected Tropical Diseases [1]. Although water contact sites were identified based on ethnographic observations of the selected CRI and CLI, the regular presence of study team members in the community, transect walks through the area and discussions with people from the community did not reveal any major water contact sites that might have been missed, indicating that these findings are relevant for the wider community.
While we used a standardised method for snail collection, deep water contact sites could not be sampled safely. Water depths at these sites were seasonal and limited our access, but water contact by study participants and other community members still occurred. Future research could consider conducting snail collection at these sites with longer scoops or other methods, such as dredging [69], or taking water samples instead to detect cercariae using fluorescent assays [70]. In addition, citizen science projects provide an opportunity to increase the frequency of snail collection to more consistently monitor snail species abundance and distribution, as well as to increase awareness and participation among the communities affected by schistosomiasis [71].
Another limitation of the study could be that the presence of the researchers, both long-term in this village and during the ethnographic data collection, may have changed the behaviours of the children being studied. However, this was minimised by having a community member in the research team during the ethnographic work who could recognise any atypical behaviour, and this topic was openly discussed. Furthermore, there is no reason to believe that the children, who did not know they were classified into the two CLI and CRI groups, would have changed their behaviour differentially, indicating that these findings are likely representative of these two groups of SAC. Findings from this study also provided input into ongoing qualitative research with SAC and their parents to further understand perceptions of, and attitudes towards, water contact and schistosomiasis more generally.
Conclusions
The findings of this study highlight specific water contact behaviours and environmental risk factors that can explain variation in S. mansoni infection risk among SAC within an area of persistent high endemicity. We recommend complementing existing MDA programmes with targeted vector control and safe water supply, including the collection of rain water, as well as addressing directly contributing factors, such as commercial water fetching and swimming, and indirect factors, such as school attendance, in order to reduce (re)infections in these highly endemic settings.
Abbreviations CLI: Children with no or low-intensity infection and no infection after clearance; CRI: Children with rapid reinfection; epg: Eggs per gram; GLMM: Generalised linear mixed models; MDA: Mass drug administration; SAC: School-aged children; VIF: Variance inflation factor score; WHO: World Health Organization.
Additional file 1: Table S1. Schistosoma mansoni infection status and intensity (mean number of eggs per gram of stool from 1-3 days of duplicate Kato-Katz thick smears) per time point for CLI and CRI. Week 0 occurred in March 2017.
Additional file 3: Table S3. Infection risk factors and site risk classification.
Additional file 4: Table S4. Temperature, pH, total dissolved solids and conductivity by type of water contact site. Temperature was not found to be different between lake and non-lake sites, but pH was significantly higher at the lake sites. Total dissolved solids and conductivity were significantly higher at the non-lake sites.
Additional file 5: Table S5. Best-fitted generalised linear mixed models (GLMM) of Biomphalaria spp. abundance and physicochemical factors by type of water contact site. Temperature showed a significantly positive association with snail abundance in lake sites as well as non-lake sites. Increased pH had a significantly negative association with abundance in both lake and non-lake sites, and a slight negative association was also found with conductivity.
"year": 2022,
"sha1": "4f967167496be865bd2f9397b1e8d24141ead650",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "935198636b3759954bea667e9a55ef6ab728a64c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Helping Junior High School Student to Learn Fibonacci Sequence with Video-Based Learning
In the 21st century, teachers need to make teaching and learning activities conducive and fun to improve students' interest in learning. According to preliminary studies, the Fibonacci sequence is one of the materials that is difficult for Junior High School students to master. Therefore, this study aims to design instructional videos on the Fibonacci sequence using Hawgent and Camtasia Studio. The research and development (RnD) method was used, with the research carried out at Guangxi Normal University, China, following the 4D model comprising Define, Design, Develop, and Disseminate. From February to April 2020, the authors developed instructional materials using engaging animations that are easy for students to understand. This study showed that the instructional materials were valid, with scores of 83.33, 79.17, and 87.50 from the instructional material expert, media expert, and lecturer, respectively. These scores indicate that the instructional materials in video form are valid and can be distributed to schools to help students understand the Fibonacci sequence. However, an implementation stage and a study of the effect on students' mathematical abilities are still needed. Keywords: Fibonacci, Junior High School students, Video-Based Learning, Geometric Sequence
Introduction
The use of technology in mathematics teaching is not new, because the two are inseparable. Technology helps students understand mathematical concepts and significantly improves their mathematical ability [1]-[6]. Its use also makes them more active in class [7], [8], thereby improving their ability to ask unique and creative questions. There has been much research in China on using technology in teaching and learning activities [4], [9]-[11]. However, this research was carried out to determine the significance associated with the use of technology in mathematics learning [12], [13]. University students in China and other countries have used much software to develop learning media for mathematics [14], [15]. Indonesia has also focused on developing technology-based learning media for mathematics at all levels of education, from primary schools to universities [16], [17]. Most universities in Indonesia are also developing various technology-based learning media to help teachers and students explain and understand mathematics concepts. Martin Bernard, a lecturer from IKIP Siliwangi, Bandung, has developed many technology-based learning media to help teachers and students during mathematics lessons [18]-[21]. Many studies have developed instructional materials; however, none has developed engaging instructional materials that connect the Fibonacci sequence to daily life, its history, and its central figure.
The Fibonacci sequence is a series of numbers in which each term is obtained from the 2 terms before it. Leonardo da Pisa first introduced Fibonacci numbers in the 13th century [22]. They are used for numerous purposes, such as predicting the price change of a product in economics. Fibonacci numbers also appear in everyday life, and the sequence has a close relationship with the arithmetic sequence taught in 11th grade. Furthermore, it is covered in introductory university courses [23], [24]. The arithmetic sequence is also one of the concepts tested in the Indonesian national exam. Unfortunately, despite the importance of the Fibonacci and arithmetic sequences, there are few engaging instructional materials to help students. Therefore, the authors decided to develop instructional materials in video form.
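To make the rule concrete, the short sketch below generates the first terms of the sequence. The starting pair (1, 1) follows the classical rabbit-breeding formulation; this starting point is an assumption for the example rather than something specified in the text.

```python
# Minimal illustration of the rule described above: each Fibonacci number is
# the sum of the two numbers before it, starting from 1, 1.

def fibonacci(n):
    """Return the first n Fibonacci numbers."""
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```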
The aims of this study are as follows:
1. Determining the best way for teachers to explain the Fibonacci sequence to students (Define stage).
2. Making videos containing instructional materials on the Fibonacci sequence using Hawgent and Camtasia Studio (Design stage).
3. Validating the videos with a media expert, an instructional material expert, and a lecturer.
Method
The development of the learning media used the 4D method comprising Define, Design, Develop and Disseminate [25]. Firstly, the authors carried out a preliminary study to determine the difficulties and attitudes of Junior High School students when learning the arithmetic sequence. The learning video was developed at Guangxi Normal University, China, with data obtained from 51 first-year students. Secondly, a learning video was made according to the initial observation results using Hawgent dynamic mathematics software, Microsoft PowerPoint, and Camtasia Studio. The video was first designed using Hawgent and then Camtasia Studio before it was validated by material and media experts. The validators were experts in their fields: three from China and three from Indonesia. This research focuses only on making the learning video; each stage of the process was conducted separately to ensure its quality.
When validating the learning video on Fibonacci, the authors gave questionnaires to the validators, scored as shown in Table 1:
1. The validator feels the learning video is not good.
2. The validator feels the learning video is not so good.
3. The validator feels the learning video is pretty good.
4. The validator feels the learning video is good.
The questionnaire results were analyzed using the four-degree percentage analysis technique. The data analysis formula is as follows:
Pi = (Xi / Yi) × 100%
where Pi is the percentage value, Xi is the accumulation of the validation results, and Yi is the maximum score of the validation results. The percentage result was analyzed further using the validation categories. When the percentage score is above 70.1%, as shown in Table 2, the learning video is feasible and can be continued to the micro class stage.
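A minimal sketch of this computation is shown below; it assumes 4 points per questionnaire item (the maximum in Table 1), and the example scores are hypothetical.

```python
# Four-degree percentage analysis: Pi = (Xi / Yi) x 100, where Xi accumulates
# the questionnaire scores and Yi is the maximum possible score (4 per item).
# The 70.1% feasibility threshold follows Table 2.

def validation_percentage(scores, max_per_item=4):
    xi = sum(scores)                 # accumulated validation result
    yi = max_per_item * len(scores)  # maximum achievable score
    return 100.0 * xi / yi

pi = validation_percentage([4, 3, 4, 3, 4, 3])  # hypothetical item scores
print(f"{pi:.2f}% -> {'feasible' if pi > 70.1 else 'needs revision'}")
```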
Define stage
The authors analyzed the importance of Fibonacci in school. They realized that the Fibonacci sequence is part of the arithmetic material taught in 11th grade and in introductory university courses. Ouellette [27] stated that there is a correlation between studying the Fibonacci sequence and increased problem-solving ability.
Furthermore, based on observations at the school, the authors found that students were not familiar with the Fibonacci sequence despite studying a similar topic known as the arithmetic sequence. When the authors asked students about Fibonacci, most of those in Junior High School had no idea, with only a few aware of its interconnection with the arithmetic sequence. According to preliminary studies, students need to know the history of mathematical concepts to understand why they are unique and exciting. Furthermore, teachers teach students the concept of sequences directly, without a proper introduction. This makes it difficult for students to understand; those in Junior High School only remember the formula and guess the sequence. Therefore, they feel that learning sequences is boring.
The above problem encouraged the authors to use technology to help Junior High students and teachers develop a unique learning media that introduces the history of mathematics to achieve deep and proper learning.
After discussions with Professor Tang Jianlan from Guangxi Normal University, China, the authors designed a learning video on the arithmetic sequence to explain its relationship with the Fibonacci sequence. This learning video was designed using three pieces of software: Hawgent Dynamic Mathematics Software, Microsoft PowerPoint, and Camtasia Studio.
In the design stage, the authors carried out concept and curriculum analyses to determine the importance of the basic concept of sequences and of the relationship between the Fibonacci and arithmetic sequences for high school and university students. The analysis showed that students need to master sequences, as they are tested in the Indonesian National Examination.
Design stage
Hawgent dynamic mathematics software was used to make the learning media on the arithmetic sequence. Its latest version, 3.0, has a user-friendly design that enables users to change pictures from 2D to 3D format at the press of a button. This software helps teachers easily create exciting learning media using pictures.
Furthermore, it enables the easy conversion of animations into pictures savable in Microsoft PowerPoint. The authors used PowerPoint to explain the arithmetic sequence and connected it with the history and culture of Fibonacci. The history of Fibonacci is obtainable from Google and Wikipedia.
At the beginning of the learning video, the authors introduced Fibonacci (Figure 1), his relationship with mathematics, and the Fibonacci sequence. Before moving to the main topic, the learning video introduces the relationship between the arithmetic and Fibonacci sequences. Students were also introduced to historical figures in mathematics to increase their interest in the subject, showing that mathematics is associated not only with numbers and formulas but also with its own culture and history.
Fig. 1. Historical introduction to Fibonacci
After introducing students to Fibonacci and his relationship with mathematics, the video explains the Fibonacci sequence, as shown in Figure 2. The authors used rabbit breeding as a context to explain the Fibonacci sequence to students. The cute rabbit animation grabbed their attention and helped them understand the initial topic better [28]. This is also in line with preliminary research, which stated that contextual problems improve students' mathematical ability [29], [30]. The learning video also explains that students can find many things connected to the Fibonacci sequence in everyday activities. For instance, the authors used flowers (Figure 3) and fruits (Figure 4) as examples in this research. This guides students to understand that the Fibonacci sequence is present in their daily lives and to appreciate the importance of mastering basic mathematics concepts.
Development stage
The learning video was validated by two material experts, two lecturers, and two media experts. Of the six validators, three were from Indonesia and the remaining three from China. The average validation score from the material experts was 83.33%. The media experts' scores were 75% and 83.33%, giving an average of 79.17%. The lecturers' scores were 91.66% and 83.33%; hence, an average of 87.50% was obtained. Based on the validation scores given by the material experts, media experts, and lecturers, it was concluded that the learning video is valid and can be tested on a small scale. A graphic of the validation scores is shown in Figure 5. As previously discussed, this research only focuses on developing the product from the define stage to product validation. Subsequent research needs to implement the learning media with high school and university students to analyze their learning ability, outcomes, interest, and other mathematical abilities.
Fig. 5. Validation scores for the video learning on the Fibonacci sequence

Conclusion
In conclusion, learning videos tend to increase students' interest in learning the Fibonacci sequence. The learning video on Fibonacci, made using Hawgent, with a duration of 6 minutes and 36 seconds, was validated with input from material and media experts. The validation results from the material expert, the media expert, and the lecturer showed that the instructional materials in video form can be implemented in schools to help students understand the Fibonacci sequence. In this study, the authors only carried out the preliminary analysis, media design, and validation. Further studies need to be carried out to determine the effect of the learning video on students' mathematical ability.
"year": 2021,
"sha1": "4d3c2aab27483512b36dc30e1e800668b670fd0a",
"oa_license": "CCBY",
"oa_url": "https://online-journals.org/index.php/i-jim/article/download/23097/9333",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "501c075f30693442a9a506eebdf6c2b1d47b6f0f",
"s2fieldsofstudy": [
"Education",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
TPL Inhibits the Invasion and Migration of Drug-Resistant Ovarian Cancer by Targeting the PI3K/AKT/NF-κB-Signaling Pathway to Inhibit the Polarization of M2 TAMs
Chemoresistance is the primary reason for the poor prognosis of patients with ovarian cancer, and the search for a novel drug treatment or adjuvant chemotherapy drug is an urgent need. The tumor microenvironment plays a key role in the incidence and development of tumors. As one of the most important components of the tumor microenvironment, M2 tumor-associated macrophages are closely related to tumor migration, invasion, the immunosuppressive phenotype and drug resistance. Many studies have confirmed that triptolide (TPL), one of the principal components of Tripterygium wilfordii, possesses broad-spectrum anti-tumor activity. The aims of this study were to determine whether TPL could inhibit the migration and invasion of A2780/DDP cells in vitro and in vivo by inhibiting the polarization of M2 tumor-associated macrophages (TAMs); to explore the mechanism(s) underlying TPL effects; and to investigate the influence of TPL on the murine intestinal symbiotic microbiota. In vitro results showed that M2 macrophage supernatant slightly promoted the proliferation, invasion, and migration of A2780/DDP cells, an effect that was reversed by TPL in a dose-dependent manner. Animal experiments showed that TPL, and particularly TPL + cisplatin (DDP), significantly reduced the tumor burden and prolonged the life span of mice by inhibiting M2 macrophage polarization, and downregulated the levels of CD31 (a vascular marker) and CD206 (an M2 macrophage marker); the mechanism may be related to inhibition of the PI3K/Akt/NF-κB signaling pathway. High-throughput sequencing of the intestinal microbiota in nude mice showed that Akkermansia and Clostridium were upregulated by DDP and TPL, respectively. We also found that Lactobacillus and Akkermansia were downregulated by DDP combined with TPL. Our results highlight the importance of M2 TAMs in epithelial ovarian cancer (EOC) migration ability, invasiveness, and resistance to DDP. We also preliminarily explored the mechanism governing the reversal of M2 macrophage polarization by TPL.
INTRODUCTION
Ovarian cancer is the leading cause of death among all gynecological malignancies, and chemoresistant ovarian cancer is the principal cause of poor outcomes in patients (1,2). Considering the shortcomings of current treatment modalities for ovarian cancer, including cytoreductive surgery and platinum-taxane combination chemotherapy, it is of paramount importance to develop novel strategies to treat this disease.
The occurrence and development of drug-resistant ovarian cancer is a complex and multifactorial process, involving the tumor microenvironment (3), matrix metalloproteinases (4), the epithelial-mesenchymal transition (5), and autophagy (6). Of these, the tumor microenvironment is closely related to tumor invasion and metastasis, and significantly affects the efficiency and effectiveness of tumor treatment (7). The tumor microenvironment is composed of tumor cells, the surrounding tissue fluid, cytokines, and stromal cells, including various immune cells, fibroblasts, endothelial cells, pericytes, platelets, and macrophages (8). Tumor-associated macrophages (TAMs) represent an important component of the tumor microenvironment and can be divided into M1 and M2 types (9). A large number of studies have shown that M2 TAMs promote the occurrence and development of tumors by secreting vascular endothelial growth factor (VEGF), which participates in angiogenesis. Further, matrix metalloproteinases (MMP2, MMP9), which promote tumor invasion and metastasis (10,11), are significantly related to a poor prognosis in tumors such as pancreatic cancer (12,13). Notably, activation of the PI3K/AKT/NF-κB-signaling pathway is conducive to M2 macrophage polarization and is involved in tumor progression and resistance to chemotherapy (14).
The intestinal microbiota constitutes the body's normal intestinal microorganisms and is closely related to human health (15). Indeed, an imbalance in the intestinal microbiota can lead to various diseases such as inflammatory bowel disease (16), obesity (17), diabetes (18), and even cancer (19). In recent years, there has been increasing interest in the relationship between the intestinal microbiota and tumors in the context of tumor treatment (20). Studies have indicated that the intestinal microbiota can mediate significant anti-tumor effects by modulating inflammation and restoring immune functions (21). In this context, chemotherapy leads to dysregulation of and damage to the intestinal microbiota, characterized by a reduction in beneficial lactic-acid-generating bacteria, such as Enterococcus and Bifidobacterium, and by an increase in the pathogens Escherichia coli and Staphylococcus (22,23). For example, an imbalance in the intestinal microbiota reduces the effect of anti-PD1 treatment in patients with advanced cancer; the use of antibiotics can reverse this phenomenon by increasing the relative abundance of Akkermansia muciniphila (24), which can regulate the thickness of the intestinal mucosa and maintain the integrity of the intestinal barrier (25). Although no link has been reported between the gut microbiome and immunity in ovarian cancer patients, studies have shown that when mice receive antibiotic treatment, the progression of xenograft ovarian tumors is delayed (26). Although there are reports that Traditional Chinese Medicines can treat diabetes (27), obesity (28), and colitis (29) by regulating the intestinal microbiota, the relationship between anti-tumor Traditional Chinese Medicines and the intestinal microbiota has not yet been reported. Here, we hypothesized that the effect of TPL on drug-resistant ovarian cancer may be related to its ability to modulate the relative abundances of the intestinal microbiota.
We previously showed that triptolide (TPL), one of the primary active ingredients of Tripterygium wilfordii, a Traditional Chinese Medicine reported to be therapeutically efficacious in rheumatoid arthritis, inhibited the growth, invasion, and migratory capability of drug-resistant ovarian cancer cells (30) and reversed the resistance of ovarian cancer cells to cisplatin by inhibiting the phosphorylation of AKT (31). In this study, drug-resistant ovarian cancer cells (A2780/DDP) were used to investigate whether TPL inhibits the invasion and migration of drug-resistant ovarian cancer by inhibiting the polarization of M2 TAMs through the PI3K/AKT/NF-κB signaling pathway, and to explore the relationship between TPL and the intestinal microbiota.
Morphological Observations
Cells in the logarithmic growth phase were seeded in a 6-well plate at a concentration of 5 × 10^5 cells/well. After 24 h of incubation, A2780/DDP cells were treated with varying concentrations of triptolide (TPL) (J&K Scientific Ltd, Cat. no.: T2899) (0, 6.25, 12.5, 25, 50, and 100 nM). After 24 h of treatment, we observed the growth and morphological changes of the cells under an inverted phase-contrast microscope (Olympus, Japan).
CCK8 Cytotoxicity Assay
We used the CCK8 assay to evaluate the inhibitory effect of TPL on A2780/DDP cells. We seeded 100 µL of A2780/DDP cells (10^5 cells/mL) into 96-well plates and either left them untreated (control) or treated them with varying concentrations of TPL (6.25, 12.5, 25, 50, and 100 nM) for 24 h. We then added 10 µL of CCK-8 solution to each well and incubated the wells for 1 h. The optical density of each well was measured at 490 nm using a microplate reader (Molecular Devices, CA, USA).
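A hedged sketch of how an IC50 can be estimated from such viability readings is shown below, using a four-parameter logistic (Hill) fit. The concentrations mirror those used above, but the viability fractions are hypothetical and this fit is illustrative, not the study's actual curve-fitting procedure.

```python
# Estimate an IC50 from CCK8 viability data with a four-parameter logistic
# (Hill) curve fit via scipy. All viability values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0])  # nM TPL
viab = np.array([0.88, 0.62, 0.41, 0.27, 0.15])   # fraction of control OD490

params, _ = curve_fit(hill, conc, viab, p0=[0.1, 1.0, 20.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.2f} nM")
```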
Cell Proliferation Assay
TPL was diluted in cell culture supernatant to different concentrations (Co-c, 6.25, 12.5, 25, 50, and 100 nM, where Co-c refers to the cell supernatant collected above). We then measured the proliferative capability of A2780/DDP cells in the various groups treated with different concentrations of TPL for 24 h using the commercially available Cell-Light EdU Apollo567 in vitro kit according to the manufacturer's instructions (RIBOBIO).
Transwell Migration and Invasion Assay
We used a 24-well Boyden chamber (8-µm pore size; Corning Costar, USA) for the transwell migration assay. A2780/DDP cells (3 × 10^4 or 6 × 10^4) were loaded into the top of the 24-well migration chamber in 200 µL of serum-free medium, and 700 µL of RPMI 1640 medium containing 20% FBS was added to the lower chamber to induce cell migration. Cells were incubated with a range of TPL concentrations (0, 6.25, 12.5, 25, 50, and 100 nM), either alone or diluted with cell culture supernatant, or with control medium, in an incubator for 72 h. The cells that migrated to the lower surface of the filter were fixed with 4% paraformaldehyde for 20 min, washed 3 times with phosphate-buffered saline (PBS), and then stained with 0.1% crystal violet solution for 30 min. Photomicrographs (100×) were taken with an Olympus IX51 inverted microscope (Olympus Optical, Melville, NY, USA), and three visual fields were counted. The sterile Boyden chamber was also used for invasion measurements. BD Matrigel (BD Biosciences, USA) was placed in a 4°C refrigerator overnight before commencing the experiment. We placed the pipette tips and Boyden chambers in a refrigerator at 4°C 30 min before the start of the experiment, at which point the BD matrix had liquified. We then diluted the BD matrix with serum-free RPMI 1640 medium at a 1:9 ratio. After the upper chamber had been precoated with Matrigel, we followed the same procedure as for the migration assay.
Extracellular Matrix-Adhesion Assay
A2780/DDP cells were seeded in 6-well plates, treated with different concentrations of TPL (0, Co-c, 6.25, 12.5, 25, 50, and 100 nM), and then transferred to 12-well plates (1 × 10^5 cells/well). The cells were cultured in an incubator at 37°C in an atmosphere of 5% CO2 for 3 or 6 h before the culture solution and unattached cells were aspirated, with five replicates per group. The plates were washed twice with PBS, and all attached cells were collected after treatment with trypsin. We counted the cells to calculate the cellular adhesion rate using the following formula: cell adhesion rate = (number of adherent cells/total number of cells) × 100%. All experiments were performed in triplicate.
Establishment of the Xenograft Tumor Model
The A2780/DDP cells were cultured, and the density was adjusted to 1 × 10^7 cells/mL. We used inbred female BALB/c nude mice at 6 weeks of age and weighing 15-20 g, raised under strict specific-pathogen-free (SPF) conditions; all mice were provided with free access to water and chow. We then extracted 0.1 mL of the cell suspension (containing 1 × 10^6 cells) and inoculated the cells into the axilla of the mice using a 1-mL syringe. The control group (i.e., animals without cell injection) received 0.1 mL of normal saline by gavage once every 2 days for a total of 10 doses. The tumor model (M) group was administered saline in the same manner as the control group. The cisplatin (DDP) treatment group was administered 4 mg/kg/d cisplatin intraperitoneally on days 1 and 8. The TPL treatment group received 0.15 mg/kg/d of triptolide, diluted to a final volume of 0.1 mL with normal saline, administered intraperitoneally once a day for 14 days. In addition, the TPL + DDP group was administered the combination of 0.15 mg/kg/d TPL intraperitoneally once a day for 14 days and 4 mg/kg/d DDP intraperitoneally on days 1 and 8.
Tumor Growth in Nude Mice
Following the establishment of our model, tumor volumes were measured with Vernier calipers every 2 days, and the tumors were photographed. On the 15th day, three mice in each group (except the control group) were euthanized; their tumors were excised, weighed, and frozen at −80°C, and the tumor inhibition rate was calculated. The remaining nude mice were used to generate survival curves to evaluate the survival of the tumor-bearing mice.
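A minimal sketch of the tumor inhibition rate computation is shown below. The formula (1 − mean treated weight / mean model weight) × 100% is a common convention; the paper does not state its exact formula, and the example weights are hypothetical.

```python
# Tumor inhibition rate under the common convention noted above.

def inhibition_rate(treated_weights, model_weights):
    t = sum(treated_weights) / len(treated_weights)
    m = sum(model_weights) / len(model_weights)
    return 100.0 * (1.0 - t / m)

model = [1.20, 1.05, 1.32]    # tumor weight (g), untreated model (M) group
ddp_tpl = [0.35, 0.28, 0.41]  # tumor weight (g), DDP + TPL group
print(f"inhibition rate: {inhibition_rate(ddp_tpl, model):.1f}%")
```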
Western Blotting
Cells were lysed on ice with Radio Immunoprecipitation Assay (RIPA) buffer.
Statistical Analysis
All statistical analyses were performed using Prism 7 (GraphPad). The log-rank test and 1- or 2-way analysis of variance (ANOVA) followed by Tukey's multiple-comparison test were used in all studies, as noted in the figure legends. Data are presented as mean ± standard deviation (SD). Statistical significance was defined as *P < 0.05, **P < 0.01, and ***P < 0.001.
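A hedged sketch of these comparisons is shown below, using lifelines for the log-rank test and scipy for a one-way ANOVA; the original analyses were run in GraphPad Prism, Tukey's post-hoc test is omitted here, and all numbers are illustrative.

```python
# Log-rank survival comparison and one-way ANOVA on hypothetical data.
from lifelines.statistics import logrank_test
from scipy.stats import f_oneway

days_m = [28, 31, 35, 40, 44]   # survival (days), model group
days_tx = [45, 52, 58, 60, 66]  # survival (days), DDP + TPL group
events_m = [1, 1, 1, 1, 1]      # 1 = death observed
events_tx = [1, 1, 1, 0, 1]     # 0 = censored
lr = logrank_test(days_m, days_tx,
                  event_observed_A=events_m, event_observed_B=events_tx)
print(f"log-rank p = {lr.p_value:.4f}")

f, p = f_oneway([1.2, 1.1, 1.3], [0.8, 0.7, 0.9], [0.4, 0.3, 0.5])
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")
```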
Ethics Statement
This study was approved by the Ethics Committee of Nanchang Royo Biotech Co. Ltd (RYE2019062702), and all studies were conducted according to approved guidelines.
TPL Inhibits the Survival, Migration, and Invasion of A2780/DDP Cells
The A2780/DDP cell line is a cisplatin-resistant human epithelial ovarian cancer cell line. In the current study, to examine the response of A2780/DDP cells to TPL, we used a series of TPL concentrations. After 24 h, the morphological changes of the cells were analyzed, as shown in Figure 1A: as the concentration of TPL increased, the cell density gradually decreased and the cellular debris increased commensurately. We also performed a CCK8 cytotoxicity assay to evaluate the effect of TPL on the survival rate of A2780/DDP cells. We found that as the TPL concentration increased, the optical density value was gradually reduced (Figure 1B), with an IC50 for TPL of 18.26 nM. Next, we used transwell invasion and migration experiments to investigate the effect of TPL on the migratory and invasive capabilities of A2780/DDP cells (Figures 1C, D). Compared to the control group, as the concentration of TPL increased, the migratory and invasive capabilities of the cells were gradually reduced in a dose-dependent manner. These data suggest that TPL can significantly inhibit the survival of A2780/DDP cells and downregulate tumor cell migration and invasion.
Establishment of the TAM Model In Vitro
We next established a TAM model in vitro. We seeded acute monocytic leukemia cells (THP-1) in their logarithmic growth phase in a 6-well plate at 1 × 10^6/mL and treated the cells with 200 ng/mL PMA for 24, 48, 72, or 96 h. When we observed their cellular morphology (Figure 2A), we found that without PMA the cells were in a circular suspension state with almost no cellular attachment. However, after adding 200 ng/mL PMA as an inducer and extending the induction time, the proportion of adherent cells increased. After 72 h of induction, we observed the largest number of adherent cells, with irregular shapes and obvious filopodia. At this time, the irregular cells were M0 unpolarized macrophages. To induce polarization of M0 macrophages to M2 macrophages, after 72 h of PMA treatment we changed to complete medium containing 20 nM recombinant human IL-4 and 20 nM recombinant human IL-13, and continued to culture the cells for 24, 48, or 72 h. Next, we detected the concentrations of IL-10 and IL-12 in the cell culture supernatant using the Human IL-10/IL-12p70 ELISA kit (Figures 2B, C). Compared to the THP-1 cell culture supernatant, IL-4 and IL-13 induced high levels of IL-10 in the cellular supernatant 48 h after treatment, but not of IL-12; significant differences are shown at the different time points. Subsequently, we collected cell extracts from the above treatment groups and determined the total expression of arginase-1 (Arg-1) protein (Figures 2D, E). Compared to THP-1 cells, the longer the cells were incubated with IL-4 and IL-13, the higher the expression of Arg-1 protein. We found an interesting phenomenon in that the level of Arg-1 did not increase further between 48 h and 72 h of incubation, which we speculated might be related to the depletion of nutrients in the cell culture medium. These results indicate that the M2 tumor-associated macrophage model was successfully established.

The results of the EdU proliferation assay are shown in Figures 3A, B. We found that compared to the control group (cells treated with complete medium), the proliferative rate of A2780/DDP cells in the Co-C group was slightly increased; however, the cell proliferation rate gradually decreased commensurately with increasing TPL concentration. We next performed a transwell experiment to assess the effect of TAM supernatant on the migratory and invasive abilities of A2780/DDP cells (Figures 3C-E). The result of the transwell experiment was similar to that of the EdU experiment, with the number of A2780/DDP cells passing through the membrane found to be the largest in the Co-C group. However, as the concentration of TPL increased, the number of cells passing through the membrane gradually decreased in a dose-dependent manner. We also performed extracellular matrix-adhesion experiments, the results of which are shown in Figure 3F. Although there was no significant difference in the number of adherent cells among the treatment groups 3 h after seeding, compared to the control group, the number of adherent cells in the co-culture group was slightly increased. We also found that as the concentration of TPL increased, the number of adherent cells showed a tendency to decrease (with similar results at 6 h). Taken together, the above results indicate that TAMs can slightly enhance cellular proliferation, migration, and invasion, and that TPL inhibits the proliferative, migratory, and invasive capabilities of A2780/DDP cells.
TPL May Inhibit the Polarization of M2-Type TAMs Through Inhibition of the PI3K/AKT/NF-κB Signaling Pathway
We next used an in vivo experiment to further study the effect of TPL on drug-resistant ovarian cancer. DDP, TPL, or DDP + TPL was administered intraperitoneally to tumor cell-implanted mice to evaluate the effects on tumor growth and survival time. As shown in Figures 4A, B, compared to the control group, both DDP and TPL inhibited tumor growth (as shown by the reduction in tumor weight). The combination of DDP + TPL not only exerted the optimal inhibitory effect on tumor growth, but also significantly prolonged the survival time of tumor-bearing mice (Figure 4C). Our immunohistochemical results also showed that the expression of CD206 and CD31 was significantly inhibited by DDP + TPL (Figure 4D). Therefore, we posit that DDP + TPL effectively reduced the number of M2 macrophages in tumor tissues and that TPL inhibited the expression of the vascular marker CD31, thereby inhibiting angiogenesis in tumor tissues. Western blotting experiments showed that TPL, DDP, and DDP + TPL downregulated the levels of MMP-9, MMP-2, VEGF, p-PI3K, p-AKT, and p-P65 (Figures 4E, F). The above data show that DDP + TPL can inhibit tumor invasion and migration, potentially by inhibiting the polarization of M2 macrophages through the PI3K/AKT/NF-κB-signaling pathway in vivo.
Effects of DDP and TPL on the Intestinal Microbiota
We next used high-throughput sequencing to investigate the effects of DDP and TPL on the intestinal microbiota of nude mice bearing ovarian cancer xenografts. A total of 14,826,581 filtered clean tags (411,849.5 tags/sample) and 15,807 OTUs were obtained from all the samples, with an average of 3,161.4 OTUs per group (data not shown). Chao1 and observed species indices represent community richness (total species), while Shannon and Simpson indices represent community diversity; these indices were used to evaluate the influence of DDP and TPL on the alpha diversity of the intestinal microbiota. We observed no significant changes in microbial abundance between the different treatments (Figure 5A). Compared to the M group, DDP significantly reduced microbial diversity, while TPL and DDP + TPL significantly improved it (Figure 5B). In the Venn diagram analysis, 359 OTUs were common to all groups, and the numbers of unique OTUs in the C, M, DDP, TPL, and DDP + TPL groups were 1,039, 362, 360, 977, and 239, respectively (Figure 5C). PCoA showed that dots were clustered in the TPL group and relatively dispersed in the DDP + TPL group. In addition, samples in the C and M groups clustered closely together but remained far from the DDP group, indicating that the microbial diversity in the DDP group differed markedly from that in either the C or M group (Figure 5D). Next, when we selected the relatively abundant taxa in the gut microbiota of the nude mice for analysis, our results indicated that the tumor model significantly increased the relative abundance of Sutterella (Figure 6D). Compared to the M group, treatment with DDP greatly increased the relative abundance of Akkermansia (Figure 6A) and reduced the relative abundances of Lactobacillus and Adlercreutzia (Figures 6B, E). Furthermore, treatment with TPL greatly increased the relative abundances of Flexispira, Clostridium, and Oscillospira (Figures 6F-H) and reduced the relative abundances of Sutterella and Adlercreutzia (Figures 6D, E). Intriguingly, although we noted few changes in Akkermansia, Bacteroides, Adlercreutzia, Flexispira, Clostridium, Oscillospira, or Mucispirillum between the M and DDP + TPL groups (Figures 6A, C, E-I), DDP + TPL reduced the relative abundances of Lactobacillus and Akkermansia (Figures 6B, D).
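To make the alpha-diversity metrics concrete, the following is a minimal sketch of how observed species, Chao1, Shannon, and Simpson indices can be computed from a per-sample OTU count vector; the counts are invented for illustration and are not the study's data.

```python
import numpy as np

def alpha_diversity(counts):
    """Compute common alpha-diversity indices from raw OTU counts for one sample."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]                # drop OTUs absent from this sample
    p = counts / counts.sum()                  # relative abundances
    observed = counts.size                     # observed species (OTU richness)
    singletons = np.sum(counts == 1)
    doubletons = np.sum(counts == 2)
    # Bias-corrected Chao1 richness estimator
    chao1 = observed + singletons * (singletons - 1) / (2.0 * (doubletons + 1))
    shannon = -np.sum(p * np.log(p))           # Shannon diversity (natural log)
    simpson = 1.0 - np.sum(p ** 2)             # Simpson diversity (1 - dominance)
    return {"observed": observed, "chao1": chao1,
            "shannon": shannon, "simpson": simpson}

# Hypothetical OTU count vector for one mouse fecal sample
print(alpha_diversity([120, 45, 3, 1, 1, 2, 78, 9]))
```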
DISCUSSION
The high fatality rate associated with ovarian cancer and its resistance to existing chemotherapeutic drugs have made the development of anti-tumor drugs a top priority for clinical researchers. Given the current difficulties in developing anti-tumor drugs, extracting candidate anti-tumor compounds from Traditional Chinese Medicines is a reasonable option. As one of the primary components of the Chinese herbal medicine Tripterygium wilfordii (a classic drug for treating rheumatoid arthritis), TPL has been found to offer broad-spectrum anti-tumor effects. Indeed, recent studies have shown that TPL reverses drug resistance in ovarian cancer. In the present study, we found that TPL inhibits the growth of drug-resistant ovarian cancer in vivo and in vitro, potentially by inhibiting the polarization of M2 TAMs through the PI3K/AKT/NF-kB signaling pathway.
Previous studies have shown that TPL exerts powerful anti-tumor actions by inhibiting cellular proliferation (33), blocking the cell cycle (34), interfering with tumor angiogenesis (35), inducing autophagy, and promoting cellular apoptosis (36). We presented similar findings herein: administering increasing concentrations of TPL to A2780/DDP cells commensurately augmented cellular apoptosis, elevated cytotoxicity, and attenuated the cells' capacity for migration and invasion (Figure 1). However, the underlying mechanism(s) of action remains unknown. There is evidence that M1-type macrophages secrete pro-inflammatory factors such as IL-12, which recognize tumor cells and play an important role in antigen presentation, whereas M2 macrophages secrete the inflammatory factor IL-10 to regulate blood vessels, which, in turn, promotes the occurrence and development of tumors (37). Additionally, Arg-1, a characteristic factor of M2 macrophages, was also significantly upregulated (Figure 2). We also found that when A2780/DDP cells were co-cultured with the supernatant from M2 TAMs, the proliferative ability of the cells was enhanced (Figure 3); compared to the control group, their invasive and migratory capabilities were also augmented. However, when TPL was added to the M2 TAM supernatant, the proliferative ability of A2780/DDP cells was inhibited in a dose-dependent manner; this coincides with the results of previous studies (38). M2-type macrophages participate in tumor angiogenesis in a variety of ways, including the release of various matrix metalloproteinases, serine proteases, and cathepsins. The release of these factors contributes to destroying the endothelial cell basement membrane and decomposing collagens and other components of the extracellular matrix, thereby aiding tumor and stromal cells in their migration (39). Western blotting (Figure 4) of nude mouse tumor tissues showed that TPL not only inhibited tumor growth but also reduced the expression of MMP-9, MMP-2, VEGF, p-PI3K, p-AKT, and p-P65 protein. The immunohistochemical staining results additionally showed that TPL combined with DDP reduced the expression of CD206 and CD31, indicating that TPL inhibited tumor invasion and metastasis by inhibiting the expression of metalloproteinases and CD31. The inhibition of CD206 expression suggests that TPL inhibits the polarization of M2 TAMs, which may be accomplished by inhibiting the PI3K/AKT/NF-kB pathway. However, we have not examined the effect of TPL on the PI3K/AKT/NF-kB signaling pathway in vitro, nor have we specifically evaluated how TPL regulates this pathway or how it affects the polarization of M2 TAMs; these are limitations of our study. Finally, high-throughput sequencing was used to detect microbial changes in the gut of the nude mice. Except for the closely related microbiotas of the M and C groups, the microbiotas of the treatment groups were obviously different; in particular, treatment with DDP significantly altered the microbial composition (Figure 5). Akkermansia is an intestinal symbiont that colonizes
the mucosal layer. Akkermansia not only participates in the immune regulation of the host, but also enhances the integrity of intestinal epithelial cells and the thickness of the mucous layer, thereby promoting intestinal health (40). A previous study showed an increased abundance of Akkermansia in the intestines of patients with melanoma who responded to immunotherapy, and applying these patients' fecal bacteria to a mouse melanoma model produced obvious tumor-suppressive effects (41). Clostridium butyricum is part of the normal intestinal microbiota, and the butyric acid it produces is not only the main source of nutrition and energy for intestinal mucosal cells but can also repair damaged intestinal mucosa, which is beneficial for regulating the human intestinal microecological balance (42). Related studies have shown that Clostridium butyricum inhibits the development of intestinal tumors (43). Furthermore, Sutterella, which belongs to the phylum Proteobacteria, is a common symbiotic bacterium in the human intestinal tract, and previous studies have shown that the abundance of Sutterella is positively correlated with intestinal diseases (44). Therefore, the increased abundance of the beneficial intestinal bacterium Akkermansia and the reduced abundance of Adlercreutzia following treatment with DDP confirm the ability of DDP to change the composition of the intestinal microflora. Moreover, the anti-tumor effect of TPL may be related to the increase in Clostridium and the reductions in Sutterella and Adlercreutzia (Figures 5 and 6), but this hypothesis needs to be further tested. (Figure 6 legend: one-way repeated-measures ANOVA with Tukey's test for multiple comparisons, A-I; *P < 0.05, **P < 0.01, ***P < 0.001.)
In summary, our results indicate that TPL combined with DDP can decrease the polarization of M2-type TAMs, thereby suppressing the proliferation, migration, and invasiveness of A2780/DDP cells in vitro and significantly prolonging the survival time of tumor-bearing nude mice; this effect may be mediated by inhibition of the PI3K/AKT/NF-kB pathway. In addition, DDP combined with TPL was found to promote the abundance of the beneficial intestinal bacteria Akkermansia and Clostridium and to reduce the relative abundance of the opportunistic pathogenic bacteria Sutterella and Adlercreutzia. Collectively, our results indicate that TPL is a promising adjuvant chemotherapy drug for the clinical treatment of drug-resistant ovarian cancer.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are publicly available. These data can be found here: PRJNA673986.
ETHICS STATEMENT
This study was approved by the Ethics Committee of Nanchang Royo Biotech Co. Ltd (RYE2019062702), and all studies were conducted according to approved guidelines. | 2021-07-26T13:24:19.384Z | 2021-07-26T00:00:00.000 | {
"year": 2021,
"sha1": "286116c8ff63cf729c3b7426d4b9ef9ca6728cda",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.704001/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "286116c8ff63cf729c3b7426d4b9ef9ca6728cda",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26669725 | pes2o/s2orc | v3-fos-license | Bakuchiol Is a Phenolic Isoprenoid with Novel Enantiomer-selective Anti-influenza A Virus Activity Involving Nrf2 Activation
Background: Novel therapeutic approaches against influenza are required. Bakuchiol is a phenolic isoprenoid found in Babchi seeds. Results: Bakuchiol enantiomer-selectively inhibited influenza A viral infection and growth and activated the Nrf2 pathway. Conclusion: Bakuchiol showed novel enantiomer-selective anti-influenza viral activity. Significance: The study of bakuchiol will contribute to the development of novel approaches to influenza therapy.
Influenza represents a substantial threat to human health and requires novel therapeutic approaches. Bakuchiol is a phenolic isoprenoid compound present in Babchi (Psoralea corylifolia L.) seeds. We examined the anti-influenza viral activity of synthetic bakuchiol using Madin-Darby canine kidney cells. We found that the naturally occurring form, (+)-(S)-bakuchiol, and its enantiomer, (−)-(R)-bakuchiol, inhibited influenza A viral infection and growth and reduced the expression of viral mRNAs and proteins in these cells. Furthermore, these compounds markedly reduced the mRNA expression of the host cell influenza A virus-induced immune response genes, interferon-β and myxovirus-resistant protein 1. Interestingly, (+)-(S)-bakuchiol had greater efficacy than (−)-(R)-bakuchiol, indicating that chirality influenced anti-influenza virus activity.
In vitro studies indicated that bakuchiol did not strongly inhibit the activities of influenza surface proteins or the M2 ion channel expressed in Chinese hamster ovary cells. Analysis of luciferase reporter assay data unexpectedly indicated that bakuchiol may induce some host cell factor(s) that inhibit firefly and Renilla luciferases. Next generation sequencing and KeyMolnet analysis of influenza A virus-infected and non-infected cells exposed to bakuchiol revealed activation of transcriptional regulation by nuclear factor erythroid 2-related factor (Nrf), and an Nrf2 reporter assay showed that (+)-(S)-bakuchiol activated Nrf2.
Additionally, (+)-(S)-bakuchiol up-regulated the mRNA levels of two Nrf2-induced genes, NAD(P)H quinone oxidoreductase 1 and glutathione S-transferase A3. These findings demonstrated that bakuchiol had enantiomer-selective anti-influenza viral activity involving a novel effect on the host cell oxidative stress response.
An influenza A pandemic caused 50 million deaths worldwide in 1918 (1, 2), the influenza A virus that originated in swine (H1N1) caused a pandemic in 2009, and avian H5N1 and H7N9 influenza A viruses are highly pathogenic to humans (1-3). Although neuraminidase (NA) inhibitors of the influenza virus have been widely used as antiviral drugs (4, 5), adverse effects (6-9) and the emergence of resistant viral strains (10, 11) have been reported. Thus, to prevent and control future influenza epidemics and pandemics, it is critically important that novel anti-influenza drugs be developed.
In the present study, we found that (+)-(S)-bakuchiol and (−)-(R)-bakuchiol (a synthetic enantiomer that does not occur naturally; Fig. 1A) inhibited influenza A H1N1 viral infection and growth in Madin-Darby canine kidney (MDCK) cells and also reduced the expression of viral mRNAs and proteins. They reduced the induction of interferon-β (IFN-β) and myxovirus-resistant protein 1 (Mx1) mRNAs by the influenza A virus. (+)-(S)-Bakuchiol showed stronger antiviral activities than (−)-(R)-bakuchiol, indicating that the steric structure was important for these activities. We used an influenza A virus minigenome assay employing a Dual-Luciferase system to analyze mRNA and protein levels, and this unexpectedly revealed that bakuchiol induced host factors that inhibited firefly and Renilla luciferases. Next generation sequencing (NGS) and KeyMolnet analysis revealed an up-regulation of transcriptional regulation by the nuclear factor erythroid 2-related factor (Nrf) pathway, and an Nrf2 reporter assay showed that (+)-(S)-bakuchiol activated Nrf2. Reverse transcription quantitative polymerase chain reaction (RT-qPCR) analyses showed that bakuchiol up-regulated mRNA expression of NAD(P)H quinone oxidoreductase 1 (NQO1) and glutathione S-transferase A3 (GSTA3); these are Nrf2-induced oxidative stress-responsive genes. Taken together, these results indicated that bakuchiol produced novel anti-influenza effects by targeting processes involved in the host oxidative stress response.
Analysis of the Effects of Influenza A Virus on MDCK Viability Using Naphthol Blue Black-MDCK cells were seeded in a 96-well plate (1 × 10⁴ cells/well). (+)-(S)-Bakuchiol or (−)-(R)-bakuchiol (0.8-100 μM in DMSO) was mixed with an influenza A virus strain (A/PR/8/34, A/CA/7/09, or A/Aichi/2/68) in the growth medium at a multiplicity of infection (MOI) of 10 and then incubated for 30 min at 37°C under 5% CO₂. The mixture was added to the cells, and the treated cells were incubated for 4 days at 37°C under 5% CO₂. After incubation, the cells were fixed using 10% formaldehyde in phosphate-buffered saline (PBS). The viable cells were then stained with a naphthol blue black solution (0.1% naphthol blue black, 0.1% sodium acetate, and 9% acetic acid) as described previously (28).
Thiazolyl Blue Tetrazolium Bromide (MTT) Assay-The toxicities of (+)-(S)-bakuchiol and (−)-(R)-bakuchiol toward MDCK cells were determined using an MTT cell proliferation assay kit, according to the manufacturer's instructions (Cayman). Briefly, MDCK cells were seeded in each well of a 96-well plate (1 × 10⁴ cells/well). Marchantin E (ME) was used as a positive control for anti-influenza viral activity (27, 28). (+)-(S)-Bakuchiol, (−)-(R)-bakuchiol, or ME (12.5-100 μM) was prepared in DMSO (100 μM, 1%; 50 μM, 0.5%; 25 μM, 0.25%; 12.5 μM, 0.125%) and mixed with infection medium (DMEM supplemented with 1% bovine serum albumin (BSA; Wako, Osaka, Japan), 50 units/ml penicillin, 50 μg/ml streptomycin, and 4 mM L-glutamine). The mixture was added to the cells, and the treated cells were incubated for 24 or 96 h at 37°C under 5% CO₂. After incubation, the cells were treated with the MTT reagent and incubated for 4 h at 37°C under 5% CO₂. The wells were then treated with crystal-dissolving solution to dissolve the formazan produced by the cells, and the absorbance of each well was measured at 570 nm using a microplate reader. Cell viability was calculated and expressed relative to that of DMSO-treated cells, which was set as 100%.
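A minimal sketch of the viability arithmetic in the last step, assuming background-corrected A570 readings; the absorbance values and blank are invented for illustration.

```python
import numpy as np

def percent_viability(a570_treated, a570_dmso, a570_blank=0.0):
    """Background-corrected A570 expressed relative to DMSO-treated cells (100%)."""
    return 100.0 * (np.asarray(a570_treated) - a570_blank) / (a570_dmso - a570_blank)

# Hypothetical absorbance readings for 12.5-100 uM wells vs. the DMSO control
print(percent_viability([0.81, 0.78, 0.74, 0.52], a570_dmso=0.80, a570_blank=0.05))
```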
Immunostaining of Influenza A Virus-infected Cells-MDCK cells were seeded in a 96-well plate (1 × 10⁴ cells/well). (+)-(S)-Bakuchiol, (−)-(R)-bakuchiol, or ME was mixed at a concentration of 12.5-50 μM with influenza A virus (A/PR/8/34, A/CA/7/09, or A/Aichi/2/68) at an MOI of 0.1 in the infection medium and incubated for 30 min at 37°C under 5% CO₂. DMSO (0.125-0.5%) was used as the negative control. Each mixture was added to the cells and incubated for 24 h at 37°C under 5% CO₂. The cells were then fixed with 4% paraformaldehyde in PBS for 30 min at 4°C before permeabilization with 0.3% Triton X-100 for 20 min at room temperature. Mouse antibodies detecting the NP of A/PR/8/34 and A/Aichi/2/68 (FluA-NP 4F1, SouthernBiotech) or the NP of A/CA/7/09 (AA5H, AbD Serotec) were used as primary antibodies, as appropriate (26). Horseradish peroxidase-conjugated goat anti-mouse IgG (SouthernBiotech) was used as the secondary antibody. To visualize the infected cells, TrueBlue peroxidase substrate (KPL) was added and incubated for 15 min; color development was terminated by washing with H₂O. The wells were photographed under a microscope, and the stained cells were counted. Each half-maximal (50%) inhibitory concentration (IC50) value was then calculated based on the cell numbers.
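As one plausible way to derive an IC50 from the stained-cell counts, the sketch below fits a logistic inhibition curve to the percent-of-control infection at each drug concentration; the concentrations, counts, and assumed DMSO-control count (500 cells) are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_inhibition(conc, ic50, hill):
    """Percent infection relative to control, falling from 100% toward 0% around ic50."""
    return 100.0 / (1.0 + (conc / ic50) ** hill)

# Illustrative data: drug concentration (uM) vs. NP-stained (infected) cell counts
conc = np.array([12.5, 25.0, 50.0])
stained = np.array([410.0, 190.0, 45.0])
pct_of_control = 100.0 * stained / 500.0   # 500 = assumed DMSO-control count

params, _ = curve_fit(logistic_inhibition, conc, pct_of_control, p0=[25.0, 1.5])
print(f"estimated IC50 = {params[0]:.1f} uM")
```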
To explore whether bakuchiol affected viral growth in preinfected cells, 1 × 10⁵ MDCK cells were seeded in each well of a 24-well plate. The cells were infected with A/PR/8/34 (MOI = 0.001) in the infection medium for 1 h at 37°C under 5% CO₂. The infected cells were washed, and then (+)-(S)-bakuchiol or (−)-(R)-bakuchiol (25 μM) was added to the cells in the infection medium supplemented with 3 μg/ml TPCK-treated trypsin. DMSO (0.5%) was the negative control, and ME (50 μM) was the positive control. The cells were then incubated for 24, 48, or 72 h at 37°C under 5% CO₂.
Cell culture media were collected from each well at the indicated time points. Serial dilutions of the conditioned media were added to naive monolayers of MDCK cells in a 96-well plate and incubated for 16 h at 37°C under 5% CO₂. The cells were immunostained using FluA-NP 4F1 (SouthernBiotech), as described above (26), and the stained cells were counted. The viral titers in the conditioned media were calculated using these cell numbers (26).
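A minimal sketch of the titer arithmetic implied here: the infected-cell count at a countable dilution is scaled by the dilution factor and inoculum volume to give infectious units per ml of conditioned medium; the numbers are hypothetical.

```python
def titer_from_count(stained_cells, dilution_factor, inoculum_ml):
    """Convert infected (NP-stained) cell counts at one dilution to infectious units/ml."""
    return stained_cells * dilution_factor / inoculum_ml

# Hypothetical example: 85 stained cells from a 1:10,000 dilution, 0.1 ml inoculum/well
print(f"{titer_from_count(85, 1e4, 0.1):.2e} infectious units/ml")
```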
RT-qPCR-Total RNA was extracted from MDCK cell lysates using an RNeasy mini kit (Qiagen, GmbH, Hilden, Germany). Total RNA (500 ng) was used to synthesize cDNA using SuperScript VILO (Invitrogen), according to the manufacturer's instructions. The synthesized cDNA was used as a template for RT-qPCR, which was performed using SYBR Green real-time PCR Master Mix (TOYOBO, Osaka, Japan); each gene-specific primer employed is shown in supplemental Table 1. PCR and data analyses were performed on an Applied Biosystems StepOne Plus Real-time PCR system (Life Technologies). Relative expression was calculated by the ΔΔCt method. The levels of viral mRNAs encoding nonstructural protein 1 (NS1), NP, RNA polymerase subunits (PA, PB1, and PB2), and matrix genes (M1 and M2) were normalized to that of 18S ribosomal RNA (rRNA) (29), and the levels of IFN-β, Mx1, NQO1, GSTA3, firefly luciferase, and Renilla luciferase mRNAs were normalized to that of β-actin.
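A minimal sketch of the ΔΔCt (2^-ΔΔCt) calculation used for relative expression, assuming one reference gene (18S rRNA or β-actin) and the DMSO-treated condition as the calibrator; the Ct values are invented.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method."""
    d_ct = ct_target - ct_ref                  # normalize target to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # same for the calibrator condition
    dd_ct = d_ct - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Illustrative Ct values: viral NP vs. 18S rRNA, treated vs. DMSO control
print(relative_expression(ct_target=24.1, ct_ref=12.0,
                          ct_target_ctrl=21.5, ct_ref_ctrl=12.1))
```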
Western Blotting-The cells were lysed in a buffer containing 125 mM Tris-HCl, pH 6.8, 5% SDS, 25% glycerol, 0.1% bromphenol blue, and 10% β-mercaptoethanol and boiled for 5 min. The cell lysates were then separated on a 10% polyacrylamide gel. The proteins were transferred to a polyvinylidene fluoride microporous membrane (Millipore). FluA-NP 4F1 (SouthernBiotech), a goat anti-influenza A viral NS1 antibody (vC-20, Santa Cruz Biotechnology, Inc.), a rabbit anti-firefly luciferase polyclonal antibody (MBL, Nagoya, Japan), and a rabbit anti-Renilla luciferase polyclonal antibody (MBL) were used as primary antibodies to detect their respective proteins. A rabbit anti-β-actin antibody (13E5, Cell Signaling) was used as an internal control. The secondary antibodies, horseradish peroxidase-conjugated goat anti-mouse IgG (SouthernBiotech), donkey anti-goat IgG (sc-2020, Santa Cruz Biotechnology), or goat anti-rabbit IgG (KPL), were used as appropriate. The signals were detected using Western Lightning ECL Pro (PerkinElmer Life Sciences). Signal intensities were measured using ImageJ software, and the protein levels of firefly and Renilla luciferase were normalized to that of β-actin.
Trypsin Protection Assay with Influenza A Viral Hemagglutinin (HA)-The HA protein trypsin protection assay was performed as described previously (31). Recombinant influenza A virus (A/PR/8/34) HA protein (0.5 μg) (Sino Biological Inc., Beijing, China) was incubated with DMSO (0.25% in PBS, adjusted to pH 5.0 with 0.25 M HCl) for 30 min at 31°C. The pH of the reaction was then neutralized to a final pH of 7.5 using 0.25 M NaOH. TPCK-treated trypsin (0.0001-1 μg) (Sigma-Aldrich) was added to each mixture and digested for 30 min at 37°C. Trypsin-mediated HA cleavage was determined by SDS-PAGE followed by staining with Coomassie Blue G-250. Next, 0.5 μg of recombinant influenza A virus (A/PR/8/34) HA protein (Sino Biological Inc.) was incubated with (+)-(S)-bakuchiol or (−)-(R)-bakuchiol (25-100 μM) in PBS for 15 min at 37°C and then adjusted to a final pH of 5.0 with 0.25 M HCl and incubated for a further 15 min at 31°C. The pH was then neutralized to 7.5 using 0.25 M NaOH. TPCK-trypsin (Sigma-Aldrich) (1 μg in PBS) was added to each mixture and digested for 30 min at 37°C. Trypsin-mediated HA cleavage was determined by SDS-PAGE followed by staining with Coomassie Blue G-250.
Influenza A Viral M2 Channel Activity-Using the whole-cell patch clamp technique, M2 channel currents were recorded from Chinese hamster ovary (CHO-K1) cells that had been transfected for 24-48 h with pCA-M2, encoding an M2 cDNA cloned from influenza virus A/PR/8/34, together with pCA-GFP, encoding the green fluorescent protein (GFP) gene. Gene expression was driven by the chicken β-actin promoter with the cytomegalovirus enhancer (CA). Transfected cells were plated on a coverslip, placed in a recording chamber fixed to the microscope stage, and perfused with a solution containing 135 mM N-methyl-D-glucamine, 25 mM HEPES, 5 mM CaCl₂, and 10 mM glucose (pH adjusted to 7.4 with 1 N HCl). Recordings were made at a flow rate of 1.0-1.5 ml/min at room temperature (27-28°C). Cells expressing M2 channels were identified using confocal laser-scanning microscopy (LSM510, Carl Zeiss, Jena, Germany) by detecting the co-expressed GFP fluorescence at an excitation wavelength of 488 nm. Borosilicate glass capillaries (1B150F-4, World Precision Instruments, Inc.) were used to produce patch electrodes using a Flaming/Brown micropipette puller (P-97, Sutter Instruments). The electrode had a resistance of 2-5 megaohms when filled with a pipette solution of 90 mM N-methyl-D-glucamine, 10 mM EGTA, and 180 mM MES, with the pH adjusted to 6.0 using 1 N HCl. Membrane currents were recorded from cells held at −40 mV using a Multiclamp 700B (Axon Instruments) via a Digidata 1322A interface (Axon Instruments) and stored on a computer hard disk with Clampex version 9.2 software (Axon Instruments). M2 current data were analyzed by Clampfit version 9.2 (Axon Instruments). M2 currents were induced by exposing CHO-K1 cells to a brief puff (duration, 0.1-1 s) of a low-pH solution (135 mM N-methyl-D-glucamine, 25 mM MES, 5 mM CaCl₂, and 10 mM glucose, pH adjusted to 6.0 with HCl) every 20 s via a glass micropipette using a Pneumatic PicoPump (PV830, World Precision Instruments). Drugs were dissolved in the external control solution (pH 7.4) and applied by perfusion after a control period (3 min), during which M2 currents with stable amplitudes were obtained. (+)-(S)-Bakuchiol (20 or 50 μM) was dissolved in control solution supplemented with 0.1% DMSO to aid dissolution and 0.5% BSA to prevent adsorption to the perfusion lines. Amantadine (100 μM) was used as the positive control.
Analyses of Reporter Assays and Protein Levels of Firefly and Renilla Luciferases-MDCK cells were seeded in a 96-well plate (1 × 10⁴ cells/well). The cells were transfected with pGL3-control (Promega, 0.1 μg), expressing firefly luciferase driven by the SV40 promoter, and pRL-TK-Rluc (0.1 μg). At 24 h post-transfection, the cells were treated with 1, 5, 25, or 50 μM (+)-(S)-bakuchiol or (−)-(R)-bakuchiol or with 50 μM ribavirin at 37°C under 5% CO₂. DMSO (0.5%) was used as a negative control. After a 24-h incubation, luciferase activity in the transfected MDCK cells was measured using the Dual-Glo luciferase assay system, and the levels of firefly and Renilla luciferase protein were also measured by Western blotting.
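The sketch below shows one common way to express dual-reporter data: firefly luminescence normalized to the Renilla internal control and scaled to the DMSO control. This is a generic calculation under those assumptions, not necessarily the exact processing used in this study (which, notably, also examined the two luciferases separately); the raw counts are invented.

```python
def normalized_reporter(firefly, renilla, firefly_dmso, renilla_dmso):
    """Firefly activity normalized to the Renilla internal control,
    expressed relative to the DMSO-treated control (set to 100%)."""
    return 100.0 * (firefly / renilla) / (firefly_dmso / renilla_dmso)

# Hypothetical raw luminescence counts for a bakuchiol-treated well vs. DMSO
print(f"{normalized_reporter(5.2e5, 2.0e5, 9.8e5, 2.1e5):.1f}% of control")
```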
Transcriptome Analysis by NGS-We used NGS to conduct a comprehensive transcriptome analysis in MDCK cells treated with bakuchiol and influenza virus (A/PR/8/34), using the method previously reported by Kanematsu et al. (34). Briefly, 1 × 10⁵ MDCK cells were seeded in each well of a 24-well plate. (+)-(S)-Bakuchiol or (−)-(R)-bakuchiol (25 μM) was mixed with A/PR/8/34 at an MOI of 0.1. DMSO (0.25%) was used as a negative control. Each mixture was added to the MDCK cells and incubated for 24 h before extracting total RNA from the cell lysates. mRNA-sequencing libraries were constructed from each total RNA extract using the SureSelect strand-specific RNA library preparation kit (Agilent Technologies), according to the manufacturer's instructions. Thirty-six-base pair, single-end-read RNA sequencing tags were generated using an Illumina HiSeq2500 sequencer (Illumina). RNA sequencing tags that mapped to the dog reference genome sequences (CanFam3 genome) were analyzed. The reads per kilobase per million mapped reads (RPKM) were calculated for the mRNA transcripts in Ensembl. Genes that showed a >1.5-fold change in RPKM value in (+)-(S)-bakuchiol-treated MDCK cells are indicated in supplemental Table 2. The complete NGS transcriptome analysis has been deposited in the DNA Data Bank of Japan database (accession number DRA003499) and in the Gene Expression Omnibus (accession number GSE73750).
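A minimal sketch of the RPKM normalization and the >1.5-fold screen described above; the read count, transcript length, library size, and control value are invented for illustration.

```python
def rpkm(read_count, transcript_length_bp, total_mapped_reads):
    """Reads per kilobase of transcript per million mapped reads."""
    return read_count / (transcript_length_bp / 1e3) / (total_mapped_reads / 1e6)

# Hypothetical gene: 900 reads, 1.5 kb transcript, 20 million mapped reads in the library
value = rpkm(900, 1500, 20_000_000)
print(f"RPKM = {value:.2f}")

# The >1.5-fold screen then compares treated vs. control RPKM values
fold_change = value / 18.0   # 18.0 = hypothetical control RPKM
print(f"fold change = {fold_change:.2f}, flagged: {fold_change > 1.5}")
```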
Molecular Network and Pathway Analysis-The molecular networks and pathways in the NGS analysis were analyzed in silico using the KeyMolnet Viewer program, version 5.9 (35).
Nrf2 Reporter Assay-An Nrf2 reporter assay based on the Dual-Luciferase system was performed as described previously (36). The plasmid pNQO1-ARE (antioxidant response element)-luc expresses a firefly luciferase gene driven by Nrf2 activation (36), and pRL-TK-Rluc was used as an internal control. MDCK cells (1 × 10⁵) were seeded in each well of a 24-well plate and transfected with pNQO1-ARE-luc (0.25 μg) and pRL-TK-Rluc (0.25 μg). At 24 h post-transfection, the cells were treated with 25 μM (+)-(S)-bakuchiol or (−)-(R)-bakuchiol in the infection medium at 37°C under 5% CO₂. DMSO (0.25%) or 25 μM DL-sulforaphane (Sigma-Aldrich), which enhances Nrf2-driven gene expression (37), were used as the negative and positive controls, respectively. Total RNA was extracted from the MDCK cell lysates after a 24-h incubation. The levels of firefly and Renilla luciferase mRNA were analyzed by RT-qPCR, normalized to β-actin mRNA.
Statistical Analysis-All results were expressed as the mean ± S.E. Differences between two groups were analyzed for statistical significance by Student's t test, whereas those among more than two groups were analyzed by one-way analysis of variance. The results were considered significantly different when p was <0.05.
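A minimal sketch of these tests in practice, assuming the group measurements are available as lists; the values are invented, and the threshold matches the stated p < 0.05.

```python
from scipy import stats

control = [98.2, 101.5, 99.8, 100.4]     # hypothetical measurements for one group
treated = [71.3, 68.9, 74.2, 70.5]
treated_hi = [52.1, 49.8, 55.0, 50.6]

# Two groups: Student's t test
t, p_two = stats.ttest_ind(control, treated)
print(f"t test p = {p_two:.4f}, significant: {p_two < 0.05}")

# More than two groups: one-way analysis of variance
f, p_anova = stats.f_oneway(control, treated, treated_hi)
print(f"ANOVA p = {p_anova:.4f}, significant: {p_anova < 0.05}")
```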
MDCK cells were treated with a mixture of influenza A virus and bakuchiol (12.5-50 μM) or ME for 24 h. The wells were observed under the microscope and photographed (Fig. 3, A-C), and the immunostained cells were counted (Fig. 4, A-C). The numbers of NP-stained cells were significantly decreased in cells treated with (+)-(S)-bakuchiol, (−)-(R)-bakuchiol, or ME (positive control) and infected with A/PR/8/34 or A/CA/7/09, as compared with DMSO-treated cells (Fig. 4, A and B). The number of stained cells in (+)-(S)-bakuchiol-treated wells was lower than that observed in wells containing (−)-(R)-bakuchiol (Fig. 4, A and B). In cells infected with A/Aichi/2/68, the numbers of stained cells in (+)-(S)-bakuchiol- or (−)-(R)-bakuchiol-treated wells were equal to those observed in DMSO-treated wells (Fig. 4C). The number of NP-stained cells in wells exposed to 50 μM ME and infected with A/Aichi/2/68 was significantly lower than that observed in DMSO-treated wells (Fig. 4C). The IC50 values are summarized in Table 1. Therefore, these data showed that bakuchiol had an enantiomer-specific inhibitory effect on influenza A virus H1N1 infection.
Next, to investigate whether bakuchiol inhibited influenza A virus H1N1 infection and growth in MDCK cells, we investigated the effects of both enantiomers on viral infection and growth (Fig. 5A). The bakuchiol concentration used in these 24-72-h experiments was 25 μM, which did not show cytotoxicity toward MDCK cells following 96-h exposure in infection medium (Fig. 2B). The preincubation experiment (Fig. 5A) involved mixing (+)-(S)-bakuchiol, (−)-(R)-bakuchiol, or ME with A/PR/8/34 and adding this mixture to MDCK cells (Fig. 5B) in order to evaluate whether bakuchiol inhibited viral attachment to these cells. The preinfection experiment (Fig. 5A) examined the effects of these treatments in A/PR/8/34-infected MDCK cells (Fig. 5C) in order to evaluate whether bakuchiol inhibited viral growth. In both of these approaches, the viral titers in conditioned media from cells treated with (+)-(S)-bakuchiol were significantly decreased at 24-72 h, as compared with those in media conditioned by DMSO-treated cells (Fig. 5, B and C). The viral titers in conditioned media from cells treated with (−)-(R)-bakuchiol were significantly decreased at 24 and 48 h, as compared with media conditioned by DMSO-treated cells, whereas these titers showed no significant difference at 72 h (Fig. 5, B and C). In addition, the viral titers in culture media conditioned by cells treated with (+)-(S)-bakuchiol for 48 or 72 h, using both the preincubation and preinfection approaches, were significantly decreased as compared with those of media from cells exposed to (−)-(R)-bakuchiol (Fig. 5, B and C). The viral titers in culture media conditioned by cells treated with ME were significantly decreased in both preincubation (24 h) and preinfection (48 and 72 h) experiments, as compared with those in media from DMSO-treated cells (Fig. 5, B and C). These data showed that bakuchiol inhibited the growth of the influenza A virus H1N1 strain. Taken together, these findings demonstrated that bakuchiol had enantiomer-specific inhibitory effects on influenza A viral infection and growth.
Bakuchiol Reduced Expression of Influenza A Virus H1N1 mRNAs and Proteins-To evaluate whether bakuchiol inhibited the expression of influenza A virus H1N1 mRNAs and proteins, we performed RT-qPCR and Western blotting in MDCK cells treated with a mixture of A/PR/8/34 (MOI = 0.1) and bakuchiol or ME for 24 h before the extraction of total RNA and cDNA synthesis. Relative mRNA expression levels of viral genes (NP, NS1, PA, PB1, PB2, M1, and M2) were analyzed by RT-qPCR, and viral protein levels were assessed by Western blotting (Fig. 6B); (+)-(S)-bakuchiol produced a greater reduction than (−)-(R)-bakuchiol. Therefore, these data showed that bakuchiol reduced the expression of influenza A virus H1N1 mRNAs and proteins in a chiral-selective manner.

TABLE 1. Antiviral effects of bakuchiol against influenza A virus H1N1 and H3N2 strains
Data represent the mean ± S.E. half-maximal (50%) inhibitory concentration (IC50) values for the influenza A virus strains tested and are representative of two independent experiments. ND, not detected.
Bakuchiol Reduced the Expression of Host Cell IFN-β and Mx1 mRNA following Viral Infection-Based on our findings indicating that bakuchiol inhibited the infection and growth of the influenza A virus H1N1, we hypothesized that bakuchiol may reduce the host cell immune response induced by this virus. It has previously been reported that influenza A viral infection induced the expression of IFN-β and Mx1 in host cells (38-41). We therefore analyzed the IFN-β and Mx1 mRNA levels in MDCK cells infected with A/PR/8/34 (MOI = 0.1) and treated with bakuchiol. Total RNA was extracted from cell lysates, and the relative expression levels of IFN-β and Mx1 mRNA were analyzed by RT-qPCR (Fig. 7, A and B). Whereas the IFN-β and Mx1 mRNA levels were up-regulated in MDCK cells treated with A/PR/8/34 and DMSO, this host cell response was significantly reduced in the presence of (+)-(S)-bakuchiol or (−)-(R)-bakuchiol (Fig. 7, A and B). The inhibitory effect of (+)-(S)-bakuchiol was greater than that of (−)-(R)-bakuchiol (Fig. 7, A and B). These data showed that bakuchiol produced a chiral-selective reduction of the host cell immune response induced by influenza A viral infection.
Bakuchiol Had No Marked Inhibitory Effects on Influenza Surface Proteins or Channels-As described above, bakuchiol inhibited infection by influenza A virus H1N1 but not by H3N2 (Fig. 1B, lanes 3 and 4). We therefore investigated the effects of bakuchiol on the influenza surface proteins (NA, HA, and M2), which possess sialidase, hemagglutination, and H⁺ ion channel activity, respectively (42).
To examine the effect of bakuchiol on influenza A H1N1 viral sialidase, we performed the NA assay with H1N1 NA protein or particles (A/PR/8/34 and A/CA/7/09). We showed that oseltamivir carboxylate, an NA inhibitor (9), strongly inhibited the sialidase activity of NA protein (Fig. 8A) and viral particles (Fig. 8, B and C), whereas (+)-(S)-bakuchiol produced only a weak inhibition of NA protein and A/CA/7/09 particle activity and did not inhibit A/PR/8/34 particle activity (Fig. 8, A-C).
Next, we tested whether bakuchiol could inhibit the hemagglutination activity of the A/PR/8/34 or A/CA/7/09 influenza viral strains. Chicken red blood cells were agglutinated by A/PR/8/34 or A/CA/7/09 (Fig. 9, A-C), and this activity was not inhibited by bakuchiol (Fig. 9, B and C). Influenza A viral HA0, the precursor of HA, is cleaved by cellular proteases such as trypsin to produce HA1 and HA2, and this triggers the fusion of the viral envelope and endosome membrane in an acidic environment (43-45). We therefore tested HA digestion by trypsin and could not detect any bakuchiol-mediated inhibition of this cleavage (Fig. 10, A and B).
We performed a patch clamp assay using Chinese hamster ovary cells expressing influenza virus A/PR/8/34 M2 in order to evaluate the effect of bakuchiol on this viral ion channel. Amantadine, the positive control, produced a weak inhibition of M2 ion channel activity, whereas (+)-(S)-bakuchiol did not inhibit this activity (Fig. 11). To explore why the activity of amantadine was weak, we sequenced the A/PR/8/34 M2 cDNA in the pCA-M2 plasmid and identified V27A and S31N mutations (data not shown). These mutations have been reported to produce amantadine-insensitive influenza A virus phenotypes (46-48). The M2 cDNA in pCA-M2 was cloned from the A/PR/8/34 strain used in the assays described above, where (+)-(S)-bakuchiol inhibited A/PR/8/34 infection and growth (Fig. 1, lanes 3 and 4) but did not inhibit A/PR/8/34 M2 ion channel activity (Fig. 11). Therefore, (+)-(S)-bakuchiol did not appear to target the A/PR/8/34 M2 ion channel. Taken together, these data showed that bakuchiol had no observable effects on the functions of influenza A viral surface proteins that were strong enough to explain its anti-influenza virus activity, suggesting that this compound may act on other targets within the influenza virus or the host cell.
Bakuchiol Induced Host Factor(s) That Inhibited Luciferase Activity-Transcription and replication of the influenza A viral genome require the activity of a highly conserved RdRp (49). To evaluate whether bakuchiol inhibited influenza A viral RdRp, we used the minigenome assay employing firefly and Renilla luciferase reporters driven by the viral RdRp and the endogenous RNA polymerase II, respectively (32, 33). Ribavirin, a viral RdRp inhibitor (50), reduced firefly luciferase activity, as compared with the activity observed in the presence of DMSO, without affecting Renilla luciferase activity (Fig. 12A); this indicated that the assay detected ribavirin's selective inhibition of viral RdRp activity. As shown in Fig. 12A, (+)-(S)-bakuchiol and (−)-(R)-bakuchiol both reduced firefly luciferase activity, and (+)-(S)-bakuchiol also produced an unexpected reduction of the Renilla luciferase activity, as compared with that observed in the presence of DMSO. This finding suggested that (+)-(S)-bakuchiol inhibited influenza RdRp and also endogenous RNA polymerase II. We therefore transfected MDCK cells with plasmids expressing firefly and Renilla luciferases, without influenza RdRp. This study confirmed that (+)-(S)-bakuchiol and (−)-(R)-bakuchiol dose-dependently reduced these luciferase activities, as compared with DMSO (Fig. 12B), whereas ribavirin did not. (+)-(S)-Bakuchiol had a greater inhibitory effect on Renilla luciferase than did (−)-(R)-bakuchiol (Fig. 12B). Additionally, ≤50 μM (+)-(S)-bakuchiol and (−)-(R)-bakuchiol did not reduce cell viability (Fig. 2). Therefore, these data confirmed that bakuchiol inhibited firefly and Renilla luciferase independently of RdRp, in the absence of any effects on cell viability. We considered three possible interpretations of these observations: (i) (+)-(S)-bakuchiol inhibited expression of the transfected luciferase genes in an RdRp-independent manner; (ii) (+)-(S)-bakuchiol inhibited the enzymatic activity of luciferase; or (iii) (+)-(S)-bakuchiol induced some host factors that inhibited luciferase expression or activity.

(Fig. 12 legend excerpt: B, luciferase activities (n = 15 for the other groups) were measured by the Dual-Glo luciferase assay system, with each activity expressed relative to DMSO-treated cells (set as 100%); C, protein levels (n = 9) were evaluated by Western blotting, normalized to β-actin and expressed relative to DMSO-treated cells (set as 1). Data are presented as the mean ± S.E. of 3-5 independent experiments, and the results were reproducible. One symbol, p < 0.05; two symbols, p < 0.01; three symbols, p < 0.001.)
To examine the first possibility, we analyzed the levels of firefly and Renilla luciferase mRNA (Fig. 13A) and protein (Fig. 12C). These were not reduced in firefly and Renilla luciferase-transfected MDCK cells treated with (+)-(S)-bakuchiol or (−)-(R)-bakuchiol, as compared with the levels observed in the presence of DMSO (Figs. 13A and 12C, respectively). This indicated that bakuchiol treatment did not affect the transfection efficiency.
To examine the second possibility, we analyzed whether bakuchiol directly inhibited firefly and Renilla luciferase activities in vitro (Fig. 13B). These enzyme activities were not affected by (+)-(S)-bakuchiol or (−)-(R)-bakuchiol (Fig. 13B). Taken together, these findings indicated that the third possibility, that bakuchiol induced host factor(s) that inhibited firefly and Renilla luciferase, warranted further investigation.
Bakuchiol Induced Nrf2 Activation and Up-regulated NQO1 and GSTA3 mRNA Levels-To investigate host factor(s) affected by bakuchiol, we performed NGS analysis of the MDCK transcriptome in cells treated with bakuchiol and influenza virus A/PR/8/34 (34) (supplemental Table 2). To identify molecular pathways activated by bakuchiol in the cells, we also performed molecular network analysis using KeyMolnet and the NGS results. This showed that bakuchiol activated the Nrf pathway (Table 2 and Fig. 14). We then analyzed whether bakuchiol activated Nrf2 using an Nrf2 reporter assay (Fig. 15A). This showed that (+)-(S)-bakuchiol and DL-sulforaphane, but not (−)-(R)-bakuchiol, induced Nrf2 activation (Fig. 15A). Furthermore, we found that the mRNA levels of NQO1 and GSTs were up-regulated following exposure to (+)-(S)-bakuchiol or (−)-(R)-bakuchiol (Table 3). To confirm these findings, we performed a quantitative analysis of NQO1 and GSTA3 mRNA in MDCK cells treated with bakuchiol in the presence and absence of A/PR/8/34 using RT-qPCR. The levels of NQO1 (Fig. 15B) and GSTA3 (Fig. 15C) mRNAs in MDCK cells treated with (+)-(S)-bakuchiol or (−)-(R)-bakuchiol were significantly increased, as compared with DMSO-treated cells. (+)-(S)-Bakuchiol had a greater effect than did (−)-(R)-bakuchiol (Fig. 15, B and C), indicating a correlation with the enantiomer-specific anti-influenza virus activity of bakuchiol. It has been reported that the mRNA expression of NQO1 and GSTs is regulated by the Nrf2 transcription factor and is related to the cellular response to oxidative stress.
Discussion
In the present study, we found that (+)-(S)-bakuchiol enhanced the survival of influenza A virus-infected MDCK cells and inhibited influenza A viral infection, growth, and gene expression; in addition, (+)-(S)-bakuchiol reduced the expression of influenza A virus-induced immune response genes in the host cells. We also found that (+)-(S)-bakuchiol induced the activation of Nrf2 and the up-regulation of NQO1 and GSTA3 mRNAs. This is the first report indicating that (+)-(S)-bakuchiol possesses anti-influenza virus activity. We found that the chirality of bakuchiol was important for this activity, and this should be considered when synthesizing bakuchiol derivatives as novel anti-influenza A virus H1N1 drugs.
We showed that (+)-(S)-bakuchiol had greater anti-influenza activity than (−)-(R)-bakuchiol, suggesting that the chirality of bakuchiol was important for this activity. Although the reason for this is still unclear, (−)-(R)-bakuchiol may have a reduced interaction with the target protein or be more easily degraded in cells, as compared with (+)-(S)-bakuchiol.
TABLE 3. Analysis of mRNA expression in MDCK cells by next generation sequencing
The data indicate the number of reads per kilobase per million mapped reads. Also see supplemental Table 2. The entire data set has been deposited in the DNA Data Bank of Japan (accession number DRA003499) and in the Gene Expression Omnibus (accession number GSE73750). Columns: gene; without A/PR/8/34 (DMSO, (+)-(S)-bakuchiol, (−)-(R)-bakuchiol); with A/PR/8/34 (DMSO, (+)-(S)-bakuchiol, (−)-(R)-bakuchiol).

As described above, bakuchiol inhibited infection by the influenza A H1N1 strains but not by the H3N2 strain (Fig. 1B, lanes 3 and 4). This may reflect strain differences in viral proteins or in the host cell response. The HA and NA viral proteins differ between the H1N1 and H3N2 strains. It has also been reported that an anti-M2 ectodomain monoclonal antibody (clone rM2ss23) inhibited the viral replication of A/Aichi/2/68 and of A/PR/8/34 recombinant variants expressing the A/Aichi/2/68 HA and/or M segments but did not inhibit the A/PR/8/34 strain (55). HA and M2 were co-localized in infected MDCK cells during virus budding (56), suggesting that strain-dependent differences in HA-M2 interactions might affect the inhibition of viral replication. Therefore, although bakuchiol did not inhibit the functions of the A/PR/8/34 HA and M2 proteins (Figs. 9-11), it might affect their interaction while not affecting the A/Aichi/2/68 HA-M2 interactions.
Bakuchiol induced Nrf2 activation and up-regulated NQO1 and GSTA3 mRNA levels in MDCK cells (Fig. 15), indicating that it influenced the host response to oxidative stress. It has been reported that the host cell responses, including the innate immune response (57) and the cellular microRNA signature (58), differed following infection by H1N1 or H3N2 strains. Therefore, the different effects of bakuchiol on A/PR/8/34 and A/Aichi/2/68 strains may reflect differences in the MDCK host response to oxidative stress following infection with these viruses.
Nrf2 reporter assay, transcriptome, and RT-qPCR analyses in MDCK cells treated with bakuchiol and A/PR/8/34 showed that bakuchiol induced Nrf2 activation and up-regulated NQO1 and GSTA3 mRNA levels (Fig. 15). NQO1 catalyzes the reduction of various quinones via a two-electron mechanism involving NADH or NADPH, preventing the formation of free radicals and ROS. An increase in the level of ROS activates Nrf2 binding to the NQO1 promoter, increasing NQO1 production (59). Additionally, NQO1 stabilizes p53 in an NADH-dependent manner, promoting accumulation of p53 protein in cells (59). Chen et al. (15) reported that bakuchiol increased p53 expression and induced apoptosis via ROS-dependent reduction of mitochondrial membrane potential in A549 cells. Therefore, we speculate that the up-regulation of NQO1 mRNA by bakuchiol is induced by ROS-dependent Nrf2 activation and increases the level of p53 protein in MDCK cells. Furthermore, Nrf2 up-regulation has been shown to reduce influenza A viral entry and replication (60), and the inhibition of p53 expression increases influenza A viral growth (61), suggesting that up-regulation of Nrf2 and p53 would inhibit influenza A viral growth. It has been reported that oltipraz (4-methyl-5-(pyrazinyl-2)-1,2-dithiole-3-thione) and D3T (3H-1,2-dithiole-3-thione), compounds that possess anti-cancer activities in multiple target organs (62), increase the Nrf2-driven expression of NQO1 (52, 63). Therefore, Nrf2 activation could represent one of the anti-influenza A virus H1N1 mechanisms of bakuchiol. However, because the direct target of bakuchiol remains unclear, further studies will be needed to explore this.
Based on the findings of this study and previous reports, as shown in Fig. 16, we suggest that the anti-influenza virus activity of bakuchiol involves Nrf2 activation. In conclusion, the findings of the present study demonstrated that bakuchiol produced an enantiomer-selective anti-influenza A virus activity via a novel mechanism involving the host cell response. These data will contribute to the development of novel approaches to the treatment of influenza.
Author Contributions-T. K. and M. S. designed the study and wrote the paper. T. E. and C. Y. synthesized and purified chemicals. M. S. and Y. A. performed anti-influenza virus assays. Y. S. performed next generation sequencing. S. Kohnomi and S. Konishi performed channel assays. E. T. and H. K. provided influenza viral strains. All authors reviewed the results and approved the final version of the manuscript. | 2018-04-03T00:46:57.057Z | 2015-10-07T00:00:00.000 | {
"year": 2015,
"sha1": "4ce19312c658ac96d3dfd2c641e9747bce14e27e",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/290/46/28001.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a43fb9ac9195cf5c5da374a341849285212b7c78",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
220698794 | pes2o/s2orc | v3-fos-license | Unique molecular signatures of microRNAs in ocular fluids and plasma in diabetic retinopathy
The main objective of this pilot study was to identify circulatory microRNAs in aqueous or plasma that reflect changes in the vitreous of diabetic retinopathy patients. Aqueous, vitreous, and plasma samples were collected from a total of 27 patients undergoing vitreoretinal surgery: 11 controls (macular pucker or macular hole patients) and 16 with diabetes mellitus (DM): DM type I with proliferative diabetic retinopathy (PDR) (DMI-PDR), DM type II with PDR (DMII-PDR), and DM type II with nonproliferative DR (DMII-NPDR). MicroRNAs were isolated using the Qiagen miRNeasy kit, quantified on a BioAnalyzer, and profiled on Affymetrix GeneChip miRNA 3.0 microarrays. Data were analyzed using Expression Console, Transcriptome Analysis Console, and Ingenuity Pathway Analysis. The comparison analysis of circulatory microRNAs showed that, out of a total of 847 human microRNA probes on the microarrays, common microRNAs present in both aqueous and vitreous were identified, along with a large number of unique microRNAs dependent on the DM type and severity of retinopathy. Most of the dysregulated microRNAs in the aqueous and vitreous of DM patients were upregulated, while in plasma they were downregulated. Dysregulation of miRNAs in aqueous did not appear to be a good representative of miRNA abundance in vitreous or plasma, although a few potential candidates for common biomarkers stood out: let-7b, miR-320b, miR-762, and miR-4488. Additionally, each of the DR subtypes showed miRNAs that were uniquely dysregulated in each fluid (e.g., in aqueous: miR-455-3p for DMII-NPDR, miR-296 for DMII-PDR, and miR-3202 for DMI-PDR). Pathway analysis identified the TGF-beta and VEGF pathways as affected. The comparative profiling of circulatory miRNAs showed that a small number of them displayed differential presence in diabetic retinopathy vs. controls. A pattern is emerging of unique molecular microRNA signatures in the bodily fluids of DR subtypes, offering promise for the use of ocular fluids and plasma for diagnostic and therapeutic purposes.
Introduction
Diabetic retinopathy (DR), diabetic macular edema, and associated conditions are the leading and growing causes of vision impairment and blindness in the United States and throughout the world. Current management involves laser therapy; intravitreal injections of anti-VEGF, anti-inflammatory, and steroid therapeutics; and possible intraocular surgery. These interventions alleviate issues only temporarily and often require repeated invasive treatments. Furthermore, some patients do not respond well to current therapies. Thus, the need for new therapeutic targets and approaches is clear and compelling.
Recently, a novel class of RNAs, microRNAs (miRNAs), has been implicated in human diseases [1,2]. MiRNAs are small non-coding RNAs that regulate gene expression at the posttranscriptional level by either degrading or blocking translation of messenger RNA targets [3]. Besides their presence in tissues, miRNAs circulate in the bloodstream in a highly stable, extracellular form and are being investigated as blood-based biomarkers for cancer and many other diseases [4]. Identification and characterization of DR at the level of miRNAs and their target molecular pathways could lead to novel diagnostic tools as well as therapies to prevent and reverse vision loss for these patients.
Circulatory miRNAs have been shown to be differentially expressed in diabetics in serum and plasma studies [5], in urine [6], and in the retina and retinal endothelial cells (RECs) of streptozocin (STZ)-induced diabetic rats [7]. Not enough is known about the miRNA profiles of ocular fluids, and further research needs to be done. Therefore, we conducted a small-scale preliminary study in order to evaluate the feasibility of the key steps in a future, full-scale project. In this pilot study, we attempted to address a critical immediate problem: the identification of biological markers of retinal disease in diabetic retinopathy. Our goal was to identify biomarkers that are present in aqueous, vitreous, and plasma, with the ultimate goal of identifying early biomarkers for progression from non-proliferative to proliferative DR. Ideally, identifying plasma biomarkers that correlate with ocular biomarkers and different stages of DR would allow the most accessible fluid (plasma) to be sampled to follow changes in miRNA dynamics in the eye. A secondary goal would be to identify aqueous biomarkers that correlate with vitreous biomarkers, which would allow sampling of the more accessible fluid (aqueous humor) that may correlate better with retinal pathology.
Human samples
The samples were from the clinical trial "Study of Ocular Fluid, Serum for Biomarkers of Eye Disease in Patients," approved by the UC Davis Institutional Review Board (IRB #216607), a single-site, investigator-initiated clinical study in the Department of Ophthalmology, UCD. The research was conducted in accordance with the 1964 Helsinki Declaration. Sample collections took place at the surgical point of care, and medical records were accessed 06/2011-04/2014. These were patients who had sought treatment at the UCD Ophthalmology clinic over several years (01/2000-current). The dates of surgery fell within the 3-year period 06/2011-04/2014. Very limited data from their medical records were used, and that use was covered by informed consent. The samples represent both genders and various race groups. Ages ranged from 30 to 80 years. To assure the anonymity and protection of human subjects, the samples were identified by acquisition number and a record of chronological age, gender, and a description of the case. Inclusion criteria: patients undergoing vitrectomy surgery for retinal disorders (macular puckers, macular holes, tractional retinal detachments). Exclusion criteria: prior vitreous or retinal detachment surgery, prior history of uveitis, endophthalmitis, prior intraocular injections with steroids or anti-VEGF agents, prior cataract surgery less than 6 months before inclusion, no open posterior capsules, no recent vitreous hemorrhage within 6 months, penetrating trauma, ruptured globe repair, intraocular tumor, systemic disease including cancer, connective tissue disease, and current use of systemic steroids or immune-modulating agents.
Sample collection, isolation of miRNAs, and quality control
Aqueous and vitreous humor, as well as plasma samples, were collected from DM and control patients during standard-of-care eye surgery. Samples of 100-200 μl were collected, aliquoted into 100 μl aliquots, frozen on dry ice immediately, and stored at -80°C. RNA was isolated using Exiqon's modification of Qiagen's miRNeasy kit. Quantification and quality checks of the isolated miRNA were performed on a BioAnalyzer (Agilent) with Small RNA microfluidics chips.

Samples were profiled on Affymetrix GeneChip miRNA 3.0 microarrays and summarized with the robust multi-array average (RMA) method; all arrays intended for comparison were included together in the summarization step [8]. The output files of the RMA analysis are .chp files. Differentially expressed miRNAs were identified using the Transcriptome Analysis Console (TAC, Affymetrix). One-way ANOVA was used to identify statistically significant genes at the significance level of p ≤ 0.05. To identify biologically relevant gene expression changes for each of the time point/treatment conditions, the standard approach was employed, using a p-value (p ≤ 0.05) as the primary criterion followed by fold change (FC ≤ -1.5 or FC ≥ 1.5) as the secondary criterion to select differentially expressed genes. This approach ensures control of false-positive error and preserves the desired biological significance [9]. Upon first analysis, the criteria were relaxed to (FC ≤ -1.2 or FC ≥ 1.2; p < 0.05) to be able to capture trends of all the family members of miRNAs of interest in all the fluids. The analysis was done separately for each ocular fluid and for the plasma samples. Differential miRNA expression for each of the groups (DMII, DMI, and DMII-NPDR) was established by comparison against the control group. The data discussed in this publication have been deposited in NCBI's Gene Expression Omnibus [10,11] and are accessible through NCBI GEO Series accession number GSE140959 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE140959).
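A minimal sketch of the two-step selection described above (p-value first, then fold change), assuming per-miRNA p-values and signed linear fold changes are available from the array comparison; the miRNA names and statistics below are invented for illustration.

```python
import numpy as np

# Hypothetical per-miRNA statistics from one group-vs-control comparison
names = np.array(["let-7b", "miR-320b", "miR-762", "miR-4488", "miR-999"])
p_values = np.array([0.012, 0.030, 0.049, 0.004, 0.200])
fold_change = np.array([2.1, 1.8, -1.7, 1.6, 3.0])   # signed linear fold change

# Primary criterion: p <= 0.05; secondary: FC <= -1.5 or FC >= 1.5
selected = (p_values <= 0.05) & (np.abs(fold_change) >= 1.5)
for name, p, fc in zip(names[selected], p_values[selected], fold_change[selected]):
    print(f"{name}: FC = {fc:+.1f}, p = {p:.3f}")
```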
Pathway analysis
Pathway and gene network analysis of miRNAs and their target genes was performed using Ingenuity Pathway Analysis (IPA, Qiagen, Redwood City, CA). IPA is a web-based software application that enables one to analyze, integrate, and understand the significance of the data in the context of larger biological systems. IPA is backed by the Ingenuity Knowledge Base of highly structured, detailed biological findings manually curated by Ph.D.-level scientists. The miRNA Target Filter combines filtering tools and miRNA-mRNA content to provide insight into the biological targets of candidate miRNAs. Differentially expressed miRNAs were selected using criteria of (FC ≤ -1.5 or FC ≥ 1.5, p < 0.05). Only the most interesting differentially expressed miRNA species are presented here, while the full lists of the expressed molecules (p < 0.05) are available in the supplement (S2-S4 Tables).
Differentially expressed miRNAs in aqueous humor
The miRNAs that were statistically significantly (p < 0.05) differentially expressed in aqueous for each patient sample group compared to controls are presented in Fig 1 and listed in Table 1. The miRNA that was the most dysregulated in aqueous in DMI-PDR and DMII-PDR was let-7b; in DMII-NPDR, it was miR-455 (Table 1). Dysregulated miRNAs found in multiple subcategories were let-7b (DMI-PDR and DMII-PDR), miR-26a (DMII-PDR and DMII-NPDR), and miR-4314 and miR-518 (DMI-PDR and DMII-PDR) (Table 1). Each of the aqueous DR subcategories had a set of 12-35 additional unique differentially expressed miRNAs, for example, miR-3202 (DMI-PDR), miR-296 (DMII-PDR), and miR-455 (DMII-NPDR), to mention just a few (S2 Table).
Differentially expressed miRNAs in the vitreous humor
The miRNAs that were statistically significantly (p < 0.05) differentially expressed in vitreous humor for each patient sample group compared with controls are presented in Table 2.
Differentially expressed miRNAs in plasma
The miRNAs that were statistically significantly (p < 0.05) differentially expressed in plasma for each patient sample group compared with controls are presented in Fig 3 and listed in Table 3. The most dysregulated miRNA in plasma in DMI-PDR was miR-106b; in DMII-PDR it was miR-20b, and in DMII-NPDR it was also miR-20b (Fig 3). Dysregulated miRNAs found in two subcategories were miR-455, miR-20a, and miR-20b (DMI-PDR and DMII-PDR). Additionally, there were 13 miRNAs that appeared to be common biomarkers for DM (S4 Table). Each of the plasma subcategories had a set of 5-42 additional unique differentially expressed miRNAs present in only one of the subcategories, for example, miR-574-3p (DMI-PDR), miR-4695-5p (DMII-PDR), and miR-455-3p (DMII-NPDR) (S4 Table).
Comparison of differentially expressed miRNAs in all three fluids
When aqueous humor, vitreous humor, and plasma were compared across all DR categories, four miRNAs were found to be dysregulated in all three compartments: let-7b, miR-320b, miR-762, and miR-4488. Let-7b was upregulated in aqueous and vitreous, while downregulated in plasma. MiR-320b was upregulated in all three fluid compartments. MiR-762 and miR-4488 follow a similar pattern of expression: they have category-specific expression in aqueous, are upregulated in vitreous, and have category-specific expression in plasma (Fig 4 and Table 4A-4C). When aqueous, vitreous, and plasma were compared for the DMI-PDR category (p < 0.05), dysregulation of let-7b was present in aqueous and vitreous, while dysregulation of miR-194 was present in aqueous and plasma. Vitreous and plasma shared statistically significant dysregulation of three miRNAs: let-7c, miR-486, and miR-16. Interestingly, miRNA candidate biomarkers were upregulated in ocular fluids and downregulated in plasma (Fig 5A and Table 4A). Some of the unique miRNAs for each fluid are listed in S5 Table; the most dysregulated unique miRNA for aqueous was miR-3202, for vitreous miR-320b, and for plasma miR-15a (S5 Table). When aqueous, vitreous, and plasma were compared for the DMII-PDR category (p < 0.05), downregulation of miR-4445 was present in aqueous and vitreous, while upregulation of miR-4695-5p and miR-425 was present in vitreous and plasma (Fig 5B and Table 4B). Some of the unique miRNAs for each fluid are listed in the bottom panel of S6 Table; the most upregulated unique miRNA for aqueous was let-7b, for vitreous miR-320c, and in plasma it was the downregulated miR-20b (S6 Table). When aqueous, vitreous, and plasma were compared for the DMII-NPDR category (p < 0.05), upregulation of miR-455-3p was present in aqueous and plasma, upregulation of miR-200b was shared by aqueous and vitreous, and upregulation of miR-4421 was present in vitreous and plasma (Fig 5C and Table 4C). Some of the unique miRNAs for each fluid are listed in the bottom panel of S7 Table; the most dysregulated unique miRNA for aqueous was miR-3201, for vitreous miR-2861, and for plasma it was the downregulated miR-20b (S7 Table).
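A hedged sketch of the cross-fluid comparison logic used above: given per-fluid sets of significantly dysregulated miRNAs, the shared candidates are simple set intersections. The sets below are abbreviated illustrations based on the text, not the full study lists.

```python
from itertools import combinations

# Abbreviated example sets (all-DR-category comparison), for illustration only.
dysregulated = {
    "aqueous":  {"let-7b", "miR-320b", "miR-762", "miR-4488", "miR-194"},
    "vitreous": {"let-7b", "miR-320b", "miR-762", "miR-4488", "let-7c"},
    "plasma":   {"let-7b", "miR-320b", "miR-762", "miR-4488", "miR-194"},
}

# Candidates dysregulated in all three compartments.
common_all = set.intersection(*dysregulated.values())
print("all three fluids:", sorted(common_all))

# Candidates shared by each pair of fluids, excluding the three-way core.
for a, b in combinations(dysregulated, 2):
    pair_only = (dysregulated[a] & dysregulated[b]) - common_all
    print(f"{a} & {b} only:", sorted(pair_only))
```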
Validation of microarray results with qPCR
Independent quantitative PCR (qPCR) validation of representative differentially expressed genes was performed using commercial gene expression assays (TaqMan; Applied Biosystems), following the manufacturer's protocol (for detailed reaction conditions, see the Materials and Methods section on quantitative polymerase chain reaction (qPCR) assays). The goal of the qPCR experiment was to validate the microarray results by an alternative technique; samples were picked at random within the DR groups. The fold changes (FC) and their directions were confirmed for the chosen sets of genes. Data are presented in Table 5.
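The validation step reduces to checking that each candidate's qPCR fold change agrees in direction (and roughly in magnitude) with the microarray estimate. A minimal sketch with invented numbers follows; the agreement rule is an assumption for illustration, not the study's formal criterion.

```python
def direction_agrees(fc_array, fc_qpcr):
    """Signed linear fold changes; negative means down-regulated."""
    return (fc_array > 0) == (fc_qpcr > 0)

# Hypothetical (microarray FC, qPCR FC) pairs for validated candidates.
validation = {"let-7b": (2.1, 2.6), "miR-320b": (1.7, 1.4), "miR-20b": (-2.0, -3.1)}
for mirna, (fc_array, fc_qpcr) in validation.items():
    status = "confirmed" if direction_agrees(fc_array, fc_qpcr) else "discordant"
    print(f"{mirna}: array FC={fc_array:+.1f}, qPCR FC={fc_qpcr:+.1f} -> {status}")
```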
IPA pathway analysis
Top gene network affected by ubiquitously expressed miRNAs in aqueous and vitreous
Ingenuity Pathway Analysis was done using the 30 most highly expressed miRNAs from aqueous and vitreous. Circulating miRNAs that are ubiquitously present in the ocular chambers primarily target genes regulated by p53 and TGF-beta. A partial interactome of the let-7 family of miRNAs includes target genes and biological pathways such as TGF-β, insulin receptor, apoptosis, and VEGF receptor signaling. These genes are key regulators of oxidative stress, angiogenesis, inflammation, and apoptosis. The miR-320 family is part of the cellular response to glucose stimulus and plays a role in apoptosis, migration, cell death, proliferation, and signaling in several diseases, including non-insulin-dependent diabetes mellitus (IPA and references therein). According to the IPA summary, the miR-320 family regulates a multitude of genes, including IGF1 and TGF-beta, and it is regulated by p53 and Smad2/3, which is downstream of TGF-beta. Further IPA analysis of pathways targeted by both the let-7 and miR-320 families identified pathways connecting VEGF and TGF-beta, as represented in Fig 6.
Discussion
One of the primary goals of this project was to make progress towards generating biomarker profiles that can be used to screen for DM stage and DR progression. Once biomarkers have been identified, the role of these microRNAs as therapeutic targets could be studied. This pilot study showed dysregulation of many miRNAs, but just a few displayed dysregulation in multiple fluids. One such potential biomarker is the let-7 family of miRNAs. This family was upregulated in the aqueous and vitreous humor and downregulated in the plasma of DMI-PDR. The let-7 family also showed dysregulation in more than one category of DR: it was upregulated in vitreous and downregulated in plasma of DMII-NPDR, and upregulated in aqueous and vitreous and downregulated in plasma of DMII-PDR and DMI-PDR. In aqueous humor, upregulation occurred only for the PDR categories, so in this compartment let-7b has the potential to be a biomarker of PDR. The pathway analysis of let-7 targets showed that let-7 targets proinflammatory cytokines, which regulate VEGF expression. It has been shown that let-7 family members are direct translational repressors of interleukin-13. Jiang et al. showed that serum IL-13 levels, as well as IL-13 secreted from cultured skeletal muscle, are reduced in T2DM vs. normal glucose-tolerant (NGT) subjects, while let-7 is increased [12]. Additionally, a polymorphism in the let-7-targeted region of the Lin28 gene, which codes for a long noncoding RNA (lncRNA) that negatively regulates let-7, is associated with an increased risk of type 2 diabetes mellitus [13]. Upregulation of let-7 in ocular fluids might be a sign of nerve damage, since it has been shown that upregulation of let-7 in the extracellular space can lead to neurodegeneration [14]. In one study, the RNA-sensing receptor Toll-like receptor 7 (TLR7) in cortical neurons of mice was shown to bind extracellular let-7 released by degenerating neurons; subsequently, the TLR7-expressing cells underwent apoptosis. Injection of let-7b was also sufficient to activate downstream TLR7 signaling, as shown by the increased phosphorylation state of IRAK4 [15].
Let-7 has also been implicated in post-transcriptional control of the innate immune response. Macrophages stimulated with live antigens downregulate the expression of several members of the let-7 miRNA family to relieve repression of the immunomodulatory cytokines IL-6 and IL-10 [16,17]. Let-7 has been implicated in the negative regulation of TLR4, the major immune receptor for microbial lipopolysaccharide (LPS), and downregulation of let-7 upon both microbial and protozoan infection might elevate TLR4 signaling and expression [18,19]. Let-7 is also an attractive potential therapeutic that can prevent tumorigenesis and angiogenesis, demonstrated so far in cancers and possibly applicable in DR [20].
Another miRNA family, miR-320, appears in multiple fluids and DR categories. The miR-320 family is upregulated in the vitreous of both DRI-PDR and DRII-PDR; therefore, it might be considered a putative vitreous biomarker of PDR. MiR-320 regulates tumor angiogenesis driven by vascular endothelial cells in oral cancer by silencing neuropilin 1 [21]. Neuropilin 1 functions as a co-receptor with diverse ligands and receptors, including vascular endothelial growth factor (VEGF) and the VEGF receptor (VEGFR). Wang et al. showed that miR-320 impaired angiogenesis in myocardial microvascular endothelial cells (MMVECs) of type 2 diabetic Goto-Kakizaki (GK) rats [22] and that one of the miR-320 targets is IGF-1. Eleven miRNAs, including let-7e and miR-320, were upregulated in MMVECs from GK rats compared with those from Wistar rats. The results indicate that the upregulation of miR-320 in MMVECs from GK rats may be responsible for the inconsistency between the expression of IGF-1 protein and mRNA, and therefore related to impaired angiogenesis in diabetes. Transfection of a miR-320 inhibitor was suggested as a therapeutic approach for the treatment of impaired angiogenesis in diabetes [22]. This miRNA was also found upregulated in insulin-resistant 3T3-L1 adipocytes, and an anti-miR-320 oligo was found to regulate insulin resistance in adipocytes by improving insulin-PI3-K signaling pathways [23]. MiR-320 has also been found to regulate glucose-induced gene expression in diabetes [24]. High glucose exposure decreased the expression of miR-320 but increased the expression of endothelin 1 (ET-1), vascular endothelial growth factor (VEGF), and fibronectin (FN) in human umbilical vein endothelial cells (HUVECs). Data from that study indicate that miR-320 negatively regulates the expression of ET-1, VEGF, and FN through ERK 1/2 in HUVECs. The increased expression of the miR-320 family in our data could suggest that ocular tissue is attempting to downregulate the high level of VEGF production that has been shown to occur in the diabetic eye.
One of the miRNAs that emerged as a putative candidate biomarker in all three fluids is miR-762. Downregulation of this miRNA has been associated with increased plasma VEGF levels following ischemic preconditioning, and algorithm-based database searches suggested that this miRNA binds to the 3' UTR of VEGF mRNA, which was confirmed by in vitro miRNA knockdown experiments in CD34-positive BM cells [25]. MiR-762 has also been identified as having a potential neuroprotective role in neurorestorative therapy for ischemic stroke [26].
The fourth potential biomarker miRNA is miR-4488. Microarray profiling of the overexpression of TGFβ2-OT1 indicates that miR-4488 is one of three miRNAs whose overexpression in endothelial cells resulted in repression of their downstream targets: ceramide synthase 1 (CERS1), N-acetyltransferase 8-like (NAT8L), and La-ribonucleoprotein domain family member 1 (LARP1). Their primary functions are in endothelial cell autophagy, inflammation in endothelial injury, and regulation of angiogenesis [27].
According to the IPA analysis, the main pathways targeted by miRNAs in ocular fluids are the VEGF and TGF-beta pathways. TGF-beta has an important role in angiogenesis, endothelial cell proliferation, adhesion, and deposition of extracellular matrix [28,29]. TGF-beta has been implicated in the development of diabetic retinopathy (DR) through disrupted angiogenesis and blood-retinal barrier breakdown [30]. TGF-beta is a highly polymorphic gene, and a published systematic review evaluating TGF-beta1 gene polymorphisms in association with diabetic retinopathy susceptibility suggested that the +869T/C (L10P) polymorphism in the TGF-beta1 gene may be a protective factor for DR [31]. It is interesting to hypothesize that such a polymorphism could lie in the target site of a particular miRNA that becomes dysregulated in diabetic patients: if the protective allele is present, complementarity of the binding site is disrupted and TGF-beta1 expression is unaffected, whereas the sensitive allele would permit repression.
There are several miRNAs dysregulated in only one of the fluids that might be potential biomarkers for PDR. In aqueous humor, two miRNAs, miR-4314 and miR-518c, were downregulated in both DRI-PDR and DRII-PDR. Serum miR-4314 has been implicated in ovarian tumorigenesis via downregulation of GRWD1/IP6K1/NEGR1 [32]. MiR-518c, together with miR-638, are dual PTEN- and p53-targeting miRNAs that are upregulated in multiple human cancers [33]. MiR-518 also plays a role in the growth and metastasis of several cancers, where it has been identified as a downstream target of the SDF-1/CXCR4 system; it was also found in a cluster of miRNAs that are highly expressed in retinoblastoma [34]. In vitreous, there was upregulation of miR-320c, miR-4488, miR-4695, and miR-512-3p and downregulation of miR-3201. In plasma, the upregulation of miR-425 might be a biomarker for PDR. MiR-425 has already been considered as a therapeutic target and biomarker of cardiovascular disease because it binds to a polymorphic region of the 3'UTR of its target, atrial natriuretic peptide (ANP) mRNA. This A/G variant is contained within a binding site for miR-425, which binds to the A but not the G allele; ANP expression is elevated in individuals with the G allele, correlating with reduced blood pressure (a functional polymorphism). These findings raise the possibility that inhibitors of miR-425 might lower blood pressure by de-repressing atrial natriuretic peptide expression [35]. As far as unique miRNA candidate biomarkers go, upregulation of miR-4695-5p and downregulation of miR-569 were detected in DMII-PDR, and upregulation of miR-574-3p, miR-2115 and miR-28-3p, as well as downregulation of let-7c, miR-107, miR-532-5p and miR-222, were detected in DMI-PDR. MiR-4695-5p has been identified as one of the miRNAs targeting TGF-beta pathway genes, specifically TGFBR1 (rs6478974) and SMAD3 (rs12901071). The TGF-β signaling pathway is involved in the regulation of cell growth, angiogenesis, and metastasis, and the authors showed, from a large study of colorectal cancer cases, that genetic variation in the TGF-β signaling pathway is associated with various miRNA expression levels [36]. MiR-569 has been associated with a functional polymorphism in the 3'-untranslated region of SPI1 in systemic lupus erythematosus; the findings indicate that an SNP in the 3'-UTR of SPI1 is associated with an elevated SPI1 mRNA level and with susceptibility to SLE. Transfection experiments demonstrated that miR-569 inhibits expression of a reporter construct with the 3'-UTR sequence containing the non-risk allele but not the risk allele [37]. These and many more candidates are listed in the tables and appendices. Our future research will follow up on and confirm the candidate DR biomarkers in a larger number of individuals.
One limitation of our pilot study is that samples were taken at the point of surgery. This decision was driven by the impossibility of obtaining vitreous samples in the clinic; in addition, the amount of fluid needed to obtain sufficient miRNA in a discovery study such as this was obtainable only during surgery. Only samples from eyes with no previous surgery were taken into account, and the criterion applied was that no other treatments, such as anti-VEGF or steroid injections, had been given for at least a year before surgery. Another concern was a possible confounding effect of PRP laser treatment. We therefore compared vitreous samples of DR type II PDR patients with and without prior PRP laser treatment and found that a history of retinal PRP was not a source of significant miRNA variability in vitreous at the collection time point. Another limitation of our study is that the number of patients in each group is rather small. As this was a pilot experiment, there was no prior information from which to calculate the minimal sample size. Similarly, before the experiment we had no observations with which to check the distribution, so it seemed reasonable to assume that the distribution of the pre-processed data is normal and hence that two-sample t-tests and ANOVA are applicable; the same assumption is made by other proposed methods for calculating sample size [38][39][40][41]. Liu and Hwang [42] describe a method for a quick sample size calculation for microarray experiments while controlling the FDR, which we followed. We conducted a power analysis [43] to determine the minimal sample size needed to produce statistically significant data, based on the vitreous data for the DMI-PDR group. The recommended sample size was n = 7, or 2n = 14. Our total sample size for this study was 2n = 15, but some of the groups in this pilot study (e.g., the DMI and DMII-NPDR groups) had only four samples for aqueous and vitreous. Therefore, a limitation of this study is its small sample size; the conclusions drawn from this study should be further validated in future studies.
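A minimal sketch of the kind of power calculation described above, using statsmodels; the effect size, power, and alpha below are illustrative assumptions rather than the exact inputs of the study's analysis.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs: a large standardized effect size (plausible for strongly
# dysregulated miRNAs), 80% power, and alpha = 0.05, two-sample t-test.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.5, power=0.8, alpha=0.05)
print(f"~{n_per_group:.1f} samples per group")  # about 7 per group, 2n ~ 14
```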
Most of the miRNAs in this pilot study showed a very moderate change of expression in patient samples. This can be understood in light of miRNAs functioning catalytically, similar to enzymes: even a slight change in their expression can have dramatic effects on target gene expression levels and on many different cellular functions, such as vascular remodeling and angiogenesis [44]. While the majority of miRNAs were upregulated in ocular fluids, the majority of miRNAs were downregulated in plasma. It is possible to hypothesize that a reorganization of miRNA prevalence in bodily fluids takes place in disease, but it is too early to speculate on a possible mechanism of whole-body regulation of circulating miRNAs.
The number of studies identifying miRNAs from human samples as DR biomarkers has increased in recent years [45][46][47][48]. Since ours was a small-scale study, we wanted to compare and contrast it with other studies, to see whether our pilot study brought out the same candidates or different sets. There are some plasma biomarkers common between those studies and this pilot study (let-7a, miR-126, miR-320a, miR-27, miR-29, miR-150, miR-30, miR-221) [48, and references therein]. For example, this pilot study identified miR-221 as a biomarker in plasma, downregulated in NPDR (FC = -5.03) and upregulated in DMI-PDR (FC = 2.73) and DMII-PDR (FC = 2.33). The same miRNA was identified in serum as a biomarker for DR in DMII-PDR by Liu et al., 2018 [49]; it was increased in serum, together with Ang II and VEGF [49]. Our findings of let-7a and miR-151 as potential plasma biomarkers have been confirmed by RNA-seq in serum for late-stage and early-stage DMII-DR [47]. The smaller number of studies of miRNAs in the vitreous of human DR patients found biomarkers in common with our pilot study, such as let-7c, miR-16, miR-92a, and miR-320a/b [48, and references therein]. Some of the miRNA candidates identified in other studies are from the same families (let-7, miR-320), although not exactly the same members. It is very encouraging to see that our findings have been confirmed by different groups using different techniques. However, there are studies that identify completely different sets of biomarkers, such as an RNA-seq study of serum biomarkers of non-proliferative DR in DRII patients of Chinese Han ancestry [46]. This kind of result raises questions about variability between ethnic groups and about functional redundancy among miRNA family members. Therefore, more research is needed, and perhaps a variety of ethnic groups should be studied to address those questions.
The strength of this pilot study lies in the fact that the majority of samples come from three different body compartments of the same set of patients; therefore, the correlations in biomarkers among the aqueous, vitreous, and plasma compartments are more likely to be meaningful. The high relative stability of miRNA in clinical tissues and biofluids (e.g., plasma, serum, urine, saliva) and the ability of miRNA expression profiles to accurately classify discrete tissue types and disease states have positioned miRNA quantification as an up-and-coming tool for a wide range of diagnostic applications [50][51][52].
The profiling of circulatory miRNAs identified four putative biomarkers, let-7b, miR-320b, miR-4488, and miR-762, that show dysregulated expression in all three examined fluids (aqueous, vitreous, and plasma) at the onset of DM or DR. The biomarkers identified in the aqueous and vitreous fluids are also differentially regulated in plasma. Several circulatory miRNAs showed differential presence in normal vs. diabetic retinopathy fluid samples, offering promise for further study for diagnostic or therapeutic purposes. Of additional interest and clinical importance is that a high percentage of patients with eye diseases such as DR develop Alzheimer's disease (AD). Patients with recent DR (diagnosed within 0-5 years) and established DR (>5 years) were found to be at a 67% and 50% higher risk of AD, respectively, compared with those without DR [53]. Identifying ophthalmic diseases by measurement of suitable biomarkers would enable better screening and treatment of those individuals at risk of AD [53,54].
As a future goal, we will validate the diagnostic value of these biomarkers in plasma and aqueous humor in a larger number of individuals. One of our goals in the follow-up study, with a much higher number of samples, is to have separate biomarker discovery and confirmation groups. We will also conduct longitudinal studies by obtaining patient samples in clinic settings at earlier stages of the disease to determine the predictive value for DR progression and responses to treatment. Our results are a promising beginning for developing a fingerprint of biomarkers in bodily fluids that might serve as prognostic markers for PDR development. Information discovered through these studies may lead to major advances in therapeutic management. Supporting information S1 | 2020-07-23T09:01:59.602Z | 2020-07-21T00:00:00.000 | {
Supporting information S1 | 2020-07-23T09:01:59.602Z | 2020-07-21T00:00:00.000 | {
"year": 2020,
"sha1": "c35a78d34f038fc2be9fa1566e3a8bc951fc537a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0235541&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c698b7f7684cc9835b500477bb7cf73a3be6866d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258768867 | pes2o/s2orc | v3-fos-license | Kink Soliton Dynamic of the (2+1)-Dimensional Integro-Differential Jaulent–Miodek Equation via a Couple of Integration Techniques
In this article, the aim was to obtain kink soliton solutions of the (2+1)-dimensional integro-differential Jaulent–Miodek equation (IDJME), a prominent model related to the energy-dependent Schrödinger potential that is used in fluid dynamics, condensed matter physics, optics and many engineering systems. The IDJME is formulated with constant coefficients depending on the parameters, and two efficient methods, the generalized Kudryashov method and a sub-version of an auxiliary equation method, were applied to it for the first time. Initially, the traveling wave transform, which comes from the Lie symmetry infinitesimals ∂/∂x, ∂/∂y and ∂/∂t, was applied, and a nonlinear ordinary differential equation (NODE) form was derived. In order to make physical interpretations, appropriate solution sets and soliton solutions were obtained by performing systematic operations in line with the algorithms of the proposed methods. Then, 3D, 2D and contour simulations were made. Interpretations of different kink soliton solutions were made, with results consistent with previous studies in the literature. The obtained results make a modest contribution to studies in this field.
Fractional forms of this equation have been among the topics of work carried out in recent years on the Jaulent–Miodek equation. For example, Sahoo et al. investigated the JM system using fractional Lie symmetry analysis unified with symmetry analysis and used the conservation laws of the system to derive new conserved vectors [35]; Zadeh et al. analyzed the fractional-order JME with the help of the Laplace decomposition and Laplace variational iteration methods [36]; Veeresha et al. investigated the numerical solution of the time-fractional JME with the help of the coupled fractional reduced differential transform method (CFRDTM) and the homotopy analysis transform method (HATM) [37]; and Alshammari et al. studied the numerical solution of the fractional JME with the help of the coupled fractional variational iteration transformation technique and the Adomian decomposition transformation technique [38].
One of the important NLPDEs is the Jaulent-Miodek equation (JME), which is used to model many important problems in optics, condensed matter physics and fluid dynamics [39].
The Jaulent–Miodek equation was first introduced by M. Jaulent and I. Miodek in 1976 [40] as a coupled Jaulent–Miodek equation, obtained by using the inverse scattering transform with the help of energy-dependent Schrödinger potentials. Since the source of the JM equation is energy-dependent Schrödinger potentials [41,42], it has also been the subject of different studies as a coupled JM system [43,44]. In particular, the (2+1)-dimensional JME gives information about the energy-dependent Schrödinger potential [45]. In the literature, four models are referred to as the Jaulent–Miodek hierarchy [46]. When the literature is scanned, it is seen that there are many studies related to both the Jaulent–Miodek equation and the Jaulent–Miodek hierarchy, reflecting the importance of the Jaulent–Miodek equation: Ruan and Lou investigated new symmetries of the JM hierarchy [47]; Feng and Li derived many explicit expressions using the theory of the plane dynamic system to study the existence of solitary and periodic waves of the coupled JME [48]; Gang et al. derived a hierarchy of generalized JM equations and their explicit solutions [49]; Ma Hong-Cai et al. applied the Hereman–Nuseir method to the model in Equation (2) and obtained kink, multiple singular and multiple kink-singular solitons [46]; Wafaa M. Taha et al. applied the tanh method and the (G′/G) method to the model in Equation (3) and produced kink and bright solitons [50]; Kaplan et al. applied the generalized Kudryashov method to the model in Equation (3) and obtained singular and bright solitons [51]; Apranti et al. applied the extended simple equation method and produced a periodic soliton [52]. For the model in Equation (4), Liu et al. obtained anti-bell-shaped and two bell-shaped solitons with the help of Bell polynomials [53]. In previous studies, scientists investigated analytical solutions of Jaulent–Miodek equations in different forms and obtained kink-type, periodic-type and bell-type solitons [54][55][56]. In addition, the following recent studies should be mentioned: Mbusi et al. investigated the exact solutions and conservation laws of a generalized (1+2)-dimensional JME with power-law nonlinearity [57]; Motsepa et al. investigated the conservation laws and obtained traveling wave solutions of the (2+1)-dimensional JME [58]; Gu utilized the complex method to obtain exact solutions of the (2+1)-dimensional JME [45]; Iqbal et al. studied the JM system with the modified exponential rational function method [59]; Guiping et al. derived new solitary solutions of the time-fractional coupled JME [60]; Sadat and Kassem obtained explicit solutions for the (2+1)-dimensional JME using the integrating factors method in an unbounded domain [61]; Kaewta et al. studied the (2+1)-dimensional conformable-time partial integro-differential JM equation using the exp-function method [62] and transformed the (2+1)-dimensional JME into a fourth-order partial differential equation to obtain exact solutions [63]; Pei and Bai investigated the Lie symmetries, conservation laws and exact solutions of the JME [64]. Furthermore, the space-time fractional form of the coupled JME studied by Chao and Qilong [65], the JME with positive dispersion studied by Jing et al. [66], and dozens of other studies like these can be listed as works emphasizing the importance of the JM equation.
The (2+1)-dimensional integro-differential Jaulent–Miodek equation is given in [67] (Equation (5)), where λ1, λ2, λ3, λ4 and λ5 are real constants; the model in Equation (2) is obtained for a particular choice of these parameters. In order to obtain the traveling wave solutions of nonlinear integrable evolution equations, the decomposition of nonlinear partial differential equations has its own importance and difficulty. The decomposition method is basically based on transforming or reducing a nonlinear partial differential equation into a system of ordinary differential equations, from either a theoretical or a practical point of view. With this approach, it is possible to obtain solutions of soliton equations by converting them to finite-dimensional Hamiltonian systems, with the aim of integrable decomposition, or at least to make the required calculations much easier. Li is among the first researchers to apply such methods to prove the existence of kink, periodic and solitary wave solutions of different singular nonlinear propagating soliton wave equations [68,69]. Such approximations make it possible to obtain integrable equations such as the equation given by Equation (5). In Equation (5), by substituting u = v_x and eliminating the integral term, we obtain the equivalent form of Equation (5), studied in this manuscript as Equation (6). Exact solutions of NLEEs are of crucial importance in adding a distinctive point of view. Numerical methods, calculations and simulations are important, but they give only a pictorial view, and the results obtained are often fuzzy to evaluate. At this point, analytical or exact solutions add extra insight, which is one of the main factors underlying the choice of an analytical method in this study.
Although different forms of kink soliton solutions have been obtained by various techniques for the JM and IDJM equations before, there is a lack of studies that focus on kink soliton shapes (parabolic or smooth) and show that the utilized approaches are easily applicable and effective, which are positive aspects of this work.
The remainder of the article is structured as follows: Section 2 is devoted to obtaining the NODE form of Equation (6). In Section 3, the basic algorithms of the generalized Kudryashov method and a sub-version of the auxiliary equation method are presented. Section 4 includes the soliton solutions and their interpretations, and Section 5 gives the conclusions.
Mathematical Analysis of the Investigated Problem
Let us consider Equation (6) and apply the traveling wave transform of Equation (7), which comes from the Lie symmetry infinitesimals ∂/∂x, ∂/∂y and ∂/∂t, where x, y are spatial coordinates and t is the temporal variable. In addition, β and w are nonzero arbitrary constants, where w stands for the velocity. Inserting Equation (7) into Equation (6) gives Equation (8). Substituting R(κ) = v′(κ), we recast Equation (8) in the form of Equation (9), where λ1, λ2, λ3, λ4 and λ5 are arbitrary real constants; Equation (9) is the nonlinear ordinary differential form of Equation (6).
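Since the explicit forms of Equations (6)-(9) did not survive extraction here, the following SymPy sketch only illustrates the mechanics of the traveling wave reduction under the assumed standard form of Equation (7), v(x, y, t) = V(κ) with κ = x + βy − wt: every partial derivative collapses into an ordinary derivative of V.

```python
import sympy as sp

x, y, t, beta, w = sp.symbols("x y t beta w")
V = sp.Function("V")
kappa = x + beta * y - w * t  # assumed traveling wave variable of Equation (7)

v = V(kappa)
# Each partial derivative reduces to a kappa-derivative times a constant:
print(sp.diff(v, t))     # -w * V'(kappa)
print(sp.diff(v, y))     # beta * V'(kappa)
print(sp.diff(v, x, 3))  # V'''(kappa); x-derivatives pass through unchanged
```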
Proposed Methods and Their Applications
In this section, the proposed methods are briefly explained and applied to Equation (9).
Generalized Kudryashov Method and Its Implementation
Step-1: Let us assume that Equation (9) has a solution in the rational form of Equation (10) [70], R(κ) = (a_0 + a_1 M(κ) + ... + a_r M^r(κ)) / (b_0 + b_1 M(κ) + ... + b_s M^s(κ)), where a_i (i = 0, 1, ..., r) and b_j (j = 0, 1, ..., s) are real constants such that a_r and b_s are not both zero. Here, r and s are balancing constants that are positive integers, and M(κ) is a solution of Equation (11) [70], M′(κ) = M²(κ) − M(κ), which has the well-known solution [70] M(κ) = 1/(1 + δe^κ), where δ is a nonzero constant.
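A quick SymPy check of the auxiliary relation assumed above (the standard generalized Kudryashov pair M′ = M² − M with M(κ) = 1/(1 + δe^κ)); this is a verification sketch, not part of the authors' derivation.

```python
import sympy as sp

kappa, delta = sp.symbols("kappa delta", nonzero=True)
M = 1 / (1 + delta * sp.exp(kappa))

# Residual of the auxiliary ODE M'(kappa) = M(kappa)**2 - M(kappa):
residual = sp.diff(M, kappa) - (M**2 - M)
print(sp.simplify(residual))  # 0, confirming the well-known solution
```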
A Sub-Version of the Auxiliary Equation Method and Its Implementation
Step-1: Let us assume that Equation (9) has a solution in the form of a finite series in M(κ), Equation (21), R(κ) = A_0 + A_1 M(κ) + ... + A_r M^r(κ), where A_0, A_1, ..., A_r are real values, r is a balancing constant, and M(κ) is a solution of the auxiliary formula of Equation (22), from which the derivatives of R(κ) are easily ascertained. Step-2: Applying the homogeneous balance principle between the highest-order derivative term R″(κ) and the highest-degree term R³(κ) in Equation (9), and taking into account Equations (21) and (22), we calculate the balancing constant from r + 2 = 3r. The value r = 1 generates the following structure of Equation (21): R(κ) = A_0 + A_1 M(κ) (Equation (24)). Step-3: Inserting Equations (22) and (24) into Equation (9), a polynomial in powers of M(κ) is formed. Collecting the terms that include the same power of M(κ) and setting each coefficient to zero, we obtain an algebraic system of equations (Equation (25)). Step-4: The solution of Equation (25) permits us to obtain the solution sets, including SET-6 and SET-7, where ∆ = −12λ₂λ₅ + 3(λ₃ + λ₄)².
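The homogeneous balance step of Step-2 can be checked symbolically: when M′ is quadratic in M (as assumed here), each derivative raises the M-degree of R by one, so R″ has degree r + 2 while R³ has degree 3r. A minimal sketch under that assumption:

```python
import sympy as sp

r = sp.symbols("r", positive=True)
# Balance the degree of R'' (r + 2) against the degree of R**3 (3r):
print(sp.solve(sp.Eq(r + 2, 3 * r), r))  # [1], so the ansatz truncates at A_1
```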
Results and Discussion
In this section, we illustrate graphical simulations of the (2+1)-dimensional IDJME solutions in Equations (18)-(20), (28) and (29). We present 3D, contour and 2D graphics to display the soliton shapes of the solution functions. In addition, we interpret the movement of the solitons with respect to time via the 2D graphics.
In Figure 1, v₁(x, y, t) from Equation (18) is plotted for δ = 0.35 and y = 5. This soliton is a kink soliton. We examine its behavior with the help of 3D, contour and 2D graphics in Figure 1a, Figure 1b and Figure 1c, respectively. In Figure 1c, the direction of the parabolic kink soliton is shown for the values t = 0, 4, 8; the soliton moves to the right along the x-axis.
In Figure 2, we visualize v₄(x, y, t) from Equation (28) for the parameter values λ₁ = 2, λ₂ = −1, λ₃ = λ₄ = λ₅ = 1 and y = 2. This soliton is a kink soliton. We investigate its physical orientation via the 3D, contour and 2D graphs in Figure 2a, Figure 2b and Figure 2c, respectively. In Figure 2c, we show the movement of the flat kink soliton for the values t = 0, 4, 8; it can be observed that this kink soliton maintains its form and moves to the left along the x-axis. In Figure 3, we plot the solution v₇(x, y, t) from Equation (29) by assigning the values λ₁ = 3, λ₂ = −0.5, λ₃ = λ₄ = λ₅ = 3 and y = 2 to the parameters. This figure represents the smooth kink soliton model; we analyze its physical orientation via the 3D, contour and 2D graphs in Figure 3a, Figure 3b and Figure 3c, respectively. Figure 4 is another scenario of v₂(x, y, t) from Equation (19), for δ = 2.25, λ₁ = 1.25, λ₂ = λ₃ = λ₅ = 1, λ₄ = 1.12 and y = 2. Figure 4a, Figure 4b and Figure 4c show the 3D, contour and 2D scenarios, respectively. In Figure 4c, the direction of the soliton is shown for the values t = 1, 7, 13, where the soliton migrates to the right along the x-axis. If a little more attention is paid to the soliton graph presented in Figure 4, it will be seen that this presentation differs from the previous graphical simulations. The soliton, in general, is like a combination of two planar behaviors (with a curved junction). In a sense, it reflects the kink soliton appearance in terms of the general image, but not in terms of the lower skirt part of the soliton; it has a large flat area at the top. In Figure 4, there is a situation similar to the observation made for Figure 2: the wave is below the neutral level. However, unlike Figure 2, there is no skirt formation at the lower part of the wave, and, as another difference, the slope of the waterfall part of the wave is steeper. A physical observation regarding Figure 4 can also be made as follows. If the graph represented by Figure 4 is considered as a water wave in the sea or ocean, then the wave representation is below the sea or ocean surface (taking the surface as the neutral or zero level). In this respect, the entire wave forms below the neutral level, and the bottom skirt of the wave (bottom right), in a sense, forms or runs parallel to the bottom. Figure 5 also represents a behavior that draws our attention and needs to be emphasized. Here, the same soliton solution function is used (as in Equations (18) and (19)), but a different solution set from the previous graphs is used. The bottom and top skirts are not visible in the scenarios of Equation (20); in a sense, this is the form of the graph in Figure 4 in which the upper skirt also disappears. In general, such soliton behaviors are called plane solutions. If Figures 2, 4 and 5 are considered separately, these graphs are graphical representations of solution functions obtained by applying the same solution method, the generalized Kudryashov scheme. It is thus seen that solution functions with the same character represent different soliton behaviors with different solution sets. While the soliton in Figure 2 has both lower and upper skirts, the lower skirt cannot be observed in Figure 4, and neither the lower nor the upper skirt can be observed in Figure 5. In addition, except for the skirt parts of the soliton (i.e., the
waterfall part), it turns into an additional inclined physical structure. Beyond the fact that this kind of behavior is rarely reported for the IDJME, it is important in showing how influential the obtained solution sets and the parameter selection are in such NLPDE solutions.
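To make the kink profiles discussed above reproducible in spirit, the sketch below plots a generic smooth kink of tanh type at several times; the functional form and parameter values are illustrative stand-ins, since the explicit expressions of Equations (18)-(20), (28) and (29) did not survive extraction here.

```python
import numpy as np
import matplotlib.pyplot as plt

def kink(x, t, amplitude=1.0, width=1.0, speed=0.5):
    """Generic right-moving tanh kink: a stand-in for the IDJME solutions."""
    return amplitude * np.tanh((x - speed * t) / width)

x = np.linspace(-10, 10, 400)
for t in (0, 4, 8):  # the same time snapshots used for Figure 1c
    plt.plot(x, kink(x, t), label=f"t = {t}")
plt.xlabel("x"); plt.ylabel("v"); plt.legend(); plt.title("Generic kink profile")
plt.show()
```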
Conclusions
In this article, the soliton solutions of the (2+1)-dimensional IDJME, which gives information about the energy-dependent Schrödinger potential, were investigated using two different efficient analytical methods: the generalized Kudryashov method and a sub-version of an auxiliary equation method. We obtained different forms of kink solitons in accordance with the structure of the IDJME. Although different forms of the kink soliton type have been obtained by using different methods related to the JM and IDJM equations in the literature, there is a lack of studies that focus on the kink soliton types
| 2023-05-19T15:06:19.834Z | 2023-05-16T00:00:00.000 | {
"year": 2023,
"sha1": "1689005aed447561f5a837bee4edacb57a90fe28",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/15/5/1090/pdf?version=1684220575",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "12caaaae2fc919164d4191733e6f29794d6b3cb4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
235449808 | pes2o/s2orc | v3-fos-license | MicroRNA‐199a‐5p aggravates angiotensin II–induced vascular smooth muscle cell senescence by targeting Sirtuin‐1 in abdominal aortic aneurysm
Abstract Vascular smooth muscle cell (VSMC) senescence contributes to abdominal aortic aneurysm (AAA) formation, although the underlying mechanisms remain unclear. This study aimed to investigate the role of miR-199a-5p in regulating VSMC senescence in AAA. VSMC senescence was determined by a senescence-associated β-galactosidase (SA-β-gal) assay. RT-PCR and Western blotting were performed to measure miRNA and protein levels, respectively. The generation of reactive oxygen species (ROS) was evaluated by H2DCFDA staining. A dual-luciferase reporter assay was used to validate the target gene of miR-199a-5p. VSMCs exhibited increased senescence in AAA tissue relative to healthy aortic tissue from control donors. Compared with VSMCs isolated from control donors (control-VSMCs), those derived from patients with AAA (AAA-VSMCs) exhibited increased cellular senescence and ROS production. Angiotensin II (Ang II) induced VSMC senescence by promoting ROS generation. The level of miR-199a-5p expression was upregulated in the plasma of AAA patients and in Ang II-treated VSMCs. Mechanistically, Ang II treatment significantly elevated the miR-199a-5p level, thereby stimulating ROS generation by repressing Sirt1, with consequent VSMC senescence. Nevertheless, Ang II-induced VSMC senescence was partially attenuated by a miR-199a-5p inhibitor or a Sirt1 activator. Our study revealed that miR-199a-5p aggravates Ang II-induced VSMC senescence by targeting Sirt1 and that miR-199a-5p is a potential therapeutic target for AAA.
| INTRODUCTION
Abdominal aortic aneurysm (AAA), characterized by degradation of the aortic wall and a progressively enlarged aorta that exceeds the normal diameter by >50%, is a leading cause of morbidity and mortality worldwide. 1 It is an age-related vascular disease with an incidence as high as 8% in males aged >60 years and females aged >65 years. 2 There is no effective pharmacological treatment to prevent, delay or reverse AAA, 3 so a novel pharmacotherapy is urgently needed. The lack of effective drug therapy for AAA is partially due to a poor understanding of the molecular mechanisms that underlie its development. It has been well documented that senescence of vascular smooth muscle cells (VSMCs), the principal resident cells of the aortic wall, plays a critical role in the formation and progression of AAA. 4 Senescent VSMCs release a variety of pro-inflammatory cytokines and matrix-degrading molecules, such as monocyte chemotactic protein-1, interleukin-6 and matrix metalloproteinase 2, that contribute to AAA formation. 5 Immunoglobulin E has been shown to activate the lincRNA-p21-p21 signalling pathway to induce VSMC senescence and thus facilitate angiotensin II (Ang II)-induced AAA formation in ApoE−/− mice. 6 Nevertheless, the precise mechanism underlying VSMC senescence in AAA is not well understood.
MicroRNAs (miRNAs) are small non-coding RNAs (~21-23 nucleotides) that bind to the 3′ untranslated region (UTR) of specific target mRNAs, inducing target degradation or inhibiting translation. 7 miRNAs are involved in a wide range of biological and pathophysiological processes within the vasculature. 8 Recently, it has been well demonstrated that miRNAs participate in the initiation and progression of AAA. 9 miR-712 has been shown to promote AAA development in Ang II-infused ApoE−/− mice by repressing two matrix metalloproteinase inhibitors, TIMP3 and RECK. 10 miR-155-5p suppresses the viability of VSMCs by targeting FOS and ZIC3 to trigger progression of AAA. 11 Nonetheless, whether and how miRNAs affect VSMC senescence in AAA remains unclear.
Sirtuins, a family of nicotinamide adenine dinucleotide-dependent enzymes, are widely expressed in mammals. Seven sirtuins (Sirt1-7) have been reported in humans, serving multiple functions including cell proliferation, survival and homeostasis. 12 Sirt1, one of the best-studied sirtuins, plays an essential role in mediating both replicative and premature cellular senescence. 13 Previous studies revealed that Sirt1 is highly expressed in the vasculature, including in VSMCs, endothelial progenitor cells and endothelial cells, and regulates cardiovascular functions. [14][15][16] Indeed, an aberrant Sirt1 level is closely associated with AAA formation and progression. 17,18 Despite this, whether a specific miRNA affects VSMC senescence by regulating Sirt1, thereby influencing AAA formation, remains to be determined. This study revealed that expression of miR-199a-5p was significantly increased in the plasma of AAA patients and in Ang II-treated VSMCs and contributed to Ang II-induced VSMC senescence by targeting Sirt1. This may provide an alternative therapeutic strategy for AAA.
| Isolation, culture and characterization of VSMCs
Abdominal aortic aneurysm tissue was collected from patients who underwent surgical repair. Healthy human abdominal aortic tissue was harvested from donors and served as the control group.
Written informed consent was obtained from all study patients. All procedures involving human samples were approved by the research ethics board of Guangdong Provincial People's Hospital (No. GDREC2018060H). VSMCs were isolated from the abdominal aortic tissue as described in our previous study. 4 Briefly, after cleaning away the fatty tissue, the medial tissue of the aorta was carefully dissected from the adventitia and intima and then cut into 1-2 mm³ pieces. Pieces were then transferred to 10-cm poly-L-lysine-coated culture plates and incubated for adhesion at 37°C for 1 hour. After attaching to the plate, the medial pieces were gently cultured with Dulbecco's modified Eagle medium (DMEM; Sigma-Aldrich) containing 10% foetal bovine serum (FBS; Gibco) and 100 µg/mL penicillin and streptomycin (P/S, Thermo Fisher Scientific). Cell cultures were maintained at 37°C in a humidified 5% CO2 atmosphere. The medial pieces were left undisturbed for 4 days to prevent detachment, and the medium was refreshed approximately every 3 to 4 days. VSMCs migrated out from the medial pieces within 1-2 weeks. After removing the medial pieces, VSMCs were regularly collected and passaged. All VSMCs at passage 2-3 were used in this study. In the current study, we collected seven control-VSMC cell lines from healthy donors (51.71 ± 3.30 years old) and eight AAA-VSMC cell lines from AAA patients (55.25 ± 4.861 years old).
| HE staining
Abdominal aortic tissue from AAA patients and healthy aortic tissue from control donors were harvested. After fixation with 10% formalin, the tissue was embedded in paraffin and cut into 5-μm-thick sections. Sections were stained with haematoxylin and eosin (HE) according to our laboratory's protocol. Briefly, the sections were deparaffinized in xylene and rehydrated through alcohol. Subsequently, the sections were stained with haematoxylin solution for 5 minutes and then rinsed in alcohol. The sections were counterstained in eosin solution for 30 seconds, then mounted and photographed.
KEYWORDS: abdominal aortic aneurysms, miR-199a-5p, senescence, Sirtuin1, vascular smooth muscle cells
| DHE staining
Reactive oxygen species (ROS) production in AAA tissue and control aortic tissue was evaluated by dihydroethidium (DHE) staining (Thermo Fisher Scientific, D1168). Briefly, sections were hydrated and incubated at room temperature with 10 μM DHE in the dark for half an hour. After washing with PBS, randomly selected areas were photographed using a fluorescence microscope, and fluorescence intensity was calculated using ImageJ software.
Control-VSMCs and AAA-VSMCs were cultured on 6-well plates.
| Bromodeoxyuridine (BrdU) incorporation assay
The proliferation of VSMCs was evaluated using a BrdU incorporation kit according to the manufacturer's protocol (Roche, 11647229001). Briefly, 3 × 10⁴ VSMCs were seeded in 96-well plates and incubated at 37°C with 10 μM BrdU labelling solution for 24 hours. After removing the labelling solution, the VSMCs were treated with 200 μL FixDenat solution for 30 minutes. Next, the cells were incubated with anti-BrdU-POD working solution for 90 minutes. Finally, after washing with PBS three times, the cells were incubated with 100 μL substrate solution for 5 minutes, and the absorbance at 450 nm was measured.
| H 2 DCFDA staining
To detect the generation of ROS in VSMCs, H2DCFDA staining (D399, Invitrogen) was performed according to the manufacturer's protocol. Briefly, control-VSMCs were cultured in 24-well plates on collagen-coated glass coverslips and then treated with Ang II or Ang II + NAC. Cells were then incubated in the dark with 10 μM H2DCFDA for 15 minutes at 37°C. Five different fields of view per sample were photographed, and fluorescence intensity was calculated in three independent experiments using ImageJ software.
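The ImageJ-style intensity quantification described above can be approximated in a few lines of Python; this is a hedged sketch in which the file names and the flat background value are assumptions, not the study's actual processing settings.

```python
import numpy as np
import imageio.v3 as iio

def mean_fluorescence(paths, background=0.0):
    """Background-subtracted mean pixel intensity per field, then averaged."""
    per_field = [np.clip(iio.imread(p).astype(float) - background, 0, None).mean()
                 for p in paths]
    return np.mean(per_field), np.std(per_field)

# Hypothetical field images for one sample (five fields, as in the protocol):
fields = [f"vsmc_angII_field{i}.tif" for i in range(1, 6)]
mean_int, sd_int = mean_fluorescence(fields, background=10.0)
print(f"mean fluorescence = {mean_int:.1f} +/- {sd_int:.1f}")
```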
| Real-time PCR
Total RNA from VSMCs or serum was isolated with TRIzol reagent (Takara, RNAiso Plus, 9108, and RNAiso Blood, 9112). Reverse transcription was performed using a PrimeScript RT Reagent Kit (Takara, RR037A), and qRT-PCR of miR-199a-5p was performed using a One-Step TB Green® PrimeScript™ RT-PCR Kit (Takara, RR820A). For miR-199a-5p, a Bulge-Loop™ miRNA RT primer (RiboBio) was used. U6 served as the reference gene for miRNA expression analysis. The expression of miR-199a-5p was normalized to that of U6 using the 2^−ΔΔCt cycle threshold method. The experiments were repeated at least three times.
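A minimal sketch of the 2^−ΔΔCt normalization named above, with U6 as the reference gene; the Ct values are invented for illustration.

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (reference gene here: U6)."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Hypothetical Ct values: miR-199a-5p vs. U6, Ang II-treated vs. control VSMCs.
fc = ddct_fold_change(24.1, 18.0, 26.0, 18.2)
print(f"miR-199a-5p relative expression: {fc:.2f}-fold")  # > 1 means up-regulated
```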
Quantification of Western blots from three independent experiments was analysed using ImageJ software (National Institutes of Health).
| microRNA sequencing and data analysis
Total RNA was extracted from the serum of patients and purified. We identified miRNAs with a fold change ≥2 and a P-value < .05 in a comparison as significant differentially expressed (DE) miRNAs.
| Luciferase activity assay
Wild-type SIRT1 3′UTR firefly luciferase reporter plasmids, and SIRT1 3′UTR firefly luciferase reporter plasmids with the potential miR-199a-5p-binding site mutated, were used in this study. These plasmids were co-transfected with miR-199a-5p mimic or miR-control into HEK293 cells, while Renilla luciferase reporter plasmids were also transfected and served as an internal control.
After transfection, luciferase activity was detected using a Dual-Glo Luciferase Assay Kit (Promega).
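As a hedged illustration of how dual-luciferase data of this kind are typically reduced, the sketch below normalizes firefly to Renilla signal and expresses the mimic condition relative to miR-control; all readings are invented.

```python
import numpy as np

def relative_activity(firefly, renilla):
    """Mean firefly/Renilla ratio across replicate wells."""
    return np.mean(np.asarray(firefly) / np.asarray(renilla))

# Hypothetical luminescence readings (three replicate wells each).
wt_control = relative_activity([9800, 10100, 9900], [5000, 5100, 4950])
wt_mimic = relative_activity([4100, 4350, 4200], [5050, 4900, 5000])
print(f"WT 3'UTR, mimic vs. control: {wt_mimic / wt_control:.2f}")
# A ratio well below 1 for the WT reporter, but not for the mutant reporter,
# supports direct binding of miR-199a-5p to the SIRT1 3'UTR.
```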
| Viral vector construction and infection
The lentiviral plasmid constructs for Sirt1 were purchased from TranSheepBio (J0510-A9). The lentivirus was packaged as previously described. 19 VSMCs at 70-80% confluence were infected with lentivirus at a multiplicity of infection of 10 in the presence of polybrene (8 μg/mL). Transfection efficiency was determined after 72 hours by Western blotting.
| ELISA assay
The concentrations of IL-6 and TNFα in aortic tissue were measured using a human IL-6 Quantikine® ELISA Kit (R&D Systems, D6050) and a human TNFα Quantikine® ELISA Kit (R&D Systems, STA00D) according to the manufacturer's instructions.
| Statistical analysis
All values are expressed as mean ± SEM. Analysis was performed using GraphPad Prism software. Statistical significance was determined by an independent-samples t test between two groups, or by analysis of variance (ANOVA) followed by the Bonferroni post hoc test for comparisons among more than two groups. P < .05 was considered statistically significant.
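A minimal Python equivalent of the comparisons described above (two-group t test; one-way ANOVA followed by Bonferroni-corrected pairwise tests); the group values are invented.

```python
import numpy as np
from scipy import stats

control = np.array([1.0, 1.2, 0.9, 1.1])
ang2 = np.array([2.1, 1.8, 2.4, 2.0])
ang2_nac = np.array([1.3, 1.5, 1.2, 1.4])

# Two groups: independent-samples t test.
print(stats.ttest_ind(control, ang2))

# More than two groups: one-way ANOVA, then Bonferroni-corrected pairwise tests.
print(stats.f_oneway(control, ang2, ang2_nac))
pairs = [(control, ang2), (control, ang2_nac), (ang2, ang2_nac)]
alpha_corrected = 0.05 / len(pairs)  # Bonferroni correction
for a, b in pairs:
    p = stats.ttest_ind(a, b).pvalue
    print(f"p = {p:.4f}; significant at corrected alpha: {p < alpha_corrected}")
```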
| VSMCs exhibit cellular senescence in human AAA tissue
Our previous study showed increased VSMC senescence in aortic aneurysmal tissue from MFS patients. 4 We therefore examined cellular senescence in human AAA tissue. First, HE staining revealed vasculopathy in the medial layer of the abdominal aorta in AAA patients (Figure 1A). We next performed the SAβ-gal assay to detect cellular senescence in AAA tissue. As shown in Figure 1B, SAβ-gal activity was significantly increased in AAA tissue relative to control tissue. The SAβ-gal-stained area was mainly located in the medial layer of the abdominal aorta in AAA patients, suggesting a role of medial VSMC senescence (Figure 1B). Western blotting also showed that the levels of the cellular senescence markers p-p53 and p21 were much higher in AAA tissue compared with control tissue (Figure 1E). These data suggest that VSMCs in AAA tissue were senescent.
| VSMCs derived from AAA patients demonstrate cellular senescence
We successfully isolated VSMCs from AAA patients and control donors, as evidenced by expression of α-SMA, calponin, MYH11 and smoothelin (Figure S1). In the current study, we isolated a total of seven control-VSMC cell lines from healthy donors and eight AAA-VSMC cell lines from AAA patients. Consistently, AAA-VSMCs exhibited higher protein levels of p-p53 and p21 than control-VSMCs (Figure 2A). Next, we compared the cell growth rate via serial passaging: AAA-VSMCs arrested at passage 8, whereas control-VSMCs continued growing until passage 12, suggesting that AAA-VSMCs had a lower growth rate (Figure 2B). The BrdU assay also showed that, compared with control-VSMCs, the absorbance at 450 nm of AAA-VSMCs was decreased by 50%, indicating a reduced proliferation rate (Figure 2C). We then performed the SAβ-gal assay to compare cellular senescence between AAA-VSMCs and control-VSMCs; as shown in Figure 2D, SAβ-gal activity was greatly enhanced in AAA-VSMCs compared with control-VSMCs.
In contrast, the number of Ki-67-positive cells was decreased by 60% in AAA-VSMCs compared with control-VSMCs (Figure 2E). We also examined DNA damage in AAA-VSMCs and control-VSMCs using γH2AX staining; compared with control-VSMCs, the percentage of γH2AX-positive cells had tripled in AAA-VSMCs (Figure 2F).
Collectively, these results indicate that VSMCs isolated from AAA patients are senescent.
| Ang II induces VSMC senescence via upregulation of ROS generation
To the best of our knowledge, a high level of Ang II is a major cause of AAA formation. 20,21 We measured the level of Ang II in the plasma of AAA patients. It has been documented that the circulating Ang II level is significantly increased in acute aortic dissection patients compared with healthy controls. 22 Consistently, compared with control donors, the concentration of Ang II was increased in AAA patients (27.3 ± 3.5 pg/mL vs. 11.6 ± 3.5 pg/mL; Figure 3A). To examine whether Ang II induces VSMC senescence, we treated VSMCs with 1 nM, 10 nM, 20 nM and 50 nM Ang II for 48 hours. As shown in Figure S2A, Ang II induced VSMC senescence at 20 nM, with the effect plateauing at 50 nM, indicating that the effect was dose-dependent. Next, we treated VSMCs with 20 nM Ang II for 24, 48, 72 and 168 hours. Ang II induced VSMC senescence at 48 hours, with the effect plateauing at 72 and 168 hours (Figure S2B), suggesting that Ang II-induced VSMC senescence was dose- and time-dependent.
Based on these results, treatment of VSMCs with 20 nM Ang II for 48 hours was chosen for further studies. Since ROS generation contributes to cellular senescence, we determined the ROS level in AAA tissue by DHE staining. As shown in Figure 3B, the level of ROS had tripled in AAA tissue. Furthermore, ROS generation was significantly increased in AAA-VSMCs compared with control-VSMCs, as evidenced by H2DCFDA staining (Figure S3A). Flow cytometry analysis also showed a higher level of intracellular ROS in AAA-VSMCs than in control-VSMCs (Figure S3B). We then analysed whether Ang II regulates VSMC senescence via ROS production. Ang II treatment significantly enhanced the percentage of SAβ-gal-positive cells among VSMCs and increased ROS generation (Figure 3C,D), whereas the ROS scavenger NAC markedly inhibited VSMC senescence and ROS generation. Moreover, NAC inhibited the upregulation of p-p53 and p21 expression induced by Ang II (Figure 3E). The above results suggest that Ang II mediates VSMC senescence via ROS production.
| Ang II induces VSMC senescence through regulation of Sirt1
Accumulating evidence has shown that Sirt1 plays a critical role in regulating ROS generation, 23,24 with a complex interplay between the two. Increased ROS can directly or indirectly regulate Sirt1 activity, which in turn controls the ROS level. 25,26 The exact relationship between Sirt1 activity and ROS generation may depend on the cell type and cellular context. To establish the relationship between VSMC senescence, Sirt1 and ROS in the current study, we first detected the expression of Sirt1 in human AAA tissue. The level of Sirt1 in human AAA tissue was significantly lower than in control tissue (Figure 4A), and it was likewise significantly reduced in AAA-VSMCs compared with control-VSMCs (Figure 4B). We then investigated the involvement of Sirt1 in Ang II-induced VSMC senescence and established that Sirt1 was significantly decreased in Ang II-treated VSMCs (Figure 4C), accompanied by a significant increase in the protein levels of p-p53 and p21 (Figure 4C) as well as in cellular senescence (Figure 4D) and ROS production (Figure 4E). Nevertheless, resveratrol, a Sirt1 activator, significantly downregulated the increased levels of p-p53 and p21 as well as cellular senescence and ROS production in Ang II-treated VSMCs (Figure 4C-E). To further verify the role of Sirt1 in Ang II-induced VSMC senescence, we overexpressed Sirt1 in control-VSMCs and then treated the cells with Ang II. As shown in Figure S4, overexpressed Sirt1 rescued Ang II-induced VSMC senescence (Figure S4A). Moreover, overexpressed Sirt1 inhibited Ang II-induced ROS generation in VSMCs (Figure S4B).
In addition to ROS, downregulation of Sirt1 is closely associated with inflammation, which contributes to AAA formation. 27 We therefore examined the expression of IL-6 and TNFα in AAA tissue. Compared with control tissue, the concentrations of IL-6 and TNFα were significantly increased in AAA tissue (Figure S5).
These results indicate that Ang II induces VSMC senescence through downregulation of Sirt1.
miR-199a-5p regulates Sirt1 expression
Increasing evidence has shown that miRNAs play a vital role in regulating AAA formation. To investigate whether miRNA(s) regulate the expression of Sirt1 to mediate VSMC senescence, we examined the expression of miRNAs collected from the plasma of AAA patients and control donors using miRNA sequencing. The results revealed that AAA patients had a miRNA expression signature significantly different from that of control donors (Figure 5A). The miR-199a-5p level was significantly upregulated in AAA patients (Figure 5B) and also greatly enhanced in AAA-VSMCs compared with control-VSMCs (Figure 5C). Furthermore, Ang II treatment upregulated the level of miR-199a-5p in VSMCs in a time-dependent manner (Figure 5D).
Bioinformatic analysis using TargetScan (http://www.targetscan.org/) showed that the 3′-UTR of Sirt1 has a potential binding site for miR-199a-5p (Figure 5E), suggesting that Sirt1 is a potential target of miR-199a-5p. Next, we examined whether overexpression of miR-199a-5p could alter the expression of Sirt1 in VSMCs. The results showed that miR-199a-5p mimic treatment significantly downregulated the expression of Sirt1 in VSMCs (Figure 5F). Furthermore, a dual-luciferase reporter gene assay demonstrated that the miR-199a-5p mimic reduced the activity of the Sirt1 wild-type (WT) reporter but had no influence on that of the Sirt1 mutant reporter (Figure 5G). In summary, these results indicate that miR-199a-5p regulates Sirt1 expression.
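The sketch below illustrates, under stated assumptions, the kind of seed-match search that underlies TargetScan-style predictions: it scans a 3′-UTR for the reverse complement of the miRNA seed (positions 2-8). The miR-199a-5p sequence is quoted here as an assumption to be verified against miRBase, and the UTR fragment is an invented placeholder rather than the real Sirt1 3′-UTR.

```python
# Minimal seed-match sketch; sequences are illustrative placeholders.
COMPLEMENT = str.maketrans("AUGC", "UACG")

def seed_match_sites(mirna, utr, seed_start=2, seed_end=8):
    """Return 0-based positions in `utr` matching the reverse complement of
    the miRNA seed (positions 2-8, a canonical 7mer site)."""
    seed = mirna[seed_start - 1:seed_end]        # miRNA positions 2..8
    site = seed.translate(COMPLEMENT)[::-1]      # reverse complement
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna_199a_5p = "CCCAGUGUUCAGACUACCUGUUC"  # assumed mature sequence; verify in miRBase
utr_fragment = "AAGUUACACUGGAUACUGGUUAACACUGGAU"  # invented placeholder fragment

print(seed_match_sites(mirna_199a_5p, utr_fragment))  # positions of ACACUGG sites
```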
Ang II induces VSMC senescence through targeting of Sirt1 by miR-199a-5p
We verified the functional role of miR-199a-5p in the regulation of Sirt1-mediated, Ang II-induced VSMC senescence. Inhibition of miR-199a-5p using a miR-199a-5p inhibitor greatly attenuated ROS generation (Figure 6A) and cellular senescence (Figure 6B). To confirm that Sirt1 is a target of miR-199a-5p, VSMCs were co-transfected with miR-199a-5p inhibitor and Sirt1-siRNA. The results showed that Sirt1-siRNA partially abrogated the effect of miR-199a-5p inhibition on ROS generation, cellular senescence, and Sirt1, p-p53 and p21 expression in Ang II-treated VSMCs (Figure 6A-C). These results indicate that Ang II induces VSMC senescence through miR-199a-5p-mediated targeting of Sirt1.

FIGURE 3 Ang II-induced cellular senescence in VSMCs via ROS generation. A, The concentration of Ang II in the serum from control donors and AAA patients was measured by ELISA. n = 7-8. B, Representative images of DHE staining and quantitative analysis of ROS generation in the abdominal aorta from control donors and AAA patients. n = 7-8. C, Representative images and quantitative analysis of SAβ-gal assay in control-VSMCs treated with Ang II or Ang II combined with NAC. D, Representative images of H2DCFDA staining and quantitative analysis of ROS generation in control-VSMCs treated with Ang II or Ang II combined with NAC. E, Western blotting and quantitative analysis of the expression of p-p53 and p21 in control-VSMCs treated with Ang II or Ang II combined with NAC. *P < .05, **P < .01, ***P < .001
DISCUSSION
There are several major findings in the current study ( Figure 6D).
First, VSMCs in patients with AAA exhibited cellular senescence.
Second, Sirt1 expression was downregulated in human AAA tissue.

FIGURE 6 … inhibitor + Sirt1-siRNA. B, Quantitative analysis of SAβ-gal-positive cells in control-VSMCs treated with Ang II, Ang II + miR-control, Ang II + miR-199a-5p inhibitor, Ang II + miR-199a-5p inhibitor + siRNA-control and Ang II + miR-199a-5p inhibitor + Sirt1-siRNA. C, Western blotting and quantitative analysis of the expression of Sirt1, p-p53 and p21 in control-VSMCs treated with Ang II, Ang II + miR-control, Ang II + miR-199a-5p inhibitor, Ang II + miR-199a-5p inhibitor + siRNA-control and Ang II + miR-199a-5p inhibitor + Sirt1-siRNA. D, Proposed mechanisms for miR-199a-5p regulation of VSMC senescence in AAA. ***P < .001

… infused Apoe−/− mice. 39 Since Sirt1 expression was greatly reduced in AAA tissue, we presumed the presence of a potential miRNA that targets Sirt1. Aging-associated increases in miR-34a expression promote VSMC senescence by inhibiting Sirt1, leading to arterial injury. 40 Compared with VSMCs isolated from miR-34a+/+ mice, those from miR-34a−/− mice are less prone to senescence. 41 We analysed our miRNA sequencing data and found that miR-34 was also upregulated in AAA patients compared with control donors. Nonetheless, its expression was not high among those miRNAs with increased expression in AAA patients. This may be due to the different miRNA expression signatures of AAA patients and AAA mice. In addition to miR-34a, whether other miRNAs induce VSMC senescence in AAA via targeting of Sirt1 remains to be determined. We found that miR-199a-5p was greatly enhanced in AAA patients, AAA-VSMCs and Ang II-treated VSMCs. Bioinformatic analysis showed that miR-199a-5p can bind to the 3′-UTR of Sirt1. In this study, we showed that Sirt1 is regulated by miR-199a-5p: miR-199a-5p mimic treatment significantly reduced Sirt1 expression in VSMCs, and a luciferase assay verified this observation. We also observed that miR-199a-5p inhibitor treatment significantly reduced Ang II-induced VSMC senescence via upregulation of Sirt1, and this effect was abrogated by Sirt1-siRNA.
There are some limitations that we should acknowledge. First, we investigated the role of only miR-199a-5p in the regulation of VSMC senescence. Whether other miRNAs that were significantly upregulated in AAA patients, including miR-455-3p and miR-125b-5p, mediate VSMC senescence requires further investigation. Second, whether miR-199a-5p contributes to AAA formation via regulation of VSMC senescence needs to be further verified in Ang II-infused Apoe−/− mice. Third, it has been reported that SNHG12, by targeting the miR-199a-5p/HIF-1α axis, contributes to atherosclerosis formation by modulating VSMC phenotypes. 42 In addition to Sirt1, whether miR-199a-5p mediates VSMC senescence via regulation of other targets needs to be addressed in future studies. Fourth, whether miR-199a-5p can injure other cell types, including endothelial cells and fibroblasts, to promote AAA formation has not been demonstrated.
In summary, our study shows that Ang II activates the miR-199a-5p/Sirt1 pathway to induce VSMC senescence and that this contributes to AAA formation. This study explored a novel molecular mechanism of AAA formation and provides a new therapeutic strategy for AAA.
"year": 2021,
"sha1": "1da9f27102461a6fc5769a2e533e61a01d1a7a6d",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.16485",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd8e60f9df1dff7a2e39cb4231f71a6fc5bd1875",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Recent advances of long non-coding RNAs in control of hepatic gluconeogenesis
Gluconeogenesis is the main process for endogenous glucose production during prolonged fasting or certain pathological conditions, and it occurs primarily in the liver. Hepatic gluconeogenesis is a biochemical process that is finely controlled by hormones such as insulin and glucagon, and it is of great importance for maintaining normal physiological blood glucose levels. Dysregulated gluconeogenesis induced by obesity is often associated with hyperglycemia, hyperinsulinemia, and type 2 diabetes (T2D). Long non-coding RNAs (lncRNAs) are involved in various cellular events, from gene transcription to protein translation, stability, and function. In recent years, a growing body of evidence has shown that lncRNAs play a key role in hepatic gluconeogenesis and thereby affect the pathogenesis of T2D. Here we summarize the recent progress on lncRNAs and hepatic gluconeogenesis.
Introduction
Gluconeogenesis is an important biochemical process for maintaining glucose homeostasis in mammals, in which glucose is produced from non-carbohydrate substrates including lactate, pyruvate, propionate, glycerol, and amino acids. In general, gluconeogenesis is activated when the blood glucose level is very low, often under fasting or starvation conditions. In mammals, gluconeogenesis mainly occurs in the liver, though it may also take place to a lesser extent in the kidney and small intestine. Under normal conditions, hepatic gluconeogenesis is finely tuned by hormones including insulin and glucagon and thus keeps blood glucose within physiological concentrations (1,2). Insulin is a negative regulator of gluconeogenesis, while glucagon is a positive regulator (1,2). Chronically elevated gluconeogenesis is often associated with metabolic syndromes such as hyperglycemia, hyperinsulinemia, insulin resistance, and type 2 diabetes (T2D) (3). On the contrary, impaired gluconeogenesis may cause hypoglycemia and a shortage of energy supply, leading to dizziness, memory loss, or coma (4).
Human genomics research data revealed that 84% of the human genome can be transcribed, while only about 2% of it encodes proteins (5). Therefore, RNAs not only act as carriers of genetic information but also perform a variety of regulatory functions. RNAs without protein-coding capacity are called non-coding RNAs (ncRNAs), including microRNAs, long non-coding RNAs (lncRNAs), small nucleolar RNAs (snoRNAs), and circular RNAs (circRNAs) (6,7). Traditionally, non-coding RNA molecules with a length greater than 200 nt have been defined as lncRNAs (8). Accumulating evidence has shown that lncRNAs play a pivotal role in many cellular events, such as cell division, differentiation, migration, and apoptosis (9,10). Each lncRNA has its own tissue-specific expression pattern, which underlies its unique function (10,11). For example, lncRNA-Bvht is a heart-associated lncRNA in mouse, which is enriched in embryonic stem cells (ESCs) and plays a key role in cardiomyocyte differentiation (12). Upon starvation or cold stimulation, some lncRNAs in adipose tissues are transcribed and participate in lipolysis and thermogenesis, which may hold therapeutic potential for treating metabolic diseases such as obesity and diabetes (13). Similarly, hepatocytes have a unique lncRNA expression profile, which changes during liver development and regulates liver maturation (14). RNA-seq revealed 104 differentially expressed lncRNAs in the liver of T2D rats, and bioinformatics analysis suggests these lncRNAs may correlate with the pathogenesis of T2D by affecting lipid metabolism, gluconeogenesis, inflammation, and/or endoplasmic reticulum stress (15). Moreover, metformin treatment induced a number of differentially expressed lncRNAs in the liver of mice, implying that lncRNAs are involved in hepatic gluconeogenesis, given the promising inhibitory effect of metformin on gluconeogenesis (16). LincIRS2 is an obesity-repressed lncRNA in the liver; its deficiency elevated blood glucose, promoted insulin resistance, and induced glucose output in mice (17). All these data strongly indicate the potential impact of lncRNAs on hepatic gluconeogenesis. In this review, therefore, we summarize the recent progress regarding the roles of lncRNAs in hepatic gluconeogenesis. Meanwhile, we also discuss the therapeutic potential of lncRNAs in ectopic gluconeogenesis-associated metabolic disorders such as insulin resistance and T2D.
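As an aside on how such differential-expression lists are typically derived, the following is a minimal sketch of shortlisting lncRNAs from RNA-seq output by fold-change and adjusted P value. The column names, thresholds, and toy values are common conventions assumed here for illustration, not those used in the cited studies.

```python
# Minimal sketch of shortlisting differentially expressed lncRNAs.
import pandas as pd

# Toy results table of the kind produced by DESeq2/edgeR-style pipelines.
de_table = pd.DataFrame({
    "gene_id": ["lncA", "lncB", "lncC", "lncD"],
    "log2FC":  [2.1, -1.6, 0.3, 1.2],
    "padj":    [0.001, 0.03, 0.60, 0.20],
})

def shortlist_lncrnas(df, lfc=1.0, alpha=0.05):
    """Keep transcripts with |log2 fold-change| >= lfc and adjusted P < alpha."""
    mask = (df["log2FC"].abs() >= lfc) & (df["padj"] < alpha)
    return df[mask].sort_values("padj")

print(shortlist_lncrnas(de_table))  # lncA and lncB pass both thresholds
```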
Hepatic gluconeogenesis in health and diseases
As the main energy source of mammals, glucose is necessary for maintaining normal physiological functions of the central nervous system (CNS), retina, and red blood cells (18)(19)(20). Adults of normal body weight consume about 160 g of glucose per day, most of which (∼120 g) is consumed by the brain (21). In addition to exogenous glucose obtained from food, endogenous glucose is largely stored as glycogen in organs such as the liver and muscle. In humans, only approximately 15 g of glucose is available for consumption in the extracellular fluid, termed blood glucose. The balance between endogenous glucose production and peripheral glucose uptake helps to maintain systemic glucose homeostasis (22). Under normal physiological conditions, the concentration of blood glucose is dynamically constant. Low blood glucose concentrations cause fatigue, dizziness, and irreversible damage to the CNS (23,24). Likewise, long-term high blood glucose is often associated with metabolic dysfunctions, including hyperglycemia, insulin resistance, and T2D. Therefore, blood glucose concentration is an essential indicator of glucose homeostasis, and the dynamic balance between exogenous glucose supply and endogenous glucose production is a key node for maintaining normal physiological blood glucose (25). The liver is an essential organ for keeping blood sugar in balance. Hepatic glucose metabolism includes multiple catabolic and anabolic pathways: glycogen synthesis, glycogenolysis, gluconeogenesis, and glycolysis jointly determine the stability of blood glucose. Hepatic cells regulate the dynamic balance of glucose metabolism in response to environmental and nutritional changes in an autonomous or non-autonomous manner. Hepatic gluconeogenesis, as the main route of endogenous glucose production during starvation, plays an important role in maintaining blood glucose balance.
Biogenesis and classification of lncRNAs
LncRNAs are mainly transcribed by RNA polymerase II and generally contain a 5′-m7G cap and a 3′-poly(A) tail (26,27). LncRNAs were traditionally thought not to possess the ability to encode proteins; however, some lncRNAs, such as LINC00998, LINC00961, LINC00467, and LINC-PINT, have been found to encode specific small polypeptides (28)(29)(30)(31)(32). The human GENCODE database shows that more than 173,000 lncRNA transcripts have been identified in the human genome (33). Of them, only a small fraction has been functionally annotated, while the functions of a large number of lncRNAs remain to be determined (34,35). Most lncRNAs are localized in the nucleus, while some are located in the cytoplasm or other sub-organelles, such as ribosomes and mitochondria (10). Compared with protein-coding genes, lncRNA genes are generally less conserved and have lower expression levels (36,37). Despite sharing similar patterns of splicing, export, and quality control with mRNAs, most lncRNAs are retained in the nucleus. In comparison to mRNAs, lncRNAs have fewer and longer exons, and for this reason, lncRNAs prefer the NXF1/NXT1 pathway for nuclear export (38). Furthermore, lncRNAs have lower splicing efficiency but higher splicing frequencies, which increases their numbers (39).
Based on genomic location and functioning mechanism, lncRNAs are divided into five groups: intergenic lncRNAs (lincRNAs), intronic lncRNAs, sense lncRNAs, antisense lncRNAs, and bidirectional lncRNAs (40). A large number of non-coding regions are distributed among the coding regions of the human genome, accounting for 98-99% of it. The lncRNAs transcribed from these non-coding regions are called intergenic lncRNAs. LncRNAs transcribed from introns within coding regions are named intronic lncRNAs. Sense and antisense lncRNAs are transcribed from the sense and antisense strands of protein-coding genes, respectively. Currently, most studies are focused on lincRNAs and antisense lncRNAs: lincRNAs show functional importance due to their highly active transcription, a certain degree of domain conservation, tissue-specific expression, and stability, whereas antisense lncRNAs account for a large proportion of human lncRNAs (40).
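A toy implementation of these positional definitions is sketched below, classifying a single lncRNA against one protein-coding gene model. Real annotation pipelines operate on complete GTF files, and the coordinates here are invented for illustration; bidirectional lncRNAs, which are defined by divergent transcription from a shared promoter rather than by overlap, are omitted for brevity.

```python
# Toy positional classifier; coordinates and gene model are invented.
def classify_lncrna(lnc, gene):
    """lnc and gene are dicts with 'start', 'end' and 'strand'; the gene also
    carries 'introns' as (start, end) tuples."""
    overlaps = lnc["start"] < gene["end"] and lnc["end"] > gene["start"]
    if not overlaps:
        return "intergenic (lincRNA)"
    if any(s <= lnc["start"] and lnc["end"] <= e for s, e in gene["introns"]):
        return "intronic"
    return "sense" if lnc["strand"] == gene["strand"] else "antisense"

gene = {"start": 1000, "end": 9000, "strand": "+", "introns": [(2000, 4000)]}
print(classify_lncrna({"start": 2500, "end": 3500, "strand": "+"}, gene))    # intronic
print(classify_lncrna({"start": 12000, "end": 13000, "strand": "+"}, gene))  # intergenic (lincRNA)
print(classify_lncrna({"start": 5000, "end": 6000, "strand": "-"}, gene))    # antisense
```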
Functioning mechanisms for lncRNAs
Through various regulatory modes, lncRNAs can positively or negatively control coding gene expression, and this control can occur at different stages of eukaryotic gene expression (41). At the chromatin level, lncRNAs induce epigenetic modifications that affect the conformational structure of chromatin and thereby control gene expression. LncRNAs can regulate DNA methylation by recruiting DNMTs/TETs, sequestering DNMTs, or regulating the expression of DNMTs/TETs (42). Alternatively, lncRNAs act as decoys to sequester chromatin modifiers away from specific genomic sites to induce chromatin remodeling (43). At the transcriptional level, lncRNAs can mediate gene silencing or activation. For example, Airn is an antisense transcript of the insulin-like growth factor 2 receptor (Igf2r) gene whose transcription causes Pol II to detach from the Igf2r promoter, resulting in transcriptional pausing and gene silencing (44). At the post-transcriptional level, lncRNAs are involved in the splicing of mRNAs. LncRNA Ctcflos mediates the alternative splicing of PRDM16 to generate short isoforms with a preference for thermogenesis, thereby promoting fat thermogenesis (45). Moreover, lncRNAs may also regulate gene expression by other means. For example, lncRNAs fold into higher-order structures to bind nucleoproteins and assemble ribonucleoprotein complexes that participate in protein nuclear localization, or they pair with other RNAs to recruit protein complexes or sequester microRNAs to regulate gene silencing (46).
LncH19
H19, the first lncRNA identified, was originally found in liver extracts; it is 2.3 kb in length and located on chromosome 11. After transcription, lncH19 is exported to the cytoplasm following modification processes similar to those of mRNAs, such as splicing, capping, and polyadenylation (47). It is enriched in embryonic stem cells and remains highly expressed in the adrenal gland, liver, and adipose tissue after birth (48). LncH19 cannot be translated into small peptides due to the special structure of its 5′ terminus (49). Therefore, lncH19 acts as an independent functional unit. A clinical study of obese women showed that human lncH19 transcription levels were negatively correlated with body mass index (BMI) and the homeostatic model assessment of insulin resistance (HOMA-IR) (50). Moreover, H19 has been shown to regulate glucose homeostasis and β-cell function (51).
By RNA-seq, Goyal et al. found that H19 was markedly decreased in the liver of diabetic db/db mice, suggesting a potential role in glucose metabolism. Their subsequent functional studies showed that knockdown of H19 stimulates gluconeogenic gene expression and hepatic glucose output in HepG2 cells and primary mouse hepatocytes (52). In vivo studies have shown that, in healthy mice, H19 absence results in dysregulated glucose metabolism, including hyperglycemia, hyperinsulinemia, and impaired insulin, glucose, and pyruvate tolerance tests (53). Mechanistically, H19 silencing increases the occupancy of P53 at the promoter of Foxo1, which promotes the transcription of Foxo1, a master regulator of gluconeogenic gene expression (52,53). However, this view of the role of H19 in gluconeogenesis has been challenged. Deng et al. have shown that overexpression of H19 in a human liver cell line activates the gluconeogenic program, likely due to increased expression of HNF4α (54). Most recently, one report confirmed that overexpression of H19 in Hepa1-6 cells increases Pck1 expression and gluconeogenesis by inducing the nuclear retention of FOXO1 (55). Also, in this study, H19 was identified as an imprinted gene transducing hyperglycemia from paternal obesity to female offspring (55). Given these inconsistent findings, the precise functions of H19 in gluconeogenesis remain unclear, and more studies are required to clarify this issue.
LncSHGL
Mouse lncSHGL is located on chromosome 17, and its human homolog is lncRNA B4GALT1-AS1. LncSHGL is expressed at low levels in the liver of obese mice, and similarly, lncRNA B4GALT1-AS1 is significantly decreased in patients with non-alcoholic fatty liver disease (56). It has been shown that restoration of hepatic lncSHGL protects against hyperglycemia, insulin resistance, and hepatic steatosis in diabetic mice, while inhibition of lncSHGL worsens hyperglycemia and hepatic lipid deposition (56). Mechanistic studies revealed that lncSHGL increases calmodulin (CaM) mRNA translation by recruiting heterogeneous nuclear ribonucleoprotein A1 (hnRNPA1). As a result, increased CaM suppresses the gluconeogenic and lipogenic pathways in hepatocytes (56).
LncMEG3
Maternally expressed gene 3 (MEG3) is an imprinted gene located on human chromosome 14q32. It is the ortholog of gene trap locus 2 (Gtl2) on mouse chromosome 12. LncMEG3 is generally considered a tumor suppressor; it is expressed in a variety of tissues and encodes lncRNAs associated with liver disease. In contrast to lncH19, the MEG3 transcript is positively correlated with obesity indices and HOMA-IR in humans. Accordingly, lncMEG3 is highly expressed in high-fat diet-induced obese mice and ob/ob mice (57). In primary hepatocytes, overexpression of lncMEG3 results in increased expression of Foxo1, G6pc, and Pck1; meanwhile, insulin-stimulated glycogen synthesis is suppressed by lncMEG3 (57). These alterations could be reversed by lncMEG3 interference (57). In another study, lncMEG3 was found to be a glucagon-inducible lncRNA in mouse primary hepatocytes, where it interacts with miR-302a-3p as a competing endogenous RNA (ceRNA) (58). In this way, lncMEG3 increases CREB-regulated transcriptional coactivator 2 (CRTC2), a target of miR-302a-3p. Consequently, upregulated CRTC2 stimulates gluconeogenesis by activating the PGC-1α/Pck1/G6pc axis in hepatocytes (58). Furthermore, as a ceRNA, lncMEG3 also sequesters miR-214, thereby favoring the expression of activating transcription factor 4 (ATF4) (59). ATF4 is capable of inducing the gluconeogenic program by affecting the transcriptional activity of FOXO1 (60). Therefore, lncMEG3 promotes gluconeogenesis in hepatocytes by targeting the miR-302a-3p/CRTC2 and miR-214/ATF4 axes.
LncBhmt-AS
Betaine-homocysteine methyltransferase (BHMT) is an enzyme that catalyzes the synthesis of methionine from homocysteine and is associated with insulin resistance and diabetes (61,62). BHMT is highly expressed in the liver of rodents, where it may play a role in gluconeogenesis by interacting with L-serine dehydratase/L-threonine deaminase to affect the use of amino acids for gluconeogenesis (63). Recently, a new lncRNA induced during fasting was discovered; it is an antisense RNA of Bhmt and was therefore named lncBhmt-AS (64). LncBhmt-AS is located on chromosome 13 in mice and is 1464 bp in length. Deficiency of lncBhmt-AS restricts gluconeogenesis in primary hepatocytes and inhibits hepatic glucose production and gluconeogenic gene expression in vivo (64). Conversely, Bhmt overexpression restores the gluconeogenesis impaired by lncBhmt-AS knockdown (64). These findings indicate that lncBhmt-AS plays an important role in regulating hepatic gluconeogenesis by targeting Bhmt.
LncGm10768
Gm10768 is a lncRNA specifically enriched in the liver. Using RNA-seq, Cui et al. found an abnormal increase of lncGm10768 in mouse livers after fasting (65). In addition, lncGm10768 is positively correlated with glucose production in mouse primary hepatocytes (65). Liver-specific knockout of lncGm10768 alleviates hyperglycemia and insulin resistance in db/db mice (65). LncGm10768 is localized in both the nucleus and cytoplasm and may therefore regulate gene expression at both the transcriptional and post-transcriptional levels. As endogenous competitive suppressors of microRNAs, lncRNAs can reverse gene silencing induced by microRNAs. miR-214 has a high-affinity binding site in lncGm10768, and it decreases in response to lncGm10768 overexpression (65). As mentioned above, miR-214 can target activating transcription factor 4 (ATF4) to inhibit the expression of G6pc and Pck1 (59). Therefore, the positive impact of lncGm10768 on hepatic gluconeogenesis is likely due to its interaction with miR-214 (65). In this regard, lncGm10768 and lncMEG3 play a similar role in hepatic gluconeogenesis by targeting miR-214, indicating that different lncRNAs may act synergistically to regulate gluconeogenesis.
LncGomafu
LncGomafu is a conserved lncRNA in mammalian species that is localized to the nucleus in most cases. It has been well documented that lncGomafu plays a key role in neuronal development and is involved in the pathogenesis of neuropsychiatric disorders (66,67). Similar to lncMEG3, lncGomafu is highly expressed in the livers of ob/ob mice and mice on a high-fat diet (HFD) (68). Knockdown of lncGomafu in the liver inhibits hepatic glucose production and improves insulin sensitivity in obese mice. On the contrary, overexpression of lncGomafu increases blood glucose levels in lean mice. Mechanistically, lncGomafu competitively sponges miR-139 to increase Foxo1 expression, thereby increasing gluconeogenic gene expression and hepatic gluconeogenesis (68).
LncMALAT1
Metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) is a conserved lncRNA located on human chromosome 11q13 with a length of 8.5 kb (26). LncMALAT1 is considered a biomarker for tumor diagnosis and has been shown to be involved in the regulation of several signaling pathways, including PI3K/AKT, NF-κB, and MAPK/ERK (69). Knockdown of lncMALAT1 in HepG2 and FLC4 cells leads to increased glucose secretion and expression of gluconeogenic genes such as G6pc and Pck1 (70). This study also revealed that the negative regulation of gluconeogenesis by lncMALAT1 is due to its upregulation of TCF7L2 (70). TCF7L2 has been shown to interact with the promoters of G6pc and Pck1, and this interaction impedes the transcriptional activities of CREB/CRTC2 and FOXO1, thereby repressing gluconeogenic gene expression (71).
LncGm10804
LncGm10804 is highly enriched in high glucose-treated hepatocytes and livers of non-alcoholic fatty liver disease (NAFLD) model mice. Both in vitro and in vivo studies have shown that the knockdown of lncGm10804 reduces the expression of Pck1 and G6pc in cultured hepatocytes and NAFLD mice. Meanwhile, lncGm10804 silencing alleviates hepatic steatosis and lipid accumulation by decreasing the expression of sterol regulatory element-binding protein-1c (SREBP-1c) and fatty acid synthase (FAS) in NAFLD mouse livers (72).
LincIRS2
LncRNA 4833411C07Rik was named LincIRS2 by Pradas-Juni et al. for its location ~80 kb 5′ of Irs2 (17). LincIRS2 is induced upon fasting or glucagon stimulation and responds to cAMP signaling (17), suggesting that lincIRS2 might be involved in hepatic gluconeogenesis. Indeed, in lean mice, knockdown of lincIRS2 in the liver induces elevated blood glucose, insulin resistance, and ectopic glucose output. Meanwhile, deficiency of Mafg in hepatocytes evokes a fasting-like gene expression profile, as evidenced by elevated expression of Fbp1, G6pc, and Pck1 (17). The same group subsequently found that MAFG controls the expression of lincIRS2 and thereby regulates glucose metabolism in the liver (17).
Conclusion and perspectives
Hepatic gluconeogenesis is an essential bio-process that keeps blood glucose within the normal physiological range. Dysregulated hepatic gluconeogenesis may cause various metabolic disorders. For instance, ectopically upregulated gluconeogenesis is a causative factor for hyperglycemia, hyperinsulinemia, insulin resistance, and T2D. In this regard, gluconeogenesis is also a target for developing anti-T2D drugs; metformin is one such drug that has achieved great success in the clinic. Therefore, in-depth studies on the regulation of hepatic gluconeogenesis and its molecular mechanisms are important for developing novel strategies to treat disorders caused by malfunctioning gluconeogenesis. Current evidence shows that lncRNAs play a crucial role in hepatic gluconeogenesis, although just 9 lncRNAs have been examined in this field to date. Of note, these lncRNAs exhibit different functions in hepatic gluconeogenesis: lncH19, lncMEG3, lncGm10768, lncGomafu, and lncBhmt-AS function as positive regulators, whereas lncMALAT1 and lncSHGL act as negative regulators (Table 1). As for the mechanisms involved, each lncRNA has its own working model (Figure 1).
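For quick reference, the regulatory roles summarized above can be restated as a simple lookup structure. This sketch encodes only what this review itself reports (lncH19 is listed with the positive regulators, as above, although its role is debated earlier in the text); it is a summary aid, not an authoritative annotation.

```python
# Regulator roles and mechanisms as stated in this review.
GLUCONEOGENESIS_LNCRNAS = {
    "lncH19":     ("positive", "P53/FOXO1 axis; reports are inconsistent"),
    "lncMEG3":    ("positive", "sponges miR-302a-3p (CRTC2) and miR-214 (ATF4)"),
    "lncGm10768": ("positive", "sponges miR-214 to de-repress ATF4"),
    "lncGomafu":  ("positive", "sponges miR-139 to increase Foxo1"),
    "lncBhmt-AS": ("positive", "supports Bhmt expression"),
    "lncMALAT1":  ("negative", "upregulates TCF7L2"),
    "lncSHGL":    ("negative", "recruits hnRNPA1 to enhance CaM translation"),
}

for role in ("positive", "negative"):
    names = [n for n, (r, _) in GLUCONEOGENESIS_LNCRNAS.items() if r == role]
    print(f"{role} regulators ({len(names)}): {', '.join(names)}")
```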
It should be mentioned that these gluconeogenesis-associated lncRNAs are mainly enriched in the liver. It is reasonable to predict that lncRNAs derived from other tissues may also be involved in hepatic gluconeogenesis, although no direct evidence yet supports this prediction. It has been shown that lncRNAs are often carried by exosomes (73). Hence, lncRNAs derived from other tissues, such as muscle, pancreas, and fat, could be transferred into the liver via exosomes, where they might affect the gluconeogenic program. For this reason, exosomal lncRNA-mediated crosstalk between other tissues and hepatic gluconeogenesis could be investigated as a future direction. We predict that more lncRNAs potentially involved in hepatic gluconeogenesis will be identified by RNA-seq technologies in non-liver tissues under stress, such as fasting or a high-fat diet. Engineered exosomes carrying specific lncRNAs with inhibitory effects on hepatic gluconeogenesis might hold great therapeutic potential for treating T2D. In addition, exosomal lncRNAs in blood might serve as diagnostic markers of dysregulated hepatic gluconeogenesis.
"year": 2023,
"sha1": "2392f02ea809001e6792e6b94ddeea604a01c7ca",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "2392f02ea809001e6792e6b94ddeea604a01c7ca",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Male Infertility: Shining a Light on Lipids and Lipid-Modulating Enzymes in the Male Germline
Despite the prevalence of male factor infertility, most cases are defined as idiopathic, thus limiting treatment options and driving increased rates of recourse to assisted reproductive technologies (ARTs). Regrettably, our current armory of ARTs does not constitute therapeutic treatments for male infertility, highlighting an urgent need for novel intervention strategies. In our attempts to fill this void, we have come to appreciate that the production of pathological levels of oxygen radicals within the male germline is a defining etiology of many idiopathic infertility cases. Indeed, an imbalance of reactive oxygen species can precipitate a cascade of deleterious sequelae, beginning with the peroxidation of membrane lipids and culminating in cellular dysfunction and death. Here, we shine light on the importance of lipid homeostasis and the impact of lipid stress in the demise of the male germ cell. We also seek to highlight the utility of emerging lipidomic technologies to enhance our understanding of the diverse roles that lipids play in sperm function, and to identify biomarkers capable of tracking infertility in patient cohorts. Such information should improve our fundamental understanding of the mechanistic causes of male infertility and find application in the development of efficacious treatment options.
Introduction
Infertility is defined by the World Health Organization as the inability of a couple to conceive naturally following 12 months of unprotected intercourse. While research is now increasingly targeted towards improving our mechanistic understanding of the causes of infertility, a traditional lack of emphasis on the contribution of men to conception, embryo health, and early childhood development has left the field of male reproductive biology trailing behind the equivalent female field. Nevertheless, what we have come to appreciate is that male infertility is associated with numerous factors, including environmental and occupational exposures [1]; genetic mutations [2]; diseases, such as testicular cancer [3]; and obesity [4]. Despite this knowledge, the vast majority (>70%) of male infertility cases are deemed idiopathic [5], a situation that severely limits therapeutic treatment options.
Physiological and Pathophysiological Roles of Lipids
Lipids are amphiphilic molecules that fulfil a diversity of roles within the body. They are essential structural components of cell plasma (and organelle) membranes and in this capacity serve as key regulators of cellular homeostasis [38]. Among the most abundant and important classes of cell membrane lipids are the polyunsaturated fatty acids (PUFAs) arachidonic acid (20:4), linoleic acid (18:2), and docosahexaenoic acid (22:6), and the saturated fatty acids myristic acid (14:0) and palmitic acid (16:0) [39]. Together with a glycerol backbone and a phosphate head group, these fatty acids are assembled into acyl tails to form phospholipids, such as phosphatidylethanolamine, phosphatidylcholine, and phosphatidylserine [40]. Phospholipids are the predominant lipid entity of the plasma membrane, although this cellular barrier is also supplemented with numerous other structurally important lipids. These include sphingolipids, which are commonly found in regions of the outer membrane [40] and are involved in signaling [41], and sterols, such as cholesterol, which restrict membrane fluidity and impart structural support. Together, membrane lipids hold fundamental roles in signal transduction, membrane and organelle protection, and molecular trafficking in and out of cells [42,43]. It follows that disruption to membrane lipids, and the homeostatic influence they exert, can have profound downstream effects on human health and disease.
In numerous research fields, but particularly in cancer biology, the impact of cellular stress on lipid membranes and the resulting consequences for cell function has become a key focus for understanding cell death and disease [44]. Lipid peroxidation is a process that commonly occurs following the production of high levels of reactive oxygen species (ROS). ROS can activate phospholipase proteins, leading to the cleavage and liberation of PUFAs from membrane phospholipids [45,46]. The free PUFAs can, in turn, be broken down via a combination of non-enzymatic Fenton reactions [47], autoxidation [48], or enzymatic metabolism involving the action of lipoxygenases and/or cyclooxygenases [49]. Importantly, in many cell types, these combined catabolic pathways lead to a recently characterized cell death modality termed ferroptosis, so named on the basis of its iron dependency [21]. The ferroptotic pathway is initiated by the inactivation of glutathione peroxidase 4 (GPX4), an antioxidant enzyme that affords membrane protection via the active reduction of lipid hydroperoxides [50]. It follows that GPX4 inactivation and/or depletion of its substrate glutathione (GSH) enables the accumulation of lipid hydroperoxides, the production of which is catalyzed by enzymes such as lipoxygenases [51]. Recently, an alternative defense system based on the activity of ferroptosis-suppressor-protein 1 (FSP1) has been reported, which offers additional protection against lipid peroxidation and the ferroptosis cascade, even after GPX4 ablation [52]. Specifically, ubiquinone (CoQ10) targets and suppresses lipid peroxidation while FSP1, using NAD(P)H, is responsible for the replenishment of CoQ10 [52]. Ultimately, however, elevated levels of lipid hydroperoxides and their highly reactive metabolites (the lipid aldehydes) overwhelm the cellular defenses and result in ferroptosis, a form of caspase-independent cell death characterized by a unique gene expression profile, decreased mitochondrial size, and outer mitochondrial membrane rupture [21] (Figure 1).
FIGURE 1 Lipid peroxidation (LP) commonly occurs following the excessive production of mitochondrial reactive oxygen species (ROS). ROS activate phospholipase (PLA) enzymes, which then assist in the cleavage of poly-unsaturated fatty acids (PUFAs) from membrane phospholipids. Liberated PUFAs are catabolized via the enzymatic action of lipoxygenase proteins, such as arachidonate 15-lipoxygenase (ALOX15), non-enzymatic Fenton reactions, or autoxidation. Acyl-coenzyme A (Acyl-CoA) synthetase long-chain family member 4 (ACSL4) assists in sensitizing cells to ferroptosis by virtue of its role in lipid biosynthesis. Together with lysophosphatidylcholine acyltransferase 3 (LPCAT3), these lipid remodeling enzymes can generate and incorporate, respectively, long chain PUFAs in cell membranes, the key substrates for peroxidation. Transferrin transports iron into the cell, which promotes lipid peroxidation, while System Xc− imports/exports cystine and glutamate amino acids in and out of the cell, respectively. Once within the cell, cystine is converted to cysteine and, together with nuclear factor erythroid 2-related factor 2 (NRF2), supports glutathione (GSH) synthesis. Glutathione peroxidase 4 (GPX4) activity is regulated by GSH and offers a first line of protection against ferroptosis by virtue of its reduction of both ROS and lipid peroxides. Interactions between ferroptosis-suppressor-protein 1 (FSP1), ubiquinone (CoQ10), and NAD(P)H provide additional protection against ferroptosis by reducing lipid peroxidation. However, in the event that lipid peroxidation exceeds the intrinsic cellular defenses, a ferroptotic cell death ensues. Ferroptosis can be experimentally induced by erastin, which interferes with mitochondrial function and accentuates ROS production, or by disrupting System Xc− function. Alternatively, ferroptosis can be promoted by RSL3, a selective GPX4 inhibitor. Ferroptosis can also be inhibited with deferoxamine, which disrupts iron activity. Image created using BioRender.com.
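To illustrate the balance described above and in the Figure 1 legend, the following toy kinetic model (our own illustrative construction, not taken from the cited literature) treats lipid hydroperoxide (LOOH) levels as the net of ROS-driven production and GPX4/GSH-dependent clearance; all rate constants are arbitrary.

```python
# Illustrative toy model of LOOH balance; all constants are arbitrary.
import numpy as np

def simulate_looh(ros=1.0, gpx4=1.0, gsh=1.0, k_prod=0.5, k_clear=0.8,
                  dt=0.01, t_end=40.0):
    """Euler integration of d[LOOH]/dt = k_prod*ROS - k_clear*GPX4*GSH*LOOH."""
    steps = int(t_end / dt)
    looh = 0.0
    trace = np.empty(steps)
    for i in range(steps):
        looh += dt * (k_prod * ros - k_clear * gpx4 * gsh * looh)
        trace[i] = looh
    return trace

healthy = simulate_looh()            # intact GPX4/GSH axis
inhibited = simulate_looh(gpx4=0.1)  # mimics RSL3-like GPX4 inhibition
print(f"LOOH at end of run, healthy: {healthy[-1]:.2f}")
print(f"LOOH at end of run, GPX4 inhibited: {inhibited[-1]:.2f}")
```

Lowering the gpx4 parameter raises the steady-state LOOH level roughly in inverse proportion, mirroring how GPX4 inactivation or GSH depletion permits hydroperoxide accumulation toward the ferroptotic threshold.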
Notably, ferroptosis has now been described as a characteristic feature of many diseases, including neurodegenerative disorders [53][54][55], ischemia [56,57], stroke [58,59], and numerous cancers [60,61]. Indeed, the extreme sensitivity of neurons to ferroptosis has been demonstrated by the targeted elimination of GPX4 in a mouse model, a strategy that gave rise to both neurodegeneration and paralysis [62]. Accordingly, pharmacological agents capable of modulating the activity of the ferroptosis pathway are gaining significant interest as potential means to prevent disease progression [63]. Illustrative of this promise, recent studies have shown that the direct inhibition of ferroptosis can protect against cardiac injury [34,64]. Conversely, the sensitization of cancerous cells via the suppression of intrinsic ferroptotic inhibitors is proving an effective strategy to drive these cells toward a ferroptotic demise [65]. In one such study, pharmacological inhibition of nuclear factor erythroid 2-related factor 2 (NRF2; a protein that affords protection against ferroptosis owing to its role in GSH synthesis) effectively ameliorated the resistance of hepatocellular carcinoma cells to ferroptosis induced by either erastin or sorafenib [66]. Similarly, suppression of miR-9, a microRNA implicated in the suppression of ferroptosis, significantly increased the potency of ferroptotic stimuli (i.e., RSL3 and erastin) in melanoma cells [67]. Thus, the ability to sensitize cancerous cells to programmed cell death through the induction of ferroptosis may provide an effective strategy to mitigate the risk posed by tumor growth and metastasis. Aside from their central role in the regulation of cellular death via ferroptosis, lipids also play key roles in alternative cell death modalities, such as necroptosis, pyroptosis, NETosis (as reviewed by [68,69]), and apoptosis (as reviewed by [70]).
In addition to being implicated in multiple forms of cellular degeneration, it is well known that many diseases possess an altered lipid signature, one that may be unique to each condition. As reviewed by Long et al., altered lipid metabolism has been observed in a wide variety of cancers, including breast cancer, prostate cancer, leukemia, pancreatic cancer, and glioblastoma [37]. Furthermore, elevated levels of fatty acids, such as docosahexaenoic acid, in blood plasma have been associated with a reduced risk of neurodegenerative disorders [71] whilst, conversely, increased levels of sphingolipids have been reported in diabetic patients [35]. Similarly, changes in the lipids docosahexaenoic acid, eicosapentaenoic acid, docosapentaenoic acid, and palmitoleic acid have been linked to fatty liver disease [72]. Notwithstanding these important observations, in many cases a detailed understanding of the mechanisms underpinning lipid profile changes is lacking. However, for diseases such as cardiovascular disease, where the knowledge of lipid homeostasis is more advanced, lipid biomarkers are now being utilized to predict the risk of atherosclerotic cardiovascular disease (ASCVD) [73]. Specifically, both observational and genetic evidence strongly support a causal relationship between high plasma concentrations of lipoprotein(a) and an increase in disease-related events, such as myocardial infarction, stroke, and valvular aortic stenosis [73][74][75]. Clinically, lipoprotein(a) levels of >100 nmol/L are considered indicative of an increased risk of ASCVD [73]. However, some discrepancies remain in the standardization of lipoprotein(a) assays and in the units used to report the levels of this lipid. Despite these challenges, the discovery of lipoprotein(a) as a predictive tool for ASCVD has led to the initiation of a randomized double-blind trial using antisense oligonucleotides to block the production of lipoprotein(a), as well as the development of other promising lipoprotein(a)-lowering therapies focused on small interfering RNA inhibitors [73]. This is just one example of the utility of lipid biology in informing novel diagnostics and interventions to prevent disease progression [36,[76][77][78].
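As a trivial illustration of the single clinical cut-off quoted above, the sketch below flags lipoprotein(a) measurements against the 100 nmol/L threshold. Real ASCVD risk assessment combines many factors; this encodes only the one threshold mentioned in the text, and the input values are invented.

```python
# Encodes only the single 100 nmol/L lipoprotein(a) cut-off quoted above.
LPA_RISK_THRESHOLD_NMOL_L = 100.0

def lpa_risk_flag(lpa_nmol_l: float) -> str:
    """Flag a lipoprotein(a) value against the quoted ASCVD risk threshold."""
    if lpa_nmol_l > LPA_RISK_THRESHOLD_NMOL_L:
        return "indicative of increased ASCVD risk"
    return "below the quoted risk threshold"

for value in (45.0, 120.0):  # invented example measurements
    print(f"Lp(a) = {value:.0f} nmol/L -> {lpa_risk_flag(value)}")
```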
Among numerous other promising treatment strategies, one that is gaining substantial interest is the deuteration of fatty acids that reside within lipid membranes. The rationale for this approach rests on recent evidence that the sensitivity of fatty acids to lipid peroxidation reactions is primarily attributable to the presence of bis-allylic hydrogen sites, which are ideal targets for oxygen radicals to initiate lipid peroxidation [79]. Deuteration is achieved by the substitution of deuterium atoms in place of hydrogen atoms at the bis-allylic sites of PUFAs (D-PUFAs), thus increasing the stability of these sites and mitigating the risk posed by lipid peroxidation. Importantly, this process has shown promising proof-of-principle results in the prevention of diseases linked to neurodegenerative disorders and aging. Thus, the incorporation of D-PUFAs into the diet of Caenorhabditis elegans (C. elegans) led to reductions in cellular stress and improvements in overall life span [79]. Furthermore, mouse studies have revealed memory improvements and decreased cellular stress following D-PUFA diet supplementation in models of Alzheimer's and Huntington's disease [80][81][82]. Thus, the treatment of diseases through the fortification of membranes within the cell is a promising approach, as are the various strategies for lipid-based therapeutics summarized in Table 1. Importantly, this growth in understanding, technology, and appreciation for the role of lipids in health and disease has paved the way for a new chapter in fertility research targeted towards understanding the contribution of lipids to reproductive disorders, such as infertility [83,84]. Hereafter, we shall discuss our current understanding of lipid biology in male reproduction and highlight several areas for the continued growth of this field.
Manipulation of ferroptosis
• Deletion of GPX4 in an AD mouse model led to memory and learning deficits [54].
• Significant improvements to locomotive activity in mice and decreases in ferroptotic cell death were observed following use of ferrostatin-1 in a mouse model for Parkinson's disease [85].
Deuteration
• Initial studies of C. elegans supplemented with D-PUFAs show reduced cellular stress as measured by ROS and lipid peroxidation levels. This treatment subsequently improved the lifespan, highlighting the promise for the prevention of age-related disorders [79].
• A diet supplemented with D-PUFAs significantly improved memory performance in an AD mouse model [80].
• A D-PUFA diet in a Huntington's disease mouse model resulted in improvements to memory recognition and reduction in lipid peroxidation markers [81].
• A mouse model for AD confirmed D-PUFA supplementation as a promising strategy to lower amyloid β-peptide production but did not improve learning deficits [82].
Lipidomics and novel biomarkers
• An extensive lipidomic approach has identified 35 potential lipid biomarkers that varied between healthy controls and AD blood samples [27].
• Blood lipidomics between aged healthy individuals and those with AD has identified 24 biomarkers that could be used to confirm AD with >70% accuracy [28].
• The levels of six lipid peroxidation markers were monitored between healthy and AD blood samples to provide a promising model for AD diagnosis [29].
Manipulation of ferroptosis
• A COPD mouse model induced via cigarette exposure demonstrated that GPX4 gene deletion resulted in an exacerbation of hallmark features of COPD and increased lipid peroxidation and ferroptotic cell death [86].
• Using a radiation-induced lung fibrosis (RILF) mouse model, GPX4 levels were shown to be significantly reduced compared to healthy controls. Further, the addition of the ferroptosis inhibitor liproxstatin-1 lowered levels of cellular stress and improved the GPX4 concentration [87].
Lipidomics and novel biomarkers
• A study of 220 individuals highlighted unique differences in the lipid profiles between unstable and stable coronary heart disease [89].
• A lipidomic study completed on 685 blood samples highlighted that the relative risk of cardiovascular disease was associated with increased levels of cholesterol esters and triacylglycerols [90].
• The identification of lipoprotein(a) as a risk factor for ASCVD has led to a clinical trial set to begin in 2020, which will examine the possibility of targeting lipoprotein(a) production to protect against the disease [73].
Lipidomics and novel biomarkers
• Screening of almost 20,000 individuals found that colorectal adenomas (advanced and non-advanced) were associated with increased levels of triglycerides while ApoA-1 and HDL cholesterol were linked to non-advanced adenomas [91].
• A positive relationship has been observed between phosphatidylserine and lyso-phosphatidylserine and lung cancer prevalence and a negative correlation with lyso-phosphatidylethanolamine and phosphatidylethanolamine and lung cancer. Furthermore, this study identified that the lipidomic profile varied between different subtypes of lung cancer [92].
• A lipidomic analysis identified 64 potential lipid biomarkers that were either up or downregulated in the presence of colorectal cancer [93].
• A lipidomic analysis comparing prostate cancer patients with healthy controls identified 35 potential lipid biomarkers for diagnostic use [94].
Manipulation of ferroptosis
• A recent study confirmed SKBr3 breast cancer cells as sensitive to ferroptosis using the ferroptosis inhibitors deferoxamine and ferrostatin-1 [95].
• A study confirmed the sensitivity of acute lymphoblastic leukemia cells to ferroptosis induced through RSL3 treatment. Furthermore, ferroptosis and lipid peroxidation were prevented through Ferrostatin-1 treatment and lipoxygenase inhibition [96].
The Changing Profile of Lipids during Sperm Maturation
Spermatozoa are highly specialized cells that are formed in the testes through a process known as spermatogenesis [97]. During spermatogenesis, spermatogonial stem cells undergo multiple phases of mitotic and meiotic divisions before entering a complex remodeling process known as spermiogenesis. Collectively, these processes culminate in the production of morphologically mature spermatozoa, with a head domain containing the paternal genome and a flagellum responsible for the propagation of motility (Figure 2). Following testicular development, spermatozoa enter the male reproductive tract (epididymis) and begin an important phase of post-testicular maturation, during which they acquire the capacity for forward progressive motility [98] and shed their cytoplasmic droplets [99] before being stored in the distal epididymis in preparation for ejaculation [100]. Critically, a final stage of sperm maturation, termed capacitation, occurs in the female reproductive tract and is responsible for endowing the sperm cell with the ability to fertilize the ovulated oocyte [101,102]. Studies completed in numerous animal models have highlighted marked changes in the lipid composition of male germ cells during all stages of their development. Key among these changes are alterations in PUFA content as germ cells complete meiosis [103,104]. Specifically, the transformation of germ cells during their transition from spermatocytes to spermatids (Figure 2) is accompanied by a significant enrichment in the PUFA docosapentaenoic acid [104]. This change occurs in concert with increases in the abundance of several alternative long-chain PUFAs among the lipid content of round spermatids (Oresti et al., 2010). Whilst the physiological consequences of these changes have yet to be fully elucidated, their importance is alluded to by studies in mice, which have shown that a complete knockout of the delta-6 desaturase enzyme responsible for PUFA synthesis results in an infertility phenotype associated with spermatogenic failure [105]. Notably, however, supplementation of the diet of these mice with DHA, but not AA, was effective in rescuing this phenotype, leading to significant improvements in sperm concentration and morphology [105]. Similarly, male mice deficient in leptin receptors, key regulators of lipolysis, suffer from infertility and dysregulated spermatogenesis [106,107]. Moreover, dietary supplementation with medium-chain triglycerides improved the fidelity of spermatogenesis, such that these animals showed improvements in epididymal sperm concentration and motility compared to their leptin-deficient counterparts fed a control diet [106].
On their release from the testes, spermatozoa have been rendered both transcriptionally and translationally silent [108], yet still require substantial additional remodeling before gaining the functional competence to engage in oocyte interactions. It has long been known that this functional transformation is accompanied by pronounced changes in the lipid architecture of the cell [109,110], suggesting that dynamic lipid remodeling is an important facet of both epididymal maturation and capacitation. In early studies of the ram, it was demonstrated that epididymal spermatozoa possess a significant enrichment in ω−3 fatty acids, such as DHA, compared to that of their testicular counterparts while the opposite trend was observed for AA, the levels of which were instead significantly reduced during post-testicular maturation [109]. More recent work has concluded that the overall fatty acid content is increased in canine spermatozoa during their passage from the proximal (caput) to distal (cauda) segments of the epididymis [111]. The nature of this increase included enrichment of saturated fatty acids, mono-, and poly-unsaturated fatty acids (e.g., DHA) [111].
On their release from the testes, spermatozoa have been rendered both transcriptionally and translationally silent [108], yet still require substantial additional remodeling before gaining the functional competence to engage in oocyte interactions. It has long been known that this functional transformation is accompanied by pronounced changes in the lipid architecture of the cell [109,110], suggesting that dynamic lipid remodeling is an important facet of both epididymal maturation and capacitation. In early studies of the ram, it was demonstrated that epididymal spermatozoa possess a significant enrichment in ω−3 fatty acids, such as DHA, compared to that of their testicular counterparts while the opposite trend was observed for AA, the levels of which were instead significantly reduced during post-testicular maturation [109]. More recent work has concluded that the overall fatty acid content is increased in canine spermatozoa during their passage from the proximal (caput) to distal (cauda) segments of the epididymis [111]. The nature of this increase included enrichment of saturated fatty acids, mono-, and poly-unsaturated fatty acids (e.g., DHA) [111].
While the precise mechanisms responsible for promoting changes in the sperm lipid composition remain to be established, mounting interest has focused on the potential involvement of extracellular lipid vesicles or 'epididymosomes' [112], which are capable of delivering alternative cargo (e.g., proteins and small non-coding RNAs (sRNA)) to epididymal spermatozoa [113][114][115]. These extracellular vesicles possess high levels of cholesterol and sphingomyelin, which promote the formation of ordered membrane subdomains known as lipid rafts [112] and may play a role in coordinating their interaction with compatible sperm membrane domains [115]. While it is well known that extracellular vesicles are often enriched in lipids that differ from those of their parent cells [116], little is currently known regarding the lipid content of either the parent epididymal epithelial cells from which they originate or the epididymosomes themselves. Despite this, it is intriguing that the phospholipid content of epididymosomes differs based on the epididymal segment from which they are secreted [117]. Indeed, mouse epididymosomes isolated from the cauda epididymis are characterized by significantly lower proportions of phospholipids (such as phosphatidylcholine and phosphatidylethanolamine) but higher sphingomyelin than equivalent epididymosomes collected from the upstream caput segment [117]. Such changes coincide with alterations in the cholesterol to phospholipid ratio of epididymosomes [117], which mirror those recorded in epididymal spermatozoa. These findings encourage speculation that epididymosomes may regulate the lipid composition of epididymal spermatozoa in preparation for their extended storage in the male reproductive tract and their encounter with the female reproductive tract after ejaculation [113].
It is well established that the ascent of spermatozoa through the female reproductive tract is accompanied by a further wave of dynamic changes in their membrane lipid composition. Chief among these changes are the efflux of cholesterol and the resultant increase in membrane fluidity, permeability, and fusibility characteristics, which signal the onset of capacitation [118–120]. Cholesterol removal is also permissive of membrane remodeling, including the repositioning of receptors and fusion machinery needed to prime the sperm cell for acrosomal exocytosis and downstream oocyte interactions [121–124]. Although the mechanisms by which sterols are depleted during capacitation are not established for all species, in porcine and mouse spermatozoa, bicarbonate-induced ROS formation appears to promote the oxidation of sterols at the sperm surface. The increased hydrophilicity of the oxysterol products so formed enhances their transfer to albumin acceptors [124]. Additionally, studies of human spermatozoa have reported the oxysterol 25-hydroxycholesterol as a potential biomarker of sperm function [125]. Indeed, in a lipidomic analysis of oxysterols, 25-hydroxycholesterol was found at the highest concentrations in normozoospermic sperm. Furthermore, 25-hydroxycholesterol levels positively correlated with sperm concentration [125].
Another consequence of cholesterol depletion from capacitating spermatozoa is the redistribution of lipid raft microdomains [126]. This redistribution appears to follow an anterior gradient such that lipid rafts, and their encapsulated cargo, tend to accumulate in the sperm head following the induction of capacitation [127,128]. It has been argued that this phenomenon positions sperm receptors appropriately for their interaction with cognate oocyte ligands during fertilization [129]. It follows that the tracking of key components of raft microdomains, such as the GM1 ganglioside, can provide important insight into the capacitation status of spermatozoa and potentially distinguish between fertile and infertile samples [130]. Furthermore, the application of high-resolution atomic force microscopy has allowed for the visualization and tracking of key lipid components, such as those elements associated with membrane rafts, on the sperm surface during key stages of their functional maturation [131]. These collective studies demonstrate that lipids play an essential role in the development and maturation of the male gamete; accordingly, we shall next discuss established links between lipids and male fertility and review the literature pertaining to the role of lipid-modulating enzymes in effecting changes in the sperm lipidome.
The Role of Lipids and Lipid-Associated Proteins in Spermatozoa and Infertility
Alongside the changing profile of lipids during sperm maturation, numerous studies have begun to highlight the important impact of lipids and lipid-modulating enzymes on fertility. As previously mentioned, lipids play an essential role in the cellular stress pathway that culminates in membrane breakdown and the production of highly reactive and cytotoxic lipid peroxidation products, such as aldehydes. Further, oxidative stress has long been established as a contributing factor to male infertility [8,132–136]. Recently, links have been drawn between the action of lipoxygenase enzymes and lipid peroxidation cascades within the male germline. Moreover, the targeted inhibition of arachidonate 15-lipoxygenase (ALOX15) with PD146176 has proven successful in reducing lipid peroxidation and cellular stress in both human and mouse germ cell models [137,138]. Additionally, PD146176 treatment can afford protection to human sperm functionality under conditions of oxidative stress, with notable improvements having been recorded in sperm motility, acrosome reaction rates, and adherence to the zonae pellucidae post-treatment [138].
Additionally, round spermatids have been found to display acute sensitivity to ferroptosis induced by either erastin treatment or RSL3-mediated inhibition of GPX4 activity [139]. Importantly, this study also highlighted that the targeted inhibition of ACSL4 and ALOX15 (with rosiglitazone and PD146176, respectively) successfully protected round spermatids against lipid peroxidation and ferroptotic cell death [139]. Lipoxygenases have also been linked to infertility pathologies, such as asthenozoospermia (defined as low levels of sperm motility) [140,141]. Interestingly, levels of arachidonic acid, a dominant lipoxygenase substrate, have been reported as being 1.2-fold higher in asthenozoospermic spermatozoa compared to levels recorded in the sperm of healthy individuals [142]. Furthermore, the increased arachidonic acid in these samples was accompanied by an attendant 1.5-fold increase in the ALOX15 metabolite, 15-HETE (15-hydroxyeicosatetraenoic acid), thus alluding to an important role for lipoxygenase-catalyzed metabolism of arachidonic acid within infertile patient samples [142].
Within the sperm cell itself, the distribution of PUFAs has been reported to vary between the head and the principal piece of the tail. Work completed on primate spermatozoa has uncovered dramatically increased levels of PUFAs in the sperm tail compared to the head, leading to the proposal that these lipids may modulate sperm motility via improved membrane fluidity [143]. In extrapolating this model, correlative links have been established between lipid profiles and sperm motility in porcine models, wherein PUFAs (docosahexaenoic acid and docosapentaenoic acid) were detected at significantly higher levels in spermatozoa with normal motility than in those with poor motility [144]. Additionally, in human sperm cells, a large accumulation of PUFAs, such as DHA, was found to be present in the sperm head and is predicted to be involved in sperm maturation or interactions with the oocyte [145]. Moreover, patients presenting with idiopathic infertility had significantly lower levels of DHA as a proportion of total sperm lipids compared to high-quality control cells pelleted by density gradient fractionation [145]. Notably, however, lower-quality sperm partitioning within the density gradient were reported to have higher levels of both ω-3 and ω-6 fatty acids, irrespective of whether they originated from healthy or infertile donor samples [145]. This may be indicative of retention of the cytoplasmic droplet (found in immature spermatozoa) in these samples. Additionally, altered levels of stearic and polyunsaturated fatty acids within spermatozoa and seminal plasma samples have been reported in infertile patients [146]. Although additional work is clearly needed to establish reference values, this collective evidence suggests the utility of assessing lipid profiles as a potential strategy by which to screen the quality of an individual's spermatozoa.
An additional dividend of this strategy is that altered lipid profiles associated with sperm dysfunction and infertility are likely influenced by the overall health status of an individual. By way of example, patients suffering from the vision impairment retinitis pigmentosa were found to exhibit lower levels of DHA within their erythrocytes [147]. Furthermore, these patients also presented with significant reductions in the DHA content of their spermatozoa and concomitant abnormal semen parameters, such as lowered sperm count and motility [147]. Other studies have drawn intriguing links between the interplay of environmental factors, lipid stress, and male fertility. Thus, a study focusing on patients with varicocele-induced infertility found that the severity of this condition was exacerbated by exposure to cigarette smoke [148]. Specifically, the burden of DNA damage and lipid peroxidation was found to be increased in the spermatozoa of varicocele patients who smoked at moderate to heavy levels [148]. In a similar context, epidemiological studies have raised the prospect of a causative link between the lipid composition of human spermatozoa, overall semen quality, and a male's body mass index (BMI) [149]. In that work, increased levels of sperm DHA were positively correlated with normal sperm morphology while, conversely, negative correlations were identified between sperm DHA and various sperm defects (including DNA damage) and between sperm DHA and BMI [149]. Such analyses are entirely consistent with an extensive body of literature highlighting the negative impacts of obesity on male fertility [150–152]. Furthermore, they also accord with data showing that dietary supplementation with different lipid formulations can influence sperm quality in both animal and human studies [153,154]. Such compelling evidence emphasizes the value of understanding body-wide lipid homeostasis in order to provide new insight into the dysfunction of sperm development and maturation that gives rise to idiopathic infertility. An important focus for such research may be the PUFA family, and in particular DHA, which are not only instrumental in the development of the male germline but are also commonly differentially accumulated in the dysfunctional gametes of male infertility patients (Table 2).

Table 2. Temporal accumulation of PUFAs and consequences of their dysregulation (↑, increased; ↓, decreased).

Docosahexaenoic acid (DHA, 22:6)
↑ Increased levels of DHA in ram spermatozoa collected from the epididymis compared to the testes [109].
↑ Increased levels of DHA in dog spermatozoa isolated from the distal versus proximal epididymis [111].
↑ Extremely high levels of DHA found in the monkey sperm tail compared to the sperm head [143].
↓ Reduced levels of DHA correlated with low motility in boar spermatozoa compared to normal motility controls [144].
↓ Significantly lower levels of DHA present in patients with asthenozoospermia and oligozoospermia compared to normozoospermic controls [145].
↓ Significantly lower levels of DHA reported in infertile human semen samples compared to healthy controls [146].
↓ Patients presenting with retinitis pigmentosa had significantly lower levels of DHA within their sperm [147].
↓ A negative correlation was reported between human sperm DHA levels, DNA damage, and BMI; DHA content correlated positively with normal semen parameters such as sperm count, vitality, and motility [149].

Docosapentaenoic acid (DPA, 22:5)
↓ Reduced levels of DPA correlated with boar spermatozoa presenting with low motility compared to normal motility controls [144].

Eicosapentaenoic acid (20:5)
↓ Significantly lower levels reported in human semen samples from infertile individuals compared to healthy controls [146].

Arachidonic acid (AA, 20:4)
↑ Increased levels of AA observed in the seminal plasma of human patient samples with asthenozoospermia compared to healthy controls [142].
↑ Increased levels of AA found in the tails compared to the heads of monkey spermatozoa [143].
↓ Reduced AA levels observed in ram spermatozoa from the epididymis compared to the testes [109].
↓ Patients presenting with retinitis pigmentosa had significantly lower levels of AA within their sperm [147].

Dihomo-γ-linolenic acid (DGLA, 20:3)
↑ Higher levels of DGLA found in the tails compared to the heads of monkey spermatozoa [143].
↑ Significantly higher levels of DGLA reported in infertile human semen samples compared to healthy controls [146].
↓ Patients presenting with retinitis pigmentosa had significantly lower levels of DGLA within their spermatozoa [147].

γ-Linolenic acid (18:3)
↑ Higher levels found in the tails compared to the heads of monkey spermatozoa [143].
Analytical Lipid Technologies and Their Potential Application to Infertility Research
A common theme to emerge from our preceding summary of the contribution of lipids to male germ cell biology is that the application of lipid-based technologies to aid in the diagnosis, prevention, and understanding of male infertility lags far behind that of other health disciplines. In this final section, we shall briefly outline how a resolute focus on lipids may enhance our understanding of idiopathic male infertility and reproductive health more broadly.
The structural and functional competence of biological membranes is achieved, in part, through the astonishing diversification of phospholipids and their major components. Only through the advent of contemporary liquid chromatography mass spectrometry (LC-MS)-based lipidomics and the characterization of oxidatively modified lipids has this diversity come to be fully appreciated. Glycerophospholipids are the major class of phospholipids, in which one or two fatty acids are attached at the sn-1 (for saturated and mono-unsaturated fatty acids) or sn-2 positions (for PUFAs) of the glycerol backbone, with a polar group at the sn-3 position. Glycerophospholipids can then be further classified based on the nature of this polar group [155]. Most eukaryotic cells synthesize PUFAs from saturated fatty acids through the action of elongases, which add an ethylene group, or desaturases, which insert a double bond in the fatty acids (as reviewed in [155,156]). PUFAs have received considerable attention in reproductive biology due to the understanding that sperm cells become enriched in these easily oxidized substrates during maturation while simultaneously losing cytoplasmic antioxidant content during spermiogenesis. Here, it is the weak C-H bond at the bis-allylic position in PUFAs that is susceptible to hydrogen abstraction and forms the first intermediate of both enzymatic and non-enzymatic lipid peroxidation, the lipid radical [157]. Specific roles for oxygenated derivatives of PUFAs, including leukotrienes and lipoxins, in the recruitment of immune cells and the resolution of inflammation have been assigned through LC-MS protocols that use reverse-phase LC and electrospray MS. Moreover, the oxidation of lipids also produces secondary products with shortened hydrocarbon chains (such as reactive electrophiles like 4HNE) that can be detected, despite their low abundance, using enrichment methods coupled to MS, such as biotin hydrazide affinity capture [158] and various click chemistry approaches [159]. While these approaches have not been widely used in reproductive biology, the burgeoning interest in lipid aldehydes and their ability to modify essential macromolecules in both the male [7,160,161] and female germline [162,163] will likely see the uptake of these technologies to answer questions surrounding germ cell aging and other reproductive pathologies.
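To make the shorthand classification concrete, the following toy Python sketch (our own illustration, not drawn from the original study) parses a common glycerophospholipid abbreviation such as PC(16:0/22:6), in which the head-group class is followed by the sn-1 and sn-2 chains written as carbons:double bonds, and counts the bis-allylic positions on the sn-2 chain that are vulnerable to hydrogen abstraction.

```python
import re

# Toy parser for glycerophospholipid shorthand such as "PC(16:0/22:6)":
# a head-group class (e.g., PC, attached at the sn-3 position) followed by
# the sn-1 and sn-2 acyl chains written as carbons:double bonds.
LIPID_PATTERN = re.compile(r"^([A-Za-z]+)\((\d+):(\d+)/(\d+):(\d+)\)$")

def parse_glycerophospholipid(shorthand):
    match = LIPID_PATTERN.match(shorthand)
    if match is None:
        raise ValueError(f"unrecognized shorthand: {shorthand}")
    head = match.group(1)
    c1, db1, c2, db2 = (int(g) for g in match.groups()[1:])
    return {
        "head_group": head,    # determines the phospholipid class
        "sn1": (c1, db1),      # usually saturated or monounsaturated
        "sn2": (c2, db2),      # PUFAs are typically esterified here
        "total_carbons": c1 + c2,
        "total_double_bonds": db1 + db2,
        # a methylene-interrupted chain with n double bonds has n - 1
        # bis-allylic C-H sites, the positions most prone to abstraction
        "bis_allylic_sites_sn2": max(db2 - 1, 0),
    }

print(parse_glycerophospholipid("PC(16:0/22:6)"))  # a DHA-containing PC
```

For a methylene-interrupted PUFA with n double bonds there are n − 1 bis-allylic methylenes, which is why DHA (22:6), with five such sites, is especially oxidation-prone.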
While various types of MS have become the most accurate and sensitive quantitative methods for studies of lipid composition, the analysis of oxidized lipids remains a formidable task. The reasons for this are extensively discussed in excellent recent reviews [155,164], but in short, the difficulty stems from the heterogeneity of the oxidized products, their susceptibility to degradation, the incredibly large number of isobaric oxidized lipid species, and, until recently, a lack of accurate and available internal standards. The soft ionization techniques electrospray ionization (ESI) and matrix-assisted laser desorption ionization (MALDI) have both been used extensively for the analysis of phospholipids and their oxidation products [165,166]. Coupled with LC and using a targeted approach to analysis with additional confirmation by fragmentation, ESI techniques have allowed for high specificity and sensitivity, though some isobaric oxidized species require further fragmentation or hydrolysis. Additionally, analysis of the hydrolyzed fatty acids may also be required to accurately determine the position of the oxygenated groups [167,168].
Often, the level of structural detail required to map oxidation sites and the nature of isobaric oxidized lipid species is still refractory to high-throughput or automated analysis. However, elegant two-dimensional chromatography approaches, whereby lipids are first separated by class under normal-phase hydrophilic interaction liquid chromatography (HILIC) conditions and then further separated by their hydrophobicity via reverse-phase analysis in the second dimension, have provided a powerful strategy to identify low-level oxidized lipid species [169]. Moreover, the development of higher resolution orbitrap instruments, such as the ThermoFisher Fusion Lumos, is highly permissive for the detection, unequivocal identification, and quantitation of oxidized phospholipids in cells, an example of which is described in [155].
Ultimately, the effectiveness and ease of uptake of these technologies rely on both the translation of the data collected into biologically relevant findings and the ability to integrate these data with those obtained for the lipid-modulating proteome, the metabolome, and the transcriptome. The complexity of redox modifications in the lipidome necessitates more detailed systems biology approaches for lipid oxidation than those already well established for proteomics and transcriptomics. Bioinformatics packages for the interpretation of redox lipid data, including LipidMatch [170] and LipidPioneer [171], focus on pre-processing for peak alignment and integration, building databases of oxidized lipids, and automatically identifying these species in LC-MS data. These tools (summarized in [164]), and importantly the development and updates of LipidMaps for the standardization of lipid analysis [168], have greatly aided data processing in many fields but still require some progress before they can be used effectively across all research fields. A great step forward has been the attempt to integrate redox biology data into pathway analysis, with the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database now containing information on the role of oxidized lipids in lipid peroxidation, inflammation, linoleic and arachidonic acid metabolism, and ferroptosis, amongst others [155]. An example of the power of integrating lipidomic, proteomic, and transcriptomic data lies in a recent study by Parker et al., in which lipid regulatory networks were examined in a large cohort of genetically distinct mouse strains to unveil new insight into the control and structure of mammalian lipid metabolism. This study established protein and genetic variants that are predicted to alter lipid abundance and has provided an important resource for probing lipid networks, especially in relation to hepatic lipotoxicity [172].
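As a purely conceptual illustration of the database-matching step described above (this is our own minimal sketch, not the LipidMatch or LipidPioneer API, and the masses and tolerance are illustrative approximations), one can annotate an LC-MS peak list against an in-silico list of oxidized lipid masses as follows:

```python
# Conceptual illustration only: a toy in-silico search that annotates LC-MS
# peaks against a small oxidized-lipid mass list, in the spirit of (but not
# using) tools such as LipidMatch. Masses and tolerance are illustrative.
PROTON = 1.007276  # proton mass, for [M+H]+ adducts

candidate_db = {
    "PC(16:0/22:6)": 805.5622,      # neutral monoisotopic mass (approx.)
    "PC(16:0/22:6)+O": 821.5571,    # one added oxygen (hydroxide/epoxide)
    "PC(16:0/22:6)+2O": 837.5520,   # hydroperoxide-level oxidation
}

def annotate_peaks(mz_list, tolerance_ppm=10.0):
    """Match observed [M+H]+ m/z values to database entries within a ppm window."""
    hits = []
    for mz in mz_list:
        for name, neutral_mass in candidate_db.items():
            expected = neutral_mass + PROTON
            if abs(mz - expected) / expected * 1e6 <= tolerance_ppm:
                hits.append((mz, name))
    return hits

print(annotate_peaks([806.5695, 822.5644]))
```

Real pipelines additionally handle multiple adducts, isotope patterns, retention-time alignment, and fragmentation evidence, which is precisely where the dedicated packages cited above earn their keep.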
Despite the labor involved in ensuring accuracy and deriving meaning from complex lipidomic data, the potential advance in our understanding of reproductive biology and fertility warrants the growth of expertise necessary for the uptake of lipid technologies in our field. Furthermore, we are now in a unique position where the groundwork for these complex studies has largely been conducted in neighboring biological fields and permits the introduction of redox lipidomics and other lipid technologies to germ cells and reproductive tissues. Herein, we will deliberate on the potential use of these technologies to answer long-standing questions in reproduction (summarized in Figure 3).

Figure 3. (A) Multi-omics approaches are yet to be used to understand key differences between fertile and infertile sperm samples. Here, we propose that the combination of redox lipidomics and proteomics will yield important insights into the lipid changes that form the basis of infertility and the enzymes that may be responsible for these changes. (B) While the influence of paternal diet on perturbations in the small non-coding RNA (sRNA) cargo of epididymosomes has been the subject of several recent studies, the lipid cargo of these exosomes and of the parent cells following changes in paternal diet has not been examined. Lipidomics approaches may guide our understanding of exosome biogenesis and cargo loading into extracellular vesicles. (C) Mass spectrometry imaging has recently been coupled with lipidomics approaches, allowing a spatial understanding of quantitative lipid changes within tissue. This could be applied to testis tissue to understand region-specific lipid peroxidation or to track the localization of metabolites, hormones, and drugs across the blood-testis barrier to aid in the design of testis-targeted therapies. Image created with BioRender.com.
Redox Lipidomics, Lipid-Targeted Antioxidants, and Male Infertility
The knowledge to be gained from performing a detailed study using redox lipidomics across developing germ cells and spermatozoa is immense considering the substantial proportion of infertility cases that display an oxidative mechanism. Moreover, the field of male reproductive health has been overwrought with antioxidant trials that are yet to yield substantial breakthroughs in therapeutics for infertile individuals [8]. While there are current efforts to manipulate enzymes, such as the lipoxygenases, to prevent widespread lipid peroxidation in germ cells and human spermatozoa [137,138], another attractive strategy is to use redox lipidomics to inform the development of targeted lipid-based antioxidants and/or to strategically fortify redox-sensitive sites via the deuteration of PUFAs. Interestingly, molecules that break the autoxidation of peroxyl radicals have proven capable of averting iron-dependent lipid peroxidation in other cell types. These radical trapping antioxidants (RTAs) inhibit phospholipid hydroperoxide formation and may hold promise for preventing membrane damage in germ cells. Although vitamin E is a natural RTA, a recent high-throughput screen has identified two novel RTAs, ferrostatin-1 and liproxstatin-1, which are comparatively more potent. These two RTAs have also proven highly effective within bilayer structures and are known to alleviate ferroptosis [173]. Additionally, synthetic compounds such as the nitroxide Tempo are capable of inhibiting the production of hydroxyl radicals by blocking the Fenton reaction in mice [174] and warrant further analysis in male germ cells.
An important consideration in developing lipid-based antioxidants for infertility treatment is that the field still lacks a comprehensive understanding of the lipid composition of germ cell and somatic cell membranes within the testis and how this changes both during spermatogenesis and throughout the course of life. While this is a difficult aim to achieve in humans due to limitations in our access to testis material, even mouse studies of the germ cell lipidome remain incomplete or entirely absent. Targeting both the germ cells and somatic cells of the testis will aid our understanding of the interconnectivity of germ cells and Sertoli cells during development and may also allow us to identify germ cell stage-specific markers of stress and how these change with the age of men or under redox stress conditions. Furthermore, applying lipidomics to a range of model species will assist in expanding our understanding of species-specific membrane composition to tailor assisted reproductive technologies (ARTs), such as sperm cryopreservation or in vitro fertilization (IVF), to species that do not respond well to these therapies.
Finally, the sensitivity of redox lipidomics technology is such that it has now been used to generate robust signatures of diverse cell death pathways, such as the oxidation of cardiolipins in apoptosis [175] and the presence of oxidized arachidonic and adrenic acid-containing phosphatidylethanolamines in ferroptosis [176]. This new knowledge provides essential leads to better understand cell death modalities in sperm cells and may allow us to rapidly characterize mechanisms that contribute to sperm cell death in infertile patients (Figure 3A). Moreover, an advanced understanding of cell death pathways under the control of lipid hydroperoxides may eventually be exploited to develop male-targeted contraceptives that are specific to meiotic or post-meiotic germ cells. While these goals will require extensive investigation and validation, the application of redox lipidomics has the potential to shine new light on many key issues of male reproductive health.
The Involvement of Lipids in the Biogenesis and Cargo Loading of Extracellular Vesicles
In the new literature surrounding somatic cell exosomes, it is appreciated that several lipid-related pathways are involved in the biogenesis of exosomes and contribute to the diverse contents of this class of extracellular vesicle. These topics are eloquently reviewed in [116,177,178]. Notably, exosomes are enriched in desaturated molecular species of phospholipids, which account, in part, for their increased membrane rigidity compared to parent cell membranes. Accordingly, exosomes are known to be more resistant to detergent treatment than microvesicles, indicative of a higher membrane lipid order [177]. It has been known for some time that the disruption of plasma membrane lipid organization is critical to allow vesicle formation, and that modification of the outer membrane leaflet by a cholesterol/sphingomyelin-binding protein promotes the formation of microvesicles. Interestingly, the translocation of phosphatidylserine [130] is also a prerequisite for the biogenesis of these microvesicles. New data regarding mechanisms known to enhance exosome production highlight the role of lipid transporters, such as ATP-binding cassette sub-family A member 3 (ABCA3), and the activity of phospholipase D2 (PLD2) and diglyceride kinase. Conversely, the inhibition of phosphoinositide kinases, such as PI3 kinase, has a negative effect on exosome production. In focusing on the contribution of phospholipases, studies of the budding of microvesicles have given credence to the idea that the production of both exosomes and microvesicles could be coordinated by the phospholipases PLD1 and PLD2 [177,179,180]. In the organization of cargo into extracellular vesicles, raft-based microdomains appear important for the lateral segregation of cargo within the endosomal membrane. Such microdomains are known to be enriched in sphingomyelin, from which ceramides can be formed by sphingomyelinases through hydrolytic removal of the phosphocholine moiety. It is thought that the structure of ceramide may induce membrane curvature that in turn promotes domain-induced budding, implicating ceramide-dependent mechanisms in exosome biogenesis [177].
While there is a paucity of mechanistic knowledge regarding the biogenesis of exosomes from the male reproductive tract, the contribution of lipids to such biogenesis processes in reproductive cells is an untouched area of research. Despite this, several studies have contributed to an understanding of the lipid composition of human prostasomes and those of other species [181,182]. Moreover, intriguing effects of the paternal diet on the regulation of the sperm epigenome have been observed, highlighting clear alterations in exosome cargo that are driven by diet [183]. What is missing from these important analyses is the impact of paternal diet on the lipid regulation of exosome biogenesis and cargo loading, where changes in the lipidome of the parent cells, their respective extracellular vesicles, and their vesicular lipid cargo are likely to provide a critical link between dietary perturbation and exosome content. Moreover, it will be critical to examine how dietary lipids may drive changes in exosome production, composition, and membrane fusion, which are yet to be examined in the reproductive field (Figure 3B). This is an exciting area of research and one that has been made possible through the novel use of lipidomics and multi-omics in the study of exosomes in many pioneering studies, including [182,184,185].
Mass Spectrometry Imaging of Lipids and Potential Applications for Reproductive Tissues
Having the tools to accurately detect oxidized phospholipids has led to a better understanding of their roles in both health and disease. However, it is not only the structure of these modified lipids but also their concentration and tissue-specific localization that determine their function. Strikingly, this quest for a spatial understanding of lipids and metabolites within tissues has resulted in the rapid adoption of mass spectrometry imaging (MSI) techniques in lipidomics. MALDI-MSI is a well-established label-free technique that can be used to generate a highly specific, sensitive, and quantitative map of a broad range of biomolecules in cells and tissues [186,187]. The tissue used in these experiments is usually cryo-sectioned, mounted, and subsequently coated with a matrix that extracts analytes from the tissue and co-crystallizes with them. MALDI MS is then used to scan specific regions of the tissue in an array of discrete points or 'pixels', and images are reconstructed from the mass spectra acquired at these points [155]. These pixels can range from microns to nanometers in size depending on the instrument type, sample preparation, and analyte abundance [188]. MALDI-MSI has already been demonstrated to be a powerful technique for the spatial localization of phospholipids across many tissue types, and its range of clinical applications is expanding to fill an important gap between high-throughput '-omics' technologies and classic histology [189]. A very recent example of its use in a discovery/pre-clinical setting is the identification of lipid markers for traumatic brain injury, where acylcarnitines (often indicative of mitochondrial damage) were revealed to be key markers that co-localized with microglia in the brain [190]. Importantly, these authors also identified that an increase in acylcarnitine lipids could be found in the region of the brain affected by Parkinson's disease [190].
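To illustrate the pixel-wise reconstruction just described, the toy sketch below (our own illustration; real MALDI-MSI data would be read from instrument formats such as imzML, which are not handled here) builds a single-ion intensity image by integrating each pixel's spectrum over a narrow m/z window:

```python
import numpy as np

def ion_image(spectra, mz_axis, target_mz, half_window=0.05):
    """spectra: array of shape (rows, cols, n_mz) holding one spectrum per
    pixel; returns a (rows, cols) intensity image for the chosen ion by
    summing intensities within target_mz +/- half_window."""
    window = (mz_axis >= target_mz - half_window) & (mz_axis <= target_mz + half_window)
    return spectra[:, :, window].sum(axis=2)

rng = np.random.default_rng(0)
mz_axis = np.linspace(700.0, 900.0, 4000)
spectra = rng.random((32, 32, mz_axis.size))  # stand-in for pixel spectra
image = ion_image(spectra, mz_axis, target_mz=806.57)
print(image.shape)  # (32, 32) intensity map for the chosen lipid ion
```

The spatial resolution of the resulting map is set by the pitch of the pixel raster, which is why the instrument developments discussed next matter so much for subcellular work.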
While protocols are still largely under development for the imaging of oxidized lipids, contemporary MALDI and gas cluster ion beam-secondary ion mass spectrometry (GCIB-SIMS) imaging may permit a high enough spatial resolution for this to become a distinct reality [155]. Additionally, these technologies are now being used to image lipids in single cells and to achieve the subcellular imaging of individual lipids. Indeed, recent work in single cells has captured the 3D spatial distribution of phospholipid classes, including phosphatidylcholine (PC), PE, and phosphatidylinositol (PI), in newly fertilized zebrafish embryos across various stages of development [191]. This was made possible through high spatial resolution MALDI protocols that can achieve a resolution of 5 µm [191]. In the context of the testis, it is easy to envisage many applications of MSI to understand the spatial distribution of lipids following invasive surgical procedures or to monitor lipid damage following torsion events or varicocele in men (Figure 3C). Moreover, single cell MSI and the monitoring of lipid peroxidation products in testis tissue would provide incredible insight into germ cell pathologies and could potentially provide a novel means to track molecules across the blood-testis barrier. MSI could also be used to monitor the effectiveness of new therapies that are targeted towards lipid stability or the prevention of lipid hydroperoxide production. While some of these experiments rely on new developments in technique sensitivity and the progress of oxidized lipid imaging, MSI has already been used for the spatial localization and quantitation of androgens in the mouse testis in a proof-of-concept experiment [192]. Similar to current applications of MSI for the localization of drugs across the blood-brain barrier, visualization of molecules and metabolites in the testis may become a new way to understand either the penetrability of the blood-testis barrier or the ability of new drugs to reach their target sites. Finally, the use of lipid and metabolite MSI may provide an early indication of damage to reproductive tissues following the administration of novel cancer therapies to patients, where there is currently no application to monitor membrane or lipid health following such procedures.
Conclusions
In summary, we have sought to highlight the diversity of important physiological roles that lipids fulfill in the maintenance of cellular homeostasis. These include fueling the bioenergetics of germ cell metabolism and the dynamic remodeling of germ cell architecture during their functional maturation. We also described the pathological consequences arising from dysregulation of lipid homeostasis and the prospect of utilizing lipid signatures as biomarkers of male factor infertility. In view of these roles, we propose that tangible benefits will flow from increased attention being devoted to the study of sperm lipid composition and the mechanisms responsible for promoting lipidomic changes in the spermatozoa of infertile patients. Indeed, driven by technological advances in lipid-based analytical tools, we are now presented with an exciting window of opportunity to refine our understanding of sperm cell biology. Such knowledge should equip us with rational strategies to diagnose and make progress towards preventing male factor infertility.
Conflicts of Interest:
The authors declare that there is no conflict of interest. | 2020-01-26T14:04:54.302Z | 2020-01-23T00:00:00.000 | {
"year": 2020,
"sha1": "7cf8715dea00b98f7f477d9ee840b6f0069608a6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/9/2/327/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75f0b5b91cc84fcdbc2956681845f8b91e51f992",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261488021 | pes2o/s2orc | v3-fos-license | Electrostatics-Based Computational Design and Experimental Analysis of Buforin II Antimicrobial Peptide Variants with Increased DNA Affinities
Antimicrobial peptides (AMPs) are promising alternatives to traditional antibiotics in the treatment of bacterial infections in part due to their targeting of generic bacterial structures that make it more difficult to develop drug resistance. In this study, we introduce and implement a design workflow to develop more potent AMPs by improving their electrostatic interactions with DNA, which is a putative intracellular target. Using the existing membrane-translocating AMP buforin II (BF2) as a starting point, we use a computational workflow that integrates electrostatic charge optimization, continuum electrostatics, and molecular dynamics simulations to suggest peptide positions at which a neutral BF2 residue could be substituted with arginine to increase DNA-binding affinity either significantly or minimally, with the latter choice done to determine whether AMP binding affinity depends on charge distribution and not just overall monopole. Our analyses predicted that T1R and L8R BF2 variants would yield substantial and minimal increases in DNA-binding affinity, respectively. These predictions were validated with experimental peptide-DNA binding assays, with additional computational analyses providing structural insights. Additionally, experimental measurements of antimicrobial potency showed that a design to increase DNA binding can also yield greater potency. As a whole, this study takes initial steps to support the idea that (i) a design strategy aimed to increase AMP binding affinity to DNA by focusing only on electrostatic interactions can improve AMP potency and (ii) the effect on DNA binding of increasing the overall peptide monopole via arginine substitution depends on the position of the substitution. More broadly, this design strategy is a novel way to increase the potency of other membrane-translocating AMPs that target nucleic acids.
■ INTRODUCTION
Although the development of new antibiotics has dramatically slowed in recent decades, there is an acute need for developing new and improved therapeutic approaches for treating bacterial infections. 1,2 Recognizing that need, many researchers have noted the promise of antimicrobial peptides (AMPs) as a potential alternative to conventional antibiotics. 3,4 These AMPs are small, typically cationic proteins that can show a broad range of activities against bacterial strains. Many AMPs primarily target the cell membrane in their mechanism of action, causing bacterial cell death through membrane permeabilization. 5,6−19 Despite the potential therapeutic promise of AMPs, one challenge to their broader use in therapeutic applications has been the difficulty in rationally designing more active peptides, in part due to the heterogeneous and amorphous nature of the bacterial cell membrane targeted by many peptides. However, AMPs with intracellular targets may provide more ready opportunities for rational design.
In this work, we present a series of initial design studies focused on the AMP buforin II (BF2) (Figure 1). BF2 is one of the most thoroughly studied AMPs believed to have an intracellular target. 20−23 In previous work, we used a combination of molecular dynamics (MD) simulations and electrostatics calculations to characterize the interactions between BF2 and its potential nucleic acid targets, 23−25 an approach similar to others used to analyze DNA-binding affinity for various AMPs. 13,26,27 Here, we focus on developing a computational framework to predict BF2 arginine mutations that increase BF2 affinity to nucleic acids. Our framework builds on the assumption that the affinity between BF2 and nucleic acids is governed primarily by electrostatic interactions and that the rational substitution of a neutral residue with arginine at particular positions may predictably modulate BF2/DNA electrostatic binding affinity through changes in both monopole and charge distribution. The choice to consider arginine rather than lysine substitutions to alter monopole is driven by prior work demonstrating that BF2 variants with Lys → Arg substitutions generally showed increased binding affinity. 29 Our workflow combines MD simulations and continuum electrostatics calculations integrated with charge optimization protocols and experimental measurements (Figure 2). We implement this approach to predict and experimentally validate the binding affinities of two BF2 variants, T1R and L8R, with the ultimate goal of employing it to rationally design more active versions of AMPs with nucleic acid targets. Furthermore, by predicting BF2 mutations that lead to differing DNA affinity (with T1R predicted to improve binding robustly, while L8R is predicted to do so minimally, if at all), these studies allow us to further evaluate the hypothesis that the enhanced DNA affinity of designed BF2 variants is peptide sequence-specific and not merely due to increasing the overall monopole charge on the peptide.
■ METHODS
As described above, Figure 2 shows the overall design workflow used in this work. For the MD simulations used in steps 1 and 3, the GROMACS software package 30−33 (versions 5.0.5, 2016.3, and 2018.3) was used. For the continuum electrostatics calculations used in steps 2 and 4, a single-grid red-black successive over-relaxation finite difference solver of the linearized Poisson−Boltzmann equation (LPBE) 34 was used (with Delphi version 8.4 35 additionally used for validation). Throughout the process, structures were visualized using PyMol 36 and VMD, 37 and data analyses and plots were done using Matlab (The Mathworks, Inc., Natick, MA) and Microsoft Excel (Microsoft, Inc., Redmond, WA). Other software specific to a given step in the workflow is described below.
Here, we describe the specific methods used within each step. Step 1: MD Simulations To Create Initial WT BF2/DNA Ensembles. The MD simulations used to generate WT conformational ensembles were carried out as part of previously published work. 24 To briefly summarize, the starting model for BF2 (TRSSRAGLQWPVGRVHRLLRK, with an F10W mutation included in past studies for spectroscopic experimental assays) complexed with a 21-base pair DNA duplex was obtained via homology modeling from a segment of a nucleosome core particle complexed with a fragment of DNA (PDB ID 1AOI 38). The histidine at position 16 was modeled as the neutral delta tautomer, and all titratable groups were modeled in standard titration states at physiological pH, yielding an overall BF2 monopole of +6. After 100 steps of steepest descent minimization, three replicate simulations were carried out for 250 ns each using GROMACS 30−33 with AMBER03 force field parameters 39 and the TIP4P-EW water model. 40 These simulations had a cubic box with an edge length of 8.9 nm (yielding roughly 91,000 atoms including the fourth fictitious water site), a 0.1 M NaCl concentration beyond initial charge neutralization, and an equilibrium temperature and pressure of 310 K (maintained by a velocity-rescaling thermostat 41) and 1 atm (maintained by a Berendsen barostat), respectively. A 2 fs time step was used, with all bonds constrained via the LINCS algorithm. 42 Cutoffs for van der Waals interactions and for switching to long-range particle mesh Ewald electrostatics 43 were set to 10 Å.
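For orientation, the sketch below writes a GROMACS .mdp fragment consistent with the parameters just described; the option names are standard GROMACS keys, but the coupling groups and time constants (tc-grps, tau-t, tau-p) are illustrative assumptions not specified in the text.

```python
# Minimal sketch of the run parameters described above; values not stated
# in the text (tc-grps, tau-t, tau-p) are assumptions for illustration.
mdp_options = {
    "integrator": "md",
    "dt": 0.002,                      # 2 fs time step
    "nsteps": 125_000_000,            # 250 ns at 2 fs per step
    "constraints": "all-bonds",       # constrain all bonds ...
    "constraint-algorithm": "lincs",  # ... via the LINCS algorithm
    "coulombtype": "PME",             # particle mesh Ewald electrostatics
    "rcoulomb": 1.0,                  # 10 A cutoff before switching to PME
    "rvdw": 1.0,                      # 10 A van der Waals cutoff
    "tcoupl": "v-rescale",            # velocity-rescaling thermostat
    "tc-grps": "Protein_DNA Water_and_ions",
    "tau-t": "0.1 0.1",
    "ref-t": "310 310",               # 310 K
    "pcoupl": "berendsen",            # Berendsen barostat
    "tau-p": "2.0",
    "ref-p": "1.0",                   # ~1 atm (GROMACS takes bar)
}

with open("bf2_dna_md.mdp", "w") as handle:
    for key, value in mdp_options.items():
        handle.write(f"{key:24s} = {value}\n")
```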
Step 2: Charge Optimization. From each simulation in step 1, snapshots were extracted every 3.5 ns beginning at 200 ns, for a total of 15 snapshots per replicate, or 45 snapshots total in the ensemble used for charge optimization. For each extracted snapshot, constrained, partial charge optimization was carried out to serve as a coarse filter to efficiently estimate the effect on the BF2/DNA electrostatic binding free energy if each neutral BF2 side chain were, in turn, to bear a positive charge.
Electrostatic charge optimization 44,45 uses a continuum electrostatics framework to determine the theoretical charge distribution on a molecule or molecular moiety that maximizes its affinity to a given binding partner. Conceptually, it optimally balances unfavorable polar desolvation costs with favorable electrostatic intermolecular interactions to yield a minimum in the electrostatic binding free energy. The theory has been comprehensively studied, 46,47 applied, 48−54 and reviewed 55 elsewhere, and here we provide a very brief overview.
Using a rigid binding model within the continuum electrostatic framework, in which binding partners are represented by low-dielectric cavities with (atom-centered) embedded point charges within a high-dielectric solvent, the electrostatic binding free energy can be written as a sum of three matrix/vector products:

$$\Delta G^{0}_{\mathrm{elec}} = \mathbf{q}_P^{T} P\,\mathbf{q}_P + \mathbf{q}_P^{T} C\,\mathbf{q}_D + \mathbf{q}_D^{T} D\,\mathbf{q}_D \tag{1}$$

where $\mathbf{q}_P$ and $\mathbf{q}_D$ are vectors containing the peptide and DNA point charges, respectively, and P, C, and D are the unit potential difference matrices for the peptide desolvation, solvent-screened complex interaction, and DNA desolvation upon binding, respectively. In this work, we wished to optimize only the atoms within a single peptide side chain at a time, leaving all other peptide charges and DNA charges fixed. In such a case, $\mathbf{q}_P$ can be partitioned into a variable portion, $\mathbf{q}_s$, and a fixed portion, $\mathbf{q}_t$, and eq 1 can be equivalently expressed as follows:
$$\Delta G^{0}_{\mathrm{elec}} = \mathbf{q}_s^{T} P_{ss}\,\mathbf{q}_s + 2\,\mathbf{q}_s^{T} P_{st}\,\mathbf{q}_t + \mathbf{q}_t^{T} P_{tt}\,\mathbf{q}_t + \mathbf{q}_s^{T} C_s\,\mathbf{q}_D + \mathbf{q}_t^{T} C_t\,\mathbf{q}_D + \mathbf{q}_D^{T} D\,\mathbf{q}_D \tag{2}$$

Once the matrix elements have been determined, eq 2 provides an analytical expression for calculating the electrostatic binding free energy as a function of a particular side chain's charge distribution. This expression can be minimized via standard constrained or unconstrained optimization procedures to determine optimal charges for the corresponding side chain. Matrix elements were computed through solving the LPBE via a single-grid red-black successive over-relaxation finite difference solver. 34 Elements for the unit potential difference matrices were computed by charging each side chain atom to +1 in turn and solving for potentials in the bound and unbound states. When charge distributions were constant (i.e., q_t or q_D), potentials were premultiplied by these distributions to yield vector elements. To solve for each matrix or vector element, a 401 × 401 × 401 grid was used with a two-stage focusing procedure in which the longest dimension of the box containing the system occupied 23% and then 92% of the grid, yielding a grid spacing of ∼2.2 grids/Å at the highest focusing. This resolution has been shown in past 24 and current work to provide reasonable estimates of ΔΔG's that agreed well with those calculated at higher resolutions and/or using the Delphi solver. 35 Inner and outer dielectric constants were set to 4 and 80, respectively, with Bondi radii 56 used to generate the molecular surface with a probe radius of 1.4 Å. The ionic strength was set to 0.1 M. Though this system is highly charged and may therefore require modeling nonlinear ionic strength effects for quantitative accuracy, previous work 24 demonstrated that nonlinear effects were somewhat systematic for these binding partners in a dilute environment, and they were not considered in this design workflow, in which coarse, qualitative predictions were sufficient.
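For intuition, in the unconstrained case the quadratic form of eq 2 can be minimized analytically; using the block notation of our reconstruction above (the subscripted blocks are our labeling, chosen to be consistent with the definitions in the text), setting the gradient with respect to the variable charges to zero gives

$$\nabla_{\mathbf{q}_s}\,\Delta G^{0}_{\mathrm{elec}} = 2P_{ss}\,\mathbf{q}_s + 2P_{st}\,\mathbf{q}_t + C_s\,\mathbf{q}_D = \mathbf{0} \;\;\Longrightarrow\;\; \mathbf{q}_s^{\mathrm{opt}} = -P_{ss}^{-1}\!\left(P_{st}\,\mathbf{q}_t + \tfrac{1}{2}C_s\,\mathbf{q}_D\right),$$

which is well-defined when the desolvation block $P_{ss}$ is positive definite; the constrained optimizations described below add the monopole and per-charge box constraints on top of this landscape.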
Once the matrix elements were obtained for each side chain, constrained convex optimization was carried out using the CONOPT solver 57 (version 3.15G) within the GAMS software package (version 23.9.5, GAMS Development Corporation, Fairfax, VA). In order to roughly assess the potential for binding affinity improvement upon mutation to an arginine, the optimal (minimal) BF2/DNA electrostatic binding free energy was computed upon allowing each originally neutral side chain's charges in turn to vary but constraining the total peptide monopole to either +7 (i.e., increasing that side chain's charge by +1) or +6 (keeping that side chain neutral). Additionally, each varied point charge was constrained to lie within a range of −1e to +1e. We then used the difference between the optimal binding free energies of the +7 and +6 theoretical variants of a given side chain as the metric to roughly estimate the extent to which binding affinity may improve upon a residue's mutation to arginine:

$$\Delta\Delta G^{0}_{\mathrm{opt,elec}} = \Delta G^{0}_{\mathrm{opt,elec}}(+7) - \Delta G^{0}_{\mathrm{opt,elec}}(+6) \tag{3}$$

We note that this is one of many ways to estimate such a quantity, especially because this method is inherently coarse, in part due to its not accounting for the clear steric and conformational changes that would occur upon explicit mutation, but here, we chose to use it as our initial filter prior to more explicit models in subsequent steps.
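As a rough illustration of this constrained minimization (our own sketch using SciPy's SLSQP solver rather than the GAMS/CONOPT setup actually used in the study; the quadratic coefficients are assumed to be precomputed from the LPBE matrix elements), one snapshot's optimization might look like:

```python
import numpy as np
from scipy.optimize import Bounds, minimize

def optimal_binding_energy(P_ss, b, c0, net_charge):
    """Minimize q^T P_ss q + b^T q + c0 over the side-chain charges q,
    subject to sum(q) == net_charge and -1 <= q_i <= +1 (units of e).
    P_ss couples the variable charges; b collects their interactions with
    the fixed peptide and DNA charges; c0 is the charge-independent term."""
    n = len(b)

    def objective(q):
        return q @ P_ss @ q + b @ q + c0

    def gradient(q):
        return 2.0 * (P_ss @ q) + b

    result = minimize(
        objective,
        np.zeros(n),
        jac=gradient,
        method="SLSQP",
        bounds=Bounds(-1.0, 1.0),
        constraints=[{"type": "eq", "fun": lambda q: q.sum() - net_charge}],
    )
    return result.fun

# ddG_opt for one side chain (eq 3): optimum with the side chain carrying
# +1 net charge (peptide monopole +7) minus the neutral optimum (+6).
# ddG_opt = (optimal_binding_energy(P_ss, b, c0, 1.0)
#            - optimal_binding_energy(P_ss, b, c0, 0.0))
```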
Step 3: Replicate MD Simulations To Generate Ensembles of Mutant Variants. Promising arginine variants of BF2 suggested by charge optimization were explicitly modeled and simulated in complex with DNA. Mutant starting structures were generated from our initial starting structure in step 1 using PyMol 36 to create the arginine substitution. For each considered mutant, short replicate MD simulations of 50−100 ns each were carried out using the same parameters described in step 1, except that the box size was roughly 9.3 nm, with ∼107,000 atoms including the fictitious fourth water site. A summary of all simulations is provided in Table 1.
Step 4: Estimation of Relative Binding Energetics via Continuum Electrostatic Calculations. For each mutant and for the WT, snapshots were extracted every 500 ps from either the last 25 ns (in the case of 50 ns simulations) or the last 50 ns (in the case of 100 ns or longer simulations) of each replicate to carry forward for continuum electrostatics calculations. The BF2/DNA electrostatic binding free energy was computed for each WT and mutant snapshot via solving the LPBE using the same methods described in step 2.
Step 5: Experimental Characterization of Peptide/DNA Binding Affinity and Antimicrobial Potency. Wild-type and variant T1R and L8R BF2 peptides selected from our computational process were then tested experimentally for their DNA binding and antimicrobial activity. For those studies, all peptides were synthesized, purified to >95% by GenScript (Piscataway, NJ), and obtained in their salt form with trifluoroacetic acid counterions. Peptide concentrations were determined using the absorbance at 280 nm. Peptide-DNA binding was determined using a fluorescence intercalator displacement (FID) assay employed in previous studies of BF2. 23,25,29,58 Quartz fluorescence cuvettes were rinsed with STE buffer (10 mM Tris, 50 mM NaCl, and 1 mM EDTA, pH 8.0) followed by a thiazole orange (TO) solution (0.55 μM) in STE. 2.5 mL of the 0.55 μM TO solution in STE and 5.80 μL of DNA (31.63 μM) were added to each cuvette. Fluorescence was measured using a Cary Eclipse spectrofluorimeter (excitation 509 nm; emission 527 nm) after letting the sample sit for 5 min at 25 °C. This DNA sample was titrated with 20 μL aliquots of peptide solution (82 ± 4 μM), allowing 5 min after each titration for equilibration before fluorescence readings. Titration with the peptide was continued until fluorescence had decreased to 50% of its original value. A minimum of four independent titrations were performed for each peptide. Temperature was controlled at 25 °C throughout the titration.
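For clarity on how such titration curves can be reduced to a single binding metric, the sketch below (our own illustration, not a published analysis protocol; it assumes dilution-corrected cumulative peptide concentrations and a monotonically decreasing signal) interpolates the peptide concentration at which fluorescence reaches 50% of its initial value:

```python
import numpy as np

def conc_at_half_fluorescence(peptide_conc_uM, fluorescence):
    """Linearly interpolate the peptide concentration at which thiazole
    orange fluorescence falls to 50% of its peptide-free starting value."""
    conc = np.asarray(peptide_conc_uM, dtype=float)
    f_norm = np.asarray(fluorescence, dtype=float) / fluorescence[0]
    i = int(np.argmax(f_norm <= 0.5))  # first point at or below 50%
    if i == 0:
        raise ValueError("titration never crossed the 50% threshold")
    x0, x1 = conc[i - 1], conc[i]      # bracketing concentrations
    y0, y1 = f_norm[i - 1], f_norm[i]  # bracketing normalized signals
    return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
```

Lower crossing concentrations indicate tighter apparent DNA binding, since less peptide is needed to displace half of the intercalated dye.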
TOP10 Escherichia coli (Novagen) containing an ampicillin-resistant plasmid (pET45b) was used for both radial diffusion and microbroth dilution assays of antibacterial activity. All cultures were grown in the presence of 25 μg/mL ampicillin to minimize contamination. Bacterial cultures were grown overnight for 18−24 h in 3% w/v tryptic soy broth (TSB) in a shaking incubator at 37 °C and 150 rpm. The overnight cultures were then diluted 1:100 in 3% w/v TSB. These refresher cultures were incubated in the shaking incubator at the same settings for 2.5−3 h to reach mid-log growth. The bacteria were then harvested by centrifugation at 1500 × g at 4 °C for 10 min and then resuspended in the buffer for the radial diffusion or microbroth dilution assays.
For radial diffusion assays, 10 mL of molten underlay agar (10 mM Na3PO4, 100 mM NaCl, 1% TSB v/v, 1% agarose w/v, pH 7.4) was mixed with 4 × 10⁶ CFU/mL of resuspended bacteria. This solution was vortexed, poured into a Petri dish, and left to solidify. Wells were then formed in this underlay using a bleach trap glass pipet, and each well was filled with 2 μL of 1 × 10⁻⁴ M peptide or DI water. The Petri dishes were incubated gel side up for 3 h in a 37 °C incubator before being covered with 10 mL of molten overlay gel (2.4% w/v TSB, 1% w/v agarose). Once the overlay became solid, Petri dishes were incubated overnight (16−18 h) at 37 °C. The diameter of clearance was measured after 24 h to evaluate the relative antibacterial activity of the peptides. Data for each peptide were collected from at least two independent cultures grown on different days.
Microbroth dilution assays were performed in 96-well plates. A series of twofold peptide dilutions were performed across rows of the plates. 100 μL of liquid testing medium (LTM) and bacteria solution and 10 μL of diluted peptide were added to the wells. The LTM used was STE buffer (10 mM Tris, 50 mM NaCl, 1 mM EDTA, pH 8.0) inoculated with 4 × 10⁶ CFU/mL resuspended E. coli bacteria. Final peptide concentrations ranged from 0.0883 to 93.6 μM. Each concentration was tested in triplicate on each plate. The plates were incubated in a nonshaking 37 °C incubator for 1 h. Then, 100 μL of fresh TSB was added to each well. The plate was incubated in the nonshaking incubator for the next 24 h. The OD600 of the wells after incubation was measured using a SpectraMax M3 microplate reader (Molecular Devices). A threshold OD600 of 0.1 was used to determine the minimal inhibitory concentrations. Data for each peptide were collected from at least three independent cultures grown on different days.
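A minimal sketch of how the threshold rule just described converts plate readings into a minimal inhibitory concentration (our own illustration; od600 is assumed to be an array with one row per concentration and one column per replicate well):

```python
import numpy as np

def mic_from_plate(concentrations_uM, od600, threshold=0.1):
    """Return the lowest peptide concentration whose mean OD600 across
    replicate wells is below the growth threshold, or None if growth
    was never inhibited."""
    conc = np.asarray(concentrations_uM, dtype=float)
    mean_od = np.asarray(od600, dtype=float).mean(axis=1)  # average replicates
    inhibited = conc[mean_od < threshold]
    return float(inhibited.min()) if inhibited.size else None
```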
■ RESULTS
The aim of our workflow is to create BF2 variants, each with an arginine substituted for a neutral residue, with differing DNA-binding affinities, with the ultimate goals of (1) determining whether a computational workflow based on electrostatics can be effective in designing tighter DNA-binding peptides, (2) beginning to determine relationships between DNA-binding affinity and potency, and (3) testing the hypothesis that binding affinity depends not only on the overall monopole but also on the specific sequence giving rise to that monopole. Here, we describe the results obtained from each step of the design workflow shown in Figure 2.
Step 1: MD Simulations To Create Initial WT BF2/DNA Ensembles. An initial wild-type BF2/DNA conformational ensemble used as a starting point for design predictions was generated using the equilibrated portions of three replicate 250 ns MD simulations from previously published work. 24 The ensemble consisted of 15 snapshots sampled from the last 50 ns of each simulation for a total of 45 snapshots. Figure 3 shows a sample snapshot (200 ns) from one of the replicate simulations, with water and ions removed for clarity.
Step 2: Charge Optimization. Constrained, partial electrostatic charge optimization was carried out as described in the Methods section for each of the 45 snapshots within the ensemble. Each side chain with a neutral formal charge on the peptide was considered in turn, except for TRP10 and PRO11 due to the roles that these residues play in experimental quantification and in peptide translocation and activity, respectively. 21,22,58 For glycine residues, both hydrogens off the α carbon were simultaneously optimized to provide multiple feasible solutions by considering more than one atomic center. For each snapshot, a rough prediction of the effect of an arginine substitution at a given position was estimated by calculating ΔΔG⁰_opt,elec, the difference between the optimal peptide/DNA binding free energy in each conformation when the side chain's monopole increased by +1 and the optimal binding free energy when the side chain retained its wild-type monopole, as seen in eq 3 in the Methods section.
Figure 4 shows the average ΔΔG⁰_opt,elec for the substitution of each residue considered across all snapshots. The positions with the most negative ΔΔG⁰_opt,elec are predicted to improve affinity the most upon becoming positively charged, assuming both optimal interactions and retention of shape and conformation. This coarse model suggests that a single positively charged substitution at any position would increase binding affinity, which is reasonable given the substantial negative monopole of DNA. However, the extent to which the binding improves appears to be position-dependent, with positions closer to the N-terminus (e.g., THR1 and SER3) predicted to yield greater affinity gains than central positions (e.g., ALA6, GLY7, LEU8, and GLN9) or C-terminal positions (e.g., LEU18 and LEU19). This result predicts that while a peptide's overall monopole affects DNA-binding affinity, the precise distribution of charges for a given peptide monopole would significantly modulate it as well.
Steps 3 and 4: Replicate MD Simulations To Generate Ensembles of Explicit Mutant Variants and Estimation of Relative Binding Energetics via Continuum Electrostatic Calculations. Steps 3 and 4 in the workflow were iterated until sufficiently robust mutant candidates were identified for experimental testing. In order to explicitly test whether DNA-binding affinity depended on the overall sequence monopole or more subtly on the sequence giving rise to that monopole, we wished to identify two mutants for experimental testing with differing binding affinities but with the same +7 monopole: one that was predicted to more substantially improve binding and another that was predicted to improve binding to a minimal extent. Replicate simulations of explicit mutants that appeared promising for either of these outcomes based on charge optimization were carried out, and their electrostatic binding free energies were computed for snapshots extracted every 500 ps from the last 25 or 50 ns of each replicate by solving the LPBE. Typical RMSD values in the considered timeframes, calculated over protein alpha carbons and DNA phosphate atoms, ranged from roughly 2−7 Å, comparable to the range of 5−10 Å observed in WT simulations. 24 To enable comparison with the WT binding free energy, snapshots were also extracted for computing electrostatic binding free energies from the last 50 ns of the three WT simulations used for charge optimization as described in the Methods section, and from the last 50 ns of an additional 100 ns WT simulation that was carried out subsequently to validate the robustness of the estimate of the WT binding free energy.
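For reference, the RMSD metric quoted above (computed over protein alpha carbons and DNA phosphate atoms) can be reproduced with standard trajectory tooling; the sketch below uses the MDAnalysis library with hypothetical file names, not files from this study.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical topology/trajectory names; any pair readable by MDAnalysis works.
universe = mda.Universe("bf2_dna.tpr", "bf2_dna.xtc")

# RMSD over protein alpha carbons plus DNA phosphate atoms, relative to frame 0.
analysis = rms.RMSD(
    universe,
    select="(protein and name CA) or (nucleic and name P)",
)
analysis.run()

# Columns of results.rmsd: frame index, time (ps), RMSD (Angstrom).
print(analysis.results.rmsd[-1])
```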
Candidate for Substantial Improvement in Binding Affinity. We first sought to predict positions that, when mutated to arginine, would substantially improve binding affinity, starting with THR1, as the charge optimization results shown in Figure 4 indicated that it had the most negative ΔΔG⁰_opt,elec. Replicate simulations (N = 6) of T1R BF2 were conducted, and the average resulting electrostatic binding free energy, shown in Table 2, is indeed more negative than that calculated from replicate simulations of WT BF2 (−23.5 vs −17 kcal/mol). Though this improvement is statistically significant (p ≤ 0.05), the standard error for the WT binding free energies is substantial. However, previous work25 used similar methods to compute electrostatic binding free energies on five replicate simulations (with 11 snapshots each) of WT BF2/DNA and yielded replicate average values that were within the range found here for the WT system, providing additional evidence for the robustness of this difference. Based on these results, T1R remained a promising mutant to substantially increase affinity.
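The comparison of replicate-average binding free energies between T1R and WT can be reproduced in outline with a two-sample test; the sketch below uses Welch's t-test, which tolerates the unequal variances noted for the WT replicates. The numerical values are illustrative placeholders roughly consistent with the reported averages, not the actual per-replicate data.

```python
import numpy as np
from scipy import stats

# Illustrative per-replicate electrostatic binding free energies (kcal/mol);
# stand-ins for the data behind the Table 2 averages (-17 WT vs -23.5 T1R).
wt  = np.array([-15.2, -19.8, -14.9, -18.1])
t1r = np.array([-24.1, -22.7, -23.9, -23.5, -22.9, -23.8])

# Welch's t-test: does not assume equal variances across the two groups.
t_stat, p_value = stats.ttest_ind(t1r, wt, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("WT SEM:", wt.std(ddof=1) / np.sqrt(len(wt)))
```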
Candidate for Minimal Improvement at Best in Binding Affinity. We next sought to determine a mutant whose binding affinity improvement would be minimal relative to WT. Promising candidates would be those positions with the least negative ΔΔG⁰_opt,elec and included central and C-terminal positions, as shown in Figure 4. L19R was discarded as a candidate after continuum electrostatic calculations on snapshots from an initial preliminary simulation showed a predicted electrostatic binding free energy similar to that of T1R. A6R's predicted binding free energy (Table 2) was also similar to that of T1R, further demonstrating the importance of explicit mutant simulations to follow up on the coarse charge optimization predictions. However, L8R and Q9R retained binding free energies similar to wild type, with modest average improvements that were not statistically significant, and they were pursued further as promising candidates for minimal binding improvement.
Considering the Effects of Histidine Protonation on Design Candidates. To increase confidence in our predictions, we carried out additional simulations and free energy calculations for WT and candidate mutant (T1R, Q9R, and L8R) peptide/DNA complexes in which we protonated the histidine at position 16. Although previous work on BF224 has assumed that this histidine is in a neutral state, binding to polyanionic DNA could shift its protonation state, which could impact our binding predictions. The effects of HIS16 protonation on A6R were also considered, to see whether its behavior remained similar to that of T1R across HIS16 protonation states. These results are also shown in Table 2. Despite some shift in the predicted absolute binding affinity of WT BF2 upon histidine protonation, enhanced binding of T1R is predicted regardless of histidine protonation state (significant to p ≤ 0.05 for the protonated case as well). A6R showed binding affinities similar to T1R across both protonation states but with greater variability, and therefore it was not chosen for experimental testing. Among the candidates for minimal improvement, only L8R showed consistently modest binding enhancement at best, regardless of the potential histidine protonation state. In particular, the effect of the Q9R mutation appeared to be especially sensitive to the histidine charge. Therefore, L8R was put forward as the candidate mutant predicted to yield minimal improvement in DNA binding relative to WT for experimental validation in this work. Interestingly, the effect of histidine protonation on DNA binding was not systematic. For example, protonating HIS16 appeared to have no effect on the binding of T1R BF2, but it was predicted to greatly improve the binding of Q9R BF2. This suggests that there may be some residue-coupled effects on DNA binding. We explore possible structural explanations for this coupling in the Discussion section.
In summary, the mutant candidates put forward for experimental testing were T1R and L8R, with the order of predicted DNA binding affinities being T1R (strongest) > L8R > WT (weakest).
Step 5: Experimental Characterization of Peptide/DNA Binding Affinity and Antimicrobial Potency. In order to evaluate our computational predictions, we used a fluorescent intercalator displacement (FID) assay to experimentally compare the DNA binding of WT, T1R, and L8R BF2. This approach has been previously used to characterize the DNA binding of BF2 variants.23,25,29,58 In these experiments, we measured the ability of each peptide to displace the fluorescent intercalator thiazole orange (TO) from DNA. The C50, or the average concentration of peptide needed to reduce TO fluorescence by half, is given for each BF2 variant in Figure 5; a decreased C50 implies stronger peptide-DNA interactions. Both T1R and L8R show significantly increased DNA binding (p ≤ 0.05) relative to wild-type BF2, though the L8R increase is indeed more modest, in line with the predictions; T1R also showed significantly increased binding compared to L8R (p ≤ 0.05).
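Since C50 is defined here as the concentration at which the initial TO fluorescence is halved, it can be extracted from each titration trace by simple interpolation, as in the sketch below. The data points are invented for illustration and do not come from the study.

```python
import numpy as np

# One replicate of illustrative FID data: F/F0 versus peptide concentration (uM).
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
f_rel = np.array([1.00, 0.92, 0.78, 0.58, 0.37, 0.21, 0.12])

def c50_from_trace(conc, f_rel, threshold=0.5):
    """Concentration at which F/F0 crosses `threshold`, by linear interpolation
    between the bracketing points. np.interp needs ascending x values, so the
    monotonically decreasing trace is reversed before interpolating."""
    return np.interp(threshold, f_rel[::-1], conc[::-1])

# The reported C50 averages this value over n >= 4 replicates per peptide.
replicate_c50s = [c50_from_trace(conc, f_rel)]
print(f"C50 = {np.mean(replicate_c50s):.2f} uM (n = {len(replicate_c50s)})")
```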
While FID measurements confirmed the ability of our approach to successfully design BF2 variants with distinct increases in DNA binding affinity, our long-term interest is to develop AMPs with enhanced antimicrobial activity. To the best of our knowledge, previous work has not attempted to engineer more active AMPs by increasing their nucleic acid interactions. However, BF2 mutations with decreased DNA binding typically show decreased activity.23 We have also noted that the overall activity of BF2 variants reflects a balance of their membrane translocation, membrane permeabilization, and DNA binding.58 Thus, we also compared the activity of WT, T1R, and L8R peptides against E. coli using radial diffusion (Figure 6) and microbroth dilution (Figure 7) assays. In both of these experiments, the T1R mutation led to a clear increase in activity relative to that of the WT peptide, showing an increased radius of clearance in radial diffusion measurements (Figure 6) and a decreased minimum inhibitory concentration (Figure 7). Interestingly, the results for the L8R mutation differed somewhat between the two methods: it showed increased activity relative to WT in the radial diffusion assay but no significant change in activity in the microbroth assays. This variation could be due to inherent differences in the setup of these experiments, as a radial diffusion experiment can be affected by unexpected differences in the diffusion rates of peptides through solid media. In fact, as noted in the Discussion section, it is feasible that the L8R peptide could show equivalent or even decreased activity relative to wild type, as observed in our microbroth assays, despite its increased nucleic acid affinity; such a phenomenon may be due to potential changes in its other properties.
Structural Analyses of Variant Peptides from Simulations. Our experimental results indicate that the BF2 T1R mutant had a greater improvement in DNA binding affinity relative to WT than BF2 L8R, in line with our computational predictions. To better understand this observation, we carried out additional analyses on the simulations used in the prediction process. First, Figure 8 shows the average minimum distance between each side chain and DNA in the original WT simulations used to generate the ensemble for charge optimization. As the figure shows, the THR1 side chain is closer to the DNA on average than LEU8, indicating a greater opportunity for short-range electrostatic interactions. Interestingly, the qualitative visual pattern seen in Figure 8 is similar to that of Figure 4, in that the side chains that are closer on average also tend to have more negative predicted ΔΔG⁰_opt,elec, although the correlation is not perfect. This similarity suggests that while it may not replace the utility of charge optimization, distance analysis may be a complementary "filter" in the design workflow to highlight potentially promising candidates for mutation.
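A minimal sketch of the distance "filter" suggested here, again using MDAnalysis with placeholder file names: for each residue, the minimum heavy-atom distance between its side chain and the DNA is averaged over sampled frames.

```python
import numpy as np
import MDAnalysis as mda
from MDAnalysis.analysis.distances import distance_array

u = mda.Universe("wt_complex.pdb", "wt_replicate1.dcd")  # placeholder names
dna = u.select_atoms("nucleic")

min_dist = {}
for res in u.select_atoms("protein").residues:
    # Heavy side-chain atoms only; glycine has none and is skipped here.
    side = res.atoms.select_atoms("not backbone and not name H*")
    if len(side) == 0:
        continue
    per_frame = []
    # Every 7th frame = every 3.5 ns if frames are saved every 0.5 ns.
    for ts in u.trajectory[::7]:
        d = distance_array(side.positions, dna.positions, box=ts.dimensions)
        per_frame.append(d.min())
    min_dist[f"{res.resname}{res.resid}"] = np.mean(per_frame)

for key, val in min_dist.items():
    print(f"{key}: {val:.2f} A")
```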
Figure 9 shows the final snapshot extracted for computing binding free energies (step 4) from a replicate of the WT, T1R, and L8R simulations. In each case, residues 1 and 8 are highlighted. Interestingly, in snapshots from nearly all simulations performed, residue 1 is generally localized within the DNA minor groove, enabling it to interact electrostatically with both flanking phosphate backbones. This observation might help to explain why increasing the monopole of this residue (already at +1 due to its position at the N-terminus) to +2 may provide favorable interactions, since it is "sandwiched" between charges in the minor groove that compensate for lost solvation. Residue 8, on the other hand, remains relatively far from the DNA. As seen in Figure 9c, even though that residue approaches the DNA in the L8R simulation, it appears more water exposed and further from the phosphate backbones in the major groove. These visual data from the MD simulations, coupled with the distance analyses, provide additional insight into why the T1R mutation yields a greater improvement in binding than the L8R mutation.
To better understand peptide conformational flexibility for each variant as it binds to DNA, we carried out alpha-carbon root-mean-square fluctuation (RMSF) analyses for all replicate simulations of WT, T1R, and L8R. As a control, we also show RMSF analyses for two free, unbound BF2 simulations carried out and described as part of previous work.24 The results of these analyses are shown in Figure 10. Interestingly, although the N-terminal region of the unbound peptide showed the greatest movement, the N-terminus of all bound peptides was the least mobile region. Thus, binding to the DNA appears to "lock" the N-terminus in place, perhaps due to the minor groove interactions noted above. This effect is most pronounced for the T1R mutant, suggesting a more rigid conformation and perhaps indicative of tighter interactions.
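The RMSF analysis described here can be sketched as follows with MDAnalysis (placeholder file names); the trajectory is first aligned on the peptide alpha carbons so that fluctuations are not inflated by overall rotation and translation.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms, align

u = mda.Universe("t1r_complex.pdb", "t1r_replicate1.dcd")  # placeholder names

# Remove rigid-body motion before computing per-residue fluctuations.
align.AlignTraj(u, u, select="protein and name CA", in_memory=True).run()

calphas = u.select_atoms("protein and name CA")
# Last 100 frames = last 50 ns at one frame per 500 ps, matching the text.
rmsf = rms.RMSF(calphas).run(start=max(0, len(u.trajectory) - 100))

for atom, value in zip(calphas, rmsf.results.rmsf):
    print(f"{atom.resname}{atom.resid}: {value:.2f} A")
```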
Finally, Figure 11 shows the average number of peptide/DNA hydrogen bonds made during the considered portions of all WT and variant simulations. There appears to be no significant difference in the number of hydrogen bonds between the WT and T1R mutants, even though the T1R mutant was experimentally shown to bind with greater affinity to DNA. Additionally, the L8R variant actually makes fewer hydrogen bonds with DNA on average than the WT (p ≤ 0.05), though it binds with slightly higher affinity. These data support the idea that predictions based solely on hydrogen bond quantification may not always correlate with affinity in this system and highlight the potential value of our prediction workflow, which accounts more thoroughly for both short- and longer-range polar and electrostatic interactions.
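For completeness, a sketch of a per-frame hydrogen bond count with the convention stated in Figure 11 (nitrogen atoms excluded as acceptors). The selections, cutoffs (3.5 Å donor-acceptor distance, 150° angle), and the restriction to peptide-donor/DNA-acceptor pairs are illustrative assumptions, not the exact criteria of the original analysis, which may also have counted the reverse direction.

```python
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

u = mda.Universe("wt_complex.pdb", "wt_replicate1.dcd")  # placeholder names

# Peptide donors to DNA oxygen acceptors only (N excluded as acceptor,
# mirroring the Figure 11 convention); requires bond info in the topology.
hb = HydrogenBondAnalysis(
    u,
    donors_sel="protein and (name N* or name O*)",
    hydrogens_sel="protein and name H*",
    acceptors_sel="nucleic and name O*",
    d_a_cutoff=3.5,            # donor-acceptor distance cutoff (A), a common choice
    d_h_a_angle_cutoff=150.0,  # donor-H-acceptor angle cutoff (degrees)
)
hb.run(start=max(0, len(u.trajectory) - 100))  # last 50 ns at 0.5 ns per frame

counts = hb.count_by_time()  # hydrogen bond count per analyzed frame
print("mean peptide-DNA H-bonds per frame:", counts.mean())
```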
■ DISCUSSION
This work provides an initial demonstration of an approach to use MD sampling with electrostatics calculations to design AMPs with increased DNA-binding affinity. Additionally, we showed that DNA-binding affinity is dependent not only on the overall monopole of the peptide but also on the precise location of positively charged residues. Antimicrobial measurements also implied that this could be an effective approach toward designing more active AMPs, and we plan future work to better understand the relationship between peptide charge distribution, DNA-binding affinity, and antimicrobial potency for variants of BF2 and other DNA-binding peptides.
Here, we focused solely on the electrostatic component of binding when predicting mutant candidates, under the hypothesis that a simplified model would suffice in a system that is clearly dominated by polar solvation and charge-charge interactions. Future work could consider the best way to account for other, nonpolar components of the binding free energy to provide more holistic insights into binding determinants. Similarly, future studies could use a broader set of starting structures beyond the homology-model-based BF2/DNA structure employed here to provide a more robust sampling of the design conformational landscape. That said, although accounting for other components of the binding free energy and broader conformational sampling may be useful, particularly for more quantitative predictions, the current work demonstrates that focusing on the electrostatic components of energies with the type of ensembles collected here can be successful for design purposes in this system.
In particular, our approach was successful in predicting BF2 variants with different levels of DNA affinity. Moreover, the different affinities of the T1R and L8R variants emphasize that BF2-nucleic acid binding is peptide sequence-specific and not solely driven by the overall positive charge. A similar sequence specificity was also observed in previous work for BF2 mutants designed to have decreased DNA binding through monopole decreases from single-site arginine or lysine to alanine mutations.23 We also found that the arginine mutation identified for the maximum increase in DNA binding (T1R) did have clearly enhanced activity in two different antimicrobial assays, giving us confidence in its increased potency. The L8R mutation, with more moderately enhanced DNA binding, had inconsistent results between the two activity assays, which may have stemmed from different assumptions embedded in those methods related to the relative diffusion of peptides through solid agar or bacterial exposure to peptides in phosphate buffer. Regardless, we would not expect a perfect correlation between DNA binding and antimicrobial activity, since our past work showed that the activity of BF2 variants was related to their membrane translocation and permeabilization in addition to their DNA binding.58 To this end, we are working toward integrating predicted membrane interactions into our AMP design strategy. Nonetheless, the activity of the T1R peptide provides evidence that an AMP design approach can produce promising candidates by focusing on one of these characteristics.
Our computational analyses suggested that the effect of histidine (HIS16) protonation on the binding free energies of arginine mutants was not systematic. For example, HIS16 protonation was predicted to improve the binding affinity far more for the Q9R mutant than for the T1R or L8R mutants. This observation suggests that the protonation of HIS16 may alter the manner in which other peptide residues interact with the DNA in a sequence-dependent manner. Indeed, preliminary distance and energetic component analyses suggest that in our simulations with protonated HIS16, the distance and binding free energy between the arginine at position 20 (ARG20) and the DNA are decreased more for the Q9R variant than for the T1R variant (data not shown). In other words, protonating HIS16 indirectly affects the contributions of a different residue, ARG20, in a mutant-specific manner. Future experimental work can measure the extent of histidine protonation in various complexes, and ongoing computational work can continue to probe and incorporate the effects of the histidine titration state on conformational sampling into the design framework. Additionally, H16F variants can be designed59 and experimentally tested as controls that are not titratable at this position.
In conclusion, this work has demonstrated a design workflow that combines MD simulations, charge optimization, continuum electrostatics, and experimental measurements to create mutants of a membrane-translocating AMP with altered DNA-binding affinity and potency. In ongoing work, we aim to build on this approach to generate additional mutants of BF2 and other DNA-binding AMPs to better understand the relationship between DNA binding and antimicrobial potency. We recognize the relatively high MIC values for the parent BF2 peptide and our designed variants, at least under the conditions used for the antibacterial activity measurements presented here. However, the design approach utilized in this study would likely be equally effective in designing more potent versions of other DNA-binding AMPs with higher basal activity. Moreover, our proposed design framework is not specific to DNA and can therefore be applied to any peptide/target pair. Indeed, while the translocation mechanism is not fully characterized, membrane-translocating peptides must also interact with the cell membrane prior to translocation. To that end, this design framework could also be adapted to generate mutant peptides to better understand the relationship between peptide-membrane affinity and antimicrobial potency. Taken together, a holistic design of AMP-membrane and AMP-DNA interactions that provides maximal potency may serve as a useful strategy to engineer AMPs with optimal potency.
Figure 1. Representative structure of the BF2 peptide taken from a previous molecular dynamics simulation of the peptide interacting with a lipid membrane.28 Positively charged residues are shown in blue.
Figure 2. Workflow used to predict and evaluate BF2 variants with differential binding affinities to DNA.
Figure 3. Sample snapshot (replicate 1, 200 ns) from a WT BF2/DNA complex simulation carried out as part of prior work.24 The peptide is shown in atom colors, with the backbone highlighted in blue. DNA is shown in yellow, and water and ions have been omitted for clarity.
Figure 4. ΔΔG⁰_opt,elec, in kcal/mol, for each BF2 side chain considered in electrostatic charge optimization calculations. A more negative value of ΔΔG⁰_opt,elec indicates a greater improvement in the hypothetical optimal BF2/DNA electrostatic binding free energy upon increasing the peptide monopole by +1. Error bars show standard error over three replicates.
Figure 5. (A) Representative data from FID peptide-DNA binding experiments, showing changes in relative fluorescence (F/F0) with increasing peptide concentration. (B) Average C50 of T1R and L8R BF2 variants and WT BF2 in the FID assay. C50 was calculated by averaging the peptide concentrations at which initial fluorescence was halved (n ≥ 4 for each peptide). Error bars represent the standard error. Asterisks indicate a significant difference from WT (p ≤ 0.05).
Figure 6. (A) Images of representative wells from a radial diffusion assay plate containing water, WT BF2, and the T1R and L8R BF2 variants. Note that contrast and brightness were altered in identical ways for all four well images to improve image quality. (B) Mean clearance diameter (mm) of T1R and L8R BF2 variants and WT BF2 in the RDA, calculated by averaging the clearance diameters of each sample (n = 6 for each peptide). Error bars represent the standard error. Asterisks indicate a significant difference from WT (p ≤ 0.05).
Figure 7. Minimum inhibitory concentrations of T1R and L8R BF2 variants and WT BF2 in the microbroth dilution assay (n = 3 for each peptide). Threshold concentrations were determined over nine replicates performed on three independent cultures. Error bars represent the standard error. Asterisks indicate a significant difference from WT (p ≤ 0.05).
Figure 8. Average minimum distance (Å) between each considered side chain and DNA for snapshots extracted every 3.5 ns over the last 50 ns of each WT simulation. Error bars show standard error over the three replicates considered for charge optimization.
Figure 9. Final extracted snapshot from the first replicate of each of (A) WT (from prior work),24 (B) T1R, and (C) L8R variants. BF2 is shown in atom coloring, and DNA is shown in yellow. Residues 1 and 8 are colored orange and green, respectively, in each case. Water and ions have been omitted for clarity.
Figure 10. Average root-mean-square fluctuation (RMSF), in Å, of the α carbon of each residue of simulated DNA-bound BF2 variants that were experimentally tested and of unbound WT BF2. Values were calculated using the initial minimized structure for each replicate, sampling every 500 ps over the last 50 ns (for simulations lasting 100 ns or more) or 25 ns (for 50 ns simulations) of each replicate; error bars show standard error over the number of replicates performed (Table 1).
Figure 11. Average number of BF2-DNA hydrogen bonds across simulations of experimentally tested BF2 variants. Values were calculated every 500 ps for the last 50 ns (for simulations lasting 100 ns or more) or 25 ns (for 50 ns simulations) of each replicate; error bars show the standard error over the number of replicates performed (see Table 1). Nitrogen atoms were not considered acceptors in this analysis. Asterisks indicate a significant difference from WT (p ≤ 0.05).
Table 1. Summary of Each Variant Simulated, Including the Number and Length of Replicate Simulations. Starred Simulations Are from Prior Work24 and Were Used in Steps 1 and 2.
Table 2. Average Computed Electrostatic Binding Free Energies (kcal/mol) of BF2 Variants with DNA, Using Snapshots from MD Simulations of the WT or Mutant Peptide. (a) Uncertainty is the standard error across the corresponding number of replicates shown in Table 1. Variants selected for experimental testing are highlighted in bold.
| 2023-09-03T15:03:53.134Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "ede99c412c2241bcce72d6596482775df2e1728c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1021/acsomega.3c04023",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e88d7e382078380c3e3dd10a5780866f54152b39",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
259093416 | pes2o/s2orc | v3-fos-license | Acute pancreatitis associated with diabetic ketoacidosis in a child with COVID-19 infection
Background There is a mutual influence among COVID-19, diabetic ketoacidosis, and acute pancreatitis, with clinical manifestations overlapping one another, which can lead to misdiagnosis and delayed treatment that aggravate the condition and worsen the prognosis. COVID-19-induced diabetic ketoacidosis with acute pancreatitis is extremely rare, with only four case reports in adults and no cases yet reported in children. Case presentation We report a case of acute pancreatitis associated with diabetic ketoacidosis in a 12-year-old girl following novel coronavirus infection. The patient presented with vomiting, abdominal pain, shortness of breath, and confusion. Laboratory findings showed elevated levels of inflammatory markers, hypertriglyceridemia, and high blood glucose. The patient was treated with fluid resuscitation, insulin, anti-infection treatment, somatostatin, omeprazole, low-molecular-weight heparin, and nutritional support. Blood purification was administered to remove inflammatory mediators. The patient's symptoms improved, and blood glucose levels stabilized by 20 days after admission. Conclusion This case highlights the need for greater awareness and understanding among clinicians of the interrelated and mutually reinforcing conditions of COVID-19, diabetic ketoacidosis, and acute pancreatitis, to reduce misdiagnosis and missed diagnoses.
Case report A 12-year-old girl was admitted to the hospital because of vomiting and abdominal pain for 1 day and shortness of breath with impaired consciousness for half a day. She had vomited gastric contents six to seven times a day over the preceding day, accompanied by abdominal pain that was difficult to relieve and a poor mental state that gradually worsened to confusion. She also exhibited deep, heavy breathing, dizziness, and chest tightness. There was no fever, cough, diarrhea, or other discomfort. She first received treatment at a local hospital, where blood glucose and serum electrolyte levels were not measured. After about 4 h of fluid replacement, anti-infection treatment, and gastric protection with omeprazole, however, her symptoms had not improved. The patient was then transferred to our emergency department, where a rapid blood glucose test showed a result of 29.8 mmol/L, and she was admitted to our PICU on December 29, 2022. Since the onset of her symptoms, the patient had had a poor mental state and appetite, no bowel movements, and increased urine output, with no significant change in weight. She had a history of close contact with a confirmed COVID-19 patient who had fever. She had received two doses of the Sinovac COVID-19 vaccine and had no history of prior COVID-19 infection. After admission, further inquiry into the medical history revealed that the patient had experienced polydipsia (> 2000 ml/d), polyuria (> 2500 ml/d, with 3-4 nighttime urinations), and weight loss (a total decrease of 4 kg) over the preceding six months but had not sought medical care. She had no history of pancreatitis. Her father is in good health; her mother died in 2021. Both her mother and grandmother had a history of diabetes, but a family history of hyperlipidemia or coronary heart disease was denied.
Admission physical examination: body temperature 38 °C, heart rate 155 beats/min, respiratory rate 38 breaths/min, blood pressure 145/96 mmHg, body weight 48 kg, oxygen saturation 97%, and Glasgow Coma Scale score 12, with confusion and mental fatigue. Respirations were deep and large. The skin was dry, and the pupils were bilaterally equal, round, approximately 3.0 mm, and briskly reactive to light. The lips were not cyanotic, and the throat was congested. There was no neck resistance; coarse breath sounds were heard in both lungs, without rales. The heart rate was 155 beats/min and regular, with strong heart sounds and no murmurs. The abdomen was distended, without abdominal wall varices, and tender throughout. The liver and spleen were not palpable below the costal margins, and bowel sounds were normal at 5 per minute. Muscle tone was normal in all four limbs, and no pathological signs were observed. The extremities were warm, with a capillary refill time of less than 2 s. Laboratory findings: the rapid blood glucose level was 32.7 mmol/L, and blood gas analysis revealed a pH of 7.01, HCO3− of 4.00 mmol/L, BEecf of −27 mmol/L, sodium of 132.0 mmol/L, and chloride of 94.0 mmol/L. Urinalysis showed a milky appearance, with 3+ ketone bodies, 3+ protein, and 4+ glucose. Blood tests showed elevated inflammatory markers: a white blood cell count of 38.78 × 10^9/L with 80.4% neutrophils, a hemoglobin level of 269 g/L, a platelet count of 530 × 10^9/L, and a C-reactive protein level of 123.00 mg/L. Additionally, the procalcitonin level was 6.35 ng/mL, the amylase level was 666 U/L in blood and 2123 U/L in urine, and the lipase level was 1053 U/L. The patient had a high total cholesterol level of 9.44 mmol/L, a triglyceride level of 25.49 mmol/L, a low-density lipoprotein level of 3.92 mmol/L, and a low high-density lipoprotein level of 0.53 mmol/L. The glycated hemoglobin level was 16.4%, and the fibrinogen level was 9.4 g/L. The D-dimer level was 1.8 µg/mL, while liver and thyroid function test results were normal. The uric acid level was 579 µmol/L. The patient tested positive for novel coronavirus nucleic acid (N gene Ct value: 25.67; ORF1ab gene Ct value: 27.25), but antigen tests for influenza A and B, CMV DNA, EBV DNA, HBV antigen, adenovirus nucleic acid, rotavirus antigen, norovirus nucleic acid, blood culture, and stool culture were all negative. The patient tested negative for all five autoantibodies associated with type 1 diabetes, and genetic sequencing for diabetes-related genes also returned negative. The C-peptide release test showed a fasting C-peptide level of 0.215 nmol/L, a 1-hour postprandial level of 0.527 nmol/L, and a 2-hour postprandial level of 0.44 nmol/L. A chest computed tomography (CT) scan revealed no apparent abnormalities, while the abdominal CT scan showed acute pancreatitis, polysplenia syndrome with a partial annular pancreas, and fatty liver (Fig. 1).
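The admission chemistry is consistent with a high anion gap metabolic acidosis, the biochemical hallmark of DKA. As a quick worked check on the reported values (the glucose correction factor for sodium shown below is one commonly used convention, not taken from this report):

```python
# Values from the admission work-up (mmol/L unless noted).
na, cl, hco3 = 132.0, 94.0, 4.0
glucose_mmol = 32.7

anion_gap = na - (cl + hco3)        # 132 - 98 = 34; normal is roughly 8-16
glucose_mgdl = glucose_mmol * 18.0  # ~589 mg/dL

# Katz-style correction: add 1.6 mmol/L of Na per 100 mg/dL of glucose above
# 100 mg/dL (an assumed convention, not stated in the case report).
corrected_na = na + 1.6 * (glucose_mgdl - 100.0) / 100.0

print(f"anion gap = {anion_gap:.0f} mmol/L (high anion gap acidosis)")
print(f"glucose-corrected Na = {corrected_na:.1f} mmol/L")
```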
Treatment process and prognosis: Upon admission, the severity of dehydration was evaluated immediately, and vital signs, blood glucose, level of consciousness, pupils, and urine output were monitored. The patient was kept fasting and received fluid resuscitation. Small doses of insulin were administered by continuous intravenous infusion to control blood glucose, with an initial dose of 0.05-0.1 U/(kg·h). Blood glucose was monitored hourly, and the rate of blood glucose reduction was kept at 2-5 mmol/L per hour, with the infusion rate adjusted according to blood glucose and urine output. For the acute pancreatitis, the patient was managed with fasting, anti-infection treatment, continuous infusion of somatostatin to suppress pancreatic fluid and enzyme secretion, omeprazole to inhibit acid production, low-molecular-weight heparin to prevent coagulation, nutritional support, and maintenance of organ function, with the goal of comprehensive treatment. On the second day after admission, because of the patient's systemic inflammatory response syndrome, poorly corrected acidosis, fever, markedly elevated CRP and procalcitonin, and concurrent novel coronavirus infection, continuous venovenous hemodiafiltration was administered to remove inflammatory mediators and maintain internal homeostasis. After 12 h of treatment, the patient's consciousness became clear, and the acidosis was corrected. On the fourth day after admission, blood β-hydroxybutyrate levels returned to normal, urine ketones turned negative, and serum lipase, amylase, and triglyceride levels decreased significantly. The patient was given low-fat enteral nutrition powder via a gastric tube. On the eighth day of admission, the nucleic acid test for the novel coronavirus turned negative, the patient's abdominal pain improved, and blood lipid, amylase, and lipase levels returned to normal. The patient was switched to a self-administered diabetic diet and given subcutaneous insulin injections, with the insulin dose adjusted according to blood glucose levels. On the twentieth day of admission, the patient's blood glucose levels had stabilized, and repeat abdominal CT showed marked improvement of the pancreatic edema, with a small encapsulated necrosis in the pancreaticogastric space (Fig. 1). The patient was discharged with diagnoses of severe diabetic ketoacidosis, type 1 diabetes, moderately severe acute pancreatitis, hypertriglyceridemia, systemic inflammatory response syndrome, novel coronavirus infection, and polysplenia. The patient continued subcutaneous insulin therapy at home and was scheduled for regular follow-up visits. Changes in the patient's laboratory indicators after admission are shown in Table 1.
Discussion
The interrelatedness between COVID-19, DKA, and AP remains unclear, and their mechanisms of onset are yet to be determined. Evidence suggests that the pancreas is one of the target organs of the SARS-CoV-2 virus, as it can enter cells through ACE2 receptors on the surface of pancreatic islet cells, leading to impaired pancreatic function [5]. In addition to direct damage, SARS-CoV-2 can also induce autoimmune reactions, resulting in β-cell dysfunction and reduced insulin release, and increased insulin resistance in peripheral tissues, leading to the onset or exacerbation of diabetes mellitus (DM) [6]. Therefore, COVID-19 is likely to cause abnormal blood glucose fluctuations in children with DM, leading to the onset of DKA and even life-threatening conditions. There is still controversy about whether new-onset diabetes triggered by COVID-19 infection can gradually recover with resolution of the infection. Cromer et al. [7] found that about 41% of DM newly diagnosed in the context of COVID-19 infection regressed to normoglycemia within a year of diagnosis, and patients with newly diagnosed DM had, on average, lower glucose levels and HbA1c on admission than those with pre-existing DM. This phenomenon may be related to acute inflammation and insulin resistance, rather than autoimmunity or direct injury to beta cells. However, in many studies, the incidence of DKA was high in patients with both newly diagnosed and pre-existing DM in the context of COVID-19 infection, suggesting that mechanisms of acute insulin deficiency may also be involved [8,9]. T1DM is an organ-specific autoimmune disease mediated by T lymphocytes that is induced by environmental factors on a genetic background. Some viruses, such as enteroviruses, CMV, mumps virus, hepatitis virus, and rubella virus, play a critical role as triggers of autoimmunity in the occurrence and development of T1DM; among them, enteroviruses such as Coxsackievirus B have been identified as the prime viral candidates for causing T1DM in humans [10]. Various studies have found a significant link between COVID-19 and overexpression of interleukin-6 (IL-6) [11], and the immune response mediated by IL-6 can induce insulin resistance as well as injury and apoptosis of pancreatic β-cells [12]. Conversely, diabetes is a risk factor that can exacerbate COVID-19.
Fig. 1 CT of the child's abdomen. (A) Pre-treatment abdominal CT scan with contrast enhancement showed pancreatic swelling with a relatively small pancreatic tail. The pancreatic head partially encircled the descending part of the duodenum, and no abnormal high-density shadows were observed in the substance of the pancreas. The fat space around the pancreas was murky, with scattered exudative shadows. The liver density was decreased, suggesting fatty liver. Multiple nodular shadows of spleen-like density were observed in the splenic area and enhanced uniformly after contrast. (B) After 18 days of treatment, the abdominal CT scan with contrast enhancement showed a partial reduction in pancreatic swelling. Irregular strip-like low-density shadows were observed in the pancreaticogastric space, with no obvious enhancement after contrast. The scattered exudative shadows in the surrounding area were reduced compared with before treatment.
Studies have shown that the expression levels of ACE2 in the lungs, kidneys, liver, heart, and pancreas of diabetic patients are significantly higher than in healthy individuals, making these tissues more vulnerable to attack by SARS-CoV-2. As a result, diabetic patients face a higher risk of SARS-CoV-2 infection and are more likely to progress to multi-organ damage [5]. Therefore, there is a bidirectional relationship between COVID-19 and diabetes. The child in this case had fever and tested positive for SARS-CoV-2 nucleic acid; combined with her history of close contact with a COVID-19 case, the diagnosis of COVID-19 was confirmed. She had a half-year history of polyuria and polydipsia and an HbA1c of 16.4%, indicating pre-existing but undiagnosed diabetes. We therefore believe that COVID-19 infection may have triggered the DKA, and the patient was diagnosed with T1DM according to diagnostic criteria [13]. COVID-19 can cause digestive symptoms such as vomiting, abdominal pain, diarrhea, and anorexia, which are also common initial symptoms of DKA. Thus, for children with COVID-19, blood glucose levels should be actively monitored so that DKA can be quickly identified and treated appropriately.
The occurrence of acute pancreatitis (AP) in this patient may have been multifactorial. First, diabetic ketoacidosis (DKA) can induce hypertriglyceridemia (HTG), which in turn can lead to AP. DKA is a state of insulin deficiency, often accompanied by abnormalities of lipid metabolism that can produce HTG. Nair et al. [14] found that 11% of adults with DKA had concurrent AP, whereas only 2% of pediatric DKA patients had AP, possibly because the incidence of HTG is significantly higher in adults than in children. The risk of AP is closely related to the triglyceride (TG) level: the risk is significantly increased in DKA patients with TG levels above 11.1 mmol/L and reduced when TG levels are below 5.65 mmol/L [15]. In this case, the patient had milky blood and urine and a blood TG level of 25.49 mmol/L, indicating a high likelihood of HTG-induced AP. The specific mechanism by which HTG causes AP is not yet fully understood but may involve cellular damage, edema, and ischemia induced by free fatty acids released from triglyceride breakdown, as well as pancreatic circulatory disorders caused by severe chylomicronemia [16]. Second, COVID-19 can directly lead to AP. SARS-CoV-2 can directly damage both exocrine and endocrine cells, leading to pancreatitis, and can also injure the pancreas through a cytokine storm [17]. In addition, severe COVID-19 can cause severe diffuse subvascular mucosal endothelial inflammation, leading to diffuse microischemic lesions, pancreatic hypoperfusion, and ischemic injury [18]. The patient in this case had COVID-19, and tests for other common pathogens were negative, indicating that SARS-CoV-2 infection may have contributed to the occurrence of AP. Third, the patient had polysplenia syndrome with an annular pancreas and a short pancreatic tail, which may affect pancreatic secretion and increase the risk of AP in the presence of other contributing factors.
Patients with the triad of DKA, severe HTG, and AP have a higher incidence of multiple organ dysfunction and parenteral nutrition requirements, longer hospital stays, and a higher risk of mortality [19]. Therefore, early diagnosis and timely treatment are crucial. However, diagnosing AP caused by DKA-induced HTG can be challenging. First, abdominal pain is a common symptom of DKA, which may mask coexisting AP; second, about 25% of DKA patients have elevated serum amylase and lipase levels without clinical or imaging signs of AP, which can lead to overdiagnosis; in addition, pancreatic enzyme levels may remain normal in some patients with DKA complicated by AP, leading to missed diagnosis.
In conclusion, COVID-19 can induce DKA and AP through direct or indirect effects, and patients with DKA may also develop HTG-induced AP. Clinicians therefore need heightened awareness of and diagnostic vigilance for this condition, with timely monitoring of blood glucose, lipids, and pancreatic enzymes and abdominal imaging, to prevent misdiagnosis and delayed treatment and thereby improve prognosis.
"year": 2023,
"sha1": "d31554e2fdc2d6c7ea851d50c19e88c3255f531c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "d31554e2fdc2d6c7ea851d50c19e88c3255f531c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226229196 | pes2o/s2orc | v3-fos-license | Low nitrogen availability inhibits the phosphorus starvation response in maize (Zea mays ssp. mays L.)
Background Nitrogen (N) and phosphorus (P) are macronutrients essential for crop growth and productivity. In cultivated fields, N and P levels are rarely sufficient, contributing to the gap between realized and potential production. Fertilizer application increases nutrient availability, but is not available to all farmers, nor are current rates of application sustainable or environmentally desirable. Transcriptomic studies of cereal crops have revealed dramatic responses to either low N or low P single stress treatments. In the field, however, levels of both N and P may be suboptimal. The interaction between N and P starvation responses remains to be fully characterized. Results We characterized growth and root and leaf transcriptomes of young maize plants under nutrient replete, low N, low P or combined low NP conditions. We identified 1555 genes to respond to our nutrient treatments, in one or both tissues. A large group of genes, including many classical P starvation response genes, were regulated antagonistically between low N and P conditions. An additional experiment over a range of N availability indicated that a mild reduction in N levels was sufficient to repress the low P induction of P starvation genes. Although expression of P transporter genes was repressed under low N or low NP, we confirmed earlier reports of P hyper accumulation under N limitation. Conclusions Transcriptional responses to low N or P were distinct, with few genes responding in a similar way to the two single stress treatments. In combined NP stress, the low N response dominated, and the P starvation response was largely suppressed. A mild reduction in N availability was sufficient to repress the induction of P starvation associated genes. We conclude that activation of the transcriptional response to P starvation in maize is contingent on N availability. Supplementary Information The online version contains supplementary material available at 10.1186/s12870-021-02997-5.
Background
Nitrogen (N) and Phosphorus (P) are essential macronutrients required for multiple biological processes [1][2][3][4][5]. N is a component of all proteins and the chlorophyll required for photosynthetic carbon fixation. P is required to produce the phospholipids forming the membranes that surround cells and intracellular organelles. Furthermore, N and P are structural components of nucleic acids, including the abundant RNA molecules that play a key role in protein synthesis. The demand for these macronutrients is such that N and P availability in agricultural soils is rarely sufficient to realize the full yield potential of crops [6,7]. P reacts readily with other elements, such as aluminum in acid soils or calcium in alkaline soils, holding it in the upper layers of the soil and reducing its availability to plants [8,9]. By contrast, N, largely present in the form of nitrate, is mobile and tends to move to deeper soil layers where it may be beyond the reach of plant root systems [10]. In high-input systems, the problem of N and P limitation is mitigated by chemical fertilizer addition, although current levels of application are neither sustainable nor desirable given negative environmental impacts [11]. Industrial N fixation is energetically costly and contributes to greenhouse gas production [12]. High grade phosphate rock is a non-renewable resource, predicted to pass peak production before the end of this century [13]. For these reasons, increasing N and P efficiency has been identified as a key goal in plant breeding and agricultural management [11,14].
Studies in Arabidopsis thaliana and rice (Oryza sativa) have identified physiological and developmental responses to low N or P stress, coupled with underlying large-scale changes in gene expression (the N starvation response, NSR, and P starvation response, PSR, respectively [15][16][17][18][19][20]). A common strategy under nutrient deficiency is to promote uptake by increasing the abundance of high-affinity transporter proteins in the roots. Under N or P limitation, there is an induction of genes encoding nitrate [21][22][23] or phosphate transporters [24][25][26][27], respectively. Further aspects of the NSR include the downregulation of genes associated with nitrate assimilation and amino acid, oligosaccharide and nucleic acid biosynthesis [15,28]. The PSR includes the induction of purple acid phosphatases (PAPs) involved in recycling internal and external P from organic pools, altered polysaccharide metabolism, and remodeling of lipid membranes to reduce the requirement for phospholipids [29][30][31]. Interestingly, aspects of the NSR and PSR are antagonistic, and under N limitation many genes induced in the PSR are repressed [15,30,32,33]. It has long been appreciated that a deficiency in one element can impact the response to a second element, and that the effects of different nutrient deficiencies are not necessarily additive [34][35][36][37][38][39][40]. Thus, it is difficult to predict the transcriptomic response to a combination of N and P deficiency from the single stress data, especially in the context of antagonistically regulated genes. Several studies, however, have now demonstrated clear points of molecular interaction between N and P signaling pathways.
One of the first molecular links between N and P signaling was the identification of the SPX-RING (SPX domain: named after the Suppressor of Yeast gpa1, the yeast Phosphatase 81 and the human Xenotropic and Polytropic Retrovirus receptor 1; RING domain: Really Interesting New Gene) protein NITROGEN LIMITATION ADAPTATION (NLA1) in Arabidopsis. Atnla1 mutants fail to adapt to low N conditions and exhibit early senescence [41] associated with P toxicity [42]. Further studies have shown that AtNLA directly targets PHT1 phosphate transporters for degradation in an N-dependent manner [43], as well as targeting the nitrate transporter NRT1.7 [44]. Under P starvation, downregulation of AtNLA by the P starvation inducible microRNA miR827 promotes accumulation of PHT1 [42]. Rice OsNLA also regulates PHT1 abundance and modulates P accumulation in an N-dependent manner [45,46]. However, in rice, miR827 does not target OsNLA, nor do N and P levels regulate OsNLA transcript accumulation, indicating regulatory differences with Arabidopsis [45,47].
The MYB-CC transcription factor AtPHR1 plays a central role in activating the PSR [48]. Under high P, OsPHR2, the rice ortholog of AtPHR1, is sequestered by the SPX protein OsSPX4, preventing its translocation into the nucleus and activation of PSR genes [49]. Under P starvation, the 26S proteasome degrades OsSPX4, allowing OsPHR2 to activate its targets. Recently, the N-regulated OsNRT1.1b nitrate transporter has been shown to be required for OsSPX4 degradation. Under N starvation, levels of OsNRT1.1b are reduced, freeing OsSPX4 from turnover and leading to inhibition of the PSR [50]. Interestingly, OsSPX4 not only sequesters OsPHR2 but also the NIN-like protein OsNLP3, a central regulator of the nitrogen response in rice [50]. These studies, and others, have demonstrated the interaction of N and P responses and identified the SPX domain containing proteins as playing an important role in their coordination.
Maize is one of the world's most economically important crops. Limitation of N or P represents a significant constraint on maize productivity worldwide [51][52][53][54]. Work in Arabidopsis and rice has begun to define the interactions between N and P signaling networks. Nevertheless, much remains to be discovered before we can apply this knowledge to the design of more efficient management practices or the development of more nutrient efficient crop varieties. Here, we report whole transcriptome data for the leaves and roots of maize seedlings grown under nutrient replete, low N, low P and combined low NP conditions. We observed antagonism between responses to the single low N and low P treatments, with the low N response dominating in the combined low NP treatment. We further show that even a mild reduction in N availability is sufficient to suppress components of the maize PSR.
Results
Growth of maize seedlings was reduced under low N and P treatments
To select material in which to characterize transcriptional responses to combined N and P limitation, we first characterized the growth of maize plants grown for 40 days after emergence under complete nutrient conditions (Full; see Methods), reduced N (LowN: 9% of the complete concentration), reduced P (LowP: 3% of the complete concentration), and combined reduced N and P (LowNP). Plants were grown in 1 m tall, 15 cm diameter (~17 L volume) PVC tubes, providing sufficient depth for root development (Fig. 1a). We followed plant growth by manual measurement of green leaf area (LA) every 5 days, starting at 10 days after emergence (DAE). Plants in Full conditions showed an increase in the rate of leaf initiation compared with the reduced nutrient treatments (Fig. 1b; 7.0 ± 0.29b). The first two leaves were fully expanded in all treatments when we started to collect measurements at 10 DAE, and they began to senesce early in the course of the experiment, reflected by a loss of LA (Fig. 1c, d). Second leaves showed equivalent LA in all treatments (Leaf 2 LA: KW adj. p > 0.05 for treatment at all time points) and began to senesce at the same time (~30 DAE; Fig. 1d). Senescence began earlier in first leaves than in second leaves (~20 DAE) and was more rapid under LowN and LowNP than in the other conditions (Fig. 1c, f-j; Fig. S1; MZ66_Growth_Analysis in Supplemental File 1). Treatment differences became more dramatic with each leaf to be initiated. In fourth leaves, we observed a mild treatment effect from ~10 days after leaf expansion (Leaf 4 LA 20 DAE: KW adj. p = 0.044), with leaves of the LowNP plants having a lower surface area than those of the other treatments (Fig. 1g; Dunn test at α = 0.05). Differences in the later leaves were evident within 5 days of initiation, and the timing of initiation itself became delayed in the low nutrient treatments. By the sixth and seventh leaves, we observed a difference between Full, LowP and LowN/LowNP treatments (Fig. 1h). To further characterize differences in root system architecture (RSA) among nutrient treatments, we photographed the roots of each plant and processed the images using GiaRoots analysis software [55] to extract a series of root features. Nutrient treatment had a significant effect on several features related to root system size (Fig. S3; MZ66_Giaroots_Analysis in Supplemental File 1), including network area, perimeter and volume and the maximum and median number of roots crossing a horizontal line in a vertical scan (see [55] for a complete description of root features). The Full treatment was associated with the largest, most solid root systems, followed by LowP, LowN and LowNP. We also saw a significant effect on the ratio of the minor/major axes (EAR) of an ellipse fitted around the root system (EAR: KW adj. p = 0.016; Dunn test at α = 0.05. Full: 0.46 ± 0.03a; LowP: 0.40 ± 0.04a; LowN: 0.36 ± 0.04a; LowNP: 0.29 ± 0.03b). EAR reflects the tendency toward relatively narrower but deeper root systems in the low nutrient treatments.
Fig. 2 Low N and P availability alters relative growth and element profile. (a) Root fresh weight at harvest (RFW, g; estimated coefficient and associated standard error) of plants grown in Full, LowN, LowP or LowNP. The significance of the treatment effect is shown as *** p < 0.001, ** p < 0.01, * p < 0.05, . p < 0.1 (Kruskal-Wallis test; p-value adjusted for multiple tests). Lowercase letters indicate significant (p < 0.05) pairwise differences (Dunn test). (b-d) As (a), showing shoot fresh weight (SFW, g), the ratio of RFW/SFW (RS) and specific root depth (SRD, cm/g), respectively. (e) Heat map representation of total ion concentration for 20 named elements (z: concentration standardized within row). The significance of the treatment effect on concentration is shown as *** p < 0.001, ** p < 0.01, * p < 0.05, . p < 0.1 (ANOVA; p-value adjusted for multiple tests). Lowercase letters indicate significant (p < 0.05) pairwise differences (Tukey).
In comparison to the aerial traits, there was less difference between LowN and LowP for root features, and there was evidence of a partially additive effect in the combined LowNP treatment (Fig. S3; MZ66_Giaroots_Analysis in Supplemental File 1; e.g., network volume: KW adj. p = 0.009; Dunn test at α = 0.05. Full: 19.40 ± 1.68a; LowP: 12.43 ± 1.89ab; LowN: 10.10 ± 1.92bc; LowNP: 7.23 ± 1.71c).
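As a sketch of the statistical workflow reported above (Kruskal-Wallis tests with adjusted p-values, followed by Dunn post hoc comparisons), assuming a tidy table with one row per plant; the column names, file name, and the specific p-value adjustment methods are illustrative, since the text does not state which adjustments were applied.

```python
import pandas as pd
from scipy import stats
import scikit_posthocs as sp
from statsmodels.stats.multitest import multipletests

# Assumed layout: one row per plant, with a 'treatment' column and trait columns.
df = pd.read_csv("growth_traits.csv")  # placeholder file name

results = {}
for trait in ["leaf_area", "network_volume", "EAR"]:
    groups = [g[trait].dropna() for _, g in df.groupby("treatment")]
    results[trait] = stats.kruskal(*groups).pvalue

# Adjust the Kruskal-Wallis p-values across traits (Benjamini-Hochberg
# shown here as one possible choice of adjustment).
adjusted = multipletests(list(results.values()), method="fdr_bh")[1]
print(dict(zip(results, adjusted)))

# Dunn post hoc test for pairwise treatment differences on one trait;
# the lettered groupings in the text summarize comparisons like these.
print(sp.posthoc_dunn(df, val_col="EAR", group_col="treatment", p_adjust="holm"))
```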
The leaf ionome was modified under low N and P treatments
We quantified the total concentration of twenty different elements in the leaf tissue using inductively coupled plasma mass spectrometry (ICP-MS). The protocol used did not allow determination of N concentration. We detected a significant (ANOVA adj. p < 0.05) effect of treatment on the concentration of ten of the elements quantified (Fig. 2e, S4; MZ66_Ionomics_Analysis in Supplemental File 1). We observed both decreases and increases in concentration for different elements, indicating that the effects could not be explained solely on the basis of changes in root:shoot ratio. In line with previous studies [15,28,46], we observed an increase (1.8-fold) in leaf total P concentration of plants grown under LowN compared with Full (P concentration: ANOVA adj. p < 0.001; Tukey test at α = 0.05. LowN: 3543 ppm ± 110a; Full: 1983 ppm ± 96b; LowNP: 1174 ppm ± 98c; LowP: 734 ppm ± 108d). Unsurprisingly, total P concentration was lower under LowP (734 ppm). More remarkably, total P concentration was higher under LowNP (1174 ppm) than under LowP, although we note that LowNP plants were also smaller than those under LowP. We also saw a significant increase in Ni concentration under LowN and increases in K and Rb concentration under LowP (Fig. 2e, S4; MZ66_Ionomics_Analysis in Supplemental File 1).
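The ANOVA-plus-Tukey scheme behind the lettered groupings can be sketched as follows, assuming a long-format table of element concentrations (file and column names are placeholders).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Assumed long format: one row per sample, columns element / treatment / ppm.
ion = pd.read_csv("leaf_ionome.csv")  # placeholder file name

p_conc = ion[ion["element"] == "P"]
model = ols("ppm ~ C(treatment)", data=p_conc).fit()
print(sm.stats.anova_lm(model, typ=2))  # one-way ANOVA for the treatment effect

# Tukey HSD identifies which treatment pairs differ; the lowercase letters
# in the text summarize exactly these pairwise comparisons.
print(pairwise_tukeyhsd(p_conc["ppm"], p_conc["treatment"], alpha=0.05))
```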
The transcriptional response to P starvation is repressed under N limitation
Based on our initial characterization, we selected 25 DAE, the point at which we first saw a significant treatment effect on growth across leaves (Fig. 1), for transcriptional profiling. We grew a second set of plants under the same nutrient conditions as used previously, harvesting total roots and pooled leaf blades at 25 DAE from two individuals per treatment for RNA extraction and sequencing. Sequencing reads were aligned to the maize (var. B73, RefGen V3) transcript set and collapsed at the gene level to obtain read count data. We analyzed count data from all treatments and both tissues in a single linear model to identify significant effects of LowN, LowP or their interaction on gene expression. A total of 1555 genes were identified as N/P regulated (false discovery rate [FDR] for nutrient terms < 0.01; |log2 fold change [LFC]| > 1 for at least one nutrient-associated model term; MZ67_DEG_set in Supplemental File 2). Regulated genes were further classified as upregulated or downregulated in different tissue/treatment combinations by the sign and magnitude (|LFC| > 1) of pairwise differences with respect to the Full treatment in the relevant tissue (MZ67_DEG_set in Supplemental File 2).
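A minimal sketch of the two-stage filter described here (FDR on the nutrient model terms, then |LFC| > 1 classification against Full), assuming a hypothetical flat results table; the exact model output format of the original analysis is not specified in the text.

```python
import pandas as pd

# Assumed layout: one row per gene, FDR columns for the nutrient model terms,
# and per-tissue/treatment log2 fold changes relative to Full (e.g. lfc_leaf_LowN).
res = pd.read_csv("nutrient_model_results.csv")  # placeholder file name

fdr_cols = ["fdr_N", "fdr_P", "fdr_NxP"]
lfc_cols = [c for c in res.columns if c.startswith("lfc_")]

# Stage 1: FDR < 0.01 on at least one nutrient term and |LFC| > 1 somewhere.
regulated = res[
    (res[fdr_cols].min(axis=1) < 0.01)
    & (res[lfc_cols].abs().max(axis=1) > 1)
]
print(len(regulated), "N/P-regulated genes")

# Stage 2: classify direction per tissue/treatment by the pairwise LFC sign.
for col in lfc_cols:
    up = (regulated[col] > 1).sum()
    down = (regulated[col] < -1).sum()
    print(f"{col}: {up} up, {down} down")
```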
A similar number of genes were upregulated as downregulated, and a greater number of regulated genes were detected in leaves than in roots (Fig. 3a-d). We compared the transcriptional responses to the treatments by tissue and by the sign of the effect (up or down). There was little overlap between the responses to the LowN and LowP single stress treatments (Fig. 3a-d; e.g., of a combined total of 737 genes upregulated in the leaf between LowN and LowP, only 30 were shared). When presented with the combined LowNP treatment, plants broadly followed the LowN response pattern: most regulated genes were shared between LowN and LowNP, and very few genes that were regulated under LowP showed similar regulation under LowNP (Fig. 3a-d). This trend was evident in both leaves and roots, and among both up- and down-regulated genes.
Our model included an NxP interaction term. Although our power to detect interaction effects was no doubt limited by the level of replication, we were able to identify 81 NxP interaction genes (FDR for NxP terms < 0.05; |LFC| > 1 for at least one nutrient interaction model term; MZ67_NxP_set in Supplemental File 2), i.e., genes regulated by the availability of one nutrient in a manner conditional on the availability of the second. We explored the distribution of these 81 genes across the sets of upregulated and downregulated genes (pairwise |LFC| > 1) from the different treatments and tissues. To gain insight into the functional consequences of the transcriptional responses, we examined "classical genes" (a curated set of ~5000 well-annotated genes, many linked with existing functional data: maizegdb.org/gene_center/gene) in our regulated gene set. We supplemented the classical set with a number of additional annotations [56,57] based on identification of maize orthologs of high-interest candidate genes, notably members of the maize SPX-domain and PAP gene families (Fig. S5C [58]). The SPX-domain family proteins have been clearly linked with crosstalk in N-P signaling in Arabidopsis and rice [42,44,50], but the family has not been previously annotated in maize. We therefore identified the complete set of SPX-domain protein encoding genes from maize and assigned a nomenclature based on phylogenetic analysis that we use below (Fig. S6; MZ67_Spx_Genes in Supplemental File 2). The behavior of the top thirty (ranked by FDR) regulated classical genes mirrored the global trend, namely strong induction under LowP that was absent, or shifted to repression, in LowN or LowNP (Fig. S5C). The top classical genes encoded functions previously associated with the PSR [59][60][61][62], including PHT1 high-affinity phosphate transporters, PAPs, lipid-remodeling enzymes and members of the SPX domain family (Figs. S5C, S6). We further examined functional patterns using Gene Ontology (GO) term enrichment.
(Figure caption) LFC calculated with respect to Full, separately for roots and leaves. GO term names are abbreviated. GO term identifiers are given in parentheses along with the number of genes assigned in the test set over the total number in the GO term. The significance of GO term enrichment is indicated to the left of the heat map as *** p < 0.001, ** p < 0.01, * p < 0.05.
Mild N stress is sufficient to repress the P starvation response
Although the LowN and LowP treatments were adjusted to 9 and 3% of the Full concentration, respectively, it was evident by 40 DAE that the LowN treatment produced a greater limitation on growth than LowP. As such, we speculated that the dominance of the LowN transcriptional response under the combined NP treatment was simply a consequence of the greater severity of the LowN stress. To address this hypothesis, we grew an additional set of plants under high and low P (P5 and P1, respectively; our original Full and LowP levels) in combination with five different levels of N (N5 to N1, high to low; the extremes corresponding to our previous Full nutrient and LowN treatments). As for our whole transcriptome experiment, we harvested plants at 25 DAE (Fig. 1). We measured shoot and root fresh weight and again saw that the single stress combination N1P5 reduced growth more than the complementary N5P1 treatment (Fig. 5a, b). At intermediate N availability, however, we could observe different combinations of N and P with equivalent growth: e.g., N4P5 was indistinguishable from N5P1 in terms of shoot fresh weight. To evaluate the impact of N availability on the PSR, we used real-time PCR to quantify the expression of a panel of selected genes. We first assayed the well-characterized N responsive genes Nir-a (GRMZM2G079381) and Npf6.6 (GRMZM2G161459), encoding a nitrite reductase and a nitrate/peptide transporter [23,63], respectively, to confirm the impact of the N treatments. As previously shown and as observed in our transcriptome data (MZ67_DEG_set in Supplemental File 2), Nir-a and Npf6.6 were down-regulated in reduced N treatments (Nir-a is expressed predominantly in leaf tissue; Fig. 5c, d; MZ95_DE_analysis in Supplemental File 3). The accumulation of Nir-a and Npf6.6 transcripts decreased from the N5 to N1 treatments, indicating a progressive impact on plant N status and signaling (Fig. 5c, d). Interestingly, expression of Npf6.6 was also induced in the roots under P1, this response being most pronounced at N5. We then assayed four PSR genes, selected based on previous reports and our transcriptome data: the Pht1;9 and Pht1;13 phosphate transporter genes in roots [27], the Mfs2 SPX-family gene in leaves, and the Pap10 purple acid phosphatase gene in both roots and leaves [58]. All four PSR genes were strongly induced by P1 under N5 conditions (Fig. 5c, d; Mfs2, 1.8-fold increase N5P1/N5P5 in leaves; Pap10, 1.85-fold increase N5P1/N5P5 in leaves and 4.93-fold increase in roots; Pht1;9, 4.33-fold increase N5P1/N5P5 in roots; Pht1;13, 4.82-fold increase N5P1/N5P5 in roots). However, once N availability was reduced to N4, the level of PSR transcript accumulation under P1 was reduced (Fig. 5c, d).
P concentration in the leaves responds to both P and N availability in the substrate
Previous studies and our observations at 40 DAE showed an increase in total P concentration in the leaves of young plants grown under N limitation [15,28,46]. As such, the antagonism observed between transcriptional responses to our LowN and LowP treatments might be driven by downregulation of PSR genes in response to higher cellular P concentration. To investigate this possibility, we quantified total P concentration using ICP-MS in the roots and leaves of the plants in our N-dose experiment (MZ95_Ion_Concentration_Analysis in Supplemental File 3). We again observed an increase in total P concentration in both leaves and roots as N was reduced, under either P1 or P5 (Fig. 6a). However, the increase over the N5-N3 range was minimal (root P concentration, Tukey test at α = 0.05: N5P5, 1035 ± 100 ppm (ab); N3P5, 1082 ± 82 ppm (ab); N5P1, 809 ± 32 ppm (b); N3P1, 1139 ± 55 ppm (ab); leaf P concentration, Tukey test at α = 0.05: N5P5, 2525 ± 103 ppm (efg); N3P5, 3545 ± 203 ppm (abc); N5P1, 2064 ± 81 ppm (g); N3P1, 2277 ± 86 ppm (fg)), suggesting that total P concentration does not explain the strong effects on gene expression we saw over the same range.
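The Tukey groupings quoted above come from an ANOVA followed by Tukey's HSD, as described in the statistical methods (R/agricolae::HSD.test). A minimal sketch, assuming a hypothetical data frame ions with one row per plant:

```r
library(agricolae)

# 'ions' is a hypothetical data frame with one row per plant:
#   p_ppm     - P concentration (ppm dry mass)
#   treatment - factor such as N5P5, N3P5, N5P1, N3P1, ...
fit <- aov(p_ppm ~ treatment, data = ions)

# Tukey HSD at alpha = 0.05; treatments sharing a letter are not
# significantly different, as in the groupings quoted above.
out <- HSD.test(fit, "treatment", alpha = 0.05)
out$groups
```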
Discussion
To explore the interaction between N and P signaling pathways in maize, we characterized transcriptional responses in roots and leaves to low N, low P and combined low NP stress. We observed responses to our LowN and LowP treatments to be distinct and antagonistic. Furthermore, under combined LowNP, the LowN pattern dominated and the classic PSR was absent, even though plant growth was partially P limited (as determined by phenotypic comparison to plants grown under the single LowN stress). Although there were differences at the level of individual genes, our LowN and LowP single stress results are in broad agreement with a previous report in which a similar antagonism was observed, and many classic PSR genes were seen to be down-regulated under LowN [15]. The potential adaptive value of such antagonism is not clear.
N is typically found deeper in the soil than P, reflecting differences in mobility. Consequently, a root system optimized to access P in the topsoil will be less suited to N acquisition, and vice versa [64][65][66]. In addition, the optimal pattern of root branching and root length is different for acquisition of N or P [64][65][66]. We did not detect dramatic differences in RSA between LowN and LowP treatments at 40 DAE, although the growth system, the relatively young age of the plants, and the severity of stress may have limited the expression of potential root developmental responses. Nonetheless, the antagonistic regulation of genes associated with hormone signaling (e.g., genes belonging to GO terms GO:9851 auxin biosynthetic process, GO:9695 jasmonic acid biosynthetic process; GO:9735 response to cytokinin) may mirror the differing demands placed on plant architecture by N and P limitation.
Once acquired, the efficiency of internal P use can be maximized by remobilization to the part of the plant where need is greatest over the growing season [3,4]. PAPs remobilize P by releasing inorganic P from organic compounds. Induction of PAP-encoding genes and increased PAP activity is a classic component of the PSR across the tree of life, including Arabidopsis [67,68], rice [69] and maize [58]. We observed several Pap genes to be upregulated under LowP in both roots and leaves. In addition to remobilizing P within the plant, PAPs are also secreted to the rhizosphere, enhancing the availability of inorganic P for uptake [67,68]. Pap10 was one of the most strongly regulated genes in our analysis. Reflecting the global pattern, Pap10 was strongly induced by LowP, but only under N4-N5 conditions. Furthermore, Pap10 showed a constitutive level of expression in our Full nutrient condition that was reduced by lowering N availability. Genes linked to lipid remodeling - the replacement of membrane phospholipids by galactolipids or sulfolipids under P starvation [4,70,71] - followed a similar trend. Downregulation of constitutively expressed PSR-associated genes by single low N treatments has been previously reported in four commercial maize hybrids and two maize inbred lines [28,32,72]. One study that did not report such downregulation of PSR genes also found no evidence of the downregulation of N assimilation genes typically associated with N starvation, indicating that the precise nature and timing of the treatment are important [73]. A similar downregulation of PSR genes occurs in rice under prolonged N starvation [50], but not within the first 12 h of a shift to N starvation conditions [5], although a low N metabolic response can occur as early as 1 h after such a shift [74]. Our observation that the negative impact of low N availability on PSR gene expression dominates in the combined LowNP treatment implies that, under this dual stress, maize plants are failing to activate well-defined aspects of the PSR, such as P remobilization or lipid remodeling. In the future, it will be informative to assay PAP activity and lipid composition at low N and low P availability.
[Figure 5 legend: Moderate N stress is sufficient to repress the low P response. (a) Representative 25-day-old maize seedlings grown across five levels of N availability (N5 to N1, high to low) and two levels of P availability (P5, high and P1, low). (b) Shoot fresh weight of maize seedlings grown as in (a). Boxes show 1st quartile, median and 3rd quartile of 4 biological replicates. Whiskers extend to the most extreme points within 1.5x box length; outlying values beyond this range are not shown. Letters indicate groups based on Tukey's HSD (p < 0.05). (c, d) Transcript accumulation (relative abundance) determined by real-time PCR for (c) leaves and (d) roots of 25-day-old maize seedlings grown as in (a). Median of 5 biological replicates. Pap10 - Purple acid phosphatase10, GRMZM2G093101; Pht1;9 - Phosphorus transporter1;9, GRMZM2G154090; Pht1;13 - Phosphorus transporter1;13, GRMZM2G070087; Mfs2 - ZmSPX-MFS2, GRMZM2G166976; Npf6.6 - Nitrate/peptide Transporter6.6, GRMZM2G161459; Nir-a - nitrite reductase-a, GRMZM2G079381.]
Our study confirmed previous observations of P hyper-accumulation in maize leaves under N limitation [15,28], an effect also reported in rice and Arabidopsis [42,75]. Initially, we considered the hypothesis that down-regulation of PSR genes in LowN was a secondary response to an increase in total internal P concentration. However, LowNP conditions downregulated PSR genes even when low P availability prevented accumulation of total P to the concentration seen under LowN conditions. Significantly, mild N limitation (N4) was sufficient to suppress induction of PSR genes under LowP with no change in internal total P concentrations. Plants perceived N reduction from N4 and below, as demonstrated by the reduced accumulation of Nir-a transcripts, a well characterized marker of plant N status [76]. Overall, our data support an N-mediated impact on PSR via modified signaling or P partitioning, rather than as the secondary effects of total internal P hyper-accumulation.
Currently, it is difficult to reconcile PSR repression and P hyper-accumulation. It would be informative to examine earlier stages of plant growth for evidence of a transient induction of PHT1 transporter-encoding genes under LowN, although no such signal has been previously reported in comparable experiments in maize or other plants, nor in experiments using a transfer from replete to N starvation conditions [74]. PHT1 transporters are subject to regulation at the post-translational level [45,77,78], and measurement of protein levels and localization would provide a fuller picture, as would quantification of root P permeability and P uptake. In rice, it has been reported that the roots of plants grown under N starvation show increased permeability to inorganic P [46]. The balance between P concentration in the leaves and P uptake by the roots is maintained by systemic signaling through the mobile microRNA miR399 [79,80]. As P becomes limiting in the shoots, miR399 is produced and travels to the roots to target transcripts encoding the PHOSPHATE2 (PHO2) E2 ubiquitin conjugase, in turn promoting accumulation of PHT1 transporters [81][82][83]. Previous reports have shown that miR399 expression in maize can increase under N starvation, although the effect depends on both the nature of the N treatment and the length of exposure [84,85].
[Figure 6 legend: P accumulation responds to P and N availability. (a) Root and (b) leaf P concentration (ppm dry mass) of 25-day-old maize seedlings grown across five levels of N availability (N5 to N1, high to low) and high (green points and trace) and low (yellow points and trace) levels of P availability. Large points show treatment medians; small points show individual (4) biological replicates. Dashed lines show best fit from a multiple regression model. Asterisks represent statistical significance of model terms (p value ≤ 0.001 ***; 0.001-0.01 **; 0.01-0.05 *). N, P: main effects of N and P, respectively. NP: NxP interaction term. Lowercase letters indicate significant (p < 0.05) pairwise differences (Tukey).]
Study of NP crosstalk in Arabidopsis and rice has highlighted the importance of the SPX protein family. Although first described as regulators of P homeostasis [86], SPX and SPX-RING proteins have subsequently been linked with N signaling [42,44,50]. We identified 15 SPX-domain family genes in maize, the same as in rice, grouped into the four previously reported classes (SPX, SPX-EXS, SPX-MFS and SPX-RING [87]). N and P availability regulated transcript levels across the SPX family, consistent with a role in the integration of N and P signaling pathways (Fig. S6). Transcripts encoding members of the single SPX domain class responded positively to LowP in both roots and leaves, as has been seen previously in Arabidopsis and rice [88,89]. In rice, over-expression of OsSPX1 and OsSPX6 suppresses the PSR, suggesting that they may act in a negative-feedback loop. Conversely, under-expression of OsSPX1 and OsSPX6 leads to increased P accumulation through upregulation of genes involved in P uptake [89,90]. The rice SPX4 protein exerts a further negative control on the PSR by sequestering the MYB transcription factor PHR2 in the cytosol, preventing its translocation into the nucleus and activation of target genes [49]. Under P starvation, SPX4 is degraded, freeing PHR2 to activate the PSR. It has recently been reported that SPX4 turnover in rice requires the activity of the NRT1.1b [50]. Given that the abundance of NRT1.1b itself is N responsive, the NRT1-SPX4 module represents a point of integration between N and P signaling pathways.
Hyperaccumulation of P under N limitation indicates an uncoupling of P uptake from leaf P concentration [81][82][83]. Similar uncoupling occurs in Arabidopsis mutants under-expressing the SPX-EXS gene PHO1, in parallel with changes in subcellular partitioning of P between vacuolar stores and the cytosol [91]. The maize genome encodes two co-orthologs of the Arabidopsis PHO1 - maize Pho1;2a and Pho1;2b [92]. We found that both Pho1;2a and Pho1;2b show evidence of downregulation under LowN, potentially contributing to changes in P partitioning. While our observations suggest that changes in total internal P concentration cannot explain the observed effect of N limitation on the PSR, we do not have data on the level of P in the cytosol itself. A second group of SPX proteins, the SPX-MFS proteins, plays a more direct role in regulating cytosolic P concentration by mediating P influx into the vacuole [93,94]. Under P starvation, OsSPX-MFS1 and OsSPX-MFS3 are downregulated, consistent with retaining more of the total internal P pool in the cytosol for direct use [84]. In contrast, OsSPX-MFS2 is upregulated under P starvation, and may be acting differently [95,96]. The MFS2 protein was not identified in a screen for vacuolar P efflux transporters [94], suggesting that it is not simply working antagonistically to MFS1 and MFS3. In maize, we found both Mfs1 and Mfs3 to be encoded by two genes, with both paralogs of each pair downregulated under LowP in the leaves, indicating a similar function to the rice genes. Mfs2 was found to be a single-copy gene in maize and, as in rice, to be upregulated under LowP. It will be informative to functionally characterize the link between the maize SPX-domain proteins and N-P signaling.
Conclusions
A reduction in N availability suppresses the PSR in young maize plants. Somewhat paradoxically, low N availability also results in an increase in internal P concentration, although not to levels that might explain the repression of low P responsive genes. In cultivated fields, P limitation may coincide with low N availability. As such, maize may grow without the classical low P response of model systems, making us rethink our current understanding of acclimation to P starvation. Further work is needed to evaluate the nature of the transcriptional PSR in maize under cultivation. We might also consider the merits of biotechnological manipulation to enhance low P responses under low N conditions.
Plant material and growth conditions
Plants in this study were maize (Zea mays ssp. mays var. W22) wild-type segregants from a larger population segregating for the Zmpho1;2-m1.1′ mutation, generated from the stock bti31094::Ac [92]. The original bti31094::Ac stock is available from the Maize Genetics Cooperation Stock Center. Genotypic analysis of the segregating population was as described previously [92]. Samples from individuals carrying the Zmpho1;2-m1.1′ mutation were retained for future analysis. Plants were grown in washed sand and fertilized with modified Hoagland solution [98]. Hoagland N concentration was adjusted by substitution of KNO3 with KCl and CaCl2 [99,100]. Hoagland solution was applied at 1/3 strength with the final N and P concentrations used in different experiments as stated below.
For growth to 40 days after emergence (DAE), 35 plants were evaluated in PVC tubes (15 cm diameter; 1 m tall), planted in 4 groups at intervals of 1 week. Tubes were filled with ~17 l of washed sand. In the upper third of the tube, the sand was mixed with 1.5% solid-phase P buffer (alumina-P) [98] loaded with 209 μM KH2PO4 for Full treatments and 11 μM KH2PO4 for LowP treatments. Four imbibed seeds were planted at 4 cm depth per tube and thinned to a single plant a week after emergence. Plants were irrigated with distilled water until 10 DAE, after which Hoagland treatments were applied as a 1/3 strength solution, at a rate of 200 ml every third day, with final concentrations: Full, 1750 μM NO3−; LowN, 157.5 μM NO3−; Full, 333 μM KH2PO4; LowP, 10 μM KH2PO4. During the growth period, plants were evaluated by non-destructive measurement of stem width, stem height, leaf number, and length and width of each fully expanded leaf. Stem height was measured from the soil to the last developed leaf collar. Measurements were collected every fifth day from 10 DAE. At 40 DAE, plants were removed from the tubes, minimizing damage to the root system, washed in distilled water and dried with paper towels before measuring root and shoot fresh weight. The cleaned root system was placed in a water-filled tub and photographed using a digital Nikon D3000 camera. Raw images were individually processed using Adobe Photoshop CC (Version 14.0) to remove the background and obtain a good contrast between root and non-root pixels. Processed images were scaled and analyzed using GiA Roots software [55]. Roots and shoots were placed in an oven at 42°C for a week before measuring dry weight and collecting samples for ionomic analysis (see below). The complete set of measurements collected is described in MZ66_Raw_Data in Supplemental File 1.
For growth up to 25 DAE, plants were grown in smaller PVC tubes (15 cm diameter, 50 cm tall). For the RNA-seq analysis, the top 30 cm of the 50 cm tube included 1.5% solid-phase P buffer (alumina-P [98]). The whole plant was harvested, separating the stem and leaves, a segment 2 cm above and below the crown roots, and the remaining root system. Tissue was immediately frozen in liquid nitrogen and stored at −80°C. Samples were homogenized with a cooled pestle and mortar and aliquoted under liquid nitrogen for transcriptome analysis. For the N-dose experiment, plants grown in 50 cm tubes were irrigated with combinations of P at 10 or 333 μM (P1, P5; solid-phase P buffer was not used in this experiment), and N at 157.5, 233, 350, 875 or 1750 μM (N1 to N5). Leaf and root tissue were collected at 25 DAE for gene expression and ionomic analysis.
Determination of elemental concentration by ICP-MS analysis
Ion concentration was determined as described previously [101]. Briefly, root and shoot samples were analyzed by inductively coupled plasma mass spectrometry (ICP-MS) to determine the concentration of twenty elements. Weighed tissue samples were digested in 2.5 mL concentrated nitric acid (AR Select Grade, VWR) with an added internal standard (20 ppb In, BDH Aristar Plus). Concentrations of the elements Al, As, B, Ca, Cd, Co, Cu, Fe, K, Mg, Mn, Mo, Na, Ni, P, Rb, S, Se, Sr and Zn were measured using an Elan 6000 DRC-e mass spectrometer (Perkin-Elmer SCIEX) connected to a PFA microflow nebulizer (Elemental Scientific) and an Apex HF desolvator (Elemental Scientific). A control solution was run every tenth sample to correct for machine drift both during a single run and between runs.
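The paper does not give its correction code, but the two normalizations implied here - dividing by the In internal standard and removing run drift estimated from the periodic control solution - might look like the following sketch (the data frame d and its columns are hypothetical):

```r
# Sketch of the two normalizations implied above. 'd' is a hypothetical
# per-measurement data frame with columns:
#   raw        - raw instrument signal for one analyte
#   indium     - signal of the In internal standard in the same sample
#   run_index  - position of the sample within the run
#   is_control - TRUE for the control solution run every tenth sample

# 1. Internal-standard normalization corrects sample-to-sample variation.
d$norm <- d$raw / d$indium

# 2. Drift correction: interpolate the control signal along the run and
#    divide out its relative change.
ctrl  <- d[d$is_control, ]
drift <- approx(ctrl$run_index, ctrl$norm, xout = d$run_index, rule = 2)$y
d$corrected <- d$norm / (drift / mean(ctrl$norm))
```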
Statistical analysis of plant growth and ionomic data
For plants grown to 40 DAE, traits were obtained from 34 individuals (one individual was removed as a clear outlier with poor growth). Individuals were distributed across nutrient treatments as: Full, n = 7; LowN, n = 5; LowP, n = 9; LowNP, n = 13, across 4 planting dates. Traits included direct measurements and derived values (e.g., total leaf surface area or biomass totals). Non-destructive measurements were repeated at 5-day intervals during the experiment. Destructive measurements were made for all 34 individuals at harvest. The dataset includes element concentrations determined by ICP-MS and root architectural traits extracted by image analysis, as described above. The dataset and analysis are presented in Supplemental File 1.
All statistical analysis was performed in R [102]. Full, LowN, LowP and LowNP were treated as four levels of a single treatment factor. For growth and endpoint data and GiA Roots features, we used R/stats::kruskal.test to assess the treatment effect on each trait with a nonparametric Kruskal-Wallis test. Element concentration was analyzed using ANOVA. In all cases, p-values were adjusted for multiple testing using the Bonferroni method with R/stats::p.adjust, applied separately to the growth, endpoint, GiA Roots and element data sets. Where the treatment effect was significant (adjusted p < 0.05), we applied a pairwise post hoc test to identify differences between treatments: Dunn's test (R/dunn.test::dunn.test [103]) for growth, endpoint and GiA Roots features, and Tukey's HSD for element data (R/agricolae::HSD.test [104]). For Dunn's test results, letters were assigned to means groups using R/multcompView::multcompLetters [105]. For visualization, we used R/stats::lm to fit the model trait value ~ 0 + treatment + planting date + error, extracting model coefficients and standard errors for plotting.
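A minimal sketch of this per-trait workflow (the data frame df and the trait names are hypothetical placeholders):

```r
# Minimal sketch of the per-trait testing loop described above.
# 'df' is a hypothetical data frame with one row per plant:
#   treatment     - factor with levels Full, LowN, LowP, LowNP
#   planting_date - factor for the four planting groups
#   shoot_fw, root_fw, leaf_area - example trait columns
traits <- c("shoot_fw", "root_fw", "leaf_area")

p_raw <- sapply(traits, function(tr) kruskal.test(df[[tr]], df$treatment)$p.value)
p_adj <- p.adjust(p_raw, method = "bonferroni")

# Visualization model, as described: per-treatment coefficients with
# planting date as a covariate and no intercept.
fit <- lm(shoot_fw ~ 0 + treatment + planting_date, data = df)
coef(summary(fit))[, c("Estimate", "Std. Error")]
```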
RNA-sequencing analysis of differential gene expression
RNA-sequencing analysis was carried out on roots and leaves for the 4 nutrient treatments (Full, LowN, LowP and LowNP) and two replicates, for a total of 2 tissues × 4 treatments × 2 replicates = 16 samples. Libraries were prepared by the Laboratorio de Servicios Genomicos, LANGEBIO, Mexico (www.langebio.cinvestav.mx/labsergen/), using the TruSeq RNA Sample Prep Kit v2 (https://support.illumina.com/sequencing/sequencing_kits/truseq_rna_sample_prep_kit_v2.html), and sequenced on the Illumina HiSeq4000 platform at the Vincent J. Coates Genomics Sequencing Laboratory at UC Berkeley, supported by NIH S10 OD018174 Instrumentation Grant, and at Labsergen on Illumina NextSeq 550 equipment. Transcriptome data are available in the NCBI Sequence Read Archive under study SRP287300 at https://trace.ncbi.nlm.nih.gov/Traces/sra/?study=SRP287300. RNA sequencing reads were aligned against the AGPv3.30 maize gene model set available at Ensembl Plants [106] using kallisto version 0.43.1 [107]. Transcript-level abundance data were pre-processed using R/tximport [108] and summarized at the gene level before further analysis. Count data were analyzed using a linear model approach in edgeR [109,110]. We fitted the complete model counts ~ intercept + tissue * N-level * P-level + error across the 16 samples. We selected genes-of-interest based on evidence of a non-zero coefficient for at least one model term containing N-level or P-level (the coef argument to R/edgeR::glmQLFTest included all model coefficients except for the intercept and tissue main effect; adjusted FDR < 0.01; absolute log fold change (LFC) > 1; log counts per million (CPM) > 1). An additional subset of 81 NxP interaction genes was selected based on the coefficients N-level x P-level and tissue x N-level x P-level (adjusted FDR < 0.05; |LFC| > 1; logCPM > 1). Genes-of-interest were further categorized based on pairwise LFC for each stress treatment with respect to the Full nutrient control for either roots or leaves. LFC for each tissue was extracted from the model counts ~ treatment + error, with thresholds of +1 and −1 used for up- and down-regulation, respectively. Gene functional annotations were assigned from the blastp reciprocal best hits versus Araport11 [https://doi.org/10.1111/tpj.13415] and UniProt proteins, and the description from the PANNZER2 [https://doi.org/10.1093/nar/gky350] functional annotation webserver. Upset diagrams were generated using R/UpSetR and R/ComplexHeatmap [111,112]. GO analysis was performed with BiNGO 3.0.3 [113] in the Cytoscape 3.7.2 environment [114] using a hypergeometric test, Benjamini & Hochberg FDR correction and a significance level of 0.05. The gene ontology file (go.obo) was retrieved from the Gene Ontology web page (http://geneontology.org/docs/download-ontology/). For each GO category, the mean LFC of the associated genes-of-interest was calculated with respect to each tissue/treatment combination using the pairwise values described above.
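A condensed sketch of the count-processing steps just described, using the cited packages (the file paths, tx2gene table and samples metadata are hypothetical placeholders; the thresholds follow the text):

```r
library(tximport)
library(edgeR)

# Gene-level import of kallisto abundances.
txi <- tximport(files, type = "kallisto", tx2gene = tx2gene)

y <- DGEList(counts = txi$counts)
y <- calcNormFactors(y)

# Full factorial model over the 16 samples: tissue x N-level x P-level.
design <- model.matrix(~ tissue * n_level * p_level, data = samples)
y   <- estimateDisp(y, design)
fit <- glmQLFit(y, design)

# Test all coefficients except the intercept (1) and tissue main effect (2).
qlf <- glmQLFTest(fit, coef = 3:ncol(design))
tab <- topTags(qlf, n = Inf)$table

# Apply the thresholds given in the text: FDR < 0.01, logCPM > 1 and
# |LFC| > 1 for at least one tested term.
lfc_cols <- grep("^logFC", colnames(tab))
sig <- tab[tab$FDR < 0.01 & tab$logCPM > 1 &
             apply(abs(tab[, lfc_cols, drop = FALSE]) > 1, 1, any), ]
nrow(sig)
```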
Real-time PCR
For real-time PCR transcript quantification, leaves and roots of five biological replicates per treatment were analyzed. Total RNA was extracted using Trizol, and cDNAs were synthesized using SuperScript® II Reverse Transcriptase from Invitrogen (Cat No. 18064071). RT-PCR was performed in 96-well plates on a LightCycler® 480 Instrument (Roche). PCR reactions were performed using the KAPA SYBR FAST qPCR Master Mix kit (Kapa Biosystems), with the following cycling conditions: 95°C for 7 min, followed by 40 cycles of 95°C for 15 s, 60°C for 20 s and 72°C for 20 s. The final reaction volume was 10 μl, including 1 μl of each 5 μM primer, 1 μl of (40 ng/μl) template cDNA, 5 μl of SYBR Master Mix and 2 μl of distilled water. The relative quantification of gene expression was determined as 2^ΔCt, where ΔCt = (average Ct of reference genes − Ct of gene of interest) [115]. Values reported are the mean of five biological replicates ± SE of one representative experiment. Previously described reference genes [116] were used as controls: Cyclin-Dependent Kinase (Cdk; GRMZM2G149286) and a gene encoding an uncharacterized protein (Unknown; GRMZM2G047204). PCR primers were designed using Primer3Plus software [117] and are listed in MZ95_RT_Primers in Supplemental File 3.
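The relative-abundance formula can be written directly (a small sketch; the Ct values below are invented for illustration):

```r
# Relative abundance by the 2^dCt method described above. Inputs are
# per-sample Ct values; all numbers below are invented for illustration.
relative_abundance <- function(ct_goi, ct_ref1, ct_ref2) {
  dct <- rowMeans(cbind(ct_ref1, ct_ref2)) - ct_goi
  2^dct
}

relative_abundance(ct_goi  = c(24.1, 23.8),
                   ct_ref1 = c(20.2, 20.4),   # e.g. Cdk
                   ct_ref2 = c(21.0, 21.1))   # e.g. Unknown
```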
Phylogenetic analysis of the SPX-domain protein family
Maize putative SPX-domain protein encoding genes were identified using a methodology previously described for the maize Pap gene family [58]. Briefly, Arabidopsis and rice proteins [64] were retrieved and aligned using MUSCLE v3.8 [118]. The alignment was then converted to Stockholm format. B73 maize primary transcript predicted protein sequences v3.31 [119], obtained from Ensembl Plants [106], were searched using the HMMER suite version 3.1b2 [120]. After manual checking and filtering out proteins lacking the canonical SPX domain [121], 15 putative SPX-protein sequences were identified. Where noted, gene models annotated in the v4 genome assembly were preferred. For phylogenetic analysis, Arabidopsis, rice and maize SPX proteins were aligned using MUSCLE [118] and passed to MEGA version X [122,123]. We manually selected the SPX subdomains defined by [87] and corrected mismatches in the alignment (Fig. S3). A phylogenetic tree with 1000 bootstrap replicates was constructed using the Maximum Likelihood method and the Le and Gascuel 2008 (LG) model [124]. | 2020-11-03T14:18:02.824Z | 2020-10-29T00:00:00.000 | {
"year": 2021,
"sha1": "bf8af9746c8b15670795397d25109ba7132f85c9",
"oa_license": "CCBY",
"oa_url": "https://bmcplantbiol.biomedcentral.com/track/pdf/10.1186/s12870-021-02997-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "66965a6670eb5aa780b9a0237885c4d853e54522",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
261139758 | pes2o/s2orc | v3-fos-license | Clinical Characteristics and Novel ZEB2 Gene Mutation Analysis of Three Chinese Patients with Mowat-Wilson Syndrome
Purpose Mowat-Wilson syndrome (MWS) is an autosomal dominant disease caused by a pathogenic variant of the ZEB2 gene. The main clinical manifestations include special facial features, Hirschsprung disease (HSCR), global developmental delay and other congenital malformations. Here, we summarize the clinical characteristics and genetic mutation analysis of three Chinese patients with MWS. Patients and Methods The clinical characteristics of the patients were monitored and the treatment effect was followed up. DNA was extracted from peripheral blood and analyzed by sequencing. Whole exome sequencing was then performed. Results Three novel ZEB2 gene mutations were identified in 3 patients (c.1147_1150dupGAAC, p.Q384Rfs*7; c.1137_1146delTAGTATGTCT, p.S380Nfs*13; and c.2718delT, p.A907Lfs*23). They all had special facial features, intellectual disability, developmental delay, microcephaly, structural brain abnormalities and other symptoms. After long-term regular rehabilitation treatment, the development quotient of each functional area of the patient was slightly improved. Conclusion Our study expanded the mutation spectrum of ZEB2 and enriched our understanding of the clinical features of MWS. It also shows that long-term standardized treatment is of great significance for the prognosis of patients.
Introduction
Mowat-Wilson syndrome (MWS) is an autosomal dominant disease caused by a pathogenic variant of the ZEB2 gene. The main clinical manifestations include special facial features, Hirschsprung disease (HSCR), moderate to severe intellectual disability, global developmental delay, epilepsy and other congenital malformations. The facial gestalt includes a prominent forehead, increased interocular distance, broad eyebrows with a medial, horn-like flare, a prominent rounded nasal tip, an open mouth with an M-shaped upper lip, and a small jaw. In addition, other congenital malformations may include congenital heart disease, agenesis of the corpus callosum, genitourinary abnormalities (especially hypospadias in males), and ocular defects. [1][2][3] The syndrome is caused by de novo mutations in one allele of the zinc-finger E-box-binding homeobox 2 gene (ZEB2), also known as ZFHX1B (zinc finger homeobox 1B) or SIP1 (Smad interacting protein 1). 4,5 The ZEB2 protein contains a number of functional domains, including a nucleosome remodeling and deacetylase-interaction motif, one zinc-finger (ZF) cluster in the amino-terminal region (N-ZF), a SMAD-binding domain, a homeodomain, a C-terminal binding protein-interacting domain, and one ZF cluster in the carboxyl-terminal region (C-ZF). ZEB2 is a member of the two-handed zinc-finger/homeodomain transcription factor family; the gene consists of nine coding exons (exons 2-10) and one non-coding exon (exon 1). 6,7 ZEB2, like its family members, interacts with the Smad1 protein and functions as a transcriptional repressor of the TGF-β signaling pathway. 8 ZEB2 plays important roles in development, such as neural crest formation, gastrula formation, cardiac morphogenesis, musculoskeletal system formation, and craniofacial structure establishment. 6,7 MWS was first described in 1998 by Mowat et al, and the genetic locus was identified on chromosome 2q22-q23. 9 The incidence of the disease has been reported to be about 1:50,000 to 1:70,000. 10 A review of the literature shows that about 300 MWS patients have been reported, and about 280 ZEB2 variants have been found (Human Gene Mutation Database). [11][12][13][14] Cases have mainly been reported from Europe, Australia and the United States, with Japan also reporting a large number. 14 To date, 35 cases of MWS and 25 pathogenic ZEB2 variants have been reported in China. 11,[15][16][17][18] Here, we report three novel ZEB2 variants in Chinese MWS patients and summarize their clinical manifestations, genetic mutations, and follow-up.
Patients and Methods
All three patients were from The Second Children & Women's Healthcare of Jinan City. All tests were performed as routine clinical investigations in accordance with the ethical principles of the Declaration of Helsinki. Written informed consent was obtained from the patient's guardian. This study was approved by the Medical Ethics Committee of The Second Children & Women's Healthcare of Jinan City.
Genomic DNA was extracted from peripheral blood leukocytes using a commercial kit (Qiagen). Whole exome sequencing was then performed. The genetic analysis was approved by the Medical Ethics Committee of The Second Children & Women's Healthcare of Jinan City.
Patient One
The patient was the first child of a healthy unrelated couple. She was born by vaginal delivery at 40 weeks of gestation. Her birth weight was 3.74 kg. She had watery eyes with copious secretions after birth. At 7 months of age, she was brought to the hospital because she could not turn over or sit. At this time, she had distinctive facial features, including wide eyebrows, a slightly widened distance between the eyes, and a prominent rounded nasal tip.
After examination, echocardiography showed that the intracardiac structure was generally normal, and the liver, gallbladder, pancreas, spleen and kidneys were not significantly abnormal. The results of tandem mass spectrometry and gas chromatography-mass spectrometry were normal. Gastrointestinal ultrasound showed no obvious abnormalities, and cranial MRI showed suspected white matter retardation. SNP array did not detect clinically significant chromosomal copy number abnormalities. A novel heterozygous variant (NM_014794.3: c.1147_1150dupGAAC, p.Q384Rfs*7) was found in this patient by whole-exome sequencing; it was not detected in the peripheral blood of the patient's parents after Sanger verification, so it may be a de novo variant, as shown in Supplementary Figure 1. This variant has not been previously reported and is classified as pathogenic based on the available evidence.
Clinical follow-up showed that when the patient was diagnosed with MWS (8 months old), she underwent neuropsychological examination. The results showed that her developmental quotient was 55, indicating mild intellectual disability. The developmental quotient of each functional area is shown in Table 1. At this time, the patient's height was 72 cm, her weight was 9.6 kg, and her head circumference was 42.5 cm. At 11 months, the patient showed more severe growth retardation and intellectual disability, and all developmental milestones were delayed. At this time, she could not sit alone well. Neuropsychological examination showed that her developmental quotient was 44.1, indicating moderate intellectual disability. The developmental quotient of each functional area is shown in Table 1. At this time, in addition to her previous special features, she had also developed uplifted earlobes, as shown in Figure 1. We recommended long-term regular rehabilitation training, but the patient did not comply. At the age of 1 year, the patient showed nodding movements. We performed video electroencephalography (EEG) monitoring, and the results showed an abnormal infantile EEG. During follow-up, we monitored the child for epileptic symptoms; fortunately, no seizures occurred, but the child did experience a febrile convulsion. At the age of 1 year and 2 months, the patient's intellectual disability had worsened, with a developmental quotient of 38.4, indicating severe intellectual disability. The developmental quotient of each functional area is shown in Table 1.
She was able to stand alone at 2 years of age and walk alone at 2 years and 2 months of age, but she is still nonverbal and has a poor prognosis. At the age of 2 years and 2 months, her weight was 12.2kg, in the 50th percentile of the average weight of infants of the same age.
Patient Two
The patient is the second child of a healthy unrelated couple; his older brother is healthy. He was born by caesarean section at 37 weeks of gestation. His birth weight was 2.65 kg and birth length was 48 cm. At 3 months of age, he was brought to our hospital because he could not lift his head and was prone to crying. At this time, his length was 59.5 cm, weight 5.8 kg and head circumference 37.2 cm, indicating microcephaly (<1st percentile). In terms of facial features, there was no other special manifestation except an earlobe bulge with a central depression. In addition, cardiac ultrasound revealed a ventricular septal defect and patent ductus arteriosus. Cryptorchidism was detected. Brain magnetic resonance imaging (MRI) showed agenesis of the corpus callosum. Since he did not pass neonatal hearing screening, transient evoked otoacoustic emission and automatic auditory brainstem response (ABR) tests were performed; the results showed bilateral sensorineural deafness. Through whole-exome sequencing, a novel heterozygous variant was identified in this patient (NM_014795: c.1137_1146delTAGTATGTCT, p.S380Nfs*13), which was not detected in the peripheral blood of the patient's parents after Sanger verification, so it may be a de novo variant, as shown in Supplementary Figure 2. This variant has previously been reported only by our group.
Clinical follow-up showed that the patient was initially diagnosed with MWS at 6 months of age. The results of neuropsychological examination showed a developmental quotient of 60.9, indicating mild intellectual disability. The developmental quotient of each functional area is shown in Table 2. At this time, the child was just able to lift his head; he could sit alone at 12 months. At 17 months, neuropsychological examination showed a developmental quotient of 46.5, indicating moderate intellectual disability. The developmental quotient of each area is shown in Table 2. At 18 months, he stood without help, and at 22 months, he walked on a wide base with support. At 30 months, his developmental quotient was 27.6, which had progressed to severe intellectual disability (Table 2). During this period, the child still adhered to long-term regular rehabilitation training. The child could walk alone at 3 years old. The developmental quotient was 28.4 at 3 years and 3 months. At the age of 4 (facial appearance shown in Figure 2), his weight was 16 kg, in the 45th percentile of the average weight of children of the same age, and the developmental quotient was 28.6. The developmental quotient of each functional area is shown in Table 2. Although the improvement in the child's overall developmental quotient was modest, the developmental quotient of each functional area improved. Surprisingly, he never had a seizure during follow-up, but his hearing loss did not improve as he grew.
Patient Three
The patient is the first child of a healthy unrelated couple. She was born by vaginal delivery at 40 weeks of gestation. Her birth weight was 3.16 kg and head circumference 31 cm, slightly smaller than that of normal neonates. The patient was admitted to hospital for repeated vomiting 14 hours after birth; vomiting persisted after admission, and gastric lavage was ineffective. Barium meal radiography, combined with the clinical picture, strongly suggested megacolon involving the sigmoid, descending and transverse colon and part of the ascending colon, and the diagnosis of congenital megacolon was considered after pediatric surgical consultation. In addition, echocardiography showed patent ductus arteriosus, ventricular septal defect (perimembranous) and atrial septal defect (patent foramen ovale), indicating congenital heart disease. Brain magnetic resonance imaging (MRI) was consistent with corpus callosum dysplasia. During hospitalization, the child had profuse eye discharge, and ophthalmic consultation suggested right-eye retinal hemorrhage and conjunctivitis. Because of the wide eye distance, small head circumference, congenital heart disease and HSCR, the patient underwent genetic testing. A heterozygous variant of the ZEB2 gene, c.2718delT (p.A907Lfs*23), was found. The variant was not detected in the peripheral blood of the patient's parents as verified by Sanger sequencing, so it may be a de novo variant; it was not reported in our reference population gene database. During follow-up, when the child was 9 months old, neuropsychological examination showed a developmental quotient of 32, indicating severe intellectual disability: 33 for gross motor development, 33 for fine motor development, 33 for adaptive ability, 22 for language, and 39 for social behavior. The developmental quotient of each functional area is shown in Table 3. We recommended long-term standardized rehabilitation, but the patient did not comply. At present, the child is 3 years old; her weight is 13.6 kg, in the 40th percentile of the average weight of infants of the same age. She still cannot stand or walk alone and knows only simple two-word terms of address. She still displays persistent growth delay, and the prognosis is poor.
Discussion
Mowat-Wilson syndrome is a rare congenital malformation syndrome. Due to the diversity of clinical phenotypes, MWS is difficult to diagnose clinically. Currently, more than 300 MWS patients have been reported, [11][12][13] and more than 280 ZEB2 variants have been found. However, only 35 cases of MWS and 25 pathogenic ZEB2 variants have been reported in China. 11,[15][16][17][18] In this study, we found three new ZEB2 variants in three Chinese patients with MWS, all of whom showed special facial features, intellectual disability, developmental delay, microcephaly, structural brain abnormalities and other abnormalities consistent with MWS.
Hirschsprung disease is one of the most characteristic manifestations of MWS, with a reported incidence of 44%. 1 However, the rate of HSCR is decreasing as more patients with MWS are diagnosed. 19 Among the three patients in this study, only one (patient 3) was observed to have HSCR and was surgically treated; the other two patients had no history of constipation. Previous studies have shown that the incidence of Hirschsprung's disease in Chinese MWS patients is low, 11 and this study is consistent with previous reports.
Brain structural abnormalities are also common in MWS and can be detected by cranial MRI. Garavelli et al studied the neuroimaging findings of 54 MWS patients and showed that 96% of the patients had brain MRI abnormalities, of which corpus callosum dysplasia was the most common (79.6%). Other common abnormalities included hippocampal abnormalities (77.8%), lateral ventricle enlargement (68.5%) and white matter abnormalities (reduced thickness in 40.7%; focal signal changes in 22.2%). 6,7 In our study, all three patients showed abnormal brain MRI, including two with agenesis of the corpus callosum (patients 2 and 3) and one with white matter retardation. This is consistent with previous reports. Because agenesis of the corpus callosum is the only feature of MWS that can be detected antenatally, special attention to facial features during ultrasound or fetal pathology may be helpful in diagnosing MWS in fetuses with agenesis of the corpus callosum. 2
According to statistics, all children with MWS present with intellectual disability and overall developmental delay of varying severity. 21 At least moderate to severe intellectual disability. Developmental milestones such as sitting and walking are very delayed, with literature reporting that the average age of sitting without support is 20 months and the average age of walking is 4 years and 3 months. 2 Our three patients were about 1 year old sitting alone. And all fine motor skill milestones are delayed in MWS patients. Most patients over the age of 20 still need help with dressing and other daily activities. Speech
781
is rarely more than a few words and starts at the average age of four. 21 Just like our patient, all three patients were basically speechless, or only known by simple names such as mom and dad. In addition, few previous studies have reported formal intelligence tests and have mostly been limited to imaging changes and rough clinical findings. This study is the first to describe mental development and rehabilitation treatment, which fills the gap of follow-up treatment effect research. At the same time, by comparing the intellectual development level of patient 2 with patients 1 and 3, it can be seen that the prognosis of patients with standardized rehabilitation treatment is better. Although patient 2 was also severely mentally retarded, the development quotient of each functional area of the child was improved through regular rehabilitation treatment. This shows the importance of regular follow-up treatment in the prognosis of children, and is more conducive to promote the healthy growth of children. It also illustrates the difficulty of MWS treatment. In addition, another common feature of MWS is congenital heart disease, 1 Two patients in this study had ventricular septal defects and patent ductus arteriosus. Eye abnormalities can also be detected in WMS. 1,2 In our study, one patient developed retinal hemorrhage. About 60% of men with genital abnormalities have hypospadias, and about 40% of men have cryptorchidism. 1,22 Only one of our 3 patients was male, and he had cryptorchidism. In addition, our team has reported that MWS patients have symptoms of hearing loss.
In our study, we report three different novel ZEB2 mutations that have not been previously mentioned in the literature. It greatly enriches the mutation spectrum and helps to understand the relationship between clinical phenotypes and genotypes.
Conclusion
Our study expanded the mutation spectrum of ZEB2 and enriched our understanding of the clinical features of MWS. It also shows that long-term standardized treatment is of great significance for the prognosis of patients.
Data Sharing Statement
All data generated or analyzed in this study are included in this published article.
Ethical Approval
All three patients were from The Second Children & Women's Healthcare of Jinan City. All tests were performed as routine clinical investigations in accordance with the ethical principles of the Declaration of Helsinki. All patients' guardians provided written informed consent for the publication of images as well as personal and medical information for data analysis and publication. This study was approved by the Medical Ethics Committee of The Second Children & Women's Healthcare of Jinan City, which agreed to its publication. | 2023-08-26T15:13:43.939Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "aef5ac9277d25aa6457f005c7ce59eaf01b7ba53",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=92188",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a2530b41889f49d20ac77be0c54de2a8febd6f0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
36197019 | pes2o/s2orc | v3-fos-license | Histone methyltransferase SETDB1 regulates liver cancer cell growth through methylation of p53
SETDB1 is a histone H3K9 methyltransferase that has a critical role in early development. It is located within a melanoma susceptibility locus and facilitates melanoma formation. However, the mechanism by which SETDB1 regulates tumorigenesis remains unknown. Here we report the molecular interplay between SETDB1 and the well-known hotspot gain-of-function (GOF) TP53 R249S mutation. We show that in hepatocellular carcinoma (HCC) SETDB1 is overexpressed with moderate copy number gain, and GOF TP53 mutations including R249S associate with this overexpression. Inactivation of SETDB1 in HCC cell lines bearing the R249S mutation suppresses cell growth. The TP53 mutation status renders cancer cells dependent on SETDB1. Moreover, SETDB1 forms a complex with p53 and catalyses p53K370 di-methylation. SETDB1 attenuation reduces the p53K370me2 level, which subsequently leads to increased recognition and degradation of p53 by MDM2. Together, we provide both genetic and biochemical evidence for a mechanism by which SETDB1 regulates cancer cell growth via methylation of p53.
Hepatocellular carcinoma (HCC), the fifth most common cancer worldwide, is one of the most prevalent malignancies in Asian populations. Similar to other cancers, HCC is a heterogeneous disease driven by progressive genetic aberrations including silencing of tumour suppressor genes, oncogene activation and chromosomal anomalies. Epigenetic mechanisms often cooperate with genetic ones in the alteration of chromatin status that leads to the development of malignancy. TP53, which encodes the human tumour suppressor p53, is among the most frequently mutated genes in HCC. p53 functions as a transcription factor that regulates a large number of genes in response to a variety of cellular stress stimuli. Reportedly, TP53 is mutated in ~50% of human tumours including HCC [1][2][3][4][5]. The majority of TP53 monoallelic missense or nonsense mutations in the DNA-binding domain abrogate p53 DNA-binding specificity and lead to a loss of its tumour-suppressive nature. The mutated alleles can also act in a dominant-negative fashion to suppress the functions of the wild-type allele. Moreover, certain mutations may acquire oncogenic properties that are independent of the wild-type p53, which is known as the gain-of-function (GOF) TP53 mutation 1,6. GOF TP53 mutations contribute to genomic instability, inactivation of P63 and P73, aberrant gene transcription, anti-apoptosis activity and enhanced tumour cell invasion and migration.
Normally, the wild-type p53 protein is kept at a very low level in cells. In response to cellular stress, such as DNA damage or hypoxia, it is rapidly stabilized and activated. However, the GOF p53 protein is often constitutively stable in tumour cells, and the accumulation of the mutant p53 is thought to be required for its oncogenic activities. The p53 activity is regulated through various post-translational modifications, including phosphorylation, acetylation, ubiquitination and methylation [7][8][9]. Recently, several histone methyltransferases (HMTs) and demethylases (HDMs) such as KMT7, KMT3c, KMT5A, EHMT2 and KDM1 have been found to modulate the methylation status of p53 at distinct sites [10][11][12][13][14][15]. However, the methylase(s) responsible for the previously observed K370 di-methylation remained unidentified.
SETDB1 is an H3K9 methyltransferase that methylates histone H3 on lysine 9, up to tri-methylation (H3K9me3) 16. It is recruited to the chromatin by the methyl-CpG-binding protein MBD1 (ref. 17) and silences genes including tumour suppressor genes, such as RASSF1A and P53BP2. In previous studies we have shown that Setdb1 is critical for embryonic development 18. It is also involved in the counteraction between the Notch and Wnt/β-catenin pathways in colon cancer 19,20. SETDB1 is amplified in many tumour types, such as lung cancer 21, sits within a melanoma susceptibility locus 22 and facilitates melanoma formation in zebrafish 23.
In the present study, we report that SETDB1 is overexpressed in HCC and that SETDB1 overexpression associates with p53 mutations. Moreover, GOF but not wild-type p53 status renders cells dependent on SETDB1. SETDB1 executes its role on cancer cell growth through di-methylating p53 at K370.
Results
SETDB1 is overexpressed in liver cancer. It is reported that SETDB1 is amplified in melanoma as well as in other cancer types including liver cancer 23. Using the GISTIC (Genomic Identification of Significant Targets in Cancer) algorithm 24, we analysed DNA copy number alteration in human cancer cell lines and primary tumour tissues. We confirmed that SETDB1 was amplified in various tumour types including liver cancer, as well as in many cancer cell lines, similar to previous reports 23. For example, in one study of 103 HCCs with hepatitis C virus infection (GSE9845), SETDB1 was adjacent (0.08 Mb away) to the second most significant GISTIC amplification peak (q value = 3e−25).
We next asked whether SETDB1 is overexpressed in liver cancer. We surveyed publicly available gene expression data and found that SETDB1 is highly expressed in various tumour types including breast, renal cell cancer (RCC) and liver cancers. In a hepatitis C virus (HCV)-induced HCC study (GSE6764), gene expression was measured at various stages of tumorigenesis. We found that the expression of SETDB1 correlated well with the grade of tumorigenesis, with later-stage cancer expressing higher levels of SETDB1 (Fig. 1a). The average expression level of SETDB1 in late-stage liver cancer is significantly higher than that of the normal control (t-test, P < 0.001). To corroborate these results, we assayed the expression of SETDB1 in six independent pairs of liver cancer samples with adjacent normal tissue as the control using reverse transcriptase quantitative PCR (RT-qPCR). In four out of the six pairs, SETDB1 expression was higher in tumour than in control (Fig. 1b).
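As a small illustration of the two comparisons in this paragraph (all expression values below are invented placeholders, not the study's data):

```r
# 1. Unpaired t-test: SETDB1 expression, late-stage tumour vs normal
#    (values are invented placeholders).
late   <- c(8.1, 7.9, 8.4, 8.8, 8.2)
normal <- c(6.9, 7.1, 6.8, 7.3, 7.0)
t.test(late, normal)

# 2. Paired comparison for the six tumour/adjacent-normal RT-qPCR pairs,
#    with tumour expression relative to its matched control.
tumour   <- c(2.1, 1.8, 2.5, 1.2, 0.9, 1.6)
adjacent <- c(1.0, 1.0, 1.0, 1.0, 1.0, 1.0)
t.test(tumour, adjacent, paired = TRUE)
```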
To assess the protein levels of SETDB1 in liver cancer samples, we performed immunocytochemistry on a tumour tissue microarray (TMA) with an antibody against SETDB1. The results are shown in Fig. 1c,d. We found that SETDB1 was highly expressed in tumour tissues relative to the adjacent normal controls: sixty per cent (35 out of 59) of tumours were positive for SETDB1, while only seventeen per cent (6 out of 35) of the adjacent normal controls were positive. From these experiments, we conclude that SETDB1 is overexpressed in a subset of liver cancer patients.
SETDB1 copy number gain associates with TP53 mutations in HCC. To explore genetic aberrations associated with SETDB1 amplification and/or overexpression, we profiled 84 Asian HCC primary tumour tissue samples with exome sequencing, Human SNP array 6.0 and RNA expression microarrays. We found that TP53 mutations were associated with SETDB1 copy number gain (copy number > 2.5) or overexpression (summarized in Table 1). We observed a statistically significantly increased proportion of TP53 mutation among the HCC tumour samples carrying SETDB1 copy number gain or overexpression (Fisher's exact test P = 0.03; odds ratio = 3.2; Fig. 2a,b). We also observed a trend of TP53 mutation enrichment in gastric cancer with SETDB1 copy number gain/overexpression, albeit not statistically significant. Among the 12 cases of TP53 mutations found in liver cancer patients with SETDB1 copy number gain/overexpression, four carried the hotspot R249S mutation.
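The reported association can be checked with R's built-in Fisher's exact test. Because the full 2×2 table is not reproduced here, the counts below are illustrative placeholders chosen only to be consistent with the 84 tumours and the 12 TP53-mutant cases in the SETDB1 gain/overexpression group:

```r
# Fisher's exact test on a 2x2 table. The paper reports P = 0.03 and
# OR = 3.2 across 84 tumours; only the 12 mutant cases in the gain/high
# group are stated, so the remaining counts below are illustrative.
tab <- matrix(c(12, 18,    # SETDB1 gain or overexpression: TP53 mut / wt
                10, 44),   # neither:                       TP53 mut / wt
              nrow = 2, byrow = TRUE,
              dimnames = list(SETDB1 = c("gain_or_high", "normal"),
                              TP53   = c("mutant", "wild_type")))
fisher.test(tab)
```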
R249S is a well-characterized GOF TP53 hotspot mutation 5,6,25. It is reported that the R249S mutation can increase cell migration and cell proliferation in lung cancer cells 26,27. To evaluate the frequency of the R249S mutation in Asian liver cancer patients, we genotyped an independent Chinese liver cancer cohort using restriction fragment length polymorphism analysis. In one group of DNA samples isolated from paraffin sections, we found that 91% (91 out of 100) of the tumour samples carried the R249S mutation (Supplementary Fig. 1a), consistent with previous reports [3][4][5]. The well-known P72R polymorphism was used as a control in the same set of DNA, and its pattern was apparently different from that of R249S, suggesting a low likelihood of DNA cross-contamination (Supplementary Fig. 1b). In another independent collection of 20 pairs of tumour and adjacent normal control samples, we found that 15% (3 out of 20) of the tumours carried the R249S mutation, but none of the adjacent normal controls did.
SETDB1 regulates the growth of HCC cell lines bearing R249S.
To explore the potential roles of SETDB1 in HCC, we profiled SETDB1 expression and examined the TP53 mutation status in a panel of HCC cell lines (Supplementary Fig. 2). Three cell lines (HCCLM3, PLC/PRF and JHH4) were found with endogenous R249S mutation. Among them, HCCLM3 also has a SETDB1 copy number gain with relatively high SETDB1 expression. Inhibition of SETDB1 using short interfering RNAs (siRNAs) suppressed cell proliferation of HCCLM3 cells (Fig. 3a). To further confirm these results, we generated stable cancer cell lines expressing inducible short hairpin RNAs (shRNAs) against SETDB1. Similar growth inhibitory effects were observed by induction of SETDB1 shRNAs (Fig. 3b). The two cell lines with R249S mutation but not SETDB1 copy number gain (JHH4 and PLC/PRF) were moderately inhibited when SETDB1 was attenuated (Supplementary Fig. 3).
We then asked whether SETDB1 overexpression drives cancer cell growth. Since gene transfection efficiency is low in HCCLM3 cells, we addressed this in the PLC/PRF cell line. Cell proliferation was stimulated by exogenous expression of wild-type SETDB1, but not by SET domain-truncated SETDB1 (Fig. 3c). Western blot analysis confirmed that the protein expression level of the mutant SETDB1 is not lower than that of the wild type. Our findings suggest that SETDB1 promotes liver cancer cell growth in a manner that depends on its methyltransferase enzymatic activity.
To address whether SETDB1 inhibition affects DNA synthesis, we treated HCCLM3 cells with a pulse of 5-ethynyl-2′-deoxyuridine (EdU) on SETDB1 knockdown. We found that the percentage of EdU-positive cells was significantly reduced on SETDB1 knockdown (Fig. 3d), suggesting that SETDB1 inhibition reduced the number of cells in S phase. In addition, inhibition of SETDB1 using the inducible shRNA significantly inhibited anchorage-independent growth of HCCLM3 cells in a three-dimensional soft agar assay (Fig. 3e).
TP53 mutation confers cancer cell sensitivity to SETDB1. To address whether the inhibition of cell growth by SETDB1 knockdown depends on p53 status, we examined SETDB1 knockdown in p53 null Hep3B cells, which also show high SETDB1 expression. Interestingly, Hep3B cells were insensitive to SETDB1 knockdown (Fig. 4a), suggesting that the SETDB1 expression level per se does not determine cancer cell dependence. Knockdown of SETDB1 in wild-type p53-restored Hep3B cells did not affect cell growth either (Fig. 4a). We also performed SETDB1 knockdown experiments using the isogenic pair of HCT116 cells in which p53 is either somatically deleted or intact. Knockdown of SETDB1 showed no growth inhibition in either wild-type or p53-deleted cells (Fig. 4b). Together, the data suggest that wild-type p53 does not dictate SETDB1 dependence in cancer cells.
Since R249S is a GOF mutation of TP53, we then asked whether the R249S mutation would confer cell sensitivity to SETDB1 knockdown. We first attenuated p53 in HCCLM3 using shRNA against p53. We found that SETDB1 knockdown no longer inhibited cell growth when p53 was abolished (Fig. 4c). The knockdown efficiency of the p53 shRNA was confirmed using PCR analysis (Supplementary Fig. 4). We next expressed p53R249S in p53 null Hep3B cells. We found that R249S-expressing cells became sensitive to SETDB1 knockdown (Fig. 4d). We next treated the wild-type or R249S p53-restored Hep3B cells with a DNA damage inducer, Doxorubicin. Under such conditions, cell growth was inhibited on SETDB1 inhibition only in cells expressing R249S, but not in cells expressing wild-type p53 (Fig. 4e). p53R249S can render both Hep3B and HCT116 cells sensitive to SETDB1 knockdown (lowest panel, Fig. 4a,b). Together, our data strongly suggest that R249S, not wild-type p53, determines the cell growth sensitivity to SETDB1 knockdown.

Figure 2 | Association of TP53 mutations with SETDB1 copy number gain or overexpression in HCC tumours. Waterfall plots were generated to illustrate the association of TP53 mutations with SETDB1 copy number gain (a) or overexpression (b). Eighty-four HCC tumours were ranked by either SETDB1 copy number or expression level. Each bar represents one tumour sample, coloured by TP53 mutation status; tumours with TP53 mutation are marked in red.

Figure 3 | … The knockdown efficiency of the two siRNAs was confirmed using RT-qPCR (n = 3). Data are presented as mean ± s.d. *P < 0.001, t-test. (b) Cells were infected with an inducible lentiviral-based shRNA against SETDB1. Cells were treated with 1 µg ml-1 doxycycline to induce SETDB1 knockdown, and cell growth was measured on days 3, 5 and 7, respectively. Data are presented as mean ± s.d. *P < 0.001, t-test, n = 6. Target knockdown was confirmed using RT-qPCR. (c) PLC cells were transfected with a FLAG-tagged SETDB1 overexpression construct, a pcDNA3 vector control or a SET domain-deleted SETDB1 mutant. Cell growth was measured 5 days after transfection. Data are presented as mean ± s.d. *P < 0.05, t-test, n = 6. Expression of wild-type or mutant SETDB1 was confirmed by western blot analysis using an antibody against FLAG. FL, full length; Mu, mutated SETDB1. (d) HCCLM3 cells were treated with 1 µg ml-1 doxycycline for 3 days, and 10 µM EdU was added to the culture for the last 6 h. EdU incorporation was assessed by immunofluorescence using an antibody against EdU.
To understand the interplay between p53 and SETDB1, we asked whether SETDB1 could form a complex with the p53 protein in cells.
We therefore treated HCCLM3 cells with Doxorubicin, followed by a co-immunoprecipitation experiment, and found that SETDB1 was indeed associated with p53 (data not shown). We also transfected HCT116 p53 null cells with FLAG-tagged SETDB1 and wild-type or R249S-mutated p53. Immunoprecipitation results indicated that both wild-type and mutant p53 can form a complex with SETDB1; however, the association of SETDB1 with the mutant p53 appears to be stronger than that with the wild type (Supplementary Fig. 5). It has been shown that HMTs, such as KMT7, KMT3C, KMT5A and EHMT2, can methylate p53 at distinct sites 10,11,13,14. Therefore, we decided to test whether SETDB1 could methylate p53. We first assessed p53 methylation by liquid chromatography with tandem mass spectrometry (LC-MS/MS) analysis on synthetic p53 peptides reacted with the SETDB1 complex. HCCLM3 cells were pretreated with Doxorubicin before SETDB1 pull-down, and the methylase activity of the SETDB1 pull-down was first confirmed with a histone methylation assay (Supplementary Fig. 6). We then examined in vitro p53 methylation by this endogenous SETDB1 complex using the p53 peptide as the substrate and S-adenosyl methionine as the methyl donor. We only detected a small peak of mono-methylation product from the unmethylated p53 substrate (Fig. 5a). Dot blot analysis indicated that this mono-methylation is p53K370me1 (Supplementary Fig. 7). Substantial production of K370me2 was observed using a synthetic K370me1 peptide as the substrate (Fig. 5b and Supplementary Fig. 8). Consistently, peptides pretreated with SMYD2 (which induces K370me1) also led to significant production of K370me2 by the SETDB1 complex (Fig. 5c). As a control, SMYD2 alone mono-methylated p53K370, as previously reported (Fig. 5d). These data suggest that SETDB1 could be a p53 methylase that mainly converts K370me1 to K370me2.

Figure 4 | … infected with an inducible lentiviral-based shRNA against SETDB1. Cells were treated with 1 µg ml-1 doxycycline to induce SETDB1 knockdown, and cell growth was measured on days 2, 5 and 7. Data are presented as mean ± s.d., n = 6. (b) Assessing growth phenotypes on SETDB1 knockdown in the isogenic pair of somatic p53 knockout HCT116 cells, or R249S-restored HCT116 cells. Cells were infected with SETDB1 shRNA, and growth was measured on days 3 and 5, as above. Data are presented as mean ± s.d., n = 6. (c) Growth inhibition on SETDB1 knockdown was measured in HCCLM3 cells stably infected with the SETDB1-inducible shRNA. The cells were treated with a control scramble shRNA or shRNA against p53. Cell growth assessment was performed as above. Data are presented as mean ± s.d.; *P < 0.05, t-test, n = 6. (d) Hep3B cells stably infected with SETDB1 shRNA were transfected with vector control, wild-type p53 or the p53R249S mutant. Growth inhibition on SETDB1 knockdown was measured 2 days after transfection. Data are presented as mean ± s.d.; *P < 0.05, t-test, n = 6. (e) Cell growth analysis was performed as above, except that the cells were treated with 0.05 µg ml-1 Doxorubicin for 2 or 5 days. Data are presented as mean ± s.d.; *P < 0.05, t-test, n = 6.

We next tested methylation of endogenous p53 by endogenous SETDB1. Knockdown of SETDB1 reduced the K370me2 level in HCCLM3 cells (Fig. 6a). To validate the specificity of the p53K370me2 antibody, we performed a peptide competition assay. The putative p53K370me2 bands were competed away only by the synthetic p53K370me2 peptides, in a dose-dependent manner, and not by any other p53 peptide tested (Fig. 6b). Using a K370A-mutant p53, we observed no bands detected with the antibody (Fig. 6c), indicating that p53K370me2 was specifically recognized by the antibody. While endogenous SETDB1 can methylate either mutant or wild-type p53, mutant p53 appears to be methylated to a higher degree (Supplementary Fig. 5).
To address whether methylation of p53K370 is mediated by SETDB1 methyltransferase activity, we transfected HCT116 p53 null cells with p53 together with wild-type or catalytically dead SETDB1. Overexpressed SETDB1 methylated either wild-type or mutated exogenous p53 at K370; however, the SET domain-truncated SETDB1 had no effect on p53 methylation (Fig. 6d), indicating that the SET domain of SETDB1 is required for p53K370 methylation. To ask whether SETDB1 can directly methylate full-length p53, we performed the in vitro methylation assay using the endogenous SETDB1 pull-down complex and recombinant wild-type p53 protein. Indeed, the SETDB1 complex catalysed di-methylation of p53K370 (Fig. 6e).
SETDB1 regulates p53 protein stability. It has been shown previously that methylation of p53 might affect p53 protein stability 10. We thus asked whether di-methylation of p53 at K370 is involved in the regulation of p53 stability. We introduced wild-type or mutated p53 into HCT116 p53 null cells, treated the cells with cycloheximide, a protein synthesis inhibitor, and measured the turnover of the p53 protein by western blot analysis. As expected, we observed a very rapid disappearance of the wild-type p53 (Fig. 7a), with a half-life estimated to be ~3 h. In contrast, the R249S-mutant form of p53 was much more stable (Fig. 7b), with an estimated half-life of over 10 h. The turnover of p53R249S was accelerated by knocking down SETDB1 (Fig. 7b). Consistent evidence was seen in HCCLM3 cells, where SETDB1 knockdown also promoted the degradation of the endogenous p53R249S (Fig. 7c).
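The half-life estimates quoted above come from fitting the decay of band intensities in the cycloheximide chase. A minimal version of that fit is sketched below; the intensity values are hypothetical numbers constructed to show a ~3 h half-life of the kind reported for wild-type p53, not digitized data from Fig. 7.

```python
# Hedged sketch: estimate protein half-life from a cycloheximide chase by
# fitting an exponential decay to (HYPOTHETICAL) western blot intensities.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 4.0, 6.0])               # hours after CHX
intensity = np.array([1.00, 0.80, 0.63, 0.40, 0.25])  # normalized signal

# ln(I) = ln(I0) - k*t  ->  linear least squares on log-transformed data
slope, intercept = np.polyfit(t, np.log(intensity), 1)
k = -slope                          # decay constant (per hour)
half_life = np.log(2) / k
print(f"k = {k:.3f} / h, half-life ~ {half_life:.1f} h")  # ~3 h
```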
We next examined whether this effect of SETDB1 knockdown on p53 stability is mediated by MDM2-dependent ubiquitination. We found that SETDB1 knockdown increased p53 ubiquitination (Fig. 7d). Consistently, SETDB1 knockdown increased the association of MDM2 with p53 (Fig. 7e). Because phosphorylation of p53 at S15 stabilizes p53 by preventing its ubiquitination 28, we next probed the level of p53 S15 phosphorylation on SETDB1 knockdown. Inhibition of SETDB1 reduced the level of S15 phosphorylation of p53 (Fig. 7f). Together, these data suggest that SETDB1 regulates p53 stability, presumably through alteration of the K370 di-methylation status.
Knockdown of SETDB1 inhibits HCCLM3 cell growth in vivo. We next assessed the tumour growth rate in an HCCLM3 xenograft model in nude mice on SETDB1 knockdown. In vivo tumour growth was dramatically retarded by induction of the SETDB1-inducible shRNA (Fig. 8a), without a significant drop in body weight. SETDB1 knockdown efficiency was confirmed by RT-qPCR, western blot and IHC analyses of tumour samples taken at the end of the study (Fig. 8b-d).
Poorly differentiated tumours are often associated with poor prognosis. Since SETDB1 is important for mouse ES cell maintenance and differentiation 18,29,30, we asked whether SETDB1 inhibition would induce tumour differentiation in vivo. We performed histopathology analysis of the tumours with or without SETDB1 knockdown (Fig. 8e). In the control group without doxycycline treatment (n = 4), epithelial-like tumour cells were arranged in trabeculae and nests. These structures accounted for ~20-40% of the tumour in three cases and very little, if any, in one case. In the doxycycline-treated, SETDB1 knockdown group (n = 4), in addition to an increased abundance of trabeculae and nests, the tumour cells also formed gland-like structures (Fig. 8e). These features took up more than 50% in three cases and ~80% in one case. Together, our data suggest that SETDB1 inhibition leads to suppressed tumour growth and increased cell differentiation in HCCLM3 xenograft models.
Discussion
The present findings identify SETDB1 as a novel p53K370 di-methylase. SETDB1 di-methylates p53K370 and thereby regulates p53 stability. While p53 stability can be regulated by a variety of modifications, such as phosphorylation, acetylation, ubiquitination and methylation, the current findings suggest that SETDB1 is an important modulator involved in tuning p53 stability. In cancer cells that harbour GOF, oncogenic TP53 mutations such as R249S, the p53 protein is stabilized through its interaction with, and methylation by, SETDB1. Oncogenic p53 stabilized by SETDB1 confers cell growth advantages. Wild-type p53, on the other hand, is inherently unstable; it is less associated with SETDB1 and less methylated. Since wild-type p53 is present in cells only at a very low level, methylation of the wild-type, tumour-suppressive p53 by SETDB1 may contribute very little to cell growth. The current working model is depicted schematically in Fig. 9.
The exact reason why a single point mutation of p53 such as R249S changes the level of association of p53 with SETDB1 is not yet known. We hypothesize that this can be explained in the following ways: (1) Wild-type and mutant p53 may have distinct protein conformations and exist in different contexts (such as DNA binding) in cells; the context where mutant p53 resides may therefore favour the interaction with SETDB1. (2) While SETDB1 methylates both wild-type and mutant p53, mutant p53 may have more opportunity to acquire K370 methylation in vivo because it associates more tightly with SETDB1 in cells, as we have seen (Supplementary Fig. 5); the effect of SETDB1 on mutant p53 stabilization is therefore more obvious than that on wild-type p53. (3) p53 stability is broadly regulated by a variety of other modifications such as phosphorylation and acetylation. The mutant p53 is constitutively more stable, while wild-type p53 exists in cells only at a low level. In our study, SETDB1 may function as a modulator to tune p53 degradation. As the binding of SETDB1 to mutant or wild-type p53 differs in cells, the consequent difference in K370 methylation status exaggerates the stability difference between mutant and wild-type p53. As a result, mutant p53 appears to be more dependent on SETDB1. Our working model also predicts that cancer cells with GOF TP53 mutations other than R249S might also gain a growth advantage in the presence of high SETDB1 expression. This was shown in SNU182 cells, which carry mutant TP53 (S215I + P72R) and show high SETDB1 expression. Thus, we propose that SETDB1 regulates cancer cell growth by methylating GOF-mutant p53.

Figure 6 | Regulation of p53K370 methylation by SETDB1. (a) To confirm that endogenous SETDB1 methylates endogenous p53, HCCLM3 cells stably infected with inducible SETDB1 shRNA were treated with doxycycline for 3 days and then exposed to Doxorubicin for 24 h. Methylation of p53 was assessed using the antibody against p53K370me2. (b) SNU182 cells stably infected with inducible SETDB1 shRNA were treated with or without 1 µg ml-1 doxycycline for 3 days after transfection and then exposed to 0.05 µg ml-1 Doxorubicin for 24 h. The cells were harvested for western blot analysis using the p53K370me2 antibody (1.2 µg ml-1) alone or competed with 5 µM p53K370 unmethylated or mono-methylated peptides, or with the di-methylated peptide at doses of 1, 5 and 10 µM. All competing peptides were 31-mers. (c) p53 null HCT116 cells were transfected with wild-type p53 or a mutant p53 with K370 replaced by A (p53K370A). Cells were also treated with 0.05 µg ml-1 Doxorubicin for 24 h and harvested for western blot analysis using the p53K370me2 antibody. GAPDH was analysed as the control. (d) p53 null HCT116 cells were transfected with wild-type or p53R249S mutant together with SETDB1 or the SET domain-deleted SETDB1 control. Methylation of p53 at K370 was measured by western blot analysis. (e) In vitro p53 methylation assays were performed using the endogenous SETDB1 complex pulled down from HCCLM3 cells treated with 0.5 µg ml-1 Doxorubicin for 6 h. The immunoprecipitated complex was used as the enzyme in the assay, with full-length p53 protein as the substrate and S-adenosyl methionine (SAM) as the methyl donor. Methylation was assayed by western blot analysis. The reaction was carried out at room temperature or at 37 °C for the durations indicated.
It is known that p53 can be mono-methylated by SMYD2. However, p53K370me1 does not affect protein stability but abolishes wild-type p53 activity 11,12. In this study, we showed that SETDB1 can affect p53 stability through K370 di-methylation, suggesting that p53K370me2 has roles distinct from those of p53K370me1. Preliminary evidence suggested that SETDB1 might also methylate p53 at K372 and regulate its stability when K370 is mutated and under overexpression conditions. One possibility is that K370 is the primary methylation site and that K370me2 may block K372 methylation by SETDB1; K372 methylation can therefore only occur when K370 is not methylated. The selectivity and substrate specificity of SETDB1 on p53 methylation, as well as potential context-dependent interactions with nearby residues, warrant future investigation. Besides methylation, p53 can also be modified by acetylation, phosphorylation and so on. It has been shown that different forms of post-translational modification of p53 interact with each other. For example, KMT7-mediated p53K372 methylation prevents p53K370 methylation mediated by SMYD2 (ref. 12) but facilitates p53K373/K382 acetylation 31. How K370 di-methylation may interact with other p53 modifications remains elusive and warrants future study.
Although SETDB1 mediates histone H3K9 (ref. 16) tri-methylation as a histone methyltransferase, it remains to be determined whether SETDB1 can also tri-methylate p53 at K370. Tri-methylation was not observed in our in vitro methylation assay; however, it could be due to insufficient SETDB1 enzymatic activity or lack of cofactors in the in vitro environment.
That p53 can be methylated and demethylated by HMTs and HDMs, respectively, not only reveals new roles of HMTs and HDMs beyond histone methylation, but also demonstrates the molecular similarity between histone modification and the methylation of non-histone substrates 10,32. Many non-histone protein substrates have been identified for various HMTs, such as EHMT2 (ref. 33). However, very little is known about non-histone substrates of SETDB1. In the current study, we identified p53 as, to the best of our knowledge, the first non-histone substrate of SETDB1. It would be interesting to explore additional non-histone substrates of SETDB1 to better understand its biological functions.
SETDB1 is responsible for the repressive H3K9me3 modification 29,30,34 and regulates the expression of the tumour suppressors P53BP2 and RASSF1A. SETDB1 is located at chromosome 1q21, which was recently identified as a melanoma susceptibility locus 22. In a screen in zebrafish melanoma models, SETDB1 was found to contribute to tumorigenesis in a p53 null background with an activated rat sarcoma virus oncogene (RAS) 23. However, the underlying molecular mechanism is not fully understood. Our study provides a novel mechanism by which SETDB1 regulates cancer cell growth through the modulation of p53 methylation. These results, however, do not exclude the possibility that transcriptional regulation by SETDB1 via histone H3K9 methylation may also contribute to cancer cell growth control.

Figure 7 | … The cells were treated with or without doxycycline for 3 days after transfection. The cells were then treated with 50 µg ml-1 cycloheximide, and the turnover of p53 was measured at the indicated time points by western blot analysis using the antibody against total p53. (c) Similar experiments were performed to analyse endogenous p53 turnover in HCCLM3 cells, without p53 overexpression, on SETDB1 knockdown. (d) SETDB1 knockdown increases p53 ubiquitination. HCCLM3 cells were treated with doxycycline as described above, and the cells were harvested for p53 immunoprecipitation and western blot analysis for p53 (left) or ubiquitin (right). Loading was normalized to total p53. (e) HCCLM3 cells infected with inducible SETDB1 shRNA were first induced for SETDB1 knockdown and then treated with Doxorubicin before being harvested for immunoprecipitation. The cell lysates were immunoprecipitated with antibodies against total MDM2 or an IgG control. The samples were then analysed by western blot analysis. Increased p53/MDM2 association was observed on SETDB1 knockdown. (f) SETDB1 knockdown reduces p53S15 phosphorylation. HCCLM3 cells were treated with doxycycline as described above, and the cells were harvested for western blot analysis of p53 phosphorylation (S15).
Recently, it was also reported that p53 suppresses SETDB1 gene expression during paclitaxel-induced cell death 35. Comprehensive epigenomic and transcriptional profiling to understand how SETDB1 regulates gene expression in cancer cells warrants future study. Aflatoxin, a known mycotoxin, is one of the strongest carcinogens for liver cancer 36,37. The risk of developing liver cancer for hepatitis B virus carriers with aflatoxin exposure is much greater than that associated with aflatoxin or hepatitis B virus alone 38. It is known that aflatoxin B can cause TP53 mutations, including the hotspot R249S mutation 4,39. Our data are consistent with previous reports that R249 is highly mutated in Chinese liver cancer patients. The high-frequency missense mutations of TP53 in cancers support a critical and complex role of TP53 in tumorigenesis. The GOF TP53 mutations are functionally independent of wild-type p53, and the accumulation of mutated p53 contributes to tumorigenesis 1,6. Therefore, clearance of mutated p53 is of critical importance for the prevention or treatment of cancer. Using exome sequencing and genomic profiling, we found that moderate SETDB1 copy number gain is associated with enrichment of TP53 mutations, with the GOF R249S being the most frequent. These data suggest that tumour cells with SETDB1 copy number gain/overexpression along with the R249S TP53 mutation may acquire growth advantages through genetic mechanisms. This hypothesis is consistent with our findings that the R249S mutant, but not wild-type p53, confers cell sensitivity to SETDB1 attenuation. Together, our data reveal a novel function of SETDB1 in regulating HCC cell growth through p53K370 di-methylation.
Methods
Cell culture and proliferation assay. Human cancer cell lines HCCLM3, SNU182, Hep3B and HCT116 were obtained from ATCC and cultured as instructed. Hep3B and SNU182 cells were maintained in DMEM/F12 medium. HCCLM3 cells were maintained in DMEM. HCT116 cells were maintained in McCoy's 5A medium. All media were supplemented with 10% fetal bovine serum and 1% penicillin and streptomycin (all from Invitrogen). Cells were maintained under a standard atmosphere of humidified air with 5% CO2. CellTiter-Glo (Promega) was used for cell growth measurement. Cells were seeded in 96-well plates in 100 µl medium per well. For inducible SETDB1 knockdown, 1 µg ml-1 doxycycline was added to the medium. After the cells were cultured for a given period, 100 µl per well of CellTiter-Glo reagent was added to measure cell growth according to the manufacturer's instructions.
Immunoprecipitation. Immunoprecipitation after formaldehyde crosslinking was performed as previously described 41, with minor modifications. Briefly, cells were treated with 1% formaldehyde for crosslinking for 10 min at room temperature. Cells were harvested and lysed in 1× RIPA buffer. The samples were sonicated until the lysate became clear, followed by centrifugation at 15,000g for 15 min at 4 °C. The supernatant was collected for immunoprecipitation (IP) using antibodies against SETDB1 (H-300; Santa Cruz, 1:250) and p53 (7F5; Cell Signaling, 1:500). After IP, the magnetic protein G beads were washed three times with 1× RIPA buffer and proteins were eluted with 2× SDS loading buffer. The samples were then incubated at 99 °C for 20 min before loading for SDS-PAGE. Non-crosslinking p53 immunoprecipitation was performed similarly, except that the crosslinking step was omitted. Cells were transiently transfected with wild-type p53 or p53R249S mutants. Forty-eight hours after transfection, cells were harvested and processed for IP using the Direct IP Kit (Pierce). For endogenous SETDB1 complex isolation by IP, the Nuclear Complex Co-IP Kit (Active Motif) was used, with the anti-SETDB1 antibody (Santa Cruz, 1:250) added to the IP reaction.
RNA isolation and real-time RT-PCR. RNA was extracted using the RNeasy Mini Kit (Qiagen). RNA quality was confirmed with a Nanodrop. Real-time RT-PCR analysis was performed on an ABI Prism 7900 Sequence Detection System using the SYBR Green PCR Master Mix (Applied Biosystems, Foster City, USA). The relative expression of each gene was normalized against GAPDH. The primers used for quantitative RT-PCR are shown in Supplementary Table 1.

TMA and immunohistochemistry. For human HCC, TMAs were constructed from 59 Chinese HCC cases. Of the 59 cases, 35 had both tumour and adjacent samples (2 cm away from the tumour structure), and 4 of the 35 cases had both adjacent and distal samples (5 cm away from the tumour structure). Human samples were obtained with patient informed consent under protocols approved by the Changhai Hospital. Formalin-fixed, paraffin-embedded tissue sections (4 µm thick) were prepared, with tumour and adjacent/distal samples from a given case placed side by side on the TMA. Haematoxylin and eosin staining of the TMA section was conducted and reviewed to confirm that the presence of tumour cells was more than 80% and that no tumour structures were included in the adjacent/distal samples. The TMA was subjected to immunohistochemistry using a Ventana Discovery automated slide stainer (Ventana Medical Systems) with the SETDB1 antibody (4A3; Sigma, 1:1,000). For EdU immunofluorescence, labelling and detection were performed using the Click-iT EdU Imaging Kit according to the manufacturer's instructions (Invitrogen).
TP53 sequencing and mutagenesis. Total RNA was extracted from HCCLM3 cell pellets. cDNA was made using Superscript III (Invitrogen, Cat. No. 18080-051) and used as the template for p53 PCR. The sequences of the primers are listed in Supplementary Table 1. The PCR product was purified using the PCR Purification Kit (Qiagen, Cat. No. 28106) and sequenced by Invitrogen. The sequence analysis covers the whole p53 open reading frame. For mutant p53 construction, the p53 full-length coding region was isolated from HCCLM3 by PCR, with 5′ BamHI and 3′ XhoI restriction enzyme sites added to the flanks. The mutant and wild-type p53 were placed into pcDNA3.1(+) with Myc and His tags. The recombinant plasmids were confirmed by DNA sequencing.
PCR-based mutagenesis was applied to generate the p53K370 mutation. Wild-type or p53R249S constructs in pcDNA3.1 were used as templates for PCR using Phusion HF 2× master mix (NEB), with specific primers to introduce the K370A mutation. After 15 cycles of PCR, 1 µl of DpnI (NEB) was added to the PCR reaction, which was further incubated at 37 °C for 1 h. The reaction mixture was transformed into TOP10 competent cells (Invitrogen). Single colonies were selected and the mutant sequence was verified by DNA sequencing.
Fragment length polymorphism genotyping of human HCC samples. Formalin-fixed, paraffin-embedded (FFPE) blocks were collected from a local hospital (Changhai Hospital). Human samples were obtained with patient-provided informed consent under protocols approved by the Changhai Hospital. The QIAamp DNA FFPE Tissue Kit (Qiagen) was used for DNA extraction from the tissues. The fragments around codons 249 and 72 were amplified by PCR using Platinum PCR SuperMix High Fidelity (Invitrogen). For codon 72, nested PCR was adopted to increase specificity, using the p53 primers 9 and 12. The codon 249 and codon 72 PCR products were digested with HaeIII and BstUI, respectively, and analysed by electrophoresis on 10% TBE polyacrylamide gels. The primers are listed in Supplementary Table 1.
In vitro methylation assay and LC-MS/MS analysis. The in vitro methylation assay was performed as previously described 10,32. Briefly, full-length p53 protein (100 nM) or peptides (10 nM) were incubated with recombinant SETDB1 protein (400 nM) or the endogenous SETDB1 complex at room temperature or 37 °C. For the assay using the full-length p53 protein, the reaction products were separated by SDS-PAGE and detected by western blot analysis using antibodies against p53 (Cell Signaling), p53K370me1 or p53K370me2 (made in-house).

Figure 9 | Working model of how SETDB1 regulates cancer cell growth through methylation of p53. GOF TP53 mutations, such as R249S, are relatively stable and often oncogenic. The stability of these GOF p53 proteins can be enhanced through interaction with, and di-methylation at K370 by, SETDB1, which results in less MDM2 association. In cancer cells bearing a GOF p53 mutation and SETDB1 overexpression, attenuation of SETDB1 leads to less p53K370me2, enhanced p53 turnover and growth inhibition. Although SETDB1 can also interact with and methylate wild-type p53 at K370, this has little effect on cell growth, as wild-type p53 is inherently very unstable and acts as a tumour suppressor. Therefore, SETDB1 can regulate cancer cell growth, at least in part, through methylation of GOF p53 at K370.
For DNA copy number analysis, the GISTIC algorithm 24 was used to calculate a G-score for each amplification or deletion region based on its frequency and amplitude. The statistical significance (q value) was computed to indicate the likelihood that a region represents a driver aberration rather than a random aberration.
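To make the "frequency and amplitude" idea concrete, a stripped-down G-score can be sketched as below. This is an assumed simplification: real GISTIC also models a background aberration rate and derives the q values by permutation, neither of which is captured here, and the copy-number values are hypothetical.

```python
# Toy "frequency x amplitude" score in the spirit of the GISTIC G-score.
# ASSUMED simplification; input log2 copy-number ratios are HYPOTHETICAL.
import numpy as np

def g_score(log2_ratios: np.ndarray, threshold: float = 0.1) -> float:
    """Score one region across samples: (fraction of samples amplified)
    times (mean amplitude among the amplified samples)."""
    amplified = log2_ratios[log2_ratios > threshold]
    if amplified.size == 0:
        return 0.0
    frequency = amplified.size / log2_ratios.size
    return frequency * amplified.mean()

region = np.array([0.0, 0.6, 0.4, 0.05, 0.8, 0.0, 0.3, 0.0])  # 8 samples
print(f"G-score = {g_score(region):.3f}")  # 4/8 amplified, mean 0.525
```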
The TP53 mutation status was determined for the Asian gastric primary tumour xenograft models and fresh Asian HCC primary tumour tissue samples by exome sequencing. Genomic DNA was extracted from frozen tumour tissues using the Qiagen DNeasy Kit (Cat. No. 69504). Gene exons were captured with Agilent SureSelect (50 Mb) reagents and subjected to deep sequencing at 100× coverage on the Illumina MiSeq platform following the manufacturer's instructions. Whole-exome sequencing data were aligned and mapped to the human genome using the Burrows-Wheeler Aligner. Single-nucleotide variants and small insertions/deletions were called using GATK 1.0.
In vivo xenograft study. The in vivo animal procedures were approved by the Animal Ethics Committee at the Novartis Institutes for Biomedical Research. HCCLM3 cells stably infected with the inducible shRNA against SETDB1 were grown to log phase. 3 × 10^6 mycoplasma-free cells were injected subcutaneously into the flank region of 6-week-old female athymic nude mice. When tumour size reached 100 mm^3, mice were randomized and given doxycycline-containing drinking water (5% sucrose with 0.5 mg ml-1 doxycycline, n = 12) or sucrose only as the control (n = 12). The doxycycline water was changed twice per week. Tumour growth was measured with a caliper. Body weight was monitored simultaneously. | 2018-04-03T05:35:58.486Z | 2015-10-16T00:00:00.000 | {
"year": 2015,
"sha1": "c35fadbfb899bc5a4896c8bc725df4d19f9a4d62",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/ncomms9651.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c35fadbfb899bc5a4896c8bc725df4d19f9a4d62",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
119360318 | pes2o/s2orc | v3-fos-license | Radiation as a Constraint for Life in the Universe
In this chapter, we present an overview of sources of biologically relevant astrophysical radiation and effects of that radiation on organisms and their habitats. We consider both electromagnetic and particle radiation, with an emphasis on ionizing radiation and ultraviolet light, all of which can impact organisms directly as well as indirectly through modifications of their habitats. We review what is known about specific sources, such as supernovae, gamma-ray bursts, and stellar activity, including the radiation produced and likely rates of significant events. We discuss both negative and potential positive impacts on individual organisms and their environments and how radiation in a broad context affects habitability.
INTRODUCTION
There are several factors that can constrain the existence of life on planetary bodies. To assess the possibility of the emergence and persistence of life, it is essential to consider astrophysical radiation, which can itself constrain the origin of life and its subsequent development. Additionally, the radiation received by a planetary body and the plasma environment provided by the parent star play a crucial role in the evolution of the planet and its atmosphere. Therefore, radiation can determine the conditions for the origin, evolution, and existence of life on planetary bodies.
TYPES OF RADIATION
Several types of radiation are relevant to life in the universe. The word "radiation" itself may first need some definition. We use this term very broadly, to cover both electromagnetic radiation and energetic particles. The electromagnetic radiation of interest to us is the higher energy end of the spectrum: gamma-ray, X-ray, and ultraviolet. Gamma-ray and X-ray light is ionizing and, along with UV, has the potential to destroy or damage essential biological molecules of life "as we know it," including DNA and proteins. Energetic particles that may cause damage include electrons, protons, neutrons, and muons; their biological impact is determined mainly by their kinetic energy. A muon is an elementary particle similar to an electron, but with a greater mass. Muons are highly penetrating and ionizing, but not much is known about their biological effects. Unlike gamma-rays, neutrons, or even electrons, muons are not produced in most artificial sources of radiation, and so their biological effects have not been studied (Atri and Melott, 2011; Rodriguez et al., 2013). They are normally assumed to have effects similar to electrons, but these could be more severe due to their greater penetration (Fig. 1).
High-energy electromagnetic radiation is produced by many different processes. UV light is produced by blackbody emitters with sufficiently high temperatures, including our own Sun. In general, the ultraviolet region of the electromagnetic spectrum can be subdivided into bands. These subdivisions are arbitrary and can differ depending on the discipline involved (Diffey, 1991), but in biology it is common to distinguish three main bands: UVA (400-315 nm), UVB (315-280 nm), and UVC (280-100 nm), with additional bands at shorter wavelengths such as VUV (200-10 nm) or EUV (121-10 nm). Gamma- and X-rays can be produced by radioactive decay of certain elements, by electron-positron pair annihilation, inverse-Compton scattering, some intra-atomic electron transitions, and bremsstrahlung radiation.
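Since these band boundaries recur throughout the chapter, a small helper that bins wavelengths into them may be a useful reference. The function below simply encodes the UVA/UVB/UVC limits quoted above; the assignment of the exact boundary wavelengths to one band or the other is a convention choice, consistent with the arbitrariness noted in the text.

```python
# Encode the UV band boundaries quoted above (UVA 400-315 nm,
# UVB 315-280 nm, UVC 280-100 nm). Edge assignment is a convention choice.

def uv_band(wavelength_nm: float) -> str:
    if 315.0 < wavelength_nm <= 400.0:
        return "UVA"
    if 280.0 < wavelength_nm <= 315.0:
        return "UVB"
    if 100.0 <= wavelength_nm <= 280.0:
        return "UVC"
    return "outside the UV bands considered here"

for wl in (254.0, 300.0, 365.0):
    print(wl, "nm ->", uv_band(wl))   # UVC, UVB, UVA
```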
Charged particles (e.g., electrons, protons) can be accelerated to high energies by various processes, especially those involving plasma shocks and interactions with magnetic fields. (For readers interested in more of the physics involved, we suggest Rieger et al. (2007) and Balogh and Treumann (2013).) Neutrons and muons are generated by nuclear reactions. Neutrons may be ejected from nuclei that radioactively decay (e.g., a uranium nucleus). Both neutrons and muons can be generated by nuclear reactions of atomic nuclei with high-energy "primary" protons that enter a medium such as a planetary atmosphere. The primary protons induce a so-called air shower of secondary particles, which includes electromagnetic and particle constituents, including neutrons and muons. In addition, helium nuclei (termed "alpha particles") and electrons are produced in some radioactive decay processes. Electrons of relatively high energy are also found in the magnetospheres of planets with significant magnetic fields. This is the case for some terrestrial planets (e.g., Earth) as well as giant planets. The moons of giant planets may experience significant irradiation due to the planet's magnetospheric electrons; this is the case for Jupiter's four largest moons, for instance. In this case, however, the electron radiation is not very penetrating, so some shielding by ice/rock will prevent significant impacts below the surface.
STELLAR EMISSIONS
Stars are of course sources of visible light, but they also emit UV, X-ray, and even gamma-ray light. The emission of stellar radiation depends on the star's surface temperature and also on its activity; emission is therefore related to the evolution and spectral type of the star and can be highly variable. A star's surface temperature determines its blackbody emission. A high-mass star on the main sequence has a high surface temperature and emits a significant amount of UV. From the perspective of habitability, however, such high-mass stars are short-lived (as short as a few million years), probably too short-lived to host habitable planets (Turnbull and Tarter, 2003). Stars of most interest for habitable planets include those of roughly the Sun's mass (types G and K), but also those of much lower mass (type M) (Tarter et al., 2007; Kopparapu et al., 2013).
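The link between surface temperature and UV output can be illustrated by integrating the Planck function over the UV band, as sketched below. Real stellar spectra deviate from a blackbody (M dwarfs especially), so the numbers are order-of-magnitude illustrations only; the example temperatures are roughly representative of an M dwarf, a Sun-like star, and an A-type star.

```python
# Fraction of blackbody output emitted in the UV (100-400 nm) vs Teff.
# Stars are not perfect blackbodies; treat as an order-of-magnitude sketch.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI: Planck, c, Boltzmann

def planck(wl_m, t_k):
    """Blackbody spectral radiance B_lambda (W sr^-1 m^-3)."""
    return (2.0 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * t_k))

def uv_fraction(t_k, n=20000):
    wl = np.linspace(100e-9, 400e-9, n)
    uv = np.sum(planck(wl, t_k)) * (wl[1] - wl[0])  # simple Riemann sum
    total = 5.670e-8 * t_k**4 / np.pi  # integral of B_lambda over all wl
    return uv / total

for teff in (3000, 5800, 10000):   # ~M dwarf, Sun-like, A-type
    print(f"T = {teff} K: UV fraction ~ {uv_fraction(teff):.4f}")
```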
The Sun emits enough UV light to be problematic for planets without a UV shield, such as atmospheric ozone. Lower mass stars, on the other hand, do not emit much UV, but are more active, with frequent energetic flares. These flares are sudden, energetic explosive events and emit UV and X-ray (and possibly some gamma-ray) light. They originate in magnetic processes affecting all the layers of the stellar atmosphere (photosphere, chromosphere, and corona), which heat the stellar plasma and accelerate its protons, electrons, and heavy ions to velocities near the speed of light. Flare emissions (usually several magnitudes higher than in the quiescent state) interact with planets, and it is not well understood whether they could be lethal or unfavorable for life. In general, it is known that the strongest stellar flares exceed the strongest solar ones by a factor of 100 in X-ray and EUV flux. The quiescent X-ray and EUV radiation of young stars is up to a factor of 1000 higher than that of the present-day Sun (Guinan and Ribas, 2002; Ribas et al., 2005). These stars are also likely to eject plasma through processes similar to those that produce coronal mass ejections (CMEs) on our own Sun. CMEs and flares represent important sources of both electromagnetic and particle radiation. The intermittent nature of the emission from low-mass stars may present more of a hazard than a steady background emission (such as UV from a Sun-like star), since it may be harder for life to adapt to varying levels of radiation than to a more constant value (e.g., Ayres, 1997; Gershberg, 2005). Over a star's lifetime, its radiation emission changes. A young star tends to be less luminous overall, but often more active, producing more frequent and more intense flare/CME events. As a star ages, its luminosity slowly increases, but its activity tends to decrease; this decrease is more pronounced for stars of higher mass, while low-mass (e.g., M-dwarf) stars continue to be highly active.
STELLAR EXPLOSIONS
Explosions on the scale of whole stars fall into two major categories: individual stars that explode, and pairs of stars whose interaction leads to an explosion. These events are usually categorized by how they are observed. A supernova is typically observed as a rapid brightening in visible light. Observations can also be made in the UV and, in a few cases, in X-rays and gamma-rays; data in these wavebands are limited due to the lower luminosity, but also to the relative lack of observational equipment.
Supernovae are categorized by features in their light curve (the variation in luminosity with time) and the strength of the hydrogen absorption lines in their spectra. Type I events have a sharp increase in luminosity followed by a steady, gradual dimming, and show little to no H absorption, while Type II have a sharp increase in luminosity followed in most cases by a plateau lasting a few months and then a gradual dimming, and show stronger H lines; each type also has subtypes determined by other details in the spectrum. For a recent review see Hillebrandt (2011).
Broadly, Type II events are explosions of individual high-mass stars that undergo core collapse. This progenitor is also responsible for Type IB and Type IC supernovae, but in these cases H absorption is weak. Type IIL, IIP, IIN, and IIB are defined by differences in the spectra, except that a Type IIL does not show the light curve plateau that Type IIP supernovae do.
A Type Ia supernova, on the other hand, is thought to be the explosion of a white dwarf that has accreted matter from a companion (larger, main sequence, or giant) star. In this model, the white dwarf is near the critical mass of 1.4 solar masses (the Chandrasekhar limit), the maximum mass that can be supported by the electron-degenerate matter that makes up a white dwarf. When more mass is accreted, the star collapses and explodes.
All supernovae produce visible and UV light and likely all produce higher energy light as well, though observations are limited. Gamma-rays emitted from supernovae are the result of radioactive decay of certain elements that are synthesized in the explosion process (see, for instance, Karam, 2002a,b). Supernovae emit much of their energy in neutrinos, almost massless elementary particles, which interact so weakly as to pose no threat to organisms (Karam, 2002b).
Supernovae also produce an ejecta blast wave that propagates outward. These blast waves form "remnants" that are visible for some time after the explosion and inject the progenitor and synthesized material into the interstellar medium. In addition, the shock front accelerates protons to high energies, producing at least a portion of the cosmic rays observed on Earth, which would also be present for most other habitable planets. An exception may be moons of giant planets which could be shielded by their host planet's strong magnetic field (in which case, however, those moons would be subject to the magnetospheric radiation of the planet, as noted above).
Another category of stellar explosions, again defined by how they are observed, is gamma-ray bursts (GRBs). As the name implies, they are observed initially as a "burst" of gamma radiation, which is followed by emission in lower-energy wavebands, all the way through radio. For an excellent review of GRBs, see Gehrels et al. (2009). These bursts fall into two subcategories, "long" and "short," defined by the duration of the gamma-ray emission. Long GRBs are of order 10's of seconds, while short GRBs are about 1 s or less (in both cases referring only to the gamma emission; the "afterglow" in other wave-bands may last much longer). The two types also show a difference in their spectra, with long GRBs having "softer" spectra, dominated by lower-energy gamma-rays (with a spectral peak around 100-200 keV), and short GRBs having "harder" spectra, with greater emission of high-energy gamma-rays (with a spectral peak closer to 1 MeV).
The progenitors of long GRBs are most likely individual stars that explode as core-collapse supernovae and are oriented such that they launch an intense "jet" of material along their rotation axis which happens to be pointed at Earth, leading to the observed burst of high-energy light. The fact that the emission is strongly "beamed" allows what may be a fairly normal supernova explosion to be observed as such an intense blast. While this scenario is the most widely accepted model, the full picture may be more complicated. (For a good review of GRBs, see Kouveliotou et al. (2012).) Short GRBs, while also thought to emit radiation along a jet, are most likely the result of the merger of two compact objects, such as neutron stars or black holes.
Other short-term stellar events also produce high-energy radiation, but are of low enough intensity as to not be significant on large scales. These include "soft gamma repeaters" thought to be powered by "magnetar" stars that periodically emit lower energy gamma-rays, but with relatively low luminosity.
Black holes are also a source of high-energy radiation, particularly X-rays and energetic protons, but only if they are actively accreting matter. This is most likely in the case of supermassive black holes associated with active galactic nuclei (AGNs). Emission from stellar-mass black holes is rare enough, and of small enough luminosity, not to be significant from the point of view of habitability. AGNs, on the other hand, may be significant when the black hole is particularly active, and could affect much of the host galaxy, primarily by accelerating particles to high energies and thereby increasing the background cosmic ray flux.
EFFECTS
In the previous sections, we described the main astrophysical sources of radiation in the universe and the different types of radiation they produce. Two main factors determine the effect of radiation on habitability: the total energy received by a given habitat and the "hardness" of the radiation (where hardness refers to the relative amount of higher- to lower-energy photons or particles received from the source). Thus, the effects of radiation on life depend on the kind of radiation (electromagnetic or particle, and its energy), the amount of radiation (dose or fluence), and the capability of living beings to cope with radiation.
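The first factor, the total energy received, is often a simple inverse-square estimate: for an isotropic transient of energy E at distance d, the fluence at the top of a planet's atmosphere is F = E / (4 pi d^2). The sketch below applies this with illustrative values; the event energy and distance are not taken from the text.

```python
# Back-of-the-envelope fluence from an isotropic transient: F = E/(4 pi d^2).
# Event energy and distance below are ILLUSTRATIVE, not from the text.
import math

PC_IN_M = 3.086e16  # metres per parsec

def fluence_j_per_m2(event_energy_j: float, distance_pc: float) -> float:
    d = distance_pc * PC_IN_M
    return event_energy_j / (4.0 * math.pi * d**2)

# e.g. an assumed 1e44 J of photons from an event viewed at 10 pc
print(f"{fluence_j_per_m2(1e44, 10.0):.2e} J/m^2")  # ~8e7 J/m^2
```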
Biologically damaging radiation can reach the surface of a planet, depending on the existence of a magnetic field and the presence of an atmosphere. Magnetic fields can shield the surface from charged particles, depending on the strength of the field and the "rigidity" (a combination of momentum and charge) of the particles. An atmosphere can protect against both particle and electromagnetic radiation, depending on the energy of the radiation and the thickness of the atmosphere (Dartnell, 2011).
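For the magnetic-shielding point, the relevant quantity is the particle's rigidity, R = pc/(Ze), usually quoted in volts: particles below the local geomagnetic cutoff rigidity are deflected. A minimal relativistic calculation is sketched below; the example energies are arbitrary, and the quoted cutoff scale (of order 10 GV near Earth's equator, falling toward the poles) is a standard rough figure rather than a value from the text.

```python
# Rigidity R = pc/(Ze) of a charged particle, in volts, from its kinetic
# energy (relativistic kinematics). Example energies are ARBITRARY.
import math

M_PROTON_EV = 938.272e6  # proton rest energy, eV

def rigidity_volts(kinetic_energy_ev: float, z: int = 1,
                   rest_energy_ev: float = M_PROTON_EV) -> float:
    total = kinetic_energy_ev + rest_energy_ev
    pc_ev = math.sqrt(total**2 - rest_energy_ev**2)  # momentum * c, in eV
    return pc_ev / z

# A 10 GeV proton vs a 100 MeV proton: only the former approaches the
# ~1e10 V cutoff scale near Earth's equator (cutoffs fall toward the poles).
print(f"{rigidity_volts(10e9):.2e} V")   # ~1.1e10 V
print(f"{rigidity_volts(100e6):.2e} V")  # ~4.4e8 V
```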
We can consider effects on life as being either direct or indirect. Direct effects involve the interaction of radiation from the event directly with biological material (cells, prebiotic molecules); indirect effects are those related to the interaction of the radiation with the environment (e.g., the atmosphere), thereby favoring or limiting the possibility for life to arise and evolve (Abrevaya, 2013).
A fair amount of work has been done on the subject of astrophysical ionizing radiation and life. We cite much of that work below and also refer interested readers to the excellent reviews by Horneck et al. (2010), Olsson-Francis and Cockell (2010), and Dartnell (2011).
DIRECT EFFECTS
In general, radiation can be very harmful and even lethal to living beings, as it is capable of damaging DNA and other cellular components through several kinds of mechanisms. On a planet with an atmosphere and a magnetic field, UV radiation will be capable of reaching the surface, as will muons and neutrons if sufficiently energetic particles are incident at the top of the atmosphere.
In the case of UV, the most damaging effects arise through direct interaction of UV photons with essential macromolecules such as DNA or proteins. As these molecules have absorption maxima at 260 nm and 280 nm, respectively, these effects are seen at UVC (100-280 nm) and UVB (280-315 nm) wavelengths. The predominant kinds of DNA damage are chemical modifications such as cyclobutane pyrimidine dimers (CPDs) and (6-4) photoproducts (6-4PPs). DNA single-strand and double-strand breaks can also be induced by UV, but these are produced as a consequence of failures during the DNA repair of CPDs and 6-4PPs, as described by Bonura and Smith (1975a,b) and later by Bradley (1981).
Other kinds of damage are produced by indirect mechanisms, for example at longer wavelengths such as UVA (315-400 nm), where absorption by DNA and proteins is null or very weak. In this case, free radicals such as reactive oxygen species are generated during the radiolysis of water molecules. The hydroxyl radical (OH) is the main damaging species, producing a plethora of DNA lesions in the form of chemical modifications (e.g., 8-hydroxyguanine, DNA-protein crosslinks) (for more details, see Kielbassa et al., 1997, and references therein).
Other cellular components, such as proteins, can also be damaged by UV. Oxidation of prokaryotic proteins during irradiation has been documented for different microorganisms (Daly et al., 2007; Qiu et al., 2006). It has also been suggested that UV radiation can damage membrane proteins, with concomitant leakiness of membranes (Koch et al., 1976). Membrane damage has also been documented for microorganisms exposed to the 200-400 nm UV range (Fendrihan et al., 2009).
UV is also capable of inhibiting metabolism, enzymatic activity, and several cellular processes in general, such as photosynthesis (Sinha et al., 1995; Renger et al., 1989; Neale et al., 1998; Neale and Thomas, 2016).
From the experimental point of view, few studies have analyzed the effects of stellar UV radiation on life for planets orbiting stars that could host habitable worlds (G, F, K, and M-type stars). Fendrihan et al. (2009) exposed halophilic archaea to several UV doses over a wavelength range of 200-400 nm to simulate the martian UV flux. Cells that were embedded in halite showed survival under UV exposure doses as high as 10^4 kJ m-2 (exposure at Earth's surface today is around 3-4 kJ m-2). Cockell et al. (2005) also exposed dried monolayers of Chroococcidiopsis sp. 029, a desiccation-tolerant, endolithic cyanobacterium, to a simulated martian-surface UV and visible light flux, also equivalent to the worst-case irradiation conditions on the Archean Earth. They found a loss of viability after 30 min of exposure.
The probability of survival of radiation-tolerant microorganisms (halophilic archaea) was evaluated considering flare activity from the dM star EV Lacertae (EV Lac, Gliese 873, HIP 112460), taking the UVC region (254 nm). The microorganisms survived exposure to these irradiation conditions (Abrevaya et al., 2011a). Similar UV-resistance profiles were observed in experiments simulating the radiation of the interplanetary environment, or in exposure experiments in low Earth orbit, where microorganisms were exposed to EUV (e.g., Mancinelli et al., 1998; Abrevaya et al., 2011b; Mancinelli, 2015). Other works have analyzed potential effects of stellar UV radiation on life, but these are based only on theoretical modeling and consider effects not on microorganisms but on isolated DNA molecules (Cockell, 1998, 1999; Cockell et al., 2005; Scalo and Wheeler, 2002; Rontó et al., 2003; Segura et al., 2003, 2010; Cockell and Raven, 2004; Buccino et al., 2007; Cuntz et al., 2010; Rugheimer et al., 2015).
At wavelengths shorter than UV, the effects of X-rays and gamma-rays are also well known. In general, direct action on the DNA molecule produces both single-strand and double-strand breaks. Additionally, damage is generated through indirect mechanisms, such as free radicals produced by radiolysis of water molecules. There are no direct experimental data on the effects of this kind of radiation in the planetary context. Theoretical modeling has made predictions concerning the effects of radiation on the Earth's biosphere and revealed the biological importance of UV flashes from GRBs delivered to the surface of the Earth, considering different present and prehistoric atmospheres (Galante and Horvath, 2007; Martín et al., 2009, 2010; Horvath and Galante, 2012).
On the other hand, since the "flash" from a GRB lasts at most tens of seconds, it may have only a small impact on the biosphere. It is likely that the more important aspect, in the long run, is severe depletion of stratospheric O3, caused by the formation of odd-nitrogen oxides after ionization induced by high-energy photons and cosmic rays (in the case of nearby supernovae). Thomas et al. (2005), for instance, estimated an increase in DNA damage of up to 16 times the normal annual global average, which may be lethal for microorganisms such as phytoplankton. On the other hand, the biological impacts of increased UV following a GRB or similar event are complicated and depend on the particular organism or impact considered (Thomas et al., 2015). For two important (modern day) oceanic primary producers, Neale and Thomas (2016) found only a small impact on productivity. However, that study was limited in that it modeled only relatively short-term impacts, and much remains to be learned about the long-term effects, including the level of mortality under post-GRB-type conditions.
Based on the anticipated effects of reduced O3, it has been argued that GRBs are likely to have impacted the Earth during the last billion years and could be responsible for mass extinctions (Melott et al., 2004; Melott and Thomas, 2009, 2011).
In the space radiation environment, high-energy charged particles are present and can interact at multiple scales with biological structures. Additionally, they can produce secondary particles capable of interacting with biological material. This kind of radiation should be distinguished from X-rays or gamma-rays, as its energy deposition occurs through a different mechanism, along a "linear" track. It therefore produces distinguishable biological effects, different from those generated by other kinds of radiation, since particles can induce instantaneous damage that is not compatible with repair mechanisms in cells, for example when damaging molecules such as DNA. A detailed description of this phenomenon can be found in Nelson (2003). Some biological effects of low-energy particle radiation are also described in Yang et al. (1991).
Taking charged particles into account in an astrobiological context, Paulino-Lima et al. (2011) generated charged particles under laboratory conditions to simulate the solar wind. The radioresistant microorganism Deinococcus radiodurans was exposed to electrons, protons, and ions to test its probability of survival. The results indicated that low-energy particle radiation (2-4 keV) had no significant effect on the survival of this microorganism, even when irradiated with a fluence equivalent to 1000 years of exposure at 1 AU. However, as the authors note, high-energy ions such as those found in solar flares (200 keV) could have more deleterious effects on microbial cells, with an estimated 90% cell inactivation, considering a distance of 1 AU and several flare events in one year.
It should be noted, however, that life on Earth evolved to cope with radiation, as cells have developed different strategies that repair or prevent damage. Different DNA repair systems, relying on specific enzymes, exist in all life forms "as we know it" and are necessary to recognize and rebuild injured sites and so prevent cell death. These processes are mechanistically diverse, but globally are highly conserved from prokaryotes to eukaryotes (and also include some viruses, such as bacteriophages) (Cromie et al., 2001). One of the most unique and relevant features of the radiation-resistant microorganism par excellence, D. radiodurans, is its extremely powerful DNA repair machinery (e.g., Cox and Battista, 2005). Several hypotheses have been suggested to explain the evolution of DNA repair and can be found in O'Brien (2006).
During biological evolution, living beings also developed other physiological strategies, not only to repair DNA damage but also to prevent it. Pigments such as melanin, for example (Brenner and Hearing, 2008; Cordero and Casadevall, 2017), can act as a radiation shield, in particular against UV. Scytonemin, a sheath pigment in cyanobacteria, was found to protect these microorganisms against UVC radiation (Garcia-Pichel et al., 1992; Dillon and Castenholz, 1999). Carotenoid pigments have also been shown to protect microorganisms from UV. In fact, a positive correlation between the presence of carotenoids and resistance to radiation in bacteria was documented several decades ago (Mathews and Krinsky, 1965). Moreover, carotenoids could have a role in DNA repair mechanisms such as photoreactivation, or act as protective agents against reactive species such as hydrogen peroxide (Shahmohammadi et al., 1998). A detailed review of UV-screening compounds and their relevance can be found in Cockell and Knowland (1999).
In haloarchaea, high intracellular concentrations of KCl also seem to provide protection against radiation through interaction with free radicals (Kish et al., 2009). In other radio-resistant microorganisms such as D. radiodurans, a high intracellular Mn/Fe ratio combined with desiccation contributes to ionizing radiation resistance (Paulino-Lima et al., 2016). Physiological mechanisms such as the polyploidy present in haloarchaea also seem to provide advantages against radiation damage (Breuert et al., 2006).
Additionally, highly resistant structures such as bacterial spores (dormant structures produced by some bacteria in response to adverse environmental conditions) have also been shown to offer effective protection against the effects of UV radiation. Results obtained by Riesenman and Nicholson (2000) indicate that the spore coat in Bacillus subtilis endospores is necessary for spore resistance to environmentally relevant UV wavelengths. Spores have also been shown to be 10- to 50-times more resistant to UV than growing cells, and more resistant to gamma radiation than cells in the growing state (Nicholson et al., 2005). UV irradiation can generate different kinds of photoproducts in spores than those acquired when B. subtilis is in its growing state (Setlow, 2006). A summary can be found in Horneck et al. (2014).
In addition to physiological mechanisms that provide protection against radiation, the habitat where life forms exist and develop can itself be protective, for instance in the case of endolithic microorganisms living inside rocks (a detailed description of the different kinds of endoliths can be found in Golubic et al., 1981) and in evaporitic environments. Beyond the obvious case of shielding from UV by opaque rock materials, haloarchaea inhabiting fluid inclusions of halite crystals have also been shown to be protected, as these crystals absorb short UV wavelengths and reemit them at longer, less damaging wavelengths (Fendrihan et al., 2009). In a series of works, Horneck et al. (2001) and Rettberg et al. (2002, 2004) also showed that thin layers of clay, rock, or meteorite material provide successful UV-shielding. Aquatic ecosystems can also provide shielding from UV depending on the optical properties of the water that control light penetration, which are influenced by dissolved and suspended organic material (Diffey, 1991). Vertical mixing has also been found to be an important factor (Huot et al., 2000).
This chapter focuses mainly on negative effects of radiation, but astrophysical radiation has likely also had positive effects for life, especially in the context of prebiotic molecules. UV radiation, for instance, could have played an important role in the polymerization of the first prebiotic organic molecules (Dauvillier, 1947; Ponamperuma et al., 1963; Sagan and Khare, 1971; Sagan, 1973; Pestunova et al., 2005). Ranjan et al. (2017) determined the UV environment on prebiotic Earth-analog planets orbiting M dwarfs such as the recently discovered Proxima Centauri, TRAPPIST-1, and LHS 1140 systems. They obtained dose rates to quantify the impact of different host stars on prebiotically important photoprocesses. According to the results of this study, M-dwarf planets have access to 100-1000 times less bioactive UV fluence than the young Earth. Therefore, it is unclear whether Earth-like planets orbiting M dwarfs could host the UV-sensitive prebiotic chemistry (e.g., pyrimidine ribonucleotide synthesis) that may have been important to abiogenesis on Earth. It is also unknown whether transient elevated UV irradiation due to flares may suffice. Experimental work under laboratory conditions is needed to constrain these possibilities. Atri (2016b) has proposed that cosmic rays could provide energy for existing subsurface radiolysis-powered life. In general, ionizing radiation could have had an important role in the origin of life and is relevant for the generation of habitable planetary environments (Dartnell, 2011).
In the context of biological evolution, other positive effects can be considered if the radiation doses are non-lethal for microorganisms. In this case, radiation could induce mutations, increasing genetic variability and thus providing new raw material for all sorts of selective pressure. For example, UV can act as a selective pressure itself, leading to the appearance of organisms adapted to live under UV stress, such as those with pigments (Scalo and Wheeler, 2002; Wynn-Williams et al., 2002). It has also been postulated that UV radiation could have influenced protistan evolution (Rothschild, 1999).
INDIRECT EFFECTS
Indirect effects arise through the interaction of radiation with the atmosphere. Life on Earth is currently shielded from most ionizing radiation from space. The atmosphere of the present Earth (which started to increase its oxygen levels around 2.5 Gyr ago) is thick enough to screen out high-energy photons (gamma- and X-rays); O2 absorbs short-wavelength UV (UVC), and ozone in the middle atmosphere absorbs most UV between 200 and 350 nm (the biologically damaging UVB). In contrast, the primitive Earth, which had a different atmospheric composition (an anoxic atmosphere), was unable to shield the planetary surface from UV radiation through O3, but may instead have had "hazy" conditions that could have reduced the UV transmission (Wolf and Toon, 2010). Stellar X-rays could affect atmospheric evolution and the chances for life to emerge (Lammer et al., 2008). Theoretical modeling has shown that this radiation is capable of dissociating N2 and O2 in the atmosphere, releasing important quantities of very reactive species (atomic nitrogen and oxygen), which leads to the formation of nitrogen oxides that act as catalysts of ozone dissociation and, therefore, increase the irradiation of the planet's surface with stellar UV radiation, among other important effects (Martín et al., 2010).
Similarly, high-energy charged particles (cosmic rays, mainly protons) interact with air molecules high in the atmosphere. These interactions lead to "showers" of secondary particles, some of which, in particular neutrons and muons, can be penetrating and damaging, depending on the altitude considered. Charged particles with energy below about 10 GeV are deflected by Earth's large-scale magnetic field. Particles with energy below about 1 GeV are mostly deflected by the Sun's field, but that shielding varies with solar activity (more shielding when the Sun is more active). For a review of the effects of cosmic rays on terrestrial life, see Atri and Melott (2014).
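As a rough illustration of the shielding regimes just described, the toy function below maps a proton energy onto the field that dominates its deflection, using the order-of-magnitude cutoffs quoted above (about 1 GeV for the heliospheric field and about 10 GeV for the geomagnetic field). Real cutoff rigidities depend on geomagnetic latitude and solar activity, so this is only a mnemonic for the text, not a propagation model.

```python
# Toy mnemonic for the shielding regimes described in the text. The cutoffs
# are order-of-magnitude values, not rigidity calculations.

SOLAR_CUTOFF_GEV = 1.0         # below this, the Sun's field dominates deflection
GEOMAGNETIC_CUTOFF_GEV = 10.0  # below this, Earth's field deflects the particle

def dominant_shield(proton_energy_gev):
    if proton_energy_gev < SOLAR_CUTOFF_GEV:
        return "heliospheric field (stronger when the Sun is more active)"
    if proton_energy_gev < GEOMAGNETIC_CUTOFF_GEV:
        return "Earth's large-scale magnetic field"
    return "atmosphere only (particle initiates a secondary shower)"

for energy in (0.3, 5.0, 50.0):
    print(f"{energy:5.1f} GeV -> {dominant_shield(energy)}")
```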
On Earth, then, life is mainly protected from direct effects of ionizing radiation by a thick atmosphere and a large-scale magnetic field. In contrast, Mars has a thin atmosphere and no large-scale magnetic field. Smith et al. (2004a,b) and Smith and Scalo (2007) performed detailed computations of radiative transfer of high-energy photons and found that the surface of Mars would be exposed to a substantial fraction of any incident gamma radiation, while X-rays are effectively blocked. Due to the lack of a large-scale magnetic field, a planet like Mars is exposed to charged-particle radiation of all energies. While the atmosphere will shield the surface to some extent, there will still be a significant flux of damaging primary and secondary charged-particle radiation at the surface (Dartnell et al., 2007; Pavlov et al., 2012).
Moons around giant planets are generally too small to hold a significant atmosphere or have a large-scale magnetic field. The surfaces of Europa and Enceladus, for instance, are exposed to any incident photons. A moon's host planet can have a strong magnetic field, which, if the moon is sufficiently within that field, provides protection from high-energy cosmic rays, but will at the same time subject the moon to magnetospheric ions and electrons with energy up to tens of MeV (Cooper et al., 2001). However, a few hundred meters of ice and rock are effective shields against both high-energy photons and charged particles, so habitats existing sufficiently deep under the surface of such moons should not be affected even by the most intense irradiation from outside.
Although thick, Earth-like atmospheres protect surface life from direct radiation effects, that life may still experience increased ionizing radiation during rare but intense high-energy astrophysical events. As noted above, high-energy protons (with energy above a few GeV) generate "showers" of secondary particles. For life near sea level, energetic muons are the greatest threat: these "heavy electrons" can penetrate several hundred meters of rock, ice, or water and damage biological material. An enhancement of cosmic rays due to, for instance, a nearby supernova can increase the background muon radiation level several-fold (depending on factors such as the distance to the supernova; Thomas et al., 2016), and the enhancement can last hundreds to thousands of years owing to the slow diffusion of charged particles through interstellar space.
In addition, a thick atmosphere can "redistribute" the energy of high-energy photons (gamma- and X-rays) to UV photons, increasing the UV radiation at the surface for the duration of a gamma-ray event (Smith et al., 2004a,b; Smith and Scalo, 2007).
Finally, thick atmospheres can experience an increase in ionization due to both high-energy photons (gamma- and X-rays) and high-energy charged particles (above a few MeV), with higher-energy radiation affecting the atmosphere at lower altitudes. In an N2-O2 dominated atmosphere, this ionization can lead to the production of nitrogen oxides that catalytically destroy ozone, leading to increased penetration of UV from the host star (Thomas et al., 2005, 2015). This indirect irradiation, in fact, appears to be the most significant effect for Earth-like planets following short-duration, high-energy ionizing photon events such as GRBs. We now summarize what is known about the impacts of specific sources. In all cases, the severity of impacts depends on two main factors: (1) the total energy received, with more energy meaning greater impact, and (2) the "hardness" of the radiation spectrum, with a "harder" spectrum having a relatively greater flux of high-energy particles/photons, which tend to have a greater impact than lower-energy particles/photons. Different types of events (SNe, GRBs, stellar activity) will have different spectra and total luminosities. The received energy depends on the intrinsic luminosity and the distance from the event (except in the case of a planet exposed to its host star's radiation, in which case the distance is negligible). The intensity of radiation decreases with the square of the distance in general, but the dependence may be more complicated for charged particles, which have significant interactions with magnetic fields in the Galaxy that cause diffusive rather than ballistic motion from the source.
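The inverse-square scaling for photons can be written down directly. The sketch below computes the fluence received from an isotropic burst of energy E at distance d as E/(4πd²); the 10^52 erg value is only an illustrative isotropic-equivalent energy, and, as noted above, charged particles do not obey this simple scaling because of diffusive transport.

```python
# Fluence from an isotropic photon burst: F = E / (4 * pi * d^2).
import math

PARSEC_CM = 3.086e18  # one parsec in centimeters

def photon_fluence(e_iso_erg, distance_pc):
    """Received fluence in erg/cm^2 from an isotropic burst of energy e_iso_erg."""
    d_cm = distance_pc * PARSEC_CM
    return e_iso_erg / (4.0 * math.pi * d_cm ** 2)

# Doubling the distance cuts the received fluence by a factor of four:
print(photon_fluence(1e52, 2000.0) / photon_fluence(1e52, 4000.0))  # ~4.0
```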
GRBs are the simplest source to consider. All GRBs are relatively short in duration, ranging from fractions of a second to tens of seconds. They deliver a burst of high-energy photons, but do not appear to generate a charged-particle (cosmic ray) flux (Aartsen et al., 2016), at least at the highest energies (10^18 eV or more). On the other hand, long-duration GRBs are known to be associated with supernovae, which are sources of cosmic rays. For planets with thick atmospheres, the high-energy photons lead to redistributed UV radiation at the surface, but this persists only as long as the gamma- and X-rays are incident on the atmosphere, so the effect is quite short-lived (Martín et al., 2009; Peñate et al., 2010). Longer-term atmospheric chemistry effects follow from the ionization induced by the gamma-/X-rays. For planets with significant O2, the chemistry changes lead to destruction of the ozone shield that is naturally present in the middle atmosphere of planets with O2 and a stellar UV flux (Thomas et al., 2005). The destruction of O3 then leads to unusual increases in stellar UV irradiance at the planet's surface and into the first 100 m or so of bodies of water, depending on their clarity (Thomas et al., 2015). Overall, the depletion of O3 can last for years to a decade. While there are two categories of GRB, both have essentially the same effect: short GRBs have a harder spectrum but generally lower luminosity, while long GRBs have a softer spectrum but higher luminosity.
Supernovae are a more complicated source. First, they emit high-energy photons, which travel directly from the source with a 1/r^2 intensity dependence. The photons are for the most part in the X-ray range and below, with emission lasting on the order of months. The X-rays have effects similar to the photon radiation from a GRB, with the most important result again being the depletion of O3. The photons are not energetic enough (the redistribution requires photons above about 100 keV) to lead to redistributed UV as in the case of a GRB.
Supernovae also accelerate protons in the explosion blast wave. These protons travel outward from the SN ahead of, with, and behind the ejected stellar material. Charged particles follow more complex paths in regions of space with magnetic fields present. Lower-energy particles are more strongly affected and may take many thousands of years longer than the photons to arrive. Higher-energy protons will take a more direct path. If the space in between the SN and the receiving planet is essentially empty of material and magnetic field, then the travel will be more direct and the protons may arrive within a few hundred years of the photons (Kachelrieß et al., 2015).
The accelerated protons will have two main impacts on a planet. First, they will cause ionization in a thick atmosphere, in essentially the same way as high-energy photons. This can lead to depletion of O3, but depends strongly on the spectrum of the received protons. Harder spectra (with more of the higher-energy particles) generate ionization closer to the ground and may therefore "miss" the ozone, which is concentrated in the middle atmosphere. However, high-energy protons generate showers of secondary particles, as discussed above, and these secondaries (especially muons) can be damaging at the surface and under hundreds of meters of water, ice, and rock. This is likely to be the most significant biological impact, since ozone depletion is likely to be associated mostly with the photons, which have a duration of months, while the high-energy proton flux will lead to increased biological damage for thousands of years.
For the case of a SN, the presence of a planetary magnetic field is generally not relevant, since the accelerated protons are of high enough energy to be only minimally affected (if at all) by the planet's magnetic field, unless it is much stronger than the present-day Earth's. This is true for isolated planets with their own magnetic fields as well as for moons of giant planets, which will be shielded by their host planet's field from most cosmic-ray protons, but may not be shielded from the harder spectrum of protons received from a nearby supernova.

Stellar activity is most significant for close-in planets around lower-mass (M type) stars (for an excellent collection of work on this topic see Lammer and Khodachenko, 2015). These stars are more active, and the habitable zone is relatively close to the star (due to their low luminosity), meaning that a potentially habitable planet is more directly and more frequently exposed to radiation from stellar flares and CMEs. The relevant radiation in this case is mainly UV and protons, and it will mainly affect the atmosphere (see, e.g., Segura et al., 2010; Tabataba-Vakili et al., 2016). The protons will be of too low energy to generate significant showers of secondary particles, and therefore will not increase the surface radiation significantly (Atri, 2016a). Another threat to habitability in this environment is atmospheric mass loss due to UV flux and the stellar wind plasma (see for instance Zendejas et al., 2010; See et al., 2014).
There may be some danger for planets located in a galaxy with an active supermassive black hole at its center (an AGN). AGN produce high-energy light (e.g., X-rays) and accelerate protons to very high energies. This may increase the background cosmic-ray flux in a fairly steady way for as long as the black hole is active. This could put a constraint on habitability, but on the other hand, a steady enhancement could drive adaptation toward greater radiation resistance.
RATES
The frequency of "dangerous" ionizing radiation events is relevant to their impact on habitability. Estimating such rates depends on a number of factors. First, as noted above, the most important parameters for determining impact are the total energy received and the hardness of the radiation spectrum. Details of the radiation spectrum depend on the particular type of event. For instance, short GRBs have very hard photon spectra, while supernovae tend to have softer photon spectra. However, the impact of SNe is also determined by the longer-lived and more spread out (in time) cosmic ray flux. For any event (except those of a host star on its planets), distance is the key factor in determining total energy received. The overall luminosity (total emitted energy) varies with event type. Short GRBs, for instance, are less luminous than long GRBs, but also have harder spectra.
Estimates of the rates of "dangerous" events then depend on the basic rate of occurrence in some chosen volume (e.g., a single galaxy) as well as on the distance at which such an event may have a serious impact on a biosphere, which again is determined by the total energy received (in turn determined by event luminosity and distance) and spectral hardness. Existing estimates have mainly been made considering impacts on an Earth-like planet, with depletion of O3 as the main "dangerous" effect. This is likely oversimplified. First, some recent work has indicated that O3 depletion associated with a GRB (and events with similar total energy received and spectral hardness) may not be as disastrous as previously thought, at least for certain primary producers in the oceans (Thomas et al., 2015). If correct, this reduces the rate of dangerous events, since it requires either more energetic or closer events, both of which are less common. On the other hand, very recent work has shown that SNe may be more damaging through the extended high-energy cosmic-ray flux, not so much through O3 depletion but through irradiation by secondary particles (muons), and possibly through increased atmospheric ionization at very low altitudes, which may impact global climate.
Estimates for the frequency of severe effects, using O3 depletion as the measure of "severe," arrive at one dangerous event every few hundred million years for Earth, for SNe and both types of GRBs, with SNe and short GRBs being slightly more frequent than long GRBs (Melott and Thomas, 2011). One could extend that to any Earth-like planet with an oxygen-containing atmosphere, but the rates vary through cosmic time, as discussed below.
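Taking the quoted rate at face value, a quick Poisson calculation shows what "one dangerous event every few hundred million years" implies. The 300 Myr mean interval below is an assumed round number within the range given above.

```python
# Poisson bookkeeping for "one severe event every few hundred Myr".
import math

MEAN_INTERVAL_MYR = 300.0  # assumed mean interval between dangerous events

def expected_events(window_myr):
    """Expected number of events in a time window, for a Poisson process."""
    return window_myr / MEAN_INTERVAL_MYR

def prob_at_least_one(window_myr):
    """Probability of at least one event within the window."""
    return 1.0 - math.exp(-expected_events(window_myr))

print(expected_events(4000.0))   # ~13 expected events over Earth's ~4 Gyr history
print(prob_at_least_one(500.0))  # ~0.81 chance of one or more in any 500 Myr span
```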
Of particular interest is the recent discovery that at least one, and probably several, core-collapse SNe occurred relatively near Earth a few million years ago (Fry et al., 2015;Thomas et al., 2016;Wallner et al., 2016). This has been very well established by geochemical evidence, but the distance to the SNe is large enough so that terrestrial effects were not very severe.
In all cases, it should be noted that "sterilization" of a habitat is an extreme outcome. For every realistic event, refugia would exist in the deep ocean and under at least hundreds of meters of ice or rock. While surface life may be dramatically affected and mass extinction may result, it is likely that some life would persist. In Earth's history, at least five major mass extinctions have occurred, including one that wiped out some 90% of species on Earth at the end of the Permian period. At least one of these is statistically likely to have been connected with an astrophysical ionizing radiation event (a specific proposal has been made regarding the late Ordovician mass extinction; see Melott and Thomas, 2009). But in every case, life has returned and flourished. Therefore, talk of "sterilization" of planets is likely overblown except in the most extreme and rare of events.
On the other hand, when considering conditions in the universe before Earth's formation, such sterilization may be more realistic. In galaxies with very high star formation rates, planets formed within dense stellar regions could be exposed to intense and repeated supernova and even GRB events. Long-term exposure to very close-by events could indeed knock back or delay the development of life.
In addition, planets in the liquid-water habitable zone around low-mass stars may experience so much bombardment from stellar activity as to be stripped of their atmospheres, which is quite likely to spell the end of any complex life there.
When considering the threats over cosmic time (the last 13 billion years), rate estimates need to take into account various factors. In particular, estimates of the rates of GRBs and SNe depend on star formation rate histories. Long GRBs and core-collapse SNe result from high-mass stars that are relatively short-lived (a few million years or so) and so track regions and periods of active star formation. Short GRBs require pairs of evolved objects such as neutron stars. These objects are generally considered to be the remnant of high-mass stars, and so depend in a similar way on star formation. Type Ia supernovae require a white dwarf, which is the remnant of a star with mass similar to that of the Sun or a few times higher. Such objects, then, require longer time periods to form, since a Solar lifetime is roughly 10 billion years. These events, then, will not directly track with active star formation. Simulations that track star formation and metallicity have been used to investigate where, as well as when, different regions of our own galaxy may have been habitable, as controlled by SNe and GRBs (see Gowanlock et al., 2011;Morrison and Gowanlock, 2015;Gowanlock, 2016). In general, they find that the inner part of the galaxy is more dangerous.
The picture for GRBs is complicated by the observation that long GRBs tend to occur in lower metallicity environments. This means that the long GRB rate would have been higher earlier in the universe's history. On the other hand, short GRBs do not show such a metallicity dependence. Recently, two groups have examined the role of long GRBs in the history of life in the Universe. Piran and Jimenez (2014) find that, due mainly to the metallicity dependence, the inner part of our galaxy is most dangerous and that the existence of life in any galaxy would be severely constrained by GRBs before about 5 billion years ago. If this is correct, then habitability before the rise of life on Earth may have been significantly limited by this kind of stellar explosion.
However, Li and Zhang (2015) come to a more optimistic conclusion, that about 50% of galaxies would be hospitable (considering only effects of GRBs) at about 9 billion years ago and 10% at about 11 billion years ago, and that the most hospitable galaxies are those similar to the Milky Way. These results make the earlier universe look much more likely to have been habitable, at least from the perspective of GRB threats. Li and Zhang (2015) also note that their results should be similar for SNe, though may not track exactly, since SNe do not have the same metallicity dependence as long GRBs.
Since AGN are powered by supermassive black holes at the centers of galaxies, there will be a "sweet spot" in cosmic history where they will be most active. First, enough time must have passed for the galaxy and its central black hole to form. Second, AGN appear to be active for some time and then become less active. This is likely due to the black hole clearing out material in the central part of its galaxy. Once most of the accessible matter has been consumed, the activity is likely to cease or at least become less intense and less sustained. In general, AGN are not thought to be a major constraint on habitability, except within the central regions of galaxies, which are already dangerous due to higher rates of SNe (Dartnell, 2011;Gobat and Hong, 2016;Dayal et al., 2016).
CONCLUSIONS
Here we have presented an overview of the sources of biologically relevant astrophysical radiation and of the effects of that radiation on organisms and their habitats. This chapter focused on radiation as a constraint for habitability, due to the potentially harmful effects of radiation on life "as we know it." Some of these effects have been known for a long time from studies in photobiology and radiobiology. The impact of radiation on life can be varied and complicated, and in some cases it is by no means fully understood. From the astrobiological point of view, it is necessary to consider these effects in the context of astrophysical scenarios, which may differ significantly from the conditions on the present Earth. Even though some limitations arise in reproducing or simulating these environments experimentally, such studies can approximate real-case scenarios and help estimate the probability that a planetary body is habitable. Additionally, of particular interest is the potential for radiation to have positive effects, either for individuals or for the development and evolution of life; some of these were briefly described in this chapter. This is an active area of research, and it may well be that a future review such as this will find that radiation is as helpful as it is harmful, from the broad perspective of life in the universe.
Of necessity, our review does not cover all the details of particular impacts or the responses available to organisms for dealing with radiation. We encourage the interested reader to follow up with the sources cited for more details and to follow the continually changing landscape of this work. Surely, there is much more to be learned, and we look forward to seeing what our community discovers over the coming years and decades. | 2019-04-13T19:46:32.003Z | 2017-11-07T00:00:00.000 | {
"year": 2017,
"sha1": "b3a49381b0c5da61c15d415a821a14772c915762",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1711.02748",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fd652ce7c3122833f83c9b4f9c99c61138d73e86",
"s2fieldsofstudy": [
"Environmental Science",
"Physics",
"Biology"
],
"extfieldsofstudy": [
"Physics"
]
} |
6929091 | pes2o/s2orc | v3-fos-license | Patterns of Age-Associated Degeneration Differ in Shoulder Muscles
Shoulder complaints are common in the elderly and hamper daily functioning. These complaints are often caused by tears in the muscle-tendon units of the rotator cuff (RC). The four RC muscles stabilize the shoulder joint. While some RC muscles are frequently torn in shoulder complaints, others remain intact. The pathological changes in RC muscles are poorly understood. We investigated changes in RC muscle pathology combining radiological and histological procedures. We measured cross sectional area (CSA) and fatty infiltration from Magnetic Resonance Imaging with Arthrography (MRA) in subjects without (N = 294) and with (N = 109) RC-tears. Normalized muscle CSAs of the four RC muscles and the deltoid shoulder muscle were compared, and age-associated patterns of muscle atrophy and fatty infiltration were constructed. We identified two distinct age-associated patterns: in the supraspinatus and subscapularis RC muscles, CSA continuously declined throughout adulthood, whereas in the infraspinatus and deltoid, reduced CSA was prominent from midlife onwards. In the teres minor, CSA was unchanged with age. Most importantly, the age-associated patterns were highly similar between subjects without and with RC-tears. This suggests that extensive RC muscle atrophy during aging could contribute to RC pathology. We compared muscle pathology between the torn infraspinatus and the non-torn teres minor and deltoid in two patients with a massive RC-tear. In the torn infraspinatus we found pronounced fatty droplets, an increase in extracellular collagen-1, a loss of myosin heavy chain-1 expression in myofibers, and an increase in Pax7-positive cells, whereas the adjacent intact teres minor and deltoid exhibited healthy muscle features. This suggests that satellite cells and the extracellular matrix may contribute to the extensive muscle fibrosis in torn RC. We suggest that torn RC muscles display hallmarks of muscle aging, whereas the teres minor could represent an aging-resilient muscle.
INTRODUCTION
Musculoskeletal disorders are highly prevalent in the elderly, substantially hindering functional mobility and daily functioning. Over half of individuals above the age of 70 develop chronic shoulder diseases (Picavet and Schouten, 2003). Ninety percent of these shoulder complaints are diagnosed either as subacromial pain syndrome (SAPS) or as tears of the stabilizing rotator cuff (RC) (Steinfeld et al., 1999; Koester et al., 2005). Despite the high impact of RC pathology on daily functioning in the elderly, the effect of aging on the shoulder muscles is poorly understood (Hermans et al., 2013). In previous studies a strong correlation was found between the presence of a RC-tear and age, suggesting that RC muscles are under continuous age-associated stress (Feng et al., 2003; Fehringer et al., 2008; Yamamoto et al., 2010). However, how muscle degeneration in the shoulder changes during aging, both in the intact RC and in RC-tears, remains unclear. Although past research on RC-tears mainly focused on the tendons, muscle degeneration has recently also been considered to play a causative role (Laron et al., 2012). Nevertheless, the pathophysiology of the RC muscles in tear conditions is poorly understood.
Muscle atrophy, defined as the loss of muscle mass, is associated with loss of muscle strength and with increases in fatty infiltration and inflammation (Evans, 2010). Muscle atrophy is highly prominent in the elderly and can distinguish between healthy and frail individuals (Taekema et al., 2012). In RC-tears, muscle atrophy is considered a clinical determinant of surgical success in RC-tear repair (Tashjian et al., 2010; Mall et al., 2014) and of long-term functionality after surgery (Shen et al., 2008). Recently, atrophy of the supraspinatus (SSp) RC muscle has been suggested to have a prominent role in RC-tearing (Barry et al., 2013). The SSp is the foremost affected RC muscle and so far has been the major focus of most studies (Nakagaki et al., 1996; Ashry et al., 2007; Barry et al., 2013). Atrophy of the infraspinatus (ISp) RC muscle has also been suggested to contribute to RC diseases (Henseler et al., 2015). As the interplay between all four RC muscles coordinates shoulder movements and stability, it is crucial to consider all four RC muscles to understand the pathogenesis of RC-tears. A description of muscle atrophy and fatty infiltration in the RC can be constructed from non-invasive radiological imaging (Shen et al., 2008; Mall et al., 2014), but how muscle pathology changes with age in all RC muscles has not been reported. In aging muscles, changes in muscle mass and muscle strength are accompanied by histological changes. Histological changes in torn muscles, albeit frequent in the aging population, are not well studied. Histological features of aging muscles include extracellular matrix (ECM) thickening and fatty infiltration (Brack et al., 2007; Zoico et al., 2010). Whether those pathological marks are also exhibited in torn RC muscles is not fully understood.
The objective of this study was to assess muscle degeneration in patients with intact and with torn RC muscles. Muscle atrophy and fatty infiltration, obtained from Magnetic Resonance Imaging with Arthrography (MRA), were used as measures of muscle pathology. We compared patterns of muscle atrophy between subjects without a RC-tear and subjects with a RC-tear for all four RC muscles: SSp, subscapularis (SSc), ISp, and teres minor (Tmi); and the adjacent deltoid (Del). Furthermore, hallmarks of muscle aging were investigated in histological stainings comparing the torn ISp to the non-torn Tmi and Del. Our study indicates that aging-associated histopathological changes differ between skeletal muscles and suggests the Tmi as an aging-resilient muscle.
Study Design and Participants
A retrospective cross-sectional study was performed on a consecutive series of shoulder MRAs from the orthopedics outpatient clinics of the Medical Center Haaglanden hospitals in the Netherlands, obtained between January 1, 2012 and February 13, 2013 (N = 442). All patients with atraumatic and chronic shoulder complaints or shoulder instability are routinely evaluated with MRA. Ethical approval was obtained from the Medical Ethics Committee of the Landsteiner Institute, Medical Center Haaglanden for the radiologic evaluations. Since the radiologic evaluations pertain to a retrospective study, the Medical Ethics Committee waived the need for informed consent from the participants included in this study. Four hundred and forty-two shoulder MRAs were identified. Exclusion was based on poor image quality (N = 21), presence of a tumor (N = 5), isolated biceps tears (N = 4), subscapularis tears (N = 3), and fractures (N = 6). Subjects were grouped according to the absence (N = 294) or presence of a RC-tear (N = 109) on shoulder MRA. In total, 403 MRAs are included in this study. The RC-tear group included 40 partial SSp tears (53.5 ± 9.5 years old), 57 full-thickness SSp tears (54.7 ± 11.7 years old), five full-thickness SSp tears with partial detachment of the ISp (63.2 ± 9.6 years old), and seven full-thickness SSp and ISp tears (61.0 ± 9.1 years old). Excluded from the analyses were: 12 images with motion artifacts of the SSc and 29 images with an incomplete field of view of the Del muscle.
Muscle biopsies were collected from two patients with a massive RC-tear of the SSp and the ISp. During tendon transfer surgeries (Henseler et al., 2013, 2014), muscle biopsies of the ISp, Tmi, and Del were obtained. Radiological characteristics of these two patients are detailed in Table 4. Medical ethical approval was obtained from the Medical Ethical Committee of the Leiden University Medical Center for the collection and analyses of the biopsies, and informed consent was obtained from the patients involved.
MRA Imaging Procedure
Fifteen minutes before MRA, contrast fluid was injected under fluoroscopic guidance into the glenohumeral joint from posterior. All MRAs were performed on Avanto or Symphony MRI units (Siemens AG, Erlangen, Germany) using a dedicated shoulder coil and turbo spin-echo sequences.
Analyses of the images were performed on a PACS Workstation with Sectra IDS5 (Sectra Medical Systems AB, Linköping, Sweden) as monitor readings. As multiple planes and sequences were obtained following the institutional standard shoulder MRA protocol, the T1-weighted transversal and sagittal plane (TR/TE 500-600/11-15, matrix 256; slice thickness 4 mm, inter-slice gap 1 mm, field of view of 15 cm) were systematically evaluated.
Muscle cross-sectional area (CSA) quantification was described previously (Henseler et al., 2015), and examples are shown in Figure 1. In brief, the radius (r) of the humeral head was measured at its widest point using a circle fit in the transversal plane and is reported in millimeters (mm). The CSAs of the RC muscles (i.e., SSp, SSc, ISp, Tmi) were measured from the sagittal slice in which the anatomical glenoid neck and the base of the coracoid are present, as illustrated in Figure 1A, and are reported in mm². The Del was measured from the transversal slice with the humeral head at its widest point, as illustrated in Figure 1B, and is reported in mm². Muscle CSA was normalized to the humeral head surface (in mm²) in order to correct for inter-individual anthropometric differences.
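A minimal sketch of this normalization step follows. We assume here that the "humeral head surface" derived from the circle fit is the circle area πr²; that interpretation is ours and is not stated explicitly in the text, and the example values are invented.

```python
# Normalization of a muscle CSA to the humeral head surface. The pi * r^2
# interpretation of "humeral head surface" is our assumption, based on the
# circle fit described above; the example values are invented.
import math

def normalized_csa(muscle_csa_mm2, humeral_radius_mm):
    """Muscle CSA divided by humeral head surface; both in mm^2, so dimensionless."""
    head_surface_mm2 = math.pi * humeral_radius_mm ** 2
    return muscle_csa_mm2 / head_surface_mm2

# Example: a 600 mm^2 supraspinatus CSA and a 24 mm humeral head radius.
print(round(normalized_csa(600.0, 24.0), 3))  # ~0.332
```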
Fatty infiltration of the RC was evaluated by examining the presence of intramuscular fat in the SSp, SSc, ISp, and Tmi muscles on the sagittal T1-weighted images, and was scored according to the Goutallier score (1, no fatty infiltration; 2, <50% fatty infiltration; 3, about 50% fatty infiltration; 4, more than 50% fatty infiltration; Goutallier et al., 1994).
Statistical Analyses
Differences in characteristics between subjects without and with RC-tears were evaluated with independent t-tests and χ²-tests. Age-association analyses of CSA were carried out on standardized scores of the normalized CSA; standardization was performed separately for the groups without and with RC-tears. Correlations between standardized scores of the normalized CSA and fatty infiltration (Goutallier score) were evaluated with Pearson correlation tests, both within and between muscles. The age distributions of subjects without and with RC-tear both showed a normal distribution, and therefore a simple linear regression model corrected for gender was applied to assess age-associated changes. The beta (β) and Pearson correlation coefficient (R) were calculated. Visualization of age-related trends in standardized CSA and fatty infiltration is provided in four age groups for subjects without tears, and in similar age groups for RC-tears. Statistical significance was set at p < 0.05 (two-sided). Statistical analyses were performed with SPSS Statistics (IBM Inc., Armonk, New York, USA).
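The analyses above were run in SPSS; the sketch below only mirrors the described steps (within-group standardization, Pearson correlation, and gender-adjusted linear regression of standardized CSA on age) in Python, with hypothetical column names.

```python
# Mirror of the described analysis steps (the original work used SPSS).
# Column names ("csa_normalized", "tear_group", "goutallier", "age",
# "gender") are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("shoulder_mra.csv")  # hypothetical data file

# Standardize normalized CSA separately within each diagnosis group.
df["csa_z"] = df.groupby("tear_group")["csa_normalized"].transform(
    lambda x: (x - x.mean()) / x.std(ddof=1)
)

# Pearson correlation between standardized CSA and the Goutallier score.
r = df["csa_z"].corr(df["goutallier"], method="pearson")
print("Pearson r:", r)

# Age association of standardized CSA, adjusted for gender, per group.
for group, sub in df.groupby("tear_group"):
    fit = smf.ols("csa_z ~ age + C(gender)", data=sub).fit()
    print(group, "beta(age):", fit.params["age"], "p:", fit.pvalues["age"])
```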
Subject Characteristics
Muscle CSA and fatty infiltration were measured in five shoulder muscles from 403 individuals. Subject characteristics were stratified by diagnosis (without or with RC-tear), as the mean age in the RC-tear group was significantly higher than in the group without RC-tear (Table 1). CSAs of the SSp, ISp, and SSc muscles were significantly lower, and fatty infiltration in all five muscles was significantly higher, in the RC-tear group compared with the group without RC-tear.
Correlation Between Muscle Atrophy and Fatty Infiltration
We assessed the correlation between a decrease in muscle CSA and an increase in fatty infiltration as a robust measure of muscle degeneration. Within subjects without RC-tear, a significant correlation was found only for the SSp and SSc muscles (Table 2A). In the RC-tear group, however, significant correlations were found for all four RC muscles (Table 2B). A decrease in CSA of the SSp correlated with increased fatty infiltration in the other three RC muscles both in the group without RC-tear and in RC-tears (Tables 2A,B). Additionally, in the RC-tear group, a decrease in the CSA of the ISp correlated with an increase in fatty infiltration in the other three RC muscles (Table 2B).
Age-Association of Muscle Atrophy and Fatty Infiltration
Age-associated trends of muscle atrophy and fatty infiltration were assessed using a linear regression model adjusted for gender. In subjects without RC-tears, the age-associated decline of SSp and SSc CSA was significant (Table 3): in these muscles, CSA decreased continuously between 14 and 85 years (Figure 2Ai), and the CSA decline in the SSp was 2.6-fold greater than that in the SSc. In contrast, in the ISp, Tmi, and Del, a decline in muscle CSA was found only from midlife onwards (Figure 2Ai). Fatty infiltration showed an age-associated increase in all five muscles (Table 3); however, it was most prominent in the oldest age group (61-85 years; Figure 2Aii).
In the RC-tear group, an age-associated decline in muscle CSA was found in the SSp, SSc, ISp, and Del muscles, whereas the Tmi CSA was unaffected (Table 3). In this group the SSp muscle was torn in all individuals, yet the decline in muscle CSA was comparable between the SSp, SSc, and ISp (Table 3). As in the group without RC-tear, in the RC-tear group the CSA of the SSp and SSc declined continuously throughout adulthood, whereas in the ISp and Del the decline started only after midlife (Figure 2i). Moreover, in the RC-tear group an age-associated increase in fatty infiltration was found in all five muscles (Table 3, Figure 2ii). Overall, the decline in muscle CSA and the increase in fatty infiltration were both more pronounced in the RC-tear group than in the group without RC-tear. However, the age-associated patterns were similar between the two groups: a continuous decline in muscle CSA was found for the SSp and SSc, whereas in the ISp and Del the decline started only from midlife onwards (Table 3, Figure 2).
Histological Analyses for Muscle Degeneration
To explore whether the radiological features in RC-tear conditions are accompanied by aging-associated tissue degeneration, we performed histological analyses of muscle biopsies, looking for known histopathological marks of aging. Muscle biopsies were obtained from two subjects of comparable age. Both patients had a massive RC-tear involving the SSp and ISp, and their radiological characteristics were more severe than the average of the entire RC-tear group (Table 4). The CSAs of the SSp and ISp from Patient A were more than 1.5 standard deviations (SD) smaller than the mean of the RC-tear group of the radiological study, whereas the CSAs of these muscles from Patient B were within 1 SD of that mean (Table 4). Overall, patient A had more severe muscle atrophy in all five muscles than patient B. Histological staining of the torn ISp from both patients showed severe disruption of myofiber orientation, accompanied by fibrosis and fat cells (Figure 3A). In the non-torn Tmi from both patients, fibrosis and fat cells were less prominent than in the ISp (Figure 3A), and the histology of the Del muscle was not pathological (Figure 3A). We confirmed extensive fibrosis and thickening of the ECM in the ISp using collagen-1 immunostaining in both patients (Figure 3B), whereas ECM thickening was limited in the Tmi and Del of both patients (Figure 3B). This suggests that ECM thickening is among the pathological hallmarks of torn RC muscles. We validated fatty infiltration using Nile red staining, which confirmed fatty infiltration in the ISp from both patients; only limited fatty droplet staining was found in the Tmi or Del muscles (Figure 3C).
TABLE 2 | Muscle degeneration is assessed by the Pearson correlation between standardized scores of normalized muscle cross-sectional area (CSA) and Goutallier scores of fatty infiltration for each muscle. Panel A shows analyses in subjects without RC-tear; Panel B shows analyses in subjects with RC-tear.
To determine the contractile features of those muscles, we employed immunostaining for three MyHC isotypes. Laminin staining was used to identify the myofiber contour and revealed disruption of myofiber orientation in the torn ISp (Figure 3D). Additionally, we found that while both MyHC-1 and MyHC-2a were expressed in the Del muscles, in the Tmi and the torn ISp the expression of MyHC-1 was dramatically reduced. Furthermore, MyHC-2x was co-expressed with MyHC-2a in the torn ISp of patient B (Figure 3D).
We also explored the regenerative capacity of torn RC muscles. Muscle sections were immunostained with an anti-Pax7 antibody, marking satellite cells. We found a two- to three-fold increase in the fraction of Pax7-positive nuclei in the torn ISp compared with the Del or Tmi (Figures 4A,B), whereas the fraction of Pax7-positive nuclei was similar between the Del and Tmi (Figure 4B). Furthermore, in the Del and Tmi all Pax7 staining overlaid with myonuclei, whereas in the torn ISp Pax7 staining was also found outside myonuclei (Figure 4A).
Overall, the histological analyses confirmed that in both patients the torn ISp was severely degenerated compared with the non-torn Tmi and Del muscles. Some histological differences were found between the two patients, especially in MyHC isotype expression. In both patients, however, the histological features of the Tmi RC muscle were comparable to those of the Del muscle rather than to those of the torn ISp RC muscle.
DISCUSSION
Aging-associated changes in skeletal muscle are prominent in part because it is the most abundant tissue in the human body. There are over 400 skeletal muscles in the human body, and how their pathology changes during aging is largely unknown. So far, most studies have been carried out on the vastus lateralis using cross-sectional designs. In the vastus lateralis, the aging-associated decline in muscle strength and muscle mass starts in the sixth decade (Williams et al., 2002; Faulkner et al., 2007). Here we found that muscle atrophy in the ISp and Del muscles starts only after the age of 45, while a continuous decline throughout adulthood was found in the SSp and SSc. This indicates that the mechanisms leading to muscle atrophy could be similar between the vastus lateralis, ISp, and Del muscles, but are likely to differ from those regulating muscle atrophy in the SSp and SSc. In contrast, in the Tmi, muscle atrophy did not change significantly with age, suggesting that this muscle is less susceptible to age-associated changes. Since the Tmi is unaffected in RC-tears (Melis et al., 2011), it is poorly studied. We suggest that the Tmi could represent an aging-resilient muscle. The Tmi muscle from the two patients with a massive RC-tear also showed healthy histology, similar to that of the Del muscle within the same patient, whereas the torn ISp exhibited the pathology of aging and degenerated muscle. The Tmi muscle could be used to identify potential molecular regulators that protect skeletal muscles from damage during aging.

FIGURE 2 | Age-associated changes in muscle cross sectional area and fatty infiltration in shoulder muscles. Age-associated analyses were performed in five shoulder muscles (supraspinatus, subscapularis, infraspinatus, teres minor, and deltoid) in subjects without RC-tear (A) and in RC-tear (B). (i) Age-associated trends of standardized muscle cross-sectional surface areas. p-values for age-association were calculated using linear regression and are adjusted for gender. Significant trends are depicted in dark purple, non-significant trends in light purple. (ii) Age-associated increase in fatty infiltration. Fatty infiltration was evaluated according to the Goutallier score.

Torn muscles are often characterized by atrophy and fatty infiltration (Goutallier et al., 1994; Barry et al., 2013). We confirmed the increase in fatty infiltration in torn muscles using fatty droplet staining. Additionally, we found ECM thickening in torn muscles, indicating fibrosis. These features are also common in aging muscles (Brack et al., 2007; Zoico et al., 2010), suggesting that torn RC muscles and aging muscles share pathological mechanisms; however, this should be confirmed by additional studies. Changes in contractile function are marked by the expression of MyHC isotypes, which can change in aging and in disease (Ciciliot et al., 2013). Fiber-type transitions can vary between skeletal muscles, presumably due to their different functions (Ciciliot et al., 2013). A transition from fast (type-2) to slow (type-1) myofibers is often found in muscular dystrophies and in metabolic disorders (Ciciliot et al., 2013). In the lower limb, type-2 myofibers decrease in aging vastus lateralis muscles (Verdijk et al., 2007). However, a transition from slow (type-1) to fast (type-2) myofibers is found in muscle disuse conditions, including denervation and loss of tensile strength (Ciciliot et al., 2013). The myofiber transition in torn RC is not well characterized. In the intact RC, MyHC type-1, -2a, and -2x are expressed in all four RC muscles (Lovering and Russ, 2008). Reduced MyHC-1 has been reported in cases with a severely torn SSp (Lundgreen et al., 2013). In agreement with that study, we also found a prominent loss of type-1 MyHC in the torn ISp. Moreover, in severe denervation conditions (e.g., spinal cord injury), a myofiber switch from slow to fast fiber type was found (Verdijk et al., 2012), which is also consistent with our findings in the torn ISp. Interestingly, the axillary nerve innervates the Tmi, whereas the SSp and ISp are innervated by the suprascapular nerve. Recent hypotheses suggest a role for denervation in torn RC muscles (Gigliotti et al., 2015). This calls for additional studies on the role of denervation in RC-tear and in muscles preserved from tearing.
In addition, we found that severely degenerated ISp muscles have an increased number of Pax7-positive cells. However, this increase may not represent an increase in satellite cells, as some of the Pax7 staining was not within myonuclei. An increase in Pax7-positive cells was also reported in affected muscles from oculopharyngeal muscular dystrophy (OPMD; Gidaro et al., 2013). OPMD is a slowly progressive myopathy, which could represent accelerated muscle aging (Raz and Raz, 2014). Satellite cells in chronic and slowly progressive conditions have been suggested to suppress muscle regeneration, possibly by remaining dormant and by differentiating into fibrogenic cells (Brack et al., 2007; Sciorati et al., 2015). Moreover, an adverse local environment could contribute to decreased regeneration by satellite cells (Meng et al., 2015). Although we analyzed biopsies from only two patients, the comparisons were performed within the same patient, so our findings likely represent degenerative changes between torn and non-torn RC muscles. Future studies with larger sample sizes should investigate degenerative changes in torn RC muscles.
We also found an age association of fatty infiltration in subjects without an RC-tear; however, this was mostly driven by the oldest age group. Consistent with previous studies (Goutallier et al., 1995; Gerber et al., 2007; Gladstone et al., 2007), our results show that fatty infiltration in RC-tear is highly prominent. In this cross-sectional study, muscle atrophy appears at an earlier age than fatty infiltration. This suggests that muscle atrophy in the RC develops earlier than fatty infiltration, while fatty infiltration in unaffected muscles is possibly systemic. Although our evaluation of fatty infiltration from MRA is qualitative, it is in agreement with a quantitative radiological study in which a similar age association of fatty infiltration was found in the torn SSp (Nozaki et al., 2015).
Although muscle atrophy and fatty infiltration both increase in aging, how they are interrelated is not fully understood. In subjects without RC-tear, we found that muscle atrophy correlated with fatty infiltration only in the SSp and SSc muscles, whereas in RC-tear, owing to greater atrophy and fatty infiltration, correlations were found within all four RC muscles. In subjects without RC-tear, SSp atrophy correlated with fatty infiltration in all three other RC muscles; in RC-tear, these correlations between muscle atrophy and fatty infiltration extended to the ISp as well. This suggests that muscle atrophy in the SSp may affect muscle degeneration in the adjacent RC muscles.
Comparable trends of muscle atrophy were found in subjects without and with a RC-tear. Although the trends of muscle atrophy with age were similar between the two groups, atrophy of the RC muscles was overall greater in the RC-tear group. In agreement, age-associated muscle atrophy in the SSp has been reported in both groups, but was more pronounced in RC-tears (Barry et al., 2013). This suggests accelerated muscle atrophy in RC-tears. It remains unclear whether muscle wasting in the RC is a consequence or a cause of RC-tear; longitudinal studies could further reveal the causality of muscle atrophy in RC-tear.
We conclude that patterns of age-associated degeneration differ between the skeletal muscles of the RC. While some RC muscles show continuous changes throughout adulthood, in others changes start only from midlife onwards. Whereas the majority of RC muscles show age-associated changes, the teres minor did not. In torn RC muscles, satellite cells and the ECM are increased compared with the intact teres minor. We propose that torn RC muscles display hallmarks of muscle aging whereas the teres minor could represent an aging-resilient muscle, suggesting a role for muscle pathology in RC-tear pathogenesis.
AUTHOR CONTRIBUTIONS
YR and JH measured and analyzed MRA images and wrote the MS. MRA images were provided by PvdZ. Biopsies were collected by JFH, AK, and JN. Sectioning of biopsies, histological staining and imaging were performed by YR, MR, and VR. RN and VR supervised the project. All authors contributed to the writing and discussions of the results. All authors read and approved the final manuscript.
FUNDING
This study is partly funded by the Dutch Arthritis Association (DAA), grant number RF 13-1-303. | 2016-05-12T22:15:10.714Z | 2015-12-22T00:00:00.000 | {
"year": 2015,
"sha1": "648d10519f7e8cc0e401e201706a63b2f876af0c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2015.00236/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "648d10519f7e8cc0e401e201706a63b2f876af0c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1507689 | pes2o/s2orc | v3-fos-license | Socioeconomic position, stage of lung cancer and time between referral and diagnosis in Denmark, 2001–2008
Introduction: We investigated the association between socioeconomic position, stage at diagnosis, and length of period between referral and diagnosis in a nationwide cohort of lung cancer patients. Methods: Through the Danish Lung Cancer Register, we identified 18 103 persons diagnosed with lung cancer (small cell and non-small cell) in Denmark, 2001–2008, and obtained information on socioeconomic position and comorbidity from nationwide administrative registries. The odds ratio (OR) for a diagnosis of advanced-stage lung cancer (stages IIIB–IV) and for a diagnosis >28 days after referral were analysed by multivariate logistic regression models. Results: The adjusted OR for advanced-stage lung cancer was reduced among persons with higher education (OR, 0.92; 95% confidence interval (CI), 0.84–0.99), was increased in persons living alone (OR, 1.06; 95% CI, 1.01–1.13) and decreased stepwise with increasing comorbidity. Higher education was associated with a reduced OR for >28 days between referral and diagnosis as was high income in early-stage patients. Male gender, age and severe comorbidity were associated with increased ORs in advanced-stage patients. Interpretation: Differences by socioeconomic position in stage at diagnosis and in the period between referral and diagnosis indicate that vulnerable patients presenting with lung cancer symptoms require special attention.
Survival after lung cancer remains low in Denmark and is lower than in other western and northern European countries (Berrino et al, 2007). The incidence rate of lung cancer is generally strongly associated with socioeconomic position, largely due to differences in smoking patterns (Menvielle et al, 2009; Sidorchuk et al, 2009). A nationwide Danish cohort study recently demonstrated a difference by socioeconomic position in short-term survival after lung cancer; for instance, the 1-year relative survival was 28% (95% CI, 27-30%) for men with short education and 34% (95% CI, 32-37%) for men with higher education (Dalton et al, 2008a). This study did not, however, include information on stage of disease, which is a strong prognostic factor in cancer; social differences in stage at diagnosis might therefore explain the social inequality in survival.
Few studies have evaluated the effect of socioeconomic position on lung cancer stage at diagnosis, and the results have been inconclusive (McCarthy et al, 2007;Halpern et al, 2008;Berglund et al, 2010;Booth et al, 2010); further, it has never previously been studied in the Danish setting. The Danish tax-funded health-care system provides free access to general practice, outpatient and hospital care. The general practitioners act as gatekeepers to the rest of the health-care system, and carry out initial diagnostic tests and refer to practicing specialists, hospitals or outpatient clinics as needed.
It may be that affluent lung cancer patients benefit more from lung cancer awareness campaigns, leading to shorter delays in seeking treatment or in diagnosis of the disease, which might result in a different stage distribution by socioeconomic position. Further, the differences in the distribution of smokers and comorbidity by social group might lead to differences in the interpretation of warning symptoms, such as cough and dyspnoea, with a corresponding difference in presentation delay.
In a nationwide, population-based cohort of 18 103 patients with lung cancer diagnosed between 2001 and 2008 in Denmark, we investigated the relationship between socioeconomic position and (1) tumour progression, measured as advanced-stage (stages IIIB–IV) vs early-stage (stages I–IIIA) lung cancer at the time of diagnosis and (2) the length of the period between referral and diagnosis. We hypothesised that patients' overall knowledge, reflecting their ability to interpret symptoms, communicate and access health services, is closely related to their educational status. We, therefore, chose the highest attained educational level as the primary socioeconomic variable.
MATERIALS AND METHODS
In the files of the Danish Lung Cancer Registry, we identified 25 648 persons born between 1920 and 1982 in whom lung cancer was diagnosed between 2001 and 2008 and who were aged ≥30 years at the time of diagnosis. The Lung Cancer Registry was established in 2001; estimated registration covers >85% and, since 2003, >90% of all lung cancer cases in Denmark (DLCG and DLCR, 2009; Jakobsen et al, 2009). We identified 24 229 persons (95%) in the files of Statistics Denmark 2 years before the year of lung cancer diagnosis in order to retrieve their socioeconomic characteristics; we assumed 2 years' latency to minimise a possible reverse effect of early symptoms of the disease on socioeconomic position. Additionally, we excluded persons who had no information on histological type (N = 4198; 17%) or hospital ward (N = 39).
Classification of stage
Of the 19 992 persons with NSCLC or SCLC, 16 720 persons (84%) were classified as early or advanced stage; 85 cases of clinical stage 0 were excluded. For the 3187 persons with no recorded clinical stage, we classified 1153 persons who had undergone intended curative surgery and 33 persons who were referred to oncological treatment for stage I–IIIA disease as early-stage lung cancer and 197 persons referred to oncological treatment for stage IIIB–IV disease as advanced-stage lung cancer. Of the 18 103 persons thus eligible for analysis, 7177 (40%) had a diagnosis of early-stage lung cancer and 10 926 (60%) advanced-stage lung cancer.
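To make the reclassification rule explicit, here is a minimal Python sketch; the column names (clinical_stage, curative_surgery, oncology_referral_stage) are hypothetical illustrations, not the registry's actual field names:

import pandas as pd

EARLY = {"I", "II", "IIIA"}
ADVANCED = {"IIIB", "IV"}

def classify_stage(row):
    # A registered clinical stage takes precedence (stage 0 is excluded upstream).
    if row["clinical_stage"] in EARLY:
        return "early"
    if row["clinical_stage"] in ADVANCED:
        return "advanced"
    # No recorded clinical stage: infer the stage group from treatment.
    if row["curative_surgery"]:
        return "early"
    if row["oncology_referral_stage"] in EARLY:
        return "early"
    if row["oncology_referral_stage"] in ADVANCED:
        return "advanced"
    return None  # stays unclassified and drops out of the analysis

df = pd.DataFrame({
    "clinical_stage": ["IIIA", None, None],
    "curative_surgery": [False, True, False],
    "oncology_referral_stage": [None, None, "IV"],
})
df["stage_group"] = df.apply(classify_stage, axis=1)
print(df)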
Waiting time
In Denmark, the National Cancer Plan defines the preferable delay between referral and diagnosis of lung cancer as <28 days (National Board of Health, 2010). Waiting time was calculated from the date of initial referral (from general practitioners, private specialists or other hospital wards) to the date at which clinical stage was registered (date of diagnosis). Among the 16 720 patients with a registered stage, 7 had no date of diagnosis and thus the analysis using waiting time as outcome was performed on a data set restricted to 16 713 patients with both a registered stage and date of diagnosis.
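In code, the waiting-time outcome reduces to a date difference and a 28-day cutoff; a minimal Python/pandas sketch with made-up dates and hypothetical column names:

import pandas as pd

df = pd.DataFrame({
    "referral_date": pd.to_datetime(["2005-01-03", "2005-02-01"]),
    "diagnosis_date": pd.to_datetime(["2005-01-20", "2005-03-20"]),
})
# Waiting time runs from initial referral to the date the clinical stage was registered.
df["waiting_days"] = (df["diagnosis_date"] - df["referral_date"]).dt.days
# Outcome used in the logistic models: diagnosis more than 28 days after referral.
df["late_diagnosis"] = df["waiting_days"] > 28
print(df)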
Socioeconomic factors
Information on socioeconomic position was obtained for each lung cancer patient from the Integrated Database for Labor Market Research, which contains annually updated data since 1980 and is run by Eurostat/Statistics Denmark (1995). Education was categorised into short education (i.e., mandatory education of up to 7 and 9 years for patients born before and after 1 January 1958, respectively), medium education (between 8–10 and 12 years, the latest grades of primary school, secondary school, and vocational education) and higher education (>12 years). Disposable income was calculated from household income after taxation and interest per person, adjusted for the number of persons in the household and for the 2000 value of the Danish crown, according to a formula from the Danish Ministry of Finance. We grouped disposable income into low (first quartile), medium (second and third quartiles), and high (fourth quartile). Affiliation to the work market was categorised into working, unemployed, early retirement pension (formerly known as disability pension, which is granted if a person is unable to work permanently due to mental or physical disability and if the disability reduces the ability to work by at least 50%) and age pension (anticipatory pension available from age 60 years and age pension from 65 years). Cohabitation status was defined as living with a partner, irrespective of marital status, or single.
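The income grouping (first quartile = low, middle two quartiles = medium, fourth quartile = high) is a plain quartile cut; a minimal Python/pandas sketch with hypothetical values:

import pandas as pd

incomes = pd.Series([110.0, 240.0, 180.0, 95.0, 310.0, 205.0])  # hypothetical adjusted disposable incomes
# Low = first quartile, medium = second and third quartiles, high = fourth quartile.
income_group = pd.qcut(incomes, q=[0, 0.25, 0.75, 1.0], labels=["low", "medium", "high"])
print(income_group)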
Comorbid disorders
By linking the personal identification numbers to the files of the Danish National Patient Register, we obtained full histories of diseases leading to hospitalisation from 1978 and, from 1995, outpatient visits for each cohort member to 1 year before the diagnosis of lung cancer (Andersen et al, 1999). Based on information on hospital contacts, including dates of discharge and diagnoses coded according to Danish modified versions of ICD-8 and, from 1994, ICD-10, we defined the Charlson comorbidity index, grouped on the basis of the cumulated sum of scores of 0, 1, 2, and ≥3 (Charlson et al, 1987; Dalton et al, 2008b).
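Conceptually, the Charlson score sums fixed weights over the condition groups found in a patient's hospital-contact history; the Python sketch below is a deliberately truncated illustration (the real index covers many more condition groups, and the ICD prefixes shown are examples, not the authors' code lists):

# Hypothetical, truncated mapping: Charlson condition weights keyed by ICD-10 prefix.
CHARLSON_WEIGHTS = {
    "I21": 1,  # myocardial infarction
    "I50": 1,  # congestive heart failure
    "E10": 1,  # diabetes without complications
    "K74": 3,  # moderate/severe liver disease
}

def charlson_score(icd_codes):
    # Count each distinct condition group at most once, then sum the weights.
    matched = {prefix for code in icd_codes
               for prefix in CHARLSON_WEIGHTS if code.startswith(prefix)}
    return sum(CHARLSON_WEIGHTS[p] for p in matched)

def charlson_group(score):
    # Grouping used in the paper: 0, 1, 2, and >=3.
    return min(score, 3)

print(charlson_group(charlson_score(["I219", "E109"])))  # -> 2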
Statistical analyses
Logistic regression models were used to examine the simultaneous influence of all socioeconomic and demographic factors of interest on the likelihood of receiving a diagnosis of advanced-stage lung cancer with the GENMOD procedure of SAS 9.1 (SAS Institute Inc., Cary, NC, USA). To account for possible clustering within hospital wards, we used generalised estimating equations with the exchangeable working correlation structure and robust variance estimates.
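The SAS GENMOD setup described here (logistic GEE, exchangeable working correlation within hospital wards, robust variance) has a close analogue in Python's statsmodels; the sketch below uses simulated data and hypothetical variable names, not the authors' code:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "advanced": rng.integers(0, 2, n),            # 1 = stages IIIB-IV at diagnosis
    "age5": rng.normal(70, 8, n) / 5.0,           # age in 5-year units
    "male": rng.integers(0, 2, n),
    "education": rng.choice(["short", "medium", "higher"], n),
    "ward": rng.integers(0, 40, n),               # hospital ward = cluster
})

# Logistic GEE with exchangeable working correlation within wards and
# robust (sandwich) variance estimates, mirroring the PROC GENMOD setup.
model = smf.gee(
    "advanced ~ age5 + male + C(education, Treatment(reference='short'))",
    groups="ward",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(np.exp(result.params))  # exponentiated coefficients = odds ratios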
A three-step model was used. In the first model, each socioeconomic or comorbidity variable was entered alone and adjusted for age and gender. In the second models, the individual exposure variables were additionally adjusted for variables further upstream in the causal pathway (education, cohabiting status, and income). In the final models, analyses were adjusted for age, gender, education, cohabitation status, income, and comorbidity. As there were only minor differences in the estimates obtained with the three models, only data from the first and the final models are shown. In order to explore the influence of affiliation to the working market on the association between socioeconomic position and advanced-stage lung cancer, we ran the logistic regression analyses separately for patients aged <65 years, including work market affiliation in the models.
Tests for interaction (effect modification) between covariates were performed with the Wald test statistic. Investigations of interactions between education and gender, comorbidity, and age, respectively, as well as between comorbidity and sex, and age, respectively, were performed. For the group of patients <65 years, we also tested for an interaction between affiliation to the work market and gender.
For the analysis of waiting time, we investigated the likelihood of a diagnosis >28 days after initial referral in logistic regression models. We used the same three-step model described above, also including stage (advanced or early) as a variable. The analyses were separated by early and advanced stage. Again, as minor differences in estimates were observed between models, only the results of the first (adjustment for age and gender) and the final model are shown.

RESULTS

Table 1 gives the descriptive and diagnostic characteristics of the 18 103 lung cancer patients, overall and by educational level. More men than women were diagnosed with lung cancer among persons with medium or higher education, whereas the proportions of patients with low income or who were retired or single were higher among those with short education. There were no substantial differences in stage, histological type, or median waiting time by educational group (Table 1).
Socioeconomic position and stage of lung cancer
In general, there were only very slight differences in risk estimates between the age and gender-adjusted and the mutually adjusted analyses (Table 2). No interactions were observed. The odds ratio (OR) adjusted for age, gender, education, income, cohabitation status, and comorbidity for a diagnosis of advanced-stage lung cancer was reduced by 8% for persons with higher education (OR, 0.92; 95% CI, 0.84–0.99) and increased by 6% for persons living alone (OR, 1.06; 95% CI, 1.01–1.13; Table 2). There was a slightly reduced OR of borderline significance for a diagnosis of advanced-stage lung cancer of 0.98 (95% CI, 0.97–1.00) per 5 years increment in age. Having comorbid disorders reduced the OR to 0.88 (95% CI, 0.83–0.92) in persons with a Charlson comorbidity score of 1, to 0.84 (95% CI, 0.77–0.92) in persons with a score of 2 and to 0.73 (95% CI, 0.65–0.81) in those with a score ≥3 when compared with persons with no comorbidity (Charlson comorbidity score 0; P-value for trend <0.001).
For the 7053 patients under 65 years of age, a statistically significant interaction between work market affiliation and gender was observed (P = 0.01) and models were separated by gender. Similar associations were found between age, education, cohabitation status, and comorbidity and advanced-stage lung cancer (data not shown). In comparison with working men, increased (although of borderline significance) adjusted ORs were found for men who were unemployed (OR, 1.20; 95% CI, 1.00–1.44) or who had retired early because of ill health (OR, 1.18; 95% CI, 0.96–1.44). Unemployed women had a non-significantly reduced OR (0.90; 95% CI, 0.72–1.12), and women who had retired early had a reduced OR (0.80; 95% CI, 0.69–0.92) in comparison with working women. Separate analyses of the data set after exclusion of the 1386 patients classified as having early- or advanced-stage lung cancer solely on the basis of referral to surgery or oncological treatment gave similar results to the overall analyses (data not shown).
Socioeconomic position and waiting time
For the analysis of socioeconomic position and waiting time, tests for interactions revealed a significant interaction between stage and age (P<0.001) and the analyses were separated by early and advanced stage. The OR adjusted for age, gender, education, income, cohabitation status, and comorbidity for a diagnosis >28 days after referral to hospital increased with age (OR, 1.03; 95% CI, 1.01–1.06 per 5 years) and was higher in men with advanced-stage cancer than in women (OR, 1.11; 95% CI, 1.03–1.21), whereas age and gender did not affect the OR for persons with early-stage lung cancer (Table 3). Higher education was associated with a reduced OR for a diagnosis >28 days after referral among patients with both early- and advanced-stage cancer (OR, 0.82; 95% CI, 0.70–0.96 and OR, 0.82; 95% CI, 0.72–0.93, respectively), as was medium education among early-stage patients (OR, 0.87; 95% CI, 0.81–0.94) with P-values for trend of <0.001 and 0.01 in early- and advanced-stage patients (Table 3). High income was associated with lower ORs for a diagnosis >28 days after referral although failing to reach statistical significance for patients with advanced-stage cancer (Table 3). Male gender and severe comorbidity (Charlson comorbidity scores of 2 or higher) were associated with increased ORs in advanced-stage lung cancer patients (P-value for trend <0.001) but not significantly so among patients with early-stage cancer (Table 3).
To check for co-linearity between education and income, all models were run both with and without income, and very little change was observed in risk estimates, indicating no co-linearity (data not shown). Some 17% of the material was excluded due to missing histology; mutually adjusted regression models revealed that older age, living alone, and having comorbidity were significantly associated with the OR for having no histology, while there was no association between gender, education, or income and having no histology (Table 4).
DISCUSSION
In this nationwide population-based study of stage at the time of diagnosis of lung cancer, short education and living alone were associated with higher risks for a diagnosis of more advanced disease. Furthermore, short education was associated with a longer than recommended time period between referral and diagnosis. Longer than recommended periods between referral and diagnosis were found for low income patients with a diagnosis of early-stage lung cancer, and for patients with advanced-stage lung cancer who were male, older and had severe comorbidity.
A recent population-based study in mid-Sweden of 3370 patients with NSCLC diagnosed in 1996–2004 showed no association between education and stage at diagnosis (Berglund et al, 2010). A Canadian study of 12 276 NSCLC patients diagnosed in 2003–2007 showed no difference in stage distribution by quintile of median area-based household income, but this study did not include information on education (Booth et al, 2010). We found evidence of an education gradient in stage at diagnosis among Danish patients with either NSCLC or SCLC, both of which were included because of the similarity in symptoms, the diagnostic procedures and the comparability of the staging of these groups of lung cancer; however, exclusion of SCLC from the data set resulted in similar results (data not shown). In line with our findings, a study in the United States of almost 700 000 patients with lung cancer diagnosed in 1998–2004 showed ORs of 1.3 (95% CI, 1.3–1.4) for a diagnosis of stages III–IV rather than stage I for persons insured by Medicaid (for low income or the medically needy) and 2.2 (95% CI, 2.1–2.3) for persons with no health insurance when compared with persons who were privately insured (Halpern et al, 2008). The present study is the largest population-based study outside the United States to be published, and our results support the notion that social differentials in stage at diagnosis might contribute to the social inequality in lung cancer survival, as has been demonstrated in countries with different levels of social security and welfare (Rachet et al, 2008; Dalton et al, 2008a; Berglund et al, 2010; Booth et al, 2010).
In accordance with some (Osborne et al, 2005; McCarthy et al, 2007; Frederiksen et al, 2008) but not all (Dalton et al, 2006; Berglund et al, 2010) studies of social position and stage of cancer, we found that living alone was associated with higher odds for late diagnosis of lung cancer than if living in a relationship. Living with a partner might reduce the delay in seeking medical help after symptoms are experienced and might help in navigating the diagnostic pathway, which includes several sectors of the health system. Furthermore, higher smoking prevalence and lower cessation rates have been observed among persons living alone (Broms et al, 2004; Osler et al, 2008; Giordano and Lindstrom, 2010).
The factor most strongly associated with stage in the fully adjusted analyses was comorbidity, which was associated with lower ORs for advanced-stage disease. This is plausible from a clinical point of view, because persons with chronic conditions are more likely to require frequent, periodic medical care, resulting in closer clinical monitoring than healthier persons. As a substantial proportion of lung cancer patients have physical disabilities that compromise their lung function, they may have more frequent X-rays or CT scans, which could detect early lung cancers.
The finding that comorbid disorders might lower the odds for late-stage disease at diagnosis is in line with the findings of a study in the United States based on SEER data, of 4626 persons with social security disability insurance entitlement to Medicare, who received a diagnosis of NSCLC, in which the adjusted OR for stages III–IV was 0.76 (95% CI, 0.72–0.81) in comparison with people without social security disability insurance (McCarthy et al, 2007).
We observed a difference by gender among lung cancer patients of working age. Comorbidity overall was associated with a lower OR for a diagnosis of advanced-stage lung cancer, but men who were unemployed or early retirement pensioners (many of whom have a Charlson comorbidity score, as early retirement can be granted in Denmark only if working ability is permanently reduced by >50%) were at increased odds for a diagnosis of advanced-stage lung cancer, whereas unemployed women or female early retirees were not. This finding indicates a vulnerable group of men with severe chronic comorbidity (Dalton et al, 2008b), who might not receive as much surveillance as their female counterparts or other persons with comorbid conditions who are working. A similar finding has to our knowledge not been reported earlier and might be a chance finding; however, if it can be replicated in further studies, the identification of a complex association between comorbidity, gender and lung cancer stage adds valuable information to our understanding of how socioeconomic position influences health outcomes.
Table 3. Age- and gender-adjusted and multivariate-adjusted odds ratios (ORs) with corresponding 95% confidence intervals (CIs) for a diagnosis >28 days after referral, among 16 713 persons with non-small cell or small cell lung cancer aged ≥30 years, Denmark, 2001–2008.

We also observed that level of education is associated with time between referral and diagnosis of either early- or advanced-stage lung cancer, as was also found in the Swedish study (Berglund et al, 2010). As the authors found no difference by education in stage at diagnosis, however, they concluded that their finding was of no clinical significance. Few other studies have explored the association between socioeconomic position and delay. A British study found that age and marital status were associated with longer overall diagnostic delay in lung cancer, but that age, gender, marital status, or social or ethnic group did not influence the delay to referral or secondary care (Neal and Allgar, 2005), which would encompass the period between referral and diagnosis that we investigated. We were unable to investigate how socioeconomic position influences the delay between symptom debut and contact with a doctor or the delay between first contact with a doctor and referral; however, our finding of a difference by education and to some degree income in the length of the diagnostic process and the stage at time of diagnosis – in a country with equal, free access to the health-care system – draws attention to practices of care in both referral and the diagnostic work-up of lung cancer.

The strengths of this study include the availability of high-quality clinical information on a population basis, from a clinical database with national coverage of about 90% of lung cancers diagnosed in the period. The detailed information in the clinical database enabled us to investigate the early disease trajectory, as we were able to retrieve information on the interval between referral and diagnosis as well as on clinical stage. The range of information in the database enabled us to infer clinical stage from referral to treatment for almost half of the 15% of patients for whom a primary clinical stage was not reported, thus increasing the external validity of the study. We excluded a substantial part of the patient group (17%) due to lack of information on histology. However, our finding that factors like age, cohabitation status, and comorbidity were associated with missing information on histology indicates that the strength of the associations observed between these factors and stage or waiting time might be underestimated. Linkage with other administrative registries with information collected for purposes independent of the study hypotheses and covering the entire Danish population ensured minimal selection and information bias, whereas the availability of individual-level socioeconomic position indicators reduced the likelihood of misclassification of exposure, which could arise if area-based socioeconomic position measures were used (Galobardes et al, 2006).
The inclusion of information on comorbidity from the Charlson comorbidity index, which is a validated instrument (Charlson et al, 1987), is clearly another strength of the study. It is, however, not possible to distinguish between the mildest and the most severe cases in the categories of diseases, because the index is based on discharge diagnoses from inpatient or outpatient admissions only. Furthermore, patients who were treated for their disease solely by their general practitioner score 0 in this index, which could lead to misclassification of exposure and residual confounding when comorbidity is treated as exposure or confounder. We were furthermore unable to explore the mechanisms underlying the association between short education, living alone, older age and advanced stage of lung cancer at diagnosis, as we had no information on the time of symptom onset, first visit to the general practitioner or delay before diagnostic procedures. Finally, we were unable to adjust for smoking status. There may be an educational gradient among people who have stopped smoking (Osler et al, 2001; Pisinger et al, 2008). Among former smokers one would expect increased awareness if symptoms such as cough or dyspnoea arise due to a developing lung cancer, as these symptoms might be interpreted as smoking-related among current smokers. An educational gradient in smoking cessation might therefore lead to differential awareness of symptoms by education, related to both smoking and lung cancer, and thus possibly have a role in an educational gradient in stage at diagnosis of lung cancer.
In spite of the insufficient evidence of a positive association between short diagnostic delay, early stage and prognosis of lung cancer, it is reasonable to assume that better survival rates can be achieved when lung cancer is diagnosed at an early stage, given the potential for curative treatment. The strengths of the associations we observed on stage at diagnosis and waiting time suggest that, compared with the effect of socioeconomic position on lung cancer incidence, the social gradient on these end points is moderate. Still, our results indicate that the pathway from referral due to a suspicion of cancer to diagnosis differs by socioeconomic position for lung cancer patients in Denmark. The finding that patients with short education, low income or who live alone have a higher risk for a longer period than recommended between referral and diagnosis and for advanced-stage lung cancer calls for greater attention to these groups of patients when they enter the healthcare system with symptoms indicative of lung cancer.
Implications of such findings could be that optimised diagnostic processes, securing early referral and navigation of vulnerable patients through the different sectors of the health system, should be offered to groups defined by low socioeconomic position or living alone. (Table note: ORs are mutually adjusted as well as adjusted for hospital ward by generalised estimating equations.) | 2017-08-17T06:55:16.875Z | 2011-09-06T00:00:00.000 | {
"year": 2011,
"sha1": "d79ad98d3b2451e8520ae9c6a224c860dccc32d8",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/bjc2011342.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d79ad98d3b2451e8520ae9c6a224c860dccc32d8",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235683436 | pes2o/s2orc | v3-fos-license | Gauge Enhanced Quantum Criticality Beyond the Standard Model
Standard lore views our 4d quantum vacuum governed by one of the candidate Standard Models (SMs), while lifting towards some Grand Unification-like structure (GUT) at higher energy scales. In contrast, in our work, we introduce an alternative view that the SM arises from various neighbor vacua competition in a quantum phase diagram. In general, we regard the SM arising near the gapless quantum criticality (either critical points or critical regions) between the competing neighbor vacua. In particular, we demonstrate how the $su(3)\times su(2)\times u(1)$ SM with 16n Weyl fermions arises near the quantum criticality between the GUT competition of Georgi-Glashow (GG) $su(5)$ and Pati-Salam (PS) $su(4)\times su(2)\times su(2)$. We propose two enveloping toy models. Model I is a conventional $so(10)$ GUT with a Spin(10) gauge group plus GUT-Higgs potential inducing various Higgs transitions. Model II modifies Model I plus a 4d discrete torsion Wess-Zumino-Witten-like term built from GUT-Higgs field (that matches a nonperturbative global mixed gauge-gravity anomaly captured by a 5d invertible topological field theory $w_2w_3$), which manifests a Beyond-Landau-Ginzburg criticality between GG and PS models, with extra Beyond-the-Standard-Model (BSM) excitations emerging near a quantum critical region. If the internal symmetries were treated as global symmetries, we show a gapless 4d deconfined quantum criticality with new BSM fractionalized fragmentary excitations of Color-Flavor separation, and gauge enhancement including a Dark Gauge force sector, altogether requiring a double fermionic Spin structure named DSpin. If the internal symmetries are dynamically gauged, we show a 4d boundary criticality such that only appropriately gauge enhanced dynamical GUT gauge fields propagate into an extra-dimensional 5d bulk. The phenomena may be regarded as SM deformation or morphogenesis.
Introduction, Motivation, and Summary
It is a common ritual practice in high-energy physics (HEP) to regard our quantum vacuum in the 4-dimensional spacetime (denoted as 4d or 3+1d) as governed by one of the candidate su(3) × su(2) × u(1) Standard Models (SMs) [1-4], as a quantum field theory (QFT) and an effective field theory (EFT) suitable below a certain energy scale, while lifting towards some Grand Unification-like structure (GUT) [5][6][7] or String Theory at higher energy scales (footnote 1); see Fig. 1 (a). Although many non-supersymmetric GUT models have been ruled out by experiments due to no evidence yet of the predicted proton decay (proton lifetime > 10^34 years) [8], many physicists still speculate that GUT plays a certain crucial role in a higher energy unification [9]. How can we remedy the conventional GUTs other than seeking their supersymmetry (SUSY) variants or String Theory modifications at higher energy?
Figure 1: (a) Standard lore seeks for a single unified dynamically gauged internal symmetry at high energy. One probes the shorter distance and higher energy scales to look for the GUT, SUSY, or String Theory evidence. The vertical axis shows an energy scale, while the horizontal axis plays no physical role. (b) We propose an alternative view: SM is just one of many possible low energy phases of the quantum vacua of our universe. By introducing a horizontal axis that represents many possible quantum vacua tuning parameters, we can show that the SM phase can tune to other GUT phases, even at a fixed energy scale (without the necessity to go to higher energy) and at zero temperature. SM arises near the gapless quantum critical region (shown as the gray area).
To address the above question, we propose to seek a new viewpoint. In our present work, instead of viewing GUT only as some higher-energy theory of the SM, we suggest that various GUTs may be neighbor quantum vacua next to the SM in an immense quantum phase diagram (footnote 2) shown schematically in Fig. 1 (b), with an underlying larger quantum vacua tuning parameter space (i.e., the horizontal axis in Fig. 1 (b), 2 and 3). We provide two explicit Toy Models in Fig. 2 and Fig. 3: SM arises near the gapless quantum critical point (for Fig. 2) or critical region (gray area for Fig. 3) between the competing neighbor GUT vacua. Readers may be puzzled: What precisely can be the quantum vacua tuning parameters? What can we gain from this viewpoint? What are the motivations? Let us address these issues one by one.

Footnote 1: Throughout our article, we denote nd for n-dimensional spacetime, or n+1d for an n-dimensional space and 1-dimensional time. We also denote the Lie algebra in the lower case such as so(10), and denote the Lie group in the capital case such as Spin(10). For example, we follow the convention to call the model [7] the so(10) GUT, but it requires the Spin(10) gauge group.
Footnote 2: Here quantum phases mean that we focus on the zero temperature physics where the quantum effect is dominant, see for example an overview [10]. The quantum phase diagram at zero temperature behaves more quantum than the thermal phase diagram at finite temperature.
Footnote 3: Throughout our work, whenever we mention the Higgs field or Higgs transition, we normally mean the GUT-Higgs instead of the electroweak Higgs.
• Quantum vacua tuning parameters can be as familiarly simple as the tuning of the GUT-Higgs potential ( r R (Φ R ) 2 +λ R (Φ R ) 4 ) of some GUT-Higgs field Φ R that can induce a Higgs condensation 3 1 Throughout our article, we denote nd for n-dimensional spacetime, or n + 1d as an n -dimensional space and 1dimensional time. We also denote the Lie algebra in the lower case such as so (10), and denote the Lie group in the capital case such as Spin (10). For example, we follow the convention to call the model [7] as the so(10) GUT, but it requires the Spin(10) gauge group. 2 Here quantum phases mean that we focus on the zero temperature physics where the quantum effect is dominant, see for example an overview [10]. The quantum phase diagram at zero temperature behaves more quantum than the thermal phase diagram at finite temperature. 3 Throughout our work, whenever we mention Higgs field or Higgs transition, we normally mean the GUT-Higgs instead The parent EFT is a modified so(10) GUT with a Spin(10) gauge group, plus not only a GUT-Higgs potential but also a new 4d discrete torsion class of Wess-Zumino-Witten-like (WZW) term built from GUT-Higgs fields that saturates a nonperturbative global mixed gauge-gravity anomaly captured by a 5d invertible topological field theory w 2 w 3 (T M ) = w 2 w 3 (V SO (10) ), which manifests a Beyond-Landau-Ginzburg quantum critical region (shown in a gray area) between GG and PS models, with extra Beyond-the-Standard-Model (BSM) excitations emerging near the quantum criticality. The SM + BSM physics is denoted as SM * .
• Quantum vacua tuning parameters can be as familiarly simple as the tuning of the GUT-Higgs potential (r_R(Φ_R)^2 + λ_R(Φ_R)^4) of some GUT-Higgs field Φ_R, which can induce a Higgs condensation (footnote 3) phase transition via tuning from r_R > 0 to r_R < 0. The quantum vacua tuning parameters can be those triggering a scalar condensation ⟨Φ_R⟩ ≠ 0 in the r_R < 0 region. The possibility to access the GUT vacua from the SM vacuum by tuning certain model parameters has been largely overlooked in the existing literature, because some of these tuning parameters appear to be perturbatively irrelevant at the SM fixed point. A key proposal of this work is to investigate the non-perturbative effect of these tuning parameters in driving quantum phase transitions from the SM phase to adjacent GUT phases (a minimal mean-field sketch of the condensation appears after this list of motivations).
• Deformation class of QFT: Given the importance of symmetry and its associated 't Hooft anomaly of a QFT, Seiberg [11] and others (footnote 4) conjectured that seemingly different dd QFTs with the same symmetry G and the same 't Hooft anomaly Z_{d+1} of symmetry G [14] can indeed be deformed to each other via adding degrees of freedom at short distances that preserve the same symmetry and maintain the same overall anomaly. Namely, the whole system allows all symmetric interactions between the original QFT and any new symmetric QFTs brought down from high energy. This organization principle that connects a large class of QFTs together within the same data (G, Z_{d+1}) via any symmetric deformation (possibly with discontinuous or continuous quantum phase transitions [10] between different phases) is called the deformation class of QFTs in dd [11], which is indeed controlled by the cobordism or deformation class of the invertible topological quantum field theory Z_{d+1} in d+1d [15]. One can further define the deformation class for the 4d SM [16].
As we will see, our viewpoint in Fig. 1 (b) (also in Fig. 2 and Fig. 3) is not only compatible with this symmetric deformation class of QFT [11]; we also allow symmetry-breaking deformations along the quantum vacua tuning parameter space. We may refer to all these deformations of the SM to other neighbor vacua as the "morphogenesis" of the SM.
• Proton decay: The aforementioned issue of GUT proton decay may be resolved in our framework in two ways. First, the change of viewpoint: instead of looking for GUT proton decay in our vacuum (or in a higher energy GUT along the vertical axis, as in Fig. 1), we may look for GUT proton decay by first moving to the appropriate quantum vacuum along the horizontal axis in Fig. 1 (b) that already lives in this specific GUT (footnote 5). Second, a modified parent EFT that controls all possible deformations of the SM in the phase diagram may give rise to a different proton decay rate (footnote 6). The experimental bound on the proton decay rate only rules out the possibility to access non-supersymmetric GUT phases from the SM phases by thermal phase transitions (i.e., by raising the energy or temperature scales), but it does not say anything about accessing these GUT phases by quantum phase transitions (by tuning parameters near ground states at low energy). This work exactly focuses on the latter possibility of quantum phase transitions among the SM and GUTs.

Footnote 4: In fact the related concept has been used in arguing that the fermion doubling problem (occurring when regularizing chiral fermions nonperturbatively on the lattice with a chiral G symmetry) can be resolved by gapping the mirror chiral fermion if and only if the chiral fermion is anomaly free in G (tautologically, the mirror fermion is also anomaly free in G), see [12,13] and references therein. The argument follows directly from the fact that the gapless anomaly-free G-symmetric chiral fermion theory is in the same deformation class as the gapped anomaly-free G-symmetric theory.
Footnote 5: Take the Georgi-Glashow su(5) GUT [5] as an example. The conventional viewpoint may be problematic because this specific GUT may not be the correct higher energy theory of our vacuum along the vertical axis in Fig. 2 and Fig. 3. If we want to detect any proton decay in the su(5) GUT, hypothetically we may imagine creating a small bubble within the domain wall such that inside the bubble resides any possible deformation of the SM (e.g., any models along the horizontal axis in Fig. 2 and Fig. 3). Although changing the large-scale quantum vacuum structure of our SM universe is likely energetically impossible, changing the quantum vacuum inside a small-scale bubble is possibly feasible experimentally.
Footnote 6: For example, the two different toy-model parent EFTs in Fig. 2 and Fig. 3 respectively can give different proton decay rates. We do not attempt to compute the explicit proton decay rate in this work, because so far we only have two Toy Models that control a p = {0, 1} ∈ Z2 deformation class labeled by a Z2 nonperturbative global anomaly in 4d. The two Toy Models describe only a partial deformation class of the SM. There is also a Z16 deformation class for SM [16], etc. To compute an experimentally sensible proton decay rate for our vacuum, it would be best that we (1) locate the specific point on the phase diagram that precisely labels our vacuum, and (2) compute from the general enveloping parent EFT that includes all physically relevant deformations.
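To make the first bullet point above concrete, the condensation trigger is ordinary mean-field minimization of the quadratic-plus-quartic potential; a minimal LaTeX-rendered sketch, assuming λ_R > 0 and ignoring all other fields and couplings:

\[
V(\Phi_R) = r_R\,|\Phi_R|^2 + \lambda_R\,|\Phi_R|^4 ,
\qquad
\frac{\partial V}{\partial |\Phi_R|}
= 2\, r_R\, |\Phi_R| + 4\, \lambda_R\, |\Phi_R|^3 = 0 ,
\]
\[
\Rightarrow\quad
\langle |\Phi_R| \rangle =
\begin{cases}
0 , & r_R > 0 \ \ (\text{uncondensed, symmetry unbroken}),\\[2pt]
\sqrt{-\,r_R/(2\lambda_R)} , & r_R < 0 \ \ (\text{condensed, gauge group broken to the stabilizer of } \langle \Phi_R \rangle).
\end{cases}
\]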
The above three arguments summarize the motivation and philosophy behind our viewpoint. Namely, in our present work, we initiate and introduce an alternative complementary perspective: we propose that the SM vacuum can be a low energy quantum vacuum arising from the quantum competition of various neighbor GUT vacua in a quantum phase diagram. SM is just one possible phase allowed by the deformation class of SM [16]. Let us list some key results of our work: • In general, we propose that the SM may arise as one adjacent phase from the vicinity of gapless quantum criticality (either a critical point for Model I in Fig. 2, or a critical region for Model II in Fig. 3) between the competing neighbor GUT vacua.
The relevant mod 2 anomaly is captured by a 5d invertible topological field theory with partition function (footnote 7)

Z(M^5) = (−1)^{∫_{M^5} w_2(TM) w_3(TM)} .   (1.1)

Footnote 7: The w_j is the j-th Stiefel-Whitney (SW) characteristic class. The w_j(TM) is the SW class of the spacetime tangent bundle TM of the manifold M. The w_j(V_G) is the SW class of the principal G bundle. This mod 2 class w2w3 global anomaly has been checked to be absent in the so(10) GUT by Ref. [12,17]. This mixed gauge-gravitational anomaly is tightly related to the new SU(2) anomaly [17] due to the bundle constraint w2w3(TM) = w2w3(V_G), with G substituted by SO(3) ⊂ SO(10), related to the embedding SU(2) = Spin(3) ⊂ Spin(10). However, as we will see, it is natural to introduce a new 4d WZW term (appended to the so(10) GUT) with this w2w3 global anomaly in order to realize the SM vacuum as a quantum criticality phenomenon between the neighbor SU(5) GUT and Pati-Salam vacua.
The w2w3 global anomaly also occurs in a certain Z2 gauge theory with fermionic strings [18] and in the all-fermion U(1) electrodynamics [19,20], which is a pure U(1) gauge theory whose electric, magnetic, and dyonic objects are all fermions. These Z2 and U(1) gauge theories do have the spacetime tangent bundle constraints on TM, but do not have the analogous gauge bundle constraints on V_G. So this w2w3 = w2w3(TM) anomaly becomes a pure gravitational anomaly for these Z2 and U(1) gauge theories.
We recommend the following references [21][22][23][24] or this seminar video [25] for readers who wish to overview some modern perspectives about the anomalies of SM and GUT relevant gauge theories. In particular, we follow closely Ref. [24,25]. In summary, we may address anomalies with different adjectives to characterize their properties (a worked instance of the invertible case follows this list): invertible vs noninvertible: We only focus on the invertible anomalies, which follow the standard definition of anomalies (also in high-energy physics) captured by a one-higher-dimensional invertible TQFT as the low energy theory of invertible topological phases. The dd invertible anomalies (also the (d+1)d invertible TQFTs) are classified by the cobordism group data Ω^d_G ≡ TP_d(G) defined in Freed-Hopkins [26]. The partition function Z of a (d+1)d invertible TQFT satisfies |Z(M^{d+1})| = 1 on a closed M^{d+1}-manifold.
In contrast, the noninvertible anomalies are non-standard (usually not named as anomalies in high-energy physics), characterized by non-invertible topological phases with intrinsic topological orders.
perturbative local vs nonperturbative global anomalies: Whether the anomalies are local (or global), is determined by whether the gauge or diffeomorphism transformations are infinitesimal (or large) transformations, continuously deformable (or not deformable) to the identity element. The classifications of local vs global anomalies are the integer Z vs the finite torsion Zn classes respectively.
gauge anomaly vs mixed gauge-gravity anomaly vs gravitational anomaly: The adjective, gauge or gravity, refers to the types of couplings or probes that we require to detect them -whether the probes depends on the internal gauge bundle/connection or the spacetime geometry.
background fields or dynamical fields: Anomalies of global symmetries probed by non-dynamical background fields are known as 't Hooft anomalies. Anomalies coupled to dynamical fields must lead to anomaly cancellations to zero for consistency.
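As a worked instance of the invertible case, consider the 5d theory that captures the mod 2 anomaly central to this article (the explicit formula is also quoted in footnote 15 below); a short LaTeX-rendered check of invertibility:

\[
Z(M^5) \;=\; (-1)^{\int_{M^5} w_2(TM)\, w_3(TM)} \;\in\; \{\pm 1\} ,
\qquad
|Z(M^5)| = 1 \ \text{on every closed } M^5 ,
\]
\[
Z(M^5_1 \sqcup M^5_2) = Z(M^5_1)\, Z(M^5_2) ,
\qquad
Z(M^5) \cdot Z(M^5) = 1 ,
\]

so the theory is its own inverse under stacking, generating a Z_2 class, which matches the mod 2 (Z_2-valued) nature of the w2w3 global anomaly discussed throughout.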
Toy Model I as the p = 0 class without w 2 w 3 anomaly: Its parent EFT is the conventional so(10) GUT with a Spin(10) gauge group [7] plus a GUT-Higgs potential inducing various Higgs transitions to GG, PS, or SM, schematically shown in Fig. 2. The first model has no w 2 w 3 or any other anomaly within the Spin (10).
Toy Model II as the p = 1 class with w 2 w 3 anomaly and WZW term: To introduce non-trivial competitions between GG and PS phases, we consider a new parent EFT of a modified so(10) GUT with a Spin(10) gauge group, which includes not only the familiar so(10) GUT plus a GUT-Higgs potential, but also a new extra 4d discrete torsion class of Wess-Zumino-Witten-like (WZW) term that saturates a mod-2 class w 2 w 3 anomaly within the Spin(10).
The WZW term introduces nonperturbative interaction effects between different GUT-Higgs fields, which cause a substantial change of the deformation class of QFT vacuum that cannot be smoothly connected to the conventional so(10) GUT vacuum. There are distinct p ∈ {0, 1} = Z 2 deformation classes of QFT.
We propose a schematic quantum phase diagram, shown in Fig. 8, interpolating between different quantum vacua: the modified so(10) GUT + WZW term, the su(5) GG GUT, the su(4) × su(2)_L × su(2)_R PS model, and the su(3) × su(2) × u(1) SM. In fact, this w2w3 global anomaly (hereafter w2w3 as a shorthand for the precise bundle constraint w2w3(TM) = w2w3(V_SO(10))) does not occur when the internal symmetry is within su(5) (for the GG su(5) GUT), nor within su(4) × su(2) × su(2) (for the PS model), nor within su(3) × su(2) × u(1) (for the SM). Alternatively, we can also regard this w2w3 anomaly as being matched in the GG, PS, and SM via the symmetry breaking. This w2w3 global anomaly only occurs when the internal symmetry is Spin(10) (for the modified so(10) GUT + WZW term), but this anomaly still constrains the full quantum phase diagram (Fig. 8).
For Toy Model I without the WZW term and without the w2w3 anomaly, we should remove the whitened quantum critical region in Fig. 8, and we are left with a quantum critical point at the origin.
For Toy Model II with the WZW term and with the w2w3 anomaly, we encounter the whitened quantum critical region near the origin in Fig. 8.
Case (1). If the internal symmetries were pretended to be global symmetries (or weakly gauged by probe background fields), then we are dealing with the quantum criticality between Landau-Ginzburg global-symmetry-breaking phases in 4d. Conventionally, the global symmetry breaking pattern can be triggered by the GUT-Higgs fields. Surprisingly, for Model II (Fig. 3), we discover a gapless quantum phase with fractional excitations and deconfined emergent gauge structure, in analogy to 4d deconfined quantum criticality (footnote 8) beyond the Landau-Ginzburg-Wilson-Fisher critical phenomena. Specifically, we propose a 4d mother effective field theory, where the GUT-Higgs bosonic fields can be fractionalized to new fragmentary fermionic excitations, with extra gauge enhancement. An example of such gauge enhancement introduces a new U(1) gauge sector called [U(1)']_{emergent gauge}, different from the SM electrodynamics U(1)_EM. We name such a new theory a Fragmentary GUT-Higgs Liquid model, with new fermions and new gauge fields emergent only near the quantum criticality.

Case (2). If the internal symmetries are dynamically gauged (as they are not global symmetries but indeed are gauged in our quantum vacuum), we show that the gauge-enhanced 4d criticality not merely has the emergent [U(1)']_{emergent gauge}, but also has the enhanced Spin(10) gauge group. The Spin(10) gauge group and [U(1)']_{emergent gauge} form a gauge enhancement of the smaller gauge groups of the SM, GG, or PS models, only near the quantum criticality, see Fig. 8. Because the 5d invertible TQFT has the bundle constraint w2w3(TM) = w2w3(V_SO(10)), once the internal symmetries (such as the Spin(10)) are dynamically gauged, the 5d bulk is no longer an invertible TQFT. The Spin(10) gauge fields also have to be dynamically gauged in the 5d bulk. The Spin(10) gauge fields contribute deconfined gapless modes in 5d (footnote 9), in contrast to the confined non-abelian gauge fields being gapped in 4d. Remarkably, the Spin(10) gauge fields in 5d turn the previous TQFT w2w3(TM) = w2w3(V_SO(10)) into a 5d gapless bulk criticality! In summary, when the internal symmetries are dynamically gauged (as in our gauged quantum vacuum):

- 4d gauge fields: The gauge fields of the SM, GG, and PS GUT (su(3) × su(2) × u(1), su(5), and su(4) × su(2)_L × su(2)_R) are still restricted to 4d in their respective regions of the quantum phase diagram (Fig. 8). There is still some emergent [U(1)']_{emergent gauge} gauge field, also restricted to 4d, as a 4d boundary deconfined quantum criticality (the same as the previous Case (1) when the internal symmetry is not gauged).

- 5d gauge fields: However, when and only when the GUT gauge fields are appropriately gauge enhanced (to the Spin(10) gauge fields in our Fig. 8), then they can propagate into the extra-dimensional 5d bulk, and they can induce a 5d bulk criticality.

The GUT-Higgs Φ is also the basic degrees of freedom for the 4d WZW term that saturates the w2w3 anomaly. To rephrase what we had said, the GUT-Higgs Φ is split into the fractionalized fragmentary colorons and flavorons. Just as the GUT-Higgs Φ can interact with the SM particles and SM gauge forces, the fragmentary colorons and flavorons can also interact with the SM particles and SM gauge forces. The colorons carry the SM's SU(3)_c strong gauge charge, while the flavorons carry the SM's SU(2)_L weak gauge charge. Just as the GUT-Higgs fields are made very heavy, these colorons and flavorons are also heavy and can also be heavy Dark Matter candidates. This fractionalization accompanies the emergent dark gauge field a^dark_{μ,gauge}.

Footnote 8: The concept of deconfined quantum criticality was first developed in the condensed matter community [27], to describe a class of direct continuous transitions between two distinct symmetry breaking phases with fractionalized excitations and gauge structures emerging in the low-energy spectrum at and only at the transition. It occurs when a quantum system with global symmetry G has the tendency to spontaneously break the symmetry to its distinct subgroups G_sub,1 and G_sub,2, while the low-energy effective field theory has a G-anomaly but not G_sub,1- or G_sub,2-anomalies, in terms of 't Hooft anomalies. Then the two symmetry breaking phases cannot share a trivial G-symmetric intermediate phase, paving the way for gapless phase transitions and fractionalized excitations to emerge. Several recent works explore possible deconfined quantum criticality in 4d spacetime (see [28][29][30][31] and references therein). A hint toward our construction of 4d deconfined quantum criticality between symmetry breaking phases is the fact that the Spin(10) (treated as a global symmetry) can have a 't Hooft anomaly of gauge-gravity anomaly type (due to the aforementioned w2w3 anomaly), while the smaller subgroups with Lie algebras su(5) of GG, su(4) × su(2) × su(2) of PS, or su(3) × su(2) × u(1) of SM have no such w2w3 anomaly. So the anomalous spacetime-internal Spin(10) symmetry hints at a possible fractionalization of the GUT-Higgs field as a deconfined quantum criticality. A crucial idea of the deconfined quantum criticality construction is that "the GPS-symmetry-breaking topological defect of the GG GUT-Higgs model traps the fractionalized quantum number of the unbroken GG internal symmetry group; while vice versa, the GGG-symmetry-breaking topological defect of the PS GUT-Higgs model traps the quantum number of the unbroken PS internal symmetry group." Here GPS-symmetry-breaking and GGG-symmetry-breaking respectively mean that the internal symmetry groups G (i.e., gauge groups) of the PS and GG models are partly broken. The terminology gauge enhanced quantum criticality is introduced in [31].
Footnote 9: The reason that the non-abelian gauge theory can become gapless in 5d can be understood simply by analyzing the renormalization group (RG) flow of the gauge coupling.
• The number of generations/families N_f: So far we have not yet specified the role of the number of generations N_f of quarks and leptons in our theory. If each generation of 16 SM Weyl fermions associates with its own GUT-Higgs field and its WZW term, then the generation number N_f times 16 SM Weyl fermions with N_f GUT-Higgs fields requires the constraint N_f = 1 mod 2 to match the w2w3 anomaly, where N_f = 3 generations indeed works. However, regardless of the N_f of the SM, in general we can just introduce a single (or any odd number of) GUT-Higgs field and WZW sector to match the 1 mod 2 class of the w2w3 anomaly. In any case, it is inspiring to confirm that our proposal on the gauge enhanced quantum criticality can really happen between our N_f = 3 SM quantum vacuum and the neighbor GUT vacua. In this article, we focus on N_f = 1 for simplicity, but we can also triplicate N_f = 1 to N_f = 3.

(Figure caption fragments on the fermion representations:) The 16 Weyl fermions of the SM are $\bar 5 \oplus 10 \oplus 1$ of SU(5), with the 16th Weyl fermion $(1,1)_{0,L} \sim 1$ of SU(5); they are $(4, 2, 1) \oplus (\bar 4, 1, 2)$ of su(4) × su(2)_L × su(2)_R, and the 16 of so(10) (or Spin(10)). These L and R are internal symmetry group indices; they are different from (but correlated with) the spacetime symmetry L and R. So $(3, 2)_{1,L} \oplus (1, 2)_{-3,L} \sim (4, 2, 1)_L$, and $(\bar 3, 1)_{2,L} \oplus (\bar 3, 1)_{-4,L} \oplus (1, 1)_{6,L} \oplus (1, 1)_{0,L} \sim (\bar 4, 1, 2)_L$ of the PS model.

In the remaining part of Section 1, we start from an overview of the basic required ingredients of SM and GUT in Sec. 1.1. The outline of this article is given in the table of Contents.

(Figure caption fragment on the phase diagram:) Here r_R denotes the coefficient of the effective quadratic potential of the Φ_R field in the representation R. The corresponding GUT-Higgs Φ_R field will condense in the representation R if r_R < 0. Relatively speaking, the infrared (IR) low energy is drawn in red (for SM), the intermediate neighbor phases in green or blue (for PS or SU(5) models), while the ultraviolet (UV) higher energy is drawn in violet purple (for Spin(10)); although the readers should keep in mind that we really explore the near-ground-state, zero-energy and zero-temperature quantum phase diagram. These colors are also designed to match the colors of partitions of representations in Fig. 4 to Fig. 7. For Toy Model I without WZW term and without w2w3 anomaly, we should remove the whitened quantum critical region, but we are left with a quantum critical point at the origin. For Toy Model II with WZW term and with w2w3 anomaly, we encounter the whitened quantum critical region near the origin. The quantum critical region can have dynamical consequences such as an emergent deconfined dark gauge force.
Various Standard Models and Grand Unifications as Effective Field Theories
Unification, as a central theme in the modern fundamental physics, is a theoretical framework aiming to embody the "elementary" excitations and forces into a common origin. Assuming without any significant dynamical gravity effect at the subatomic scale (i.e., we are only limited to probe the underlying quantum theory by placing the quantum systems on any curved spacetime geometry, but without significant gravity back-reactions), the quantum field theory (QFT) provides a suitable framework for such a unification. Furthermore, assuming that we look at the QFT description valid below a certain energy scale (thus we are ignorant above that energy scale), we shall also implement the effective field theory (EFT) perspective.
In fact, from the EFT perspective, we should remind ourselves that the "elementary" excitations are only "elementary" with respect to a given EFT quantum vacuum. Moving away from the EFT vacuum (by tuning appropriate physical parameters) to a new quantum vacuum, we shall see that the "elementary" excitations of the new vacuum may be drastically different from the original "elementary" excitations of the previous EFT. So the "elementary" excitations reveal the limitations of our EFT descriptions of quantum vacua (footnote 10). Several examples of such 3+1d QFT and EFT paradigms for high energy physics (HEP) include the Standard Model (SM) and Grand Unification (Grand Unified Theory or GUT) [1-7]:

1. Standard Model (SM): Glashow-Salam-Weinberg (GSW) [1][2][3][4] proposed the electroweak theory of the unified electromagnetic and weak forces between elementary particles. The GSW theory together with the strong force [32,33] becomes the Standard Model (SM), which is essential to describe subatomic particle physics. The SM gauge group can be $G_{\rm SM_q} \equiv \frac{SU(3)_c \times SU(2)_L \times U(1)_{\tilde Y}}{\mathbb{Z}_q}$, with the mod q = 1, 2, 3, 6 so far undetermined by the current experiments (see an overview [34,35] on this global structure of the SM Lie group issue). The subscript c is for color, the L is for the internal SU(2) (L for internal symmetry and its spinor) locked with the left-handed Weyl fermion (L for spacetime symmetry and its spinor) in the standard HEP convention, and Ỹ for electroweak hypercharge. The "elementary" particle excitations of this SM EFT, with 15n or 16n Weyl fermions, are constrained by the representation of su(3) × su(2) × u(1) as (see Fig. 4):

$(3,2)_{1,L} \oplus (\bar 3,1)_{-4,L} \oplus (\bar 3,1)_{2,L} \oplus (1,2)_{-3,L} \oplus (1,1)_{6,L} \,[\oplus\,(1,1)_{0,L}]$.   (1.3) (footnote 11)

The 16th Weyl fermion $(1,1)_{0,L}$ is an extra sterile neutrino, sterile to the SM gauge force, also called the right-handed neutrino. We will focus on the 16n Weyl fermion model in this present work (footnote 12). In our convention, we write Weyl fermions in the left-handed (L) basis, which means that each is a 2-component $2_L$ spinor of the spacetime symmetry group Spin(1,3).

2. The su(5) Grand Unification (su(5) GUT): Georgi-Glashow (GG) [5] hypothesized that at a higher energy, the three SM gauge interactions merge into a single electronuclear force under a simple Lie algebra su(5), or precisely an SU(5) Lie group gauge theory. The su(5) GUT works for 15n Weyl fermions, also for 16n Weyl fermions (i.e., 15 or 16 Weyl fermions per generation). The "elementary" particle excitations of this SU(5) EFT, with 15n or 16n Weyl fermions, are constrained by the representation of SU(5) as (see Fig. 5): $\bar 5 \oplus 10 \,[\oplus\, 1]$, again written all in the left-handed (L) Weyl basis. The 16th Weyl fermion is an extra sterile neutrino, sterile to the SU(5) gauge force, also called the right-handed neutrino.

Footnote 10: Prominent examples occur in various systems with the duality descriptions and the order/disorder operators, such as in the Ising model and Majorana fermion system in 1+1d.
Footnote 11: Here we use the integer quantized U(1)_Ỹ. If we use the phenomenology hypercharge U(1)_Y, which is 1/6 of U(1)_Ỹ, namely $q_{U(1)_Y} = \frac{1}{6}\, q_{U(1)_{\tilde Y}}$, to write (1.3), then we have instead $(3,2)_{1/6,L} \oplus (\bar 3,1)_{-2/3,L} \oplus (\bar 3,1)_{1/3,L} \oplus (1,2)_{-1/2,L} \oplus (1,1)_{1,L} \,[\oplus\,(1,1)_{0,L}]$.
Footnote 12: In our present work, we shall focus on the SM or GUT with 16n Weyl fermions. In contrast, Ref. [36][37][38] considers the SM or GUT with 15n Weyl fermions and with a discrete variant of the baryon minus lepton number B−L symmetry preserved. Ref. [36][37][38] then suggests that the missing 16th Weyl fermions can be substituted by additional 4d or 5d gapped topological quantum field theories (TQFTs), or by 4d gapless interacting conformal field theories (CFTs), to saturate a certain Z16 global anomaly. On the other hand, our present work does not introduce these Z16-class anomalous sectors, because we have already implemented the 16n Weyl fermion models that make the Z16 global anomaly fully cancelled.
3. The Pati-Salam model (PS model): Pati-Salam (PS) [6] hypothesized that the lepton forms the fourth color, extending SU(3) to SU(4). The PS also puts the left SU(2)_L and a hypothetical right SU(2)_R on equal footing. The PS gauge Lie algebra is su(4) × su(2)_L × su(2)_R, and the PS gauge Lie group is $G_{\rm PS_q} \equiv \frac{SU(4) \times SU(2)_L \times SU(2)_R}{\mathbb{Z}_q}$, with the mod q = 1, 2 depending on the global structure of the Lie group. The "elementary" particle excitations of this PS EFT, with 16n Weyl fermions, are constrained by the representation of $G_{\rm PS_q}$ as (see Fig. 6): $(4, 2, 1) \oplus (\bar 4, 1, 2)$.

4. The so(10) Grand Unification (so(10) GUT): the model [7] places all 16 Weyl fermions per generation into the 16-dimensional spinor representation of the gauge group Spin(10) (with a local Lie algebra so(10)). Thus, the 16n Weyl fermions can interact via the Spin(10) gauge fields at a higher energy. In this case, the 16th Weyl fermion, previously a sterile neutrino to the SU(5), is no longer sterile to the Spin(10) gauge fields; it also carries a charge 1, thus not sterile, under the gauged center subgroup Z(Spin(10)) = Z_4.
Here we use the L and R to specify the left/right-handed spacetime spinor of Spin(1,3). We use the L and R to specify the left or right internal spinor representation of su(2)L × su(2)R.
Standard Models from the competing phases of Grand Unifications
In Sec. 2, we start by enlisting and explaining some group embedding structures from some of the relevant GUTs to the SM in Sec. 2.1.
2.1 Spacetime-Internal Symmetry Group embedding of SMs and GUTs, and the w2w3 anomaly

Here we use the inclusion notation G_large ← G_small to imply that:
• G_large ⊃ G_small, namely the G_large contains G_small as a subgroup, or equivalently G_small can be embedded in G_large.
• G large can be broken to G small via symmetry breaking of Higgs condensation (which we will explore).
The internal symmetry group embedding structure has been explored, for example as summarized in [39]:

so(10) ← su(5) ← su(3) × su(2) × u(1),    so(10) ← su(4) × su(2)_L × su(2)_R ← su(3) × su(2) × u(1).

We further include the complete spacetime-internal symmetry group embedding structure as follows, schematically obtained from the internal chains above by promoting each internal symmetry group G to the spacetime-internal group Spin ×_{Z_2^F} G (together with the Z_{4,X} factor, whose generator satisfies (X)^2 = (−1)^F, for the SM, GG, and PS groups; for Spin(10), the Z_{4,X} is already its center):

(2.3)
Some comments about (2.3) follow: 1. The Spin means the spacetime rotational symmetry group Spin ≡ Spin(1, 3) for 4d Lorentz signature (or Spin ≡ Spin(4) for 4d Euclidean signature). The Spin contains the fermionic parity Z_2^F as the center subgroup, thus Spin/Z_2^F = SO, where the SO is the bosonic spacetime (special orthogonal) rotational symmetry group (similarly, SO ≡ SO(1, 3) for 4d Lorentz signature, or SO ≡ SO(4) for 4d Euclidean signature). The notation G_1 ×_{N_shared} G_2 ≡ (G_1 × G_2)/N_shared means modding out their common normal subgroup N_shared. 2. The Z_{4,X} has the X-symmetry generator such that its square (X)^2 = (−1)^F is the fermion parity operator, so Z_{4,X} ⊃ Z_2^F. Wilczek-Zee [40] first noticed that the X ≡ 5(B − L) − 4Y, with the baryon minus lepton number B − L and the electroweak hypercharge Y, is a good global symmetry respected by the SM and the su(5) GUT. All known quarks and leptons carry charge 1 of Z_{4,X} in the left-handed Weyl spinor basis. The center of Spin(10) can be chosen exactly as Z(Spin(10)) = Z_{4,X}. We summarize how Z_{4,X} can be obtained in Table 3 and Table 4. See more discussions on Z_{4,X} in [21, 24, 36-38].
3. The (X)^2 = (−1)^F relation is obeyed in the non-supersymmetric SM and GUT models, so it is natural to introduce the Spin ×_{Z_2^F} Z_{4,X} structure in (2.3). However, it is possible to have new fermions, such as in supersymmetric SMs or GUTs, which do not necessarily obey the (X)^2 = (−1)^F relation. In that case, we can introduce just the Spin × Z_{4,X} structure. See footnote 14 for the alternative symmetry embedding with the Spin × Z_{4,X} structure. 4. In this (2.3), we keep a structure of Spin ×_{Z_2^F} Z_{4,X}, which is essential to produce a mixed gauge-gravity nonperturbative global anomaly constraint of a Z16 class. As already mentioned in footnote 12, in this article we keep the 16n Weyl fermions in all our SM and GUT models, thus the Z16 global anomaly is already cancelled by the 16n chiral fermions. 5. In this (2.3), we also keep a structure of Spin ×_{Z_2^F} Spin(10): the cobordism group Ω^d_G ≡ TP_d(G) shows [12,22]

TP_5(Spin ×_{Z_2^F} Spin(10)) = Z_2, but TP_5(Spin × Spin(10)) = 0.   (2.5)

This implies that only the Spin ×_{Z_2^F} Spin(10) structure offers a possible Z_2 class global anomaly in 4d, captured by a 5d invertible TQFT with a partition function on a 5d manifold M^5 (footnote 15):

Z(M^5) = (−1)^{∫_{M^5} w_2 w_3 (V_{SO(10)})} .   (2.8)

But this mod 2 anomaly is absent and not allowed on the Spin × Spin(10) structure. The difference between Spin ×_{Z_2^F} Spin(10) and Spin × Spin(10) is the following: the fermion charged under (−1)^F, thus odd under Z_2^F, must be in the Z_2 normal subgroup of the center subgroup Z(Spin(10)) = Z_{4,X}, so that (X)^2 = (−1)^F, in order to impose the spacetime-internal Spin ×_{Z_2^F} Spin(10) structure. However, in contrast, the Spin × Spin(10) allows other fermions to not obey the (X)^2 = (−1)^F relation.
14 Another version of the spacetime-internal symmetry group embedding (more suitable for supersymmetric SMs or GUTs) uses the $\mathrm{Spin} \times \mathbb{Z}_{4,X}$ structure in place of $\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathbb{Z}_{4,X}$:

(2.4)

15 An invertible TQFT means that the TQFT path integral or partition function $Z(M)$ on any closed manifold $M$ has absolute value $|Z(M)| = 1$. Thus the dimension of its Hilbert space is always 1 on any closed spatial manifold; there is no topological ground state degeneracy. Here $Z(M^5) = (-1)^{\int w_2 w_3} = \pm 1$ on any closed $M^5$, so it is an invertible TQFT, with $Z(M^5) = -1$ when $M^5$ is a Dold manifold $\mathbb{CP}^2 \rtimes S^1$ or a Wu manifold $\mathrm{SU}(3)/\mathrm{SO}(3)$ [17,22]. Here the $\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)$ structure imposes a spacetime and gauge bundle constraint with $G = \mathrm{Spin}(10)/\mathbb{Z}_2^F = \mathrm{SO}(10)$. Moreover, the Steenrod square $\mathrm{Sq}^1$ is an operation sending a second cohomology class to a third cohomology class, $H^2 \to H^3$, which we can regard as $\mathrm{Sq}^1 = \frac{1}{2}\delta \bmod 2$, with $\delta$ a coboundary operator (see for example [22]). Then, in the case $G = \mathrm{SO}(10)$, we can deduce another bundle constraint:

(2.7)

On an orientable spacetime, the first Stiefel-Whitney class $w_1(TM) = 0$, so $w_3(TM) = w_3(V_G)$.
• the bosons must be in the integer isospin representations $0, 1, 2, \ldots$ of SU(2) (namely, the odd-dimensional representations of SU(2)).
6. The last, but most important, comment of all: in order to realize a possible continuous deconfined quantum phase transition, we require the $w_2 w_3$ anomaly in (2.8), such that this anomaly occurs at the phase transition between the GG and PS models in Fig. 8. So we aim to impose the $\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)$ structure as in (2.3) in order to implement the $w_2 w_3$ anomaly. In short, the reader may ask: why do we need the $w_2 w_3$ anomaly near the criticality for establishing a possible continuous quantum phase transition between the GG and PS models?
The answer is:
• The GG and PS models are Landau-Ginzburg symmetry-breaking types of phases (when we treat the internal symmetry as a global symmetry) or gauge-symmetry-breaking types of phases (when we treat the internal symmetry group as a gauge group). The $w_2 w_3$ anomaly is matched on the two sides of the transition by the GG and PS models via symmetry breaking. (In fact, no $w_2 w_3$ anomaly is allowed within the GG and PS models themselves.)
• But the $w_2 w_3$ anomaly can protect a gapless quantum phase transition (or a gapless intermediate quantum critical region) between the GG and PS models when the Spin(10) symmetry is restored at their phase transition. The phase transition can be protected to be Spin(10)-symmetry-preserving and gapless, because the $w_2 w_3$ anomaly exists only in the enlarged Spin(10) internal symmetry group.
Because the conventional so(10) GUT is free from the $w_2 w_3$ anomaly [12,17], we will need to explicitly introduce a new WZW-like term, built out of the GUT-Higgs field, into the mother EFT, which allows the GUT-Higgs sector (beyond the SM sector) to saturate the $w_2 w_3$ anomaly. To this end, we will start by writing down a GUT-Higgs model in the context of the so(10) GUT, and then modify the GUT-Higgs model to saturate the $w_2 w_3$ anomaly. (That mother EFT will be the main achievement later in Sec. 3.)
2.2 Branching Rule of SMs and GUTs, and a GUT-Higgs Model
In the following, we motivate the GUT model with GUT-Higgs fields implementing the gauge symmetry breaking pattern down to the lower-energy EFT (such as the SM). Most of these breaking patterns are well-established and overviewed in [41]. The additional new input is that we try to unify several models into a single GUT-Higgs model with as few GUT-Higgs fields as possible. In Appendix B, we go through the logic again and carefully examine the consequences and possibilities of the required types of GUT-Higgs fields. Later we will motivate a possible Lagrangian for the GUT-Higgs potential.
Here we summarize what we need from the analysis done in Appendix B:
• We can use a Lorentz scalar boson $\Phi_{45}$ in the 45-dimensional real representation of so(10) or Spin(10),

(2.10)

to break the Spin(10) of the so(10) GUT down to the SU(5) of the GG model; we can also use this same $\Phi_{45}$ to break the $G_{\rm PS} \equiv \frac{\mathrm{Spin}(6) \times \mathrm{Spin}(4)}{\mathbb{Z}_2^F}$ of the PS model down to the gauge group of the SM.
• We can use a Lorentz scalar boson $\Phi_{54}$ in the 54-dimensional real representation of so(10) or Spin(10),

(2.11)

to break the Spin(10) of the so(10) GUT down to the $G_{\rm PS} \equiv \frac{\mathrm{Spin}(6) \times \mathrm{Spin}(4)}{\mathbb{Z}_2^F}$ of the PS model; we can also use this same $\Phi_{54}$ to break the SU(5) of the GG model down to the gauge group of the SM.
• The combination of the two facts above is summarized in Fig. 9: we can use the $\Phi_{45}$ and $\Phi_{54}$ to write a GUT-Higgs model that induces a qualitative phase diagram similar to Fig. 8. Given the so(10) GUT, to induce the three other models in Fig. 9, we can add the GUT-Higgs potential $U(\Phi_R)$,

(2.12)

where $r_{45}$ and $r_{54}$ are real-number tunable parameters shown in Fig. 8 and Fig. 10; a slice of Fig. 10 becomes Fig. 8. (For now we drop the GUT-Higgs $\Phi_1$ and thus the $r_1$ axis in Fig. 10; more on this $\Phi_1$ later.) We can use this $U(\Phi_R)$ potential in (2.12) to induce the interiors of the four phases (the so(10) GUT, the su(5) GUT, the PS model, and the SM).
• If $\Phi_{54}$ condenses, namely if $r_{54} < 0$ so that $\langle \Phi_{54} \rangle \neq 0$, then the so(10) GUT is Higgsed down to the PS model. Here the real parameter $r_R \in \mathbb{R}$ denotes the coefficient of the effective quadratic potential of the $\Phi$ field in the representation R; the corresponding GUT-Higgs $\Phi$ field condenses in the representation R if $r_R < 0$. Relatively speaking, the infrared (IR) low energy is drawn in red (for the SM), the intermediate neighbor phases are drawn in green or blue (for the PS or SU(5) models), while the ultraviolet (UV) higher energy is drawn in violet purple (for Spin(10)). These colors are also designed to match the colors of the partitions of representations in Fig. 4 to Fig. 7.
All of the above Higgs condensations induce continuous phase transitions.
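To make the condensation logic of $U(\Phi_R)$ concrete, here is a minimal sketch in Python (hypothetical helper names of our own; only the sign rule $r_R < 0 \Rightarrow \langle\Phi_R\rangle \neq 0$ from the text is used) that maps the signs of $(r_{45}, r_{54})$ to the four phases of Fig. 9:

```python
def phase(r45: float, r54: float) -> str:
    """Map the quadratic couplings of the GUT-Higgs potential
    U ~ r45 |Phi_45|^2 + r54 |Phi_54|^2 + ... (schematic)
    to the symmetry-breaking phase, assuming Phi_R condenses iff r_R < 0.
    Hypothetical helper for illustration; names are not from the paper."""
    c45 = r45 < 0  # <Phi_45> != 0 breaks Spin(10) -> SU(5) (GG)
    c54 = r54 < 0  # <Phi_54> != 0 breaks Spin(10) -> Spin(6)xSpin(4)/Z2 (PS)
    if c45 and c54:
        return "SM"          # both condensates: down to the Standard Model group
    if c45:
        return "su(5) GUT"   # Georgi-Glashow
    if c54:
        return "PS model"    # Pati-Salam
    return "so(10) GUT"      # no condensate: full Spin(10) gauge group

# Scanning the (r45, r54) sign quadrants reproduces the four phases of Fig. 8.
for r45, r54 in [(1, 1), (-1, 1), (1, -1), (-1, -1)]:
    print((r45, r54), "->", phase(r45, r54))
```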
The purpose of the next Section 3 is to design various EFTs and to explore the possible phase structures and phase transitions (of Fig. 8 and Fig. 10). In particular, we will write down a mother EFT such that it saturates the $w_2 w_3$ global anomaly and realizes an exotic quantum phase transition between the GG su(5) GUT and the PS model.
3.1 Elementary GUT-Higgs model induces the SM
In Section 2 (especially Sec. 2.2), we wrote down a GUT-Higgs potential $U(\Phi_R)$ in (2.12), appended to the so(10) GUT with $16n$ complex Weyl fermions $\psi_L$. Let us write down the full path integral $Z_{\rm GUT}$ of such an so(10) GUT plus $U(\Phi_R)$, in Lorentzian signature, evaluated on a 4-manifold $M^4$. The action $S_{\rm GUT}$ contains the Yang-Mills part $S_{\rm YM} = \int \mathrm{Tr}(F \wedge \star F)$ with Lie-algebra-valued field strength $F$, while the notation $(\psi_L \cdots)$ implies an indefinite number of Weyl fermion fields, so as to properly match the representation R of the Higgs field $\Phi_R$. For the so(10) GUT, we have to sum over Spin(10) gauge bundles, whose 1-form connection is the spin-1 Lorentz-vector Spin(10) gauge field, written as $A = A^a_\mu T^a \,\mathrm{d}x^\mu$. There are 45 such Lie algebra generators $T^a$, with:
• rank-16 matrix representations that act on the quark-and-lepton matter representation $16^+$ of Spin(10).
• rank-45 matrix representations that act on the Φ 45 as the 45 of Spin(10).
• rank-54 matrix representations that act on the Φ 54 as the 54 of Spin(10).
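As a quick dimension-bookkeeping check on these representations (elementary arithmetic of our own, not a result of the paper):

```python
n = 10  # vector dimension of SO(10)

dim_adjoint = n * (n - 1) // 2            # antisymmetric 2-tensor: the 45
dim_sym_traceless = n * (n + 1) // 2 - 1  # symmetric traceless 2-tensor: the 54
dim_weyl_spinor = 2 ** (n // 2) // 2      # chiral spinor of Spin(10): the 16

assert dim_adjoint == 45
assert dim_sym_traceless == 54
assert dim_weyl_spinor == 16
# The bivector 10 x 10 decomposes as 1 + 45 + 54 (cf. Sec. 3.3):
assert n * n == 1 + dim_adjoint + dim_sym_traceless
print(dim_adjoint, dim_sym_traceless, dim_weyl_spinor)
```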
Locally the Spin(10) Lie algebra is the same as the so(10) Lie algebra, but globally we really need to define the principal Spin(10) gauge bundle $P_A$ to sum over. So, more precisely, the path integral over the gauge field measure really means $\int [\mathcal{D}A] \cdots \equiv \sum_{\text{gauge bundle } P_A} \int [\mathcal{D}\tilde{A}] \cdots$, where the $\tilde{A}$ are gauge connections over each specific gauge bundle choice $P_A$. The $\theta$ term, $\theta\,\mathrm{Tr}(F \wedge F)$, can be added or removed depending on the model. In this work, we shall set $\theta = 0$ or close to zero.
The $\psi_L$ is a 2-component spin-1/2 Weyl fermion $2_L$ of Spin(1,3). The $\dagger$ is the standard complex conjugate transpose. The $\bar\sigma^\mu = (\sigma^0, -\sigma^1, -\sigma^2, -\sigma^3)$ and $\sigma^\mu = (\sigma^0, \sigma^1, \sigma^2, \sigma^3)$ are the standard spacetime spinor rotational su(2) Lie algebra generators for the L and R Weyl spinors. The action $S_{\rm GUT}$ also includes the Weyl spinor kinetic term and the GUT-Higgs kinetic term, coupled to gauge fields via the covariant derivative operator $D_{\mu,A} \equiv \nabla_\mu - \mathrm{i} g A_\mu$. The $\nabla_\mu$ can contain curved-spacetime covariant derivative data, such as Christoffel symbols or the spinor's spin connection, if needed. The "$\ldots$" are possible extra deformation terms to be added later. This subsection Sec. 3.1 mostly treats the spin-0 Lorentz scalar Higgs field $\Phi_R$, in some representation R, as an elementary Higgs field. We will, however, fractionalize this elementary Higgs field $\Phi_R$ into other, more elementary fermionic fields in the later Sec. 3.3 and Sec. 3.4.
3.1.1 Model I: Without Wess-Zumino-Witten term, and Symmetric Mass Generation
Following the choice in Sec. 2.2 and in (2.12), we can further adjust the potential to the form in (3.4). The condensation pattern (whether $\Phi_{45}$ or $\Phi_{54}$ condenses, or both condense, namely whether $r_{45} < 0$ or $r_{54} < 0$) still follows Sec. 2.2: the theory is Higgsed down to the su(5) GUT, or the PS model, or the SM, see Fig. 9. Here are some extra comments on adding $\Phi_1$ or other $\Phi_R$ terms to Fig. 10:
• We can introduce a Lorentz scalar boson $\Phi_1$ in the 1-dimensional trivial but real representation of so(10) or Spin(10):
- If $\Phi_1$ does not condense, namely if $r_1 > 0$ so that $\langle \Phi_1 \rangle = 0$, the theory remains in the so(10) GUT.
- However, if $\Phi_1$ not only condenses ($\langle \Phi_1 \rangle \neq 0$) but its condensate exceeds a critical value, $\langle \Phi_1 \rangle > \langle \Phi_1 \rangle_c$, it can drive the theory to the Symmetric Mass Generation (SMG) phase and gap out all fermions while preserving the G-symmetry (if the theory is free from all 't Hooft anomalies in G). 16
How do we associate $\langle \Phi_1 \rangle > \langle \Phi_1 \rangle_c$ with the SMG effect? First notice that four copies of the spinor representation $16^+$ of Spin(10) can produce trivial representations via the tensor product decomposition [56]. More systematically, with the symmetric (S) or anti-symmetric (A) matrix representation subscript indicated on the right hand side:
$$16 \otimes 16 = 10_S \oplus 120_A \oplus 126_S. \quad (3.6)$$
From (3.6), we learn that four copies of the 16 can produce two trivial representations 1 of so(10) or Spin(10): one from $10 \otimes 10$ and one from $120 \otimes 120$. Therefore, at the mean-field level, we can deduce the expectation value of the GUT-Higgs $\Phi_1$ from some schematic effective four-fermion interactions of $\psi$ in the 16 of Spin(10). 17

16 The Symmetric Mass Generation (SMG) mechanism is explored in various references, for some selective examples, by Fidkowski-Kitaev [42] in 0+1d, by Wang-Wen [43,44] for gapping chiral fermions in 1+1d, by You-He-Xu-Vishwanath [45,46] in 2+1d, and in notable examples in 3+1d by Eichten-Preskill [47], Wen [48], You-BenTov-Xu [49,50], BenTov-Zee [51], Kikukawa [52], Wang-Wen [12], Catterall et al. [53,54], Razamat-Tong [13,55], etc.
17 Here fermions are anti-commuting Grassmann variables, so the expression $\psi\psi\psi\psi$ is only schematic. The precise expression of $\psi\psi\psi\psi$ includes additional spacetime-internal representation indices, and also possible additional spacetime derivatives (for point-splitting the fermions to neighboring sites if writing them on a regularized lattice).
But we do not wish to impose the ordinary Anderson-Higgs quadratic mass term induced by $\langle \psi\psi \rangle \neq 0$: such a condensate would lead to Spin(10) symmetry breaking, instead of the Spin(10)-symmetry-preserving SMG. This means that we have to impose
$$\langle \psi\psi \rangle = 0 \text{ but } \langle \psi\psi\,\psi\psi \rangle \neq 0, \text{ so there is no conventional mass since } \langle \psi\psi \rangle = 0. \quad (3.9)$$
Thus the above argument implies that, above a critical condensation value $\langle \Phi_1 \rangle > \langle \Phi_1 \rangle_c$, i.e., as the interaction strength goes above a critical value, we do obtain the SMG effect in Fig. 10!
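A small sanity check of the dimension counting behind (3.6) and the four-fermion singlet argument (elementary arithmetic of our own; the remark about real irreps is a standard fact we supply, not a quote from the paper):

```python
# Dimension bookkeeping for 16 x 16 = 10_S + 120_A + 126_S of Spin(10).
d16 = 16
sym = d16 * (d16 + 1) // 2      # symmetric part of 16 (x) 16: 136
asym = d16 * (d16 - 1) // 2     # antisymmetric part: 120
assert sym == 10 + 126          # the two symmetric irreps
assert asym == 120              # the antisymmetric irrep
# Four copies of 16 can pair into Spin(10) singlets in (at least) two ways,
# since both 10 (x) 10 and 120 (x) 120 contain a trivial representation 1
# (a real irrep R always contains a singlet in R (x) R).
print("16 x 16 =", sym + asym, "= 10 + 120 + 126 =", 10 + 120 + 126)
```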
To implement the SMG to gap out the 16 Weyl fermions in the 16, a necessary check is that the fermions are free from all 't Hooft anomalies in the Spin(10), or more precisely free from all 't Hooft anomalies of the spacetime-internal $\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)$ structure. This is true based on (2.5): there is only a mod 2 class $w_2 w_3$ global anomaly, and the 16 Weyl fermions in the 16 do not carry any $w_2 w_3$ global anomaly. So we are able to gap out the 16 Weyl fermions while preserving the $\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)$ symmetry.
But in one of the mother EFTs (Model II) that we will propose later in Sec. 3, which preserves the $\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)$ symmetry, although we can gap out the Weyl fermions in the 16, the extra GUT-Higgs WZW bosonic sector will still induce additional symmetry-preserving gapless modes.
• In the standard Anderson-Higgs electroweak symmetry breaking mechanism, the Higgs coupling is introduced in order to give quadratic masses to the Weyl fermions. In this work, we may need to introduce more general GUT-Higgs fields $\Phi_R$ with various representations R. For a generic representation R, the Higgs field may couple to a product of an even number (not limited to two) of fermion operators (e.g. $\psi^\dagger\psi^\dagger\psi\psi$ or $\psi\psi\psi\psi$), such that the fermion representations can combine to match the corresponding Higgs field representation. (We shall not get distracted handling the Anderson-Higgs electroweak symmetry-breaking masses of the Weyl fermions in this article, as this effect is well-studied; but we make some comments in Appendix B.)
• Scaling dimensions of the tuning parameters $r_R$: because the GUT-Higgs fields $\Phi_{45}$, $\Phi_{54}$, and $\Phi_1$ all couple to four-fermion operators (e.g. $\psi^\dagger\psi^\dagger\psi\psi$ or $\psi\psi\psi\psi + \text{h.c.}$), the term $r_R \Phi_R^2$ that tunes the Higgs transition corresponds to an eight-fermion interaction. At the SM fixed point, the matter fermion $\psi$ has scaling dimension 3/2, so the eight-fermion interaction that drives the Higgs transition has scaling dimension $3/2 \times 8 = 12$, much higher than the spacetime dimension 4 (see the power-counting sketch below). For this reason, such an interaction is often ignored in existing studies of the SM. Although such an interaction is perturbatively irrelevant at the SM fixed point, a strong enough interaction will lead to non-perturbative effects that modify the tuning parameters $r_R$ and eventually drive the Higgs transitions between the SM phase and its adjacent GUT phases (such as the PS and GG phases).
So taking into account the GUT-Higgs condensation or non-condensation, we obtain a qualitative phase diagram in Fig. 10.
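The power counting quoted in the bullet above can be spelled out in a few lines (standard free-field estimates; nothing here is specific to this model):

```python
d_spacetime = 4
dim_psi = (d_spacetime - 1) / 2   # free fermion in 4d: scaling dimension 3/2

dim_four_fermion = 4 * dim_psi    # psi^4 operator sourcing Phi_R: dimension 6
dim_eight_fermion = 8 * dim_psi   # r_R Phi_R^2 ~ (psi^4)^2 term: dimension 12

for name, dim in [("psi^4", dim_four_fermion), ("psi^8", dim_eight_fermion)]:
    status = "irrelevant" if dim > d_spacetime else "relevant/marginal"
    print(f"{name}: dimension {dim} -> perturbatively {status} at the free fixed point")
```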
3.1.2 Model II: With Wess-Zumino-Witten term, and Deconfined Quantum Criticality
Now we propose a new mother EFT path integral by modifying the action $S_{\rm GUT}$ to $S^{\rm WZW}_{\rm GUT}$, via adding the WZW term and other terms, in a Lorentzian-signature path integral. The purpose of the new discrete torsion class 4d WZW-like term (written on a 5d manifold with a 4d boundary), which we will introduce in detail later, is to saturate the $w_2 w_3$ global anomaly. The mother EFT contains the following detailed ingredients:
1. There are $16n$ complex Weyl fermions; each $\psi_L$ is in the 16 of Spin(10), minimally coupled to the Spin(10) gauge field in the covariant derivative. Properties of the Spin(10) gauge field A and other familiar terms in $S_{\rm GUT}$ were explained in the earlier Sec. 3.1.
3. An SO(10) real bivector field $\Phi^{\rm bi}$ is obtained from the tensor product of two $\phi$, in the $10 \otimes 10 = 1_S \oplus 45_A \oplus 54_S$ of so(10) and of Spin(10). To be explicit, $\Phi^{\rm bi}$ carries two vector indices, $\Phi^{\rm bi}_{ab}$ with $a, b \in \{1, 2, \ldots, 10\}$. We can arrange $\Phi^{\rm bi}_{ab}$ into three different representations R of $\Phi_R$, as the three GUT-Higgs fields $\Phi_1$, $\Phi_{45}$, and $\Phi_{54}$ (which appeared in Sec. 3.1.1): (3.13). For brevity, we also denote the anti-symmetric bivector $\Phi^{\rm bi}_{[a,b]}$ or $\Phi_{45}$ as $\tilde\Phi^{\rm bi}$, and denote the symmetric bivector $\Phi^{\rm bi}_{\{a,b\}}$ or $\Phi_{54}$ as $\hat\Phi^{\rm bi}$.
4. GUT-Higgs field kinetic term and covariant derivative: the kinetic term for the GUT-Higgs fields is written as $(D_{\mu,A}\Phi_R)^\dagger (D^{\mu}_{\;\,A}\Phi_R)$, with the complex conjugate transpose written as the dagger $\dagger$.
Moreover, we can also combine the kinetic terms for $\Phi_1$, $\Phi_{45}$, and $\Phi_{54}$ into the kinetic term for the bivector $\Phi^{\rm bi}$. This kinetic term becomes $\mathrm{Tr}\big[(D_{\mu,A}\Phi^{\rm bi})^\top (D^{\mu}_{\;\,A}\Phi^{\rm bi})\big]$, with the matrix transpose written as $\top$, where the trace Tr is over the 10-dimensional representation of so(10). We can write down the explicit form, 18 where $A_{\mu,ab} = \sum_\alpha A^\alpha_\mu T^\alpha_{ab}$ with another 45 pieces of rank-10 matrix representations $T^\alpha$.
In general, the Lie algebra generator $T^\alpha$ is hermitian. In the case of the real representation 10, the $T^\alpha$ is not only hermitian but also an imaginary, anti-symmetric matrix.
In summary, for our purpose, the two expressions of the GUT-Higgs kinetic terms are both correct: the representation-R expression $(D_{\mu,A}\Phi_R)^\dagger (D^{\mu}_{\;\,A}\Phi_R)$, and the bivector field expression $\mathrm{Tr}\big[(D_{\mu,A}\Phi^{\rm bi})^\top (D^{\mu}_{\;\,A}\Phi^{\rm bi})\big]$. All of these GUT-Higgs fields (in the vector or bivector representations) also couple to the so(10) gauge fields in the standard way.
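As an illustration of the statement that the generators in the real 10 are hermitian, imaginary, and antisymmetric, here is a small numerical check (our own helper code; the generator convention $T^{(ij)} = -\mathrm{i}(E_{ij} - E_{ji})$ is one standard choice, assumed for illustration):

```python
import numpy as np

n = 10
# Generators of so(10) in the vector (10) representation: T^(ij) = -i (E_ij - E_ji).
gens = []
for i in range(n):
    for j in range(i + 1, n):
        T = np.zeros((n, n), dtype=complex)
        T[i, j], T[j, i] = -1j, 1j
        gens.append(T)

assert len(gens) == 45  # dim so(10)
for T in gens:
    assert np.allclose(T, T.conj().T)   # hermitian
    assert np.allclose(T.real, 0)       # purely imaginary
    assert np.allclose(T, -T.T)         # antisymmetric

# A vector phi transforms as T @ phi; a bivector Phi_bi transforms by a commutator,
# consistent with D_{mu,A} Phi_bi = grad Phi_bi - i g [A_mu, Phi_bi] in footnote 18.
phi = np.random.rand(n)
Phi_bi = np.outer(phi, phi)
delta_Phi = gens[0] @ Phi_bi - Phi_bi @ gens[0]
assert np.allclose(delta_Phi, delta_Phi.T)  # commutator preserves the symmetric part
print("checks passed;", len(gens), "generators")
```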
5. Yukawa-like coupling terms: we also have several Yukawa-like coupling terms, e.g. (i) between the GUT-Higgs bivectors $\Phi^{\rm bi}$ and the vectors $\phi$, which is a hermitian scalar. The $\sigma^2$ matrix acts on the 2-component spacetime Weyl spinor.
6. Mean-field approximation: if for a moment we neglect the gauge field A coupling in the covariant derivative, neglect the GUT-Higgs potential $U(\Phi_R)$, and neglect the possible WZW term $S_{\rm WZW}[\Phi^{\rm bi}]$, then we only have the quadratic Lagrangian coupling the GUT-Higgs bivectors $\Phi^{\rm bi}$, the vectors $\phi$, and the Weyl spinor $\psi_L$. This quadratic Lagrangian, at the mean-field level, can be integrated out to impose constraints and relations between the bivectors $\Phi^{\rm bi}$, the vectors $\phi$, and the Weyl spinor $\psi_L$. In some sense, what is integrated out becomes a Lagrange multiplier imposing a constraint on the remaining fields. In this limit, we only need to regard the Weyl spinor $\psi_L$ as the elementary field: the vector $\phi$ is the 10 from the tensor product of two $\psi_L$, since $16 \otimes 16 = (10 \oplus 120 \oplus 126)$. Then the bivector $\Phi^{\rm bi}$ comes from the tensor product of two $\phi$, as the $10 \otimes 10$, out of the quartic $\psi_L$'s $16 \otimes 16 \otimes 16 \otimes 16$.
7. Wess-Zumino-Witten-like discrete torsion term: for now we directly provide our endgame answer for the WZW term; later we will back up and derive this WZW term in detail from scratch in Sec. 3.2.
The schematic WZW action that we propose to match the mod 2 class $w_2 w_3$ global anomaly is
$$S_{\rm WZW} = \pi \int_{M^5} B(\Phi^{\rm bi}) \, \mathrm{d}B'(\Phi^{\rm bi}), \quad (3.14)$$
in terms of differential forms with mod-2-valued $B$ and $B'$ fields, in de Rham cohomology. The theory is defined on a 5d manifold $M^5$ whose boundary is the 4d spacetime $M^4 = \partial M^5$. 19 The $B$ and $B'$ are constructed out of the GUT-Higgs field $\Phi$ (such as the bivectors $\hat\Phi^{\rm bi}$ or $\tilde\Phi^{\rm bi}$ respectively, for $\Phi^{\rm bi}$ organized as in (3.13)). More precisely, the WZW term is written in terms of the singular cohomology classes of the $B$ and $B'$ cochain fields:
$$S_{\rm WZW} = \pi \int_{M^5} B(\Phi^{\rm bi}) \, \delta B'(\Phi^{\rm bi}). \quad (3.15)$$
Here the 2-cochain fields are $\mathbb{Z}_2$-valued; they can be chosen as cohomology classes, thus $B \in H^2(M, \mathbb{Z}_2)$ and $B' \in H^2(M, \mathbb{Z}_2)$. The $\delta$ is the coboundary operator, and the Steenrod square $\mathrm{Sq}^1 \equiv \frac{\delta}{2} \bmod 2$ here maps the singular cohomology $H^2(M, \mathbb{Z}_2) \to H^3(M, \mathbb{Z}_2)$ on some triangulable manifold $M$. 20 The wedge product $\wedge$ of differential forms in (3.14) becomes the cup product of cochains or cohomology classes in (3.15). Note that a triangulable manifold $M$ here is always a smooth differentiable manifold, thus we can downgrade the singular cohomology result (3.15) to reproduce the de Rham cohomology expression (3.14).

18 The reason that $(D_{\mu,A}\Phi^{\rm bi})_{ab} \equiv \nabla_\mu \Phi^{\rm bi}_{ab} - \mathrm{i}g[A_\mu, \Phi^{\rm bi}]_{ab}$ has a matrix commutator $[A_\mu, \Phi^{\rm bi}]$, in contrast with the familiar form $D_{\mu,A}\phi \equiv \nabla_\mu \phi - \mathrm{i}g A_\mu \phi$, is the following: the Lie group transformation for some $U \in G$ acts on the gauge field $A$ in the standard adjoint way, on the vector field as $\phi \to U\phi$, and on the rank-10 matrix bivector field as $\Phi^{\rm bi} \to U \Phi^{\rm bi} U^\top$.

19 Here we normalize the usual differential forms accordingly; see a related discussion of the 5d $B\,\mathrm{d}B'$ theory in [20], including the quantization conditions on closed cycles.
8. GUT-Higgs potential $U(\Phi_R)$, and a relation to a non-linear sigma model (NLSM): mostly we shall simply choose the GUT-Higgs potential written in (3.4), which is sufficient for a continuum QFT description. Some lattice- or condensed-matter-based theorists may wonder whether there is a non-linear sigma model (NLSM) description at a deeper UV. One approach is to write down a potential with an NLSM constraint $(\mathrm{Tr}(\Phi^\top \Phi) - R^2)$, with the norm of the GUT-Higgs centered around a radius R, and introduce a Lagrange multiplier $\lambda$, such that integrating out $\int [\mathcal{D}\lambda] \ldots$ gives the fixed-radius constraint at the UV. With appropriate deformations, we anticipate an RG flow from the UV to the IR that gives the GUT-Higgs potential. One reason to introduce an NLSM is that it is natural to add a WZW term to an NLSM. However, an NLSM description turns out not to be necessary for writing our WZW term.
9. Deconfined Quantum Criticality (DQC): the motivation to add this 4d $S_{\rm WZW}[\Phi]$ to our 4d mother EFT is to induce the analogous phenomenon called deconfined quantum criticality [27]. The original deconfined quantum criticality [27] was proposed as a continuous quantum phase transition between two kinds of Landau symmetry-breaking orders: the Néel anti-ferromagnet order and the Valence-Bond Solid (VBS) order in 3d (namely, 2+1d).
Here, in our gauge theory context in 4d (namely, 3+1d), between the GG su(5) GUT and the PS $su(4) \times su(2) \times su(2)$ model, we do not really have conventional Landau symmetry-breaking orders, as both the su(5) and $su(4) \times su(2) \times su(2)$ are dynamically gauged. But if we regard the su(5) and $su(4) \times su(2) \times su(2)$ as internal global symmetries that are not yet gauged, then we are able to seek a deconfined quantum criticality construction between the GG and PS models, as we will verify in the next Sec. 3.2.
20 Generally, given a chain complex $C_\bullet$ and a short exact sequence of abelian groups, we have a short exact sequence of cochain complexes, and hence a long exact sequence of cohomology groups; the connecting homomorphism $\partial$ is called the Bockstein homomorphism. For instance, $\beta_{(n,m)}: H^*(-, \mathbb{Z}_m) \to H^{*+1}(-, \mathbb{Z}_n)$ is the Bockstein homomorphism associated with the extension $\mathbb{Z}_n \xrightarrow{\cdot m} \mathbb{Z}_{nm} \to \mathbb{Z}_m$, where $\cdot m$ is the group homomorphism given by multiplication by $m$. Specifically, $\beta_{(2,2^n)} = \frac{1}{2^n}\delta \bmod 2$; thus the Steenrod square obeys $\mathrm{Sq}^1 \equiv \beta_{(2,2)} \equiv \frac{\delta}{2} \bmod 2$.
3.2 Homotopy and Cohomology group arguments to induce a WZW term
We review the 3d WZW term construction of the familiar deconfined quantum criticality (dQCP) in 3d (namely, 2+1d) [27] in Appendix C, based on nonperturbative arguments from homotopy and cohomology groups and on anomaly classifications from cobordism. Here we proceed with the same logic to construct the 4d WZW term for the new deconfined quantum criticality (DQC) in 4d (namely, 3+1d), to justify what we claimed in (3.15).
Below we write G for the original larger symmetry group, while $G_{\rm sub}$ is the remaining preserved unbroken symmetry in the corresponding order (i.e., the Néel or VBS orders for the 3d dQCP; the GG or PS models for the 4d DQC we propose). Then we have the fibration structure
$$G_{\rm sub} \hookrightarrow G \to \frac{G}{G_{\rm sub}},$$
where the quotient space $\frac{G}{G_{\rm sub}}$ is the base manifold (i.e., the orbit) serving as the symmetry-breaking order-parameter space. The G is the total space obtained from the fibration of the $G_{\rm sub}$ fiber (i.e., the stabilizer) over the base $\frac{G}{G_{\rm sub}}$.
Now we follow a similar logic to that of the 3d dQCP summarized in Appendix C, generalizing the idea to our 4d DQC. Here we can keep the larger U(5), instead of SU(5), as the preserved internal symmetry of the su(5) GUT.
[Table: homotopy groups $\pi_0, \ldots, \pi_5$ of the GG order-parameter space $\frac{O(10)}{U(5)}$ and of the PS (su(4) × su(2) × su(2)) order-parameter space $\frac{O(10)}{O(6) \times O(4)}$.]

Let us comment on the construction of the 4d WZW term and its 4d 't Hooft anomaly, step by step:
1. Start with the hint from homotopy groups: we need to find topological defects trapped in the order-parameter target manifolds of the bosonic GUT-Higgs fields in the GG and PS models, 21 classified by $\pi_{n_{\rm GG}}\big(\frac{O(10)}{U(5)}\big)$ and $\pi_{n_{\rm PS}}\big(\frac{O(10)}{O(6)\times O(4)}\big)$, such that the dimensionality $n_{\rm GG} + n_{\rm PS} = d$, where $d$ is the total spacetime dimension, thus $d = 4$ (one dimension lower than the 5d on which the WZW term is extended to live). This suggests that we take:

(3.19)

Note that $\frac{O(m+n)}{O(m) \times O(n)} \equiv \mathrm{Gr}(m, m+n)$ is a Grassmannian manifold. Here we need $\mathrm{Gr}(6, 10) = \mathrm{Gr}(4, 10)$.
2. We will use the cohomology construction of the WZW term, furnished by the hints of the homotopy groups.
Then we need a relation between homotopy groups and cohomology groups.
In algebraic topology, an Eilenberg-MacLane space $K(G, n)$ is a topological space with a single nontrivial homotopy group, s.t. $\pi_n(K(G,n)) \cong G$ and $\pi_m(K(G,n)) = 0$ if $m \neq n$. It can be regarded as a building block for homotopy theory; it also provides a bridge between homotopy and cohomology. Let X be a topological space or a manifold. The set $[X, K(G,n)]$ of based homotopy classes of based maps from X to $K(G,n)$ is in natural bijection with the n-th singular cohomology group $H^n(X, G)$. In particular, when $\pi_n(X) \cong G$ (and X is $(n-1)$-connected, by the Hurewicz theorem), $H^n(X, G) = \mathrm{Hom}(\pi_n(X), G) = \mathrm{Hom}(G, G)$.
There is a distinguished element $\omega \in H^n(X, G)$, the generator of the cohomology group $H^n(X, G)$, corresponding to the identity morphism in $\mathrm{Hom}(G, G)$; this morphism is realized by a map $X \to K(G, n)$.
3. With the above homotopy groups (3.19) in mind, we can use the Serre spectral sequence to derive the cohomology groups of the order-parameter spaces. 22 In fact, we just need one of the two connected components of $O(10)/U(5)$.
4. We can also derive the cohomology of the real Grassmannian $\frac{O(10)}{O(6) \times O(4)}$. The mod 2 cohomology of a real Grassmannian manifold is well-known from the theory of Stiefel-Whitney characteristic classes; the integral cohomology is trickier, but it can be worked out.
5. We now take a $\mathbb{Z}_2$ cohomology class called $B(\Phi^{\rm bi})$ out of $H^2\big(\frac{O(10)}{O(6)\times O(4)}, \mathbb{Z}_2\big)$, and another $\mathbb{Z}_2$ cohomology class called $B'(\Phi^{\rm bi})$ out of $H^2\big(\frac{O(10)}{U(5)}, \mathbb{Z}_2\big)$:
• The $B(\Phi^{\rm bi})$-field, as a second cohomology class, can be constructed out of the GUT-Higgs field $\Phi_{54}$ in the 54 representation of so(10). In particular, we can also write $\Phi_{54}$ as a bivector GUT-Higgs field in the symmetric representation, the $54_S$ out of $10 \otimes 10$, called $\hat\Phi^{\rm bi}$, which we detail in Sec. 3.3.
• The $B'(\Phi^{\rm bi})$-field, as a second cohomology class, can be constructed out of the GUT-Higgs field $\Phi_{45}$ in the 45 representation of so(10). In particular, we can also write $\Phi_{45}$ as a bivector GUT-Higgs field in the anti-symmetric representation, the $45_A$ out of $10 \otimes 10$, called $\tilde\Phi^{\rm bi}$, which we detail in Sec. 3.3.
Similar to the familiar 3d dQCP in Appendix C, we can also provide physical intuition via the link invariants between various topological defects: between the charged objects and the charge operators constructed from the homotopy and cohomology groups. For example, there is a topological defect line along a 1d loop $\varsigma^1_{\rm GG}$, paired up with a 1-connection $v$, giving a 1d line operator $\exp(\mathrm{i}\pi \oint_{\varsigma^1_{\rm GG}} v)$ as a charged object. A charge-operator 2-surface $\hat\Sigma^2$ can be linked with the charged 1d loop $\varsigma^1_{\rm GG}$ in the 4d spacetime. Following the generalized higher global symmetry language [57], this nontrivial linking number Lk implies a measurement of the U(5) symmetry on the topological defect. Precisely, the linking number Lk, manifested as a statistical Berry phase, is evaluated via the expectation value of the path integral:
$$\exp\big(\mathrm{i}\pi\,\mathrm{Lk}(\hat\Sigma^2, \varsigma^1_{\rm GG})\big)\big|_{M^4}. \quad (3.27)$$
Related descriptions of link invariants of QFTs can be found in [58,59] and references therein.
Similarly, there is a topological defect line along a 1d loop called $\varsigma^1_{\rm PS}$, paired up with a 1-connection called $\tilde{v}$, giving a 1d line operator $\exp(\mathrm{i}\pi \oint_{\varsigma^1_{\rm PS}} \tilde{v})$ as a charged object. The charge-operator 2-surface $\tilde\Sigma^2$ can be linked with the charged 1d loop $\varsigma^1_{\rm PS}$ in the 4d spacetime. Following the generalized higher global symmetry language [57], this nontrivial linking number Lk implies a measurement of the $(O(6) \times O(4))$ symmetry on the topological defect. Precisely, the linking number Lk, manifested as a statistical Berry phase, is evaluated via the expectation value of the path integral $\exp(\mathrm{i}\pi\,\mathrm{Lk}(\tilde\Sigma^2, \varsigma^1_{\rm PS}))\big|_{M^4}$. We leave more of these picturesque discussions and imaginative figures to a companion work.

22 We can answer the more general case $O(2n)/U(n)$. We will need the Universal Coefficient Theorem (UCT), so that $H^2(X, A) = \mathrm{Hom}(H_2(X), A) \oplus \mathrm{Ext}(H_1(X), A)$, for a topological space X and any abelian group coefficient A. The space $O(2n)/U(n)$ has two connected components, each of which is diffeomorphic to $SO(2n)/U(n)$.
6. Based on the above observations about the link invariants, and following Appendix C's logic, our 4d DQC construction is valid if we introduce a mod 2 class 4d WZW term, defined on the 4d boundary $M^4$ of a 5d manifold $M^5$, schematically in a differential form or de Rham cohomology,
$$S_{\rm WZW} = \pi \int_{M^5} B(\Phi^{\rm bi}) \, \mathrm{d}B'(\Phi^{\rm bi}).$$
Recall footnote 19 about our normalizations of differential forms and cohomology classes. More precisely, we can improve this to construct the WZW term in the singular cohomology class:
$$S_{\rm WZW} = \pi \int_{M^5} B(\Phi^{\rm bi}) \, \delta B'(\Phi^{\rm bi}).$$
We thus succeed in verifying our claims in (3.14) and (3.15). This concludes our derivation of the 4d WZW term and 't Hooft anomaly for a candidate 4d DQC at the GG-PS GUT transition.
3.3 Composite GUT-Higgs model within the SM
Before analyzing the effect of the 4d WZW term, we first review how the so(10) GUT, GG, PS, and SM can be unified in the same quantum phase diagram by the different condensation patterns of the SO(10) bivector GUT-Higgs field. Following Sec. 2.2, for this discussion we first turn off the WZW term, assuming that the theory has no additional $w_2 w_3$ anomaly. Starting from the so(10) GUT phase, which has the largest internal symmetry group Spin(10), the GUT-Higgs field can be unified as an SO(10) bivector field
$$\Phi^{\rm bi}_{ab} \sim \phi_a \phi_b \quad (\text{for } a, b = 1, 2, \cdots, 10), \quad (3.32)$$
which can be considered as a composite of two SO(10) vector fields $\phi_a$, where the SO(10) vector $\phi_a$ can be further considered as a composite of two Weyl fermions $\psi$,
$$\phi_a \sim \psi^\top \Gamma_a \psi \quad (\text{for } a = 1, 2, \cdots, 5). \quad (3.33)$$
Here, when two quantum fields $\Phi_A$ and $\Phi_B$ are linearly coupled to each other in the field theory (as source and original fields), we denote this as $\Phi_A \sim \Phi_B$, such that they are "dual" to each other and share exactly the same symmetry properties. There are 16 × 16 real symmetric matrices $\Gamma_a$ acting in the fermion flavor space, determined by the following algebraic relations (for $a, b = 1, 2, \cdots, 5$):

In view of the above composite construction, we refer to the bivector representation $\Phi^{\rm bi}$ as the composite GUT-Higgs field.
The composite Higgs field contains the elementary Higgs components of both $\Phi_{45}$ and $\Phi_{54}$, since $10 \otimes 10 = 1 \oplus 45_A \oplus 54_S$. Following (3.13), we introduce the following notations to denote the different irreducible representations of the composite GUT-Higgs field (in terms of SO(10) vector bilinears):
• $\mathrm{Tr}\,\Phi^{\rm bi} \sim \sum_a \phi_a \phi_a$ is equivalent to $\Phi_1$, the $1_S$ of SO(10).
2. Spin(10) → SU(5) × $\mathbb{Z}_{4,X}$-type breaking, by condensing $\tilde\Phi^{\rm bi}$ (the $45_A$) in SO(10). The GUT-Higgs field $\tilde\Phi^{\rm bi} = \sum_{a=1}^{5} \phi^\dagger_a \phi_a$ itself defines the generator of the U(1)$_X$ group, whose $\mathbb{Z}_4$ subgroup defines $\mathbb{Z}_{4,X}$. The 16 Weyl fermions split as $16 \sim \bar{5}_1 \oplus 10_1 \oplus 1_1$ under SU(5) × $\mathbb{Z}_{4,X}$. The $\mathbb{Z}_{4,X}$ generator in the Spin(10) spinor representation is given by (3.39). By diagonalizing the $q_X$ operator, we indeed find five-fold eigenvalues of −3, ten-fold eigenvalues of 1, and a one-fold eigenvalue of 5; after mod 4, they all correspond to charge 1 under $\mathbb{Z}_{4,X}$ (see the numerical check after this list). Further investigating the representations of the SU(5) generators in each $q_X$-charge sector, we can confirm that the $q_X = -3$ sector is indeed in the anti-fundamental representation $\bar{5}$, and so on, to form $16 \sim \bar{5}_{-3} \oplus 10_1 \oplus 1_5$.
3. Spin(10) → $G_{\rm SM} \times \mathbb{Z}_{4,X}$-type breaking, by simultaneously condensing $\hat\Phi^{\rm bi}$ and $\tilde\Phi^{\rm bi}$ (both the $54_S$ and $45_A$ representations) to the configurations specified in Eqn. (3.35) and (3.37). The unbroken symmetry group is generated by the sub-algebra of so(10) that commutes with both GUT-Higgs condensates $\langle\hat\Phi^{\rm bi}\rangle$ and $\langle\tilde\Phi^{\rm bi}\rangle$, which must take a block form where $A_{n\times n} = -A_{n\times n}^\top \in \mathbb{R}^{n\times n}$ are real antisymmetric matrices and $S_{n\times n} = S_{n\times n}^\top \in \mathbb{R}^{n\times n}$ are real symmetric matrices. They can be combined in the complex representation such that $H_{n\times n} = H^\dagger_{n\times n} \in \mathbb{C}^{n\times n}$ are complex Hermitian matrices. There is no traceless condition imposed on $H_{3\times 3}$ and $H_{2\times 2}$, and they act independently in each subspace, so they generate the U(3) × U(2) subgroup of U(5), which is further a subgroup of SO(10). The two U(1) subgroups of U(3) and U(2) are generated by $\sum_{a=3}^{5} \phi^\dagger_a \phi_a$ and $\sum_{a=1}^{2} \phi^\dagger_a \phi_a$ respectively. Since the U(1)$_X$ (or $\mathbb{Z}_{4,X}$) generator has already been identified as $\sum_{a=1}^{5} \phi^\dagger_a \phi_a$, the U(1)$_{\tilde Y}$ generator must be given by the remaining U(1) generator $\frac{1}{2}\big(-3\sum_{a=1}^{2} + 2\sum_{a=3}^{5}\big)\phi^\dagger_a \phi_a$, which is represented in the Spin(10) spinor representation accordingly, matching all the fermion content of the SM (see Table 3).
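As promised above, here is a small numerical check of the $q_X$ spectrum on the 16 (using only the eigenvalues quoted in the text; the helper code itself is ours):

```python
# q_X eigenvalues on the 16 of Spin(10), split under SU(5) x U(1)_X:
#   5bar has X = -3, 10 has X = +1, 1 has X = +5 (Wilczek-Zee X = 5(B-L) - 4Y).
qX = [-3] * 5 + [1] * 10 + [5] * 1

assert len(qX) == 16
assert sum(qX) == 0                   # traceless, as a genuine so(10) Cartan generator
assert all(q % 4 == 1 for q in qX)    # every SM fermion carries Z_{4,X} charge 1
print("16 ~ 5bar_{-3} + 10_{+1} + 1_{+5}; all charges equal 1 mod 4")
```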
No bilinear mass generation by bivector GUT-Higgs:
Unlike the SM-Higgs, which generates a bilinear mass for SM Weyl fermions, the GUT-Higgs fields in the 45 and 54 do not generate a bilinear mass for the SM Weyl fermions. This is because the SO(10) bivector GUT-Higgs field $\Phi^{\rm bi}$ corresponds to four-fermion operators, which are expected to be perturbatively irrelevant. Even if it condenses, it is not expected to gap out the Weyl fermions if its vacuum expectation value is small (though it will Higgs down the gauge group), so the theory remains gapless in the fermion sector in all phases. However, a sufficiently strong Higgs condensation of $\mathrm{Tr}\,\Phi^{\rm bi}$ (or equivalently $\Phi_1$) can lead to symmetric mass generation (SMG) [13,42-55], as discussed previously.
Low-energy descriptions for the WZW theory
The WZW term and its associated $w_2 w_3$ global anomaly can significantly modify the dynamics in the GUT-Higgs sector. There are several possibilities for the low-energy fate of the WZW theory:
1. Spontaneous symmetry breaking (SSB). The SO(10) internal symmetry of the WZW term (or Spin(10) for the full modified so(10) GUT) is spontaneously broken by GUT-Higgs condensation. Within this scenario, there are a few different symmetry-breaking patterns relevant to our discussion (recall Sec. 2.2):
• $\langle \Phi_{45} \rangle \neq 0$: the so(10) GUT is Higgsed down to the su(5) GUT.
In all three cases, the $w_2 w_3(V_{\mathrm{SO}(10)})$ anomaly is matched by breaking the Spin(10) symmetry down to the GG, PS, and SM groups. 26 The resulting vacua are in the same quantum phases as the corresponding vacua in the absence of the WZW term.
2. The SO(10) symmetry remains unbroken, and the $w_2 w_3$ anomaly persists to low energy. The low-energy effective theory must saturate the anomaly requirement, which further leads to several different possibilities:
(a) WZW conformal field theory (CFT): the WZW theory flows to a non-trivial CFT fixed point, where the GUT-Higgs field $\Phi$ remains gapless and disordered (not condensing), and also does not deconfine into fragmented excitations.
(b) Deconfined quantum criticality (DQC): the GUT-Higgs field $\Phi$ deconfines into fragmented excitations, partons and emergent gauge fields, which are new particles beyond the SM. The low-energy physics will be described by new quantum electrodynamics (QED$'$) or quantum chromodynamics (QCD$'$) sectors. In any case, the total gauge group must be enlarged to include the emergent gauge structure of the partons, a phenomenon called gauge enhanced quantum criticality (GEQC) [31]. This can be viewed as a generalization of the deconfined quantum criticality (DQC) [27,62-64] to gauge-Higgs models. Possible field theory descriptions of the DQC can be classified by the parton statistics:
• Fermionic parton theory, where the fractionalized particles in the emergent matter sector are fermions, which is the focus of our following work.
• Bosonic parton theory, where the fractionalized particles in the emergent matter sector are bosons.
It is possible that two seemingly different descriptions (e.g. fermionic vs. bosonic parton theories) may be related by dualities, as discussed in [64,65]. In this scenario, the $w_2 w_3$ anomaly should be matched either by the anomalous fermionic matter or by a non-trivial θ-term of the emergent gauge field.
(c) Topological order with a low-energy non-invertible TQFT: the $w_2 w_3$ anomaly could also be matched by a certain 4d topological order. The simplest possibility is the $\mathbb{Z}_2$-gauge-theory topological order (more precisely, generated by dynamical spin structures), which can be considered a descendant of the DQC when the emergent gauge group is reduced to $\mathbb{Z}_2$ by some further Higgsing.
Among the above possibilities: 1. The SSB scenario in the WZW theory has no substantial difference from our previous discussions without the WZW term, so it will not be repeated here. 2.(a) The WZW CFT is a non-trivial possibility, for which the authors are not aware of suitable theoretical tools to study it; it will thus be left for future exploration. 2.(b) The DQC scenario will be the focus of the following discussion. In particular, we will consider a QED$'_4$ theory with fermionic partons as the effective field theory description. The WZW theory could potentially admit dual bosonic parton descriptions as well, but we also leave this possibility for future study. 2.(c) The topological order scenario could be derived from the DQC scenario, and will also be left for future study.
3.4 Dirac Fermionic Parton Theory and a Double-Spin structure DSpin within a modified so(10) GUT
Here we propose a fermionic parton construction for the WZW term in Sec. 3.2. We propose that the WZW term Eqn. (3.14) can also be viewed as a low-energy description of a Dirac fermionic parton theory with an action $S_{\mathrm{QED}'_4}$ given in Eqn. (3.44). We will soon argue that, importantly, the fermion parity $\mathbb{Z}_2^{F'}$ of this Dirac fermionic parton ξ is required to be different from the original fermion parity $\mathbb{Z}_2^{F}$ of the standard model or GUT fermions ψ. Namely, we will soon introduce a new kind of spin structure with two distinct fermion parities, which we formally name a double spin structure, DSpin: the $(\mathbb{Z}_2^{F} \times \mathbb{Z}_2^{F'})$ extension of SO. The theory contains the following ingredients:
1. There are 10 Dirac fermions ξ forming the 10 (vector representation) of SO(10). Here $\gamma^\mu$ are the standard rank-4 γ matrices of 4-component Dirac fermions, with $\gamma^{\rm FIVE} = \mathrm{i}\gamma^0\gamma^1\gamma^2\gamma^3$ and $\bar\xi = \xi^\dagger \gamma^0$.
2. The covariant derivative $D_\mu = \nabla_\mu - \mathrm{i}a_\mu - \mathrm{i}gA_\mu$ contains the minimal coupling of the fermionic parton ξ to a new emergent dynamical U(1)$'$ gauge field $a_\mu$, as well as the minimal coupling to the SO(10) gauge field $A_\mu$ (which is part of the Spin(10) gauge field of the conventional so(10) GUT in Sec. 2.2). We may treat the SO(10) gauge field $A_\mu$ as a background field for now, and discuss how it can be gauged later.
3. The GUT-Higgs field Φ is written via its 10 × 10 matrix representation $\Phi^{\rm bi}$ in the SO(10) bivector form. It couples to the fermionic partons by taking its traceless symmetric component $\hat\Phi^{\rm bi}$ (the 54 of SO(10)) as the vector mass of ξ, and its antisymmetric component $\tilde\Phi^{\rm bi}$ (the 45 of SO(10)) as the axial mass of ξ. In this way, the SO(10) bivector GUT-Higgs boson effectively deconfines into two SO(10) vector fermions.
4. In the QED$'_4$ theory $S_{\mathrm{QED}'_4}$, the GUT-Higgs field fractionalizes into gapless fermionic partons with emergent U(1)$'$ gauge interactions. The situation is similar to the U(1) Dirac spin liquid [66,67] discussed in the condensed matter physics context. Therefore we may also call this QED$'_4$ theory the Fragmentary GUT-Higgs Liquid model. 28
We first argue that the QED$'_4$ theory (without a θ-term) in Eqn. (3.44) saturates the same $w_2 w_3$ anomaly as the WZW term in Sec. 3.2. The starting point is to identify the spacetime-internal symmetry (here $\mathrm{Spin}' \times_{\mathbb{Z}_2^{F'}} \mathrm{U}(1)'$) and the gauge group (here SO(10)) of the fermionic parton theory:
$$G_{\mathrm{QED}'_4} = \mathrm{Spin}^{c\,\prime} \times \mathrm{SO}(10), \quad (3.46)$$
with fermions in the $10_1$ representation of SO(10) and U(1)$'$. Notice that we use the prime notation to indicate that those groups contain the new fermion parity $\mathbb{Z}_2^{F'}$. 27 Here we use the bracket notation around [U(1)$'$] to indicate that this U(1)$'$ is eventually dynamically gauged in terms of the emergent gauge fields near the quantum criticality. In other words, the new fermion parity $\mathbb{Z}_2^{F'}$ must also be dynamically gauged, because $\mathbb{Z}_2^{F'} \subset \mathrm{U}(1)'$. How do we reconcile the Spin structure (of the familiar SM and GUT in Sec. 3) and the Spin$'$ structure (of this new fermion parton theory (3.44)) in the full theory? After all, we have to place the full theory on some curved spacetime with a single unified geometric structure. The full spacetime-internal structure of this modified so(10) GUT, required to include the $\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)$ of (2.3) and the $\mathrm{Spin}^{c\,\prime} \times \mathrm{SO}(10)$ of (3.46) as subgroups, turns out to be: 29
$$G_{\text{modified so(10)-GUT}} \equiv (\mathrm{DSpin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)) \times_{\mathbb{Z}_2^{F'}} [\mathrm{U}(1)'], \quad (3.47)$$
27 If this theory had an 't Hooft anomaly in G, it could not be trivially gapped while preserving the G-symmetry. Since we want to construct the fermion parton theory QED$'_4$ (3.44) to saturate the $w_2 w_3$ anomaly of the SO(10) symmetry (or the $\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)$ symmetry), we should forbid (3.44) from acquiring any quadratic mass term that preserves the SO(10). It turns out that the QED$'_4$ theory has U(1)$'$, CP$'$, and T$'$ symmetries that can forbid any SO(10)-symmetric quadratic mass term: (i) The U(1)$'$ symmetry $\xi \to e^{\mathrm{i}\theta}\xi$ forbids any Majorana mass of the form $\xi^\top_{L/R}\, \mathrm{i}\sigma^2\, \xi_{L/R}$ that could potentially gap out the Dirac fermion (written as two Weyl fermions: $\xi = \xi_L + \xi_R$).
(ii) The T$'$ (or CP$'$) symmetry forbids the axial mass $\mathrm{i}\bar\xi\gamma^{\rm FIVE}\xi$, since it maps $\mathrm{i}\bar\xi\gamma^{\rm FIVE}\xi \to -\mathrm{i}\bar\xi\gamma^{\rm FIVE}\xi$.
28 Because the order-parameter target manifold in our construction involves a Grassmannian manifold $\frac{O(m+n)}{O(m)\times O(n)} \equiv \mathrm{Gr}(m, m+n)$, the corresponding GUT-Higgs Liquid may also be called a Grassmannian Liquid by some condensed matter colleagues.
29 Again, we use the bracket notation around [U(1)$'$] and $[\mathbb{Z}_2^{F'}]$ to indicate that they must be dynamically gauged. Although the Spin(10) is also dynamically gauged in the GUT, the Spin(10) may still be treated as a global symmetry in the context of the quantum criticality of the internal flavor symmetry of fermions in a condensed matter system. However, the [U(1)$'$] and $[\mathbb{Z}_2^{F'}]$ must be dynamically gauged due to their roles at the quantum criticality, regardless of whether the Spin(10) is gauged or not. In summary, there is a hierarchy of gauging: the brackets [...] imply that those degrees of freedom have a higher priority to be gauged.
where we implement the earlier-advertised double spin structure DSpin, the $(\mathbb{Z}_2^F \times \mathbb{Z}_2^{F'})$ extension of SO. We leave the detailed construction of this full spacetime-internal $G_{\text{modified so(10)-GUT}}$ symmetry, based on the group extension, to the footnote remark 30 and Appendix E.
The U(1)$'$ group is free of anomaly, consistent with the fact that this emergent U(1)$'$ structure can be gauged. Gauging U(1)$'$ out of $\mathrm{Spin}^{c\,\prime} \times \mathrm{SO}(10)$ removes the spin structure of the fermion theory, allowing the gauge theory to be placed on non-spin manifolds. So the resulting theory is a bosonic theory with an $\mathrm{SO} \times \mathrm{SO}(10)$ symmetry. It is expected that the spacetime SO group should carry the $w_2 w_3$ anomaly, and the anomaly can only originate from the fermionic partons in the QED$'_4$ theory.
To check the anomaly in the fermion sector, we first turn off the Higgs coupling (as it does not affect the anomaly analysis), so that the theory becomes as simple as $\int_{M^4} \bar\xi \gamma^\mu D_\mu \xi \, \mathrm{d}^4x$. Without coupling to the GUT-Higgs field, the theory has an enlarged SU(2)$'$ gauge group, generated by $\xi^\dagger \xi$, $\mathrm{Re}\,\xi\gamma^5\xi$, $\mathrm{Im}\,\xi\gamma^5\xi$, among which $\xi^\dagger\xi$ generates the U(1)$'$ gauge group as a subgroup of SU(2)$'$. With the enlarged SU(2)$'$ gauge group, the fermionic parton theory is promoted from a QED$'_4$ theory to a QCD$'_4$ theory (without enlarging the fermion content), whose group structure is 31
$$G_{\mathrm{QCD}'_4} = \mathrm{Spin}^{h\,\prime} \times \mathrm{SO}(10). \quad (3.50)$$
This QED$'_4$-to-QCD$'_4$ promotion does not change the anomaly structure, because the SU(2)$'$ group is still anomaly-free. Namely, there are only two possible combinations of nonperturbative global anomalies out of the cobordism classification for the $\mathrm{Spin} \times_{\mathbb{Z}_2^{F'}} \mathrm{SU}(2)'$ symmetry, given by $\mathrm{TP}_5(\mathrm{Spin} \times_{\mathbb{Z}_2^{F'}} \mathrm{SU}(2)') = \mathbb{Z}_2^2$ [12,17,22]:
1. No Witten SU(2) anomaly [70]: there is an even number (ten) of fundamental fermions 2 of SU(2)$'$, so $10 \bmod 2 = 0$.
2. No new SU(2) anomaly [12]: there are no fermions in the 4 of SU(2)$'$, so $0 \bmod 2 = 0$.

30 Here are some comments about our construction of the spacetime-internal symmetry. More details are in Appendix E. First, the ψ fermion in the 16 of Spin(10) requires a fermion parity $\mathbb{Z}_2^F$, while the ξ fermion in the 10 of SO(10) requires another, new fermion parity $\mathbb{Z}_2^{F'}$. Next, both the ψ and ξ fermions require the common $\mathrm{SO} \times \mathrm{SO}(10)$ structure (as the quotient group of the total symmetry group), because they share the same bosonic spacetime rotational special orthogonal symmetry group SO, and their SO(10) gauge fields are the same. However, the ψ fermion requires the total structure $\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)$, while the ξ fermion requires a total structure containing $\mathrm{Spin}'$. The corresponding short exact sequences can be combined into a web of group extensions: (3.48). This total extended spacetime-internal $(\mathrm{DSpin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10))$ group is compatible with both fermionic spectrum restrictions for ψ and ξ. By modifying the $\mathbb{Z}_2^{F'}$ into U(1)$'$ in the web of (3.48), we thus obtain $G_{\text{modified so(10)-GUT}} \equiv (\mathrm{DSpin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)) \times_{\mathbb{Z}_2^{F'}} \mathrm{U}(1)'$ in (3.47). Related to the DSpin structure, by including an extra discrete symmetry such as a time-reversal symmetry, the literature has also discovered the structures known as DPin [68] and EPin [35]; see also an interpretation via a regularized quantum many-body model [69]. See more elaborations in Appendix E.
31 Similar to (3.48), by modifying the $\mathbb{Z}_2^{F'}$ into SU(2)$'$ in the web of extensions, we obtain a modification of (3.47): the original Dirac fermion ξ is in the $2_L \oplus 2_R$ of Spin(1,3) and the $(1, 10)$ of $\mathrm{U}(1)' \times \mathrm{SO}(10)$, while now the fermion ξ sits in the $2_L$ of Spin(1,3) and in the $(2, 10)$ representation of $\mathrm{SU}(2)' \times \mathrm{SO}(10)$. Again, we use the bracket notation around [SU(2)$'$] and $[\mathbb{Z}_2^{F'}]$ to indicate that they must be dynamically gauged near the criticality.
So the anomaly is still contained in the SO(10) group out of $G_{\mathrm{QCD}'_4} = \mathrm{Spin}^{h\,\prime} \times \mathrm{SO}(10)$. To match the $w_2 w_3$ anomaly, we make a connection to the recently discovered new SU(2) anomaly [17] by the following trick on the $\mathrm{SO} \times \mathrm{SO}(10)$ sector: we first embed $\mathrm{SU}(2)' \times \mathrm{SO}(10)$ in Sp(10), and use a sequence of maximal special (S) or regular (R) Lie subalgebra decompositions [56], $\mathrm{Sp}(10) \hookleftarrow \mathrm{Sp}(2) \times \mathrm{Sp}(8) \hookleftarrow \mathrm{SU}(2) \times \mathrm{Sp}(8)$, to show that a different SU(2) subgroup carries the $w_2 w_3$ anomaly. Under the embedding, the representation of the fermionic parton ξ splits accordingly. 32
• Since we have argued that the (2,10) of $\mathrm{SU}(2)' \times \mathrm{SO}(10)$ has no Witten or new SU(2) anomaly in the SU(2)$'$ sector, the new SU(2) anomaly must come from the remaining SO(10), or more precisely the remaining $\mathrm{SO} \times \mathrm{SO}(10)$ out of the full $\mathrm{Spin}^{h\,\prime} \times \mathrm{SO}(10)$ in (3.50). According to [22,24], the classification of 't Hooft anomalies of the $\mathrm{SO} \times \mathrm{SO}(10)$ symmetry is generated by the corresponding cobordism group. Therefore, we claim that the new SU(2) anomaly can be identified with $w_2 w_3(V_{\mathrm{SO}(10)})$, coming from the remaining SO(10) out of $\mathrm{Spin}^{h\,\prime} \times \mathrm{SO}(10)$.
• We can further extend the $\mathrm{Spin}^{h\,\prime} \times \mathrm{SO}(10)$ structure of the fermionic parton theory QCD$'_4$ to the full $(\mathrm{DSpin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)) \times_{\mathbb{Z}_2^{F'}} [\mathrm{SU}(2)']$ structure of the modified so(10) GUT, under the pullback. In terms of the interpretation of the anomaly (we can gauge the anomaly-free SU(2)$'$), we are left with the $\mathrm{SO} \times \mathrm{SO}(10)$ structure. The two anomalies $w_2 w_3(TM)$ and $w_2 w_3(V_{\mathrm{SO}(10)})$ in $\mathrm{TP}_5(\mathrm{SO} \times \mathrm{SO}(10)) = \mathbb{Z}_2^2$ become identified as the same anomaly in $\mathrm{TP}_5(\mathrm{Spin} \times_{\mathbb{Z}_2^F} \mathrm{Spin}(10)) = \mathbb{Z}_2$ of (2.5). Thus, of course, we can now also interpret the gauge anomaly $w_2 w_3(V_{\mathrm{SO}(10)})$ as the gravitational anomaly $w_2 w_3(TM)$, due to the relation $(-1)^{\int w_2 w_3(TM)} = (-1)^{\int w_2 w_3(V_{\mathrm{SO}(10)})}$ mentioned before. This analysis establishes that the proposed QED$'_4$ or QCD$'_4$ theory in Eqn. (3.44) has at least the same 4d nonperturbative global mixed gauge-gravitational $w_2 w_3$ anomaly as the proposed 4d WZW term in (3.15).
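The two mod 2 counting rules used above reduce to trivial arithmetic, which we can still record explicitly (illustrative helper code of our own; the representation content is from the text):

```python
# Anomaly counting for the parton content under SU(2)': ten Dirac doublets.
su2_reps = {"2 (isospin 1/2)": 10, "4 (isospin 3/2)": 0}

witten_anomaly = su2_reps["2 (isospin 1/2)"] % 2   # Witten anomaly: doublets mod 2
new_su2_anomaly = su2_reps["4 (isospin 3/2)"] % 2  # new SU(2) anomaly: isospin-3/2 mod 2

assert witten_anomaly == 0 and new_su2_anomaly == 0
print("SU(2)' sector is anomaly-free; the w2w3 anomaly must sit in SO(10).")
```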
To reproduce the WZW term more explicitly, we extend the QED$'_4$ theory to a 5d bulk where ξ still forms the $10_1$ under $\mathrm{U}(1)' \times \mathrm{SO}(10)$. Note that in 5d, each Dirac fermion already requires five gamma matrices $\gamma^0, \gamma^1, \gamma^2, \gamma^3, \gamma^4$, which are rank-4 matrices. By doubling the fermion content (which means we need two sets of 5d Dirac fermions in the 10, thus $2 \times 10$ Dirac fermions in 5d), we are able to introduce two more gamma matrices, denoted $\gamma^5$ and $\gamma^6$, such that all seven gamma matrices $\gamma^0, \cdots, \gamma^6$ are rank-8 matrices satisfying the Clifford algebra relation $\{\gamma^\mu, \gamma^\nu\} = 2\delta^{\mu\nu}$. The bulk fermions are gapped by the mass term m. The boundary QED$'_4$ theory (with massless fermions) is reduced from the bulk QED$'_5$ theory (with massive fermions) as the effective domain wall theory, living on the 4d domain wall separating the $m > 0$ and $m < 0$ phases in 5d. 33 To show that the QED$'_4$ theory is equivalent to the WZW theory, we only need to show that the bulk QED$'_5$ theory can reproduce the WZW term (3.15). For this purpose, we introduce two $\mathbb{R}$-valued 2-form gauge fields, $B = B_{\mu\nu}\,\mathrm{d}x^\mu \wedge \mathrm{d}x^\nu$ and $B' = B'_{\mu\nu}\,\mathrm{d}x^\mu \wedge \mathrm{d}x^\nu$, that couple to the fermionic partons. Integrating out the massive fermion ξ, we obtain the BF 5-form term with the 2-form B and B$'$ fields, with the constraint that the 2-form gauge fields B and B$'$ are locked to the cohomology classes that measure the topological defects in $\hat\Phi^{\rm bi}$ and $\tilde\Phi^{\rm bi}$ respectively (3.58). The emergent U(1)$'$ gauge field a decouples from the GUT-Higgs field Φ and the 2-form gauge fields B, B$'$, and can be integrated out independently. Further integrating out the 2-form gauge fields B, B$'$, we obtain an action for Φ (simply by substituting the constraint). Recall footnote 19 about our normalizations of differential forms and cohomology classes. This leads to the proposed WZW term in Eqn. (3.15),
$$S_{\rm WZW} = \pi \int_{M^5} B(\Phi^{\rm bi}) \, \delta B'(\Phi^{\rm bi}), \quad (3.59)$$
which is expected to be placed on the 5d manifold $M^5$ whose boundary is the 4d spacetime $M^4 = \partial M^5$.
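To make the doubling step concrete, here is a small numerical construction (our own sketch using one standard tensor-product choice of gamma matrices; it is not the paper's explicit basis) of seven mutually anticommuting hermitian rank-8 gamma matrices:

```python
import numpy as np

s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Five anticommuting hermitian rank-4 gamma matrices ({g_i, g_j} = 2 delta_ij):
g4 = [np.kron(s, s1) for s in (s1, s2, s3)] + [np.kron(s0, s2), np.kron(s0, s3)]

# Doubling: tensor with another Pauli grade to reach seven rank-8 gamma matrices.
g8 = [np.kron(g, s1) for g in g4] + [np.kron(np.eye(4), s2), np.kron(np.eye(4), s3)]

assert len(g8) == 7
for i, gi in enumerate(g8):
    assert np.allclose(gi, gi.conj().T)  # hermitian
    for j, gj in enumerate(g8):
        anti = gi @ gj + gj @ gi
        expected = 2 * np.eye(8) if i == j else np.zeros((8, 8))
        assert np.allclose(anti, expected)  # {gamma_mu, gamma_nu} = 2 delta_mu_nu
print("7 anticommuting rank-8 gamma matrices constructed")
```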
3.4.2 Color-Flavor Separation and Dark Gauge Sector: 4d Deconfined Quantum Criticality
The QED$'_4$ theory describes the DQC scenario of the 4d WZW-term-like theory at low energy. In this scenario, the GUT-Higgs field deconfines into fragmentary excitations, which are new 0d particles beyond the SM:
• 10 new fermions ξ in the $10_1$ of $\mathrm{U}(1)' \times \mathrm{SO}(10)$, as fermionic partons that fractionalize the GUT-Higgs field;
• a new U(1)$'$ photon $a_\mu$ in the $1_0$ of $\mathrm{U}(1)' \times \mathrm{SO}(10)$, which mediates a new gauge force that exists between, and only between, fermionic partons. It does not couple to any particle in the SM sector, hence it appears dark to us. Therefore, we will call it the dark photon.
The GUT-Higgs boson can be considered as a bound state of two fermionic partons (of opposite emergent U(1)$'$ gauge charges), bound together by the emergent U(1)$'$ gauge force mediated by dark photons.
• From the particle physics perspective, the fermionic partons and dark photons are more fundamental constituents of the GUT-Higgs bosons.
• From the condensed matter physics perspective, these fragmentary excitations are instead emergent collective modes of the GUT-Higgs field.
The two complementary viewpoints are a matter of culture. Readers can take whichever interpretation is more favorable to their mindset.
Because the QED$'_4$ theory is deconfined in 4d, the fragmentary GUT-Higgs liquid is expected to be a stable phase in the phase diagram Fig. 8. It covers the quantum critical region (critical in the sense that excitations are gapless), and may possibly extend into the modified so(10) GUT phase (as long as the fermionic partons remain deconfined). Starting from the fragmentary GUT-Higgs liquid phase, we can access the adjacent phases by GUT-Higgs condensation:
• $\langle\hat\Phi^{\rm bi}\rangle \neq 0$: the system enters the PS GUT phase, where the fermionic partons are fully gapped by the vector mass.
• $\langle\tilde\Phi^{\rm bi}\rangle \neq 0$: the system enters the su(5) GUT phase, where the fermionic partons are fully gapped by the axial mass.
• $\langle\hat\Phi^{\rm bi}\rangle \neq 0$ and $\langle\tilde\Phi^{\rm bi}\rangle \neq 0$: the system enters the SM phase, where the fermionic partons are fully gapped by both the vector and axial masses.
In all of these phases, the dark photon remains gapless and decoupled from all other particles, which provides a new candidate for light dark matter.
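As a toy illustration of how the vector and axial masses jointly gap a Dirac fermion (a single-flavor caricature of our own with assumed numerical values; the SO(10) and gauge structure are stripped away), the single-particle spectrum of $H = \vec p \cdot \vec\alpha + m_v\beta + m_a\,\mathrm{i}\beta\gamma^5$ has energies $\pm\sqrt{p^2 + m_v^2 + m_a^2}$:

```python
import numpy as np

s0 = np.eye(2); s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]]); s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac basis: beta = gamma^0, alpha_i = gamma^0 gamma^i, gamma5 = s1 (x) 1.
beta = np.kron(s3, s0)
alpha = [np.kron(s1, s) for s in (s1, s2, s3)]
gamma5 = np.kron(s1, s0)

def spectrum(p, mv, ma):
    """Single-particle Dirac spectrum with vector mass mv and axial mass ma."""
    H = sum(pi * ai for pi, ai in zip(p, alpha)) + mv * beta + ma * 1j * beta @ gamma5
    return np.sort(np.linalg.eigvalsh(H))

p, mv, ma = [0.3, -0.2, 0.1], 0.5, 0.7  # assumed toy values
E = spectrum(p, mv, ma)
expected = np.sqrt(np.dot(p, p) + mv**2 + ma**2)
assert np.allclose(np.abs(E), expected)  # energies +- sqrt(p^2 + mv^2 + ma^2)
print("energies:", E, "gap at p=0:", np.hypot(mv, ma))
```

Either condensate alone, or both together, thus suffices to gap the parton, consistent with the three bullets above.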
A substantial difference between the fermionic partons ξ in the fragmentary GUT-Higgs liquid and the quarks and leptons ψ in the SM lies in their distinct assignments of quantum numbers. For the spacetime symmetry representation, the Dirac fermionic parton ξ is in the complex $2_L \oplus 2_R$ of Spin(1,3); the SM's Weyl fermion is in the complex $2_L$ of Spin(1,3).
For the internal symmetry representation, consider entering the SM phase from the fragmentary GUT-Higgs liquid: the Dirac fermionic partons, apart from the gap opening, also have their representation split from the $10_1$ under $\mathrm{U}(1)' \times \mathrm{SO}(10)$ to 34
$$(1, 2)_{1,3,-2} \oplus (3, 1)_{1,-2,-2} \oplus (1, 2)_{1,-3,2} \oplus (\bar{3}, 1)_{1,2,2}$$
under $\mathrm{SU}(3)_c \times \mathrm{SU}(2)_L \times \mathrm{U}(1)_{\text{dark gauge}} \times \mathrm{U}(1)_{\tilde Y} \times \mathrm{U}(1)_X$ of the SM. The weak SU(2) flavor and the strong SU(3) color quantum numbers separate into different fermions, called the flavoron and the coloron, denoted by the f and c Dirac fermions (as Grassmann numbers) respectively, as summarized in Table 1. We name this phenomenon color-flavor separation, as it is analogous to the spin-charge separation [71-73] in condensed matter physics. The flavoron can participate in the SU(2) weak interaction but not the SU(3) strong interaction. On the contrary, the coloron can participate in the SU(3) strong interaction but not the SU(2) weak interaction. Many of them also carry electromagnetic charge, so that they can also participate in the electromagnetic interaction. Beyond the SM interactions, the flavorons and colorons also interact among themselves through the emergent U(1)$'$ gauge force mediated by the dark photon. Note that there exists a flavoron (in the $f_L$ sector) which does not participate in the SU(3) strong or electromagnetic interactions; it only participates in the SU(2) weak interaction (like left-handed neutrinos) and the dark gauge interaction (unlike neutrinos), which makes it an especially promising candidate for heavy dark matter.

34 Here we use the branching rules of the Lie algebra representations for the following inclusions: $so(10) \leftarrow su(5) \times u(1)_X$ (R regular subalgebra), so that $10 \sim 5_{-2} \oplus \bar{5}_2$; and also $su(5) \leftarrow su(3) \times su(2) \times u(1)_{\tilde Y}$ (R regular subalgebra), so that $5 \sim (1,2)_3 \oplus (3,1)_{-2}$ and $\bar{5} \sim (1,2)_{-3} \oplus (\bar{3},1)_2$.

To conclude, in Table 2 we summarize the quantum field content of the mother effective field theory of the 4d so(10) GUT + GUT-Higgs potential, with or without the WZW term, together with our physical findings on its various quantum vacua. Model I contains the Spin(10) gauge field A (45 Lie algebra generators, denoted $45_{\rm adj}$, but not the 45 rep), the SO(10)-bivector spacetime-scalar $\Phi^{\rm bi}$, and the SO(10)-vector spacetime-scalar φ as an auxiliary field (a Lagrange multiplier with no dynamics). Model II contains all the field content of Model I; in addition, Model II contains extra fields: the 4d WZW term $\pi \int_{M^5} B(\Phi^{\rm bi})\,\delta B'(\Phi^{\rm bi})$, living on the boundary of a 5d bulk, can induce a candidate low-energy QED$'_4$ theory with a Dirac spacetime-spinor ξ (as a fermionic parton) and an emergent U(1)$'$ dark gauge field a (1 Lie algebra generator, denoted $1_{\rm adj}$, which carries no U(1)$'$ charge). The representation of the fermionic parton ξ in $su(3) \times su(2) \times u(1)_{\tilde Y} \times u(1)_X$ is given in Table 1. There are two types of fermion parities in the double spin structure DSpin, the $(\mathbb{Z}_2^F \times \mathbb{Z}_2^{F'})$ extension of SO.
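A quick bookkeeping check of this branching (elementary arithmetic on the dimensions and charges quoted in footnote 34; the helper layout is ours):

```python
# Each entry: (dim SU(3), dim SU(2), q_dark, Y_tilde, q_X) for the split of the 10.
pieces = [
    (1, 2, 1,  3, -2),   # flavoron-like, from 5_{-2}
    (3, 1, 1, -2, -2),   # coloron-like,  from 5_{-2}
    (1, 2, 1, -3,  2),   # flavoron-like, from 5bar_{+2}
    (3, 1, 1,  2,  2),   # coloron-like (3bar), from 5bar_{+2}
]

dims = [d3 * d2 for d3, d2, *_ in pieces]
assert sum(dims) == 10                                              # adds up to the 10
assert sum(d * y for d, (_, _, _, y, _) in zip(dims, pieces)) == 0  # Tr Y_tilde = 0
assert sum(d * x for d, (*_, x) in zip(dims, pieces)) == 0          # Tr q_X = 0
print("10 -> (1,2) + (3,1) + (1,2) + (3bar,1); dims:", dims)
```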
Based on three binary conditions, namely (i) whether the internal Spin(10) symmetry is dynamically gauged, 35 (ii) whether the 4d WZW term is added, and (iii) whether the GUT-Higgs potential $U(\Phi_R)$ triggers (gauge-)symmetry breaking, the enumeration gives in total eight possibilities. Below we use 3 bits, "???", where each bit "?" is labeled "x" or "o" to specify whether that binary condition fails or holds, and we enlist their physics interpretations one by one:

35 We may use the bracket notation on a group $[G_{\rm internal}]$ to emphasize that the group is dynamically gauged.

xxx - We stay in the Landau-Ginzburg phase of the Spin(10) global symmetry.
xxo - We stay in the Landau-Ginzburg phases, but the $U(\Phi_R)$ potential breaks the Spin(10) global symmetry down to other continuous Lie group global symmetries $G_{\rm GG}$, $G_{\rm PS}$, and $G_{\rm SM}$, via spontaneous global symmetry breaking. There are 45, 24, 21, and 12 Lie algebra generators for these groups respectively. So there are corresponding numbers of low-energy Nambu-Goldstone modes, matching the number of broken Lie algebra generators by Goldstone's theorem.
In principle, because there is no 't Hooft anomaly for the $16n$ chiral fermions with these $G_{\rm internal}$ internal global symmetries, we can gap out all chiral fermions while preserving $G_{\rm internal}$ via symmetric mass generation through appropriate interactions [12,13].
oxx - We obtain the familiar so(10) GUT with the [Spin(10)] gauged. At deep UV higher energies, the theory shows the asymptotic freedom of the $16n$ Weyl fermions (quarks and leptons are liberated, with weaker coupling at shorter distances, for such a non-abelian Lie group gauge force [32,33]). At IR lower energies, the Spin(10) gauge fields confine the $16n$ Weyl fermions; this is a strongly coupled gauge theory in which all fermions can gain an energy gap (i.e., a "mass" due to confinement).
oxo -
Then we are in the dynamical gauge theory phases, but with gauge symmetry breaking. The $U(\Phi_R)$ potential breaks the Spin(10) gauge group down to the other continuous Lie gauge groups $G_{\rm GG}$, $G_{\rm PS}$, and $G_{\rm SM}$, via the Anderson-Higgs mechanism of spontaneous gauge symmetry breaking. There are 45, 24, 21, and 12 Lie algebra generators for these groups respectively. Recall that in the global-symmetry story there are corresponding numbers of low-energy Nambu-Goldstone modes, matching the number of broken Lie algebra generators by Goldstone's theorem. But now the previously massless gauge fields can "eat" the degrees of freedom of the Goldstone bosons, becoming massive gauge fields with extra degrees of freedom.
Note again that at deep UV higher energies the theory shows the asymptotic freedom of the Weyl fermions, while at IR lower energies the non-abelian Lie gauge forces of $G_{\rm GG}$, $G_{\rm PS}$, and $G_{\rm SM}$ can confine some of the Weyl fermions. In this strongly coupled gauge theory, some fermions can gain an energy gap (i.e., a "mass") due to confinement. But we do still have the electroweak Higgs causing the spontaneous gauge symmetry breaking $su(2)_L \times u(1)_Y \to u(1)_{\rm EM}$. The $u(1)_{\rm EM}$ stays deconfined and propagates the gapless electromagnetic waves in our vacuum.
Here the fermion mass can come from a combination of mechanisms: the confinement mass, the Anderson-Higgs (gauge-)symmetry-breaking mass, or the gauge theory analog of symmetric mass generation.
xox -
We stay in the Landau-Ginzburg phase of the Spin(10) global symmetry, but the 4d WZW term causes a 4d deconfined quantum criticality (DQC) with fractionalized fragmentary excitations.
This DQC is also a gauge-enhanced quantum criticality (GEQC) because a new gauge force (which we call the Dark Gauge force, with U(1)_dark gauge Dark Photons) emerges near the criticality. The fractionalized fragmentary excitations carry the U(1)_dark gauge charge. If the U(1)_dark gauge dark photons stay gapless dynamically at deep IR, it is due to the protection of the w_2 w_3 anomaly.
We stay in the Landau-Ginzburg phases, but the potential U(Φ_R) breaks the Spin(10) global symmetry down to other continuous Lie group global symmetries G_GG, G_PS, or G_SM, via spontaneous global symmetry breaking. Other than the low-energy Nambu-Goldstone modes matching the number of broken Lie algebra generators, as in the neighboring phases, we still have the fractionalized fragmentary excitations that carry the U(1)_dark gauge charge, with U(1)_dark gauge Dark Photons.
We obtain the modified so(10) GUT + WZW with the [Spin(10)] gauged. At deep UV (higher energy), the GUT-Higgs potential + WZW term may affect the renormalizability of the EFT; however, what concerns us is the EFT valid below a certain energy cutoff scale, such as the GUT scale M_GUT or the 5d bulk invertible TQFT energy gap Δ_iTQFT. Other than the DQC and GEQC phenomena described above in scenario 5, the theory shows: • The Spin(10) gauge bosons can propagate or leak into the 5d bulk.
• The 16n Weyl fermions are gappable (because there is no anomaly protection for these 16n fermions).
This scenario follows directly from scenario 7, but with a GUT-Higgs potential triggering (gauge-)symmetry breaking. All statements in scenario 7 also hold here. Moreover: • There is a sequence of various possibilities at various energy scales for the UV-to-IR dynamical fates of this QFT. We do not know the definite answer about the quantum dynamics. Here we only enlist the possible quantum dynamical fates of the modified so(10) GUT + 4d WZW term (with 16n Weyl fermions), based on the w_2 w_3 anomaly-matching constraints: i). The Spin(10) gauge group can be broken down to contain an SU(2) gauge subgroup such that there is a new SU(2) anomaly of mixed gauge-gravity type, w_2 w_3(TM) = w_2 w_3(V_SO(3)), within the Spin ×_{Z_2^F} SU(2) ≡ Spin^h symmetry [17]; again, dynamically gauging SU(2) makes the SU(2) gauge bosons able to propagate into the 5d bulk. ii). The gauge group can be broken down to contain a U(1) gauge subgroup, which can also have a pure gravitational w_2 w_3(TM) anomaly if the theory is an all-fermion U(1) gauge theory [19,20]. The Spin ×_{Z_2^F} U(1) ≡ Spin^c structure trivializes the w_2 w_3(TM) anomaly. iii). The gauge group can be broken down to contain a Z_2 gauge subgroup, which can also have a pure gravitational w_2 w_3(TM) anomaly if the theory has fermionic strings [18,74-76]. The Spin structure trivializes the w_2 w_3(TM) anomaly.
• However, the WZW dynamics in the quantum critical region that we propose in Sec. 3.4.2 shows none of the above. Instead, we suggest a different IR low-energy fate of the WZW theory: the Spin(10) symmetry can be fully preserved, while the mixed gauge-gravity anomaly w_2 w_3(TM) = w_2 w_3(V_SO(10)) is matched by a Dirac fermionic parton theory QED_4 with an emergent U(1) dark gauge force and with a DSpin structure (Fig. 11).

2. Introduce a Higgs Φ_so(10),16 and add an extra Weyl fermion (a 17th Weyl fermion), a singlet 1 under Spin(10). This works only if some of the following holds: (a) The 17th Weyl fermion is not charged under the Z_4,X symmetry, so the Z_16 anomaly is already cancelled by the 16n Weyl fermions. This is likely to be true because this 17th Weyl fermion is a singlet 1 under Spin(10), and thus is also not acted on by the center Z(Spin(10)) = Z_4,X.
(b) If the 17th Weyl fermion is also charged under the Z_4,X symmetry, then we require either that the Z_4,X symmetry be broken (thus removing the Z_16 anomaly), or that the Z_4,X symmetry be preserved but the 17 mod 16 anomaly be cancelled again by additional new sectors carrying a −1 mod 16 anomaly.
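The mod 16 bookkeeping behind (a) and (b) can be made explicit with a minimal sketch (ours), under the sign convention that each Weyl fermion charged 1 under Z_4,X contributes +1 to the Z_16 index; the convention in the literature may differ by an overall sign:

```python
# Z_16 anomaly bookkeeping: the anomaly index of n charged Weyl fermions
# is n mod 16, and any remainder must be supplied by additional new sectors.
def z16_index(n_weyl):
    return n_weyl % 16

for n_weyl in (16, 17, 15, 48, 45):
    need = (-z16_index(n_weyl)) % 16   # deficit the new sectors must carry
    print(f"{n_weyl} Weyl fermions: index = {z16_index(n_weyl)}, "
          f"new sectors must supply {need} mod 16")
# 16 -> index 0 (already cancelled); 17 -> index 1, needing -1 mod 16,
# as in case (b); 15 per generation leaves a remainder for new sectors.
```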
What are other new ways to leave only the observed 15n Weyl fermions at low energy while the Z_16 global anomaly is still cancelled in the full quantum system? To begin with, to characterize the full 4d anomaly of these 15n SMs or GUTs, we should combine two types of anomalies: first, a potential global Z_2 anomaly, the w_2 w_3 for our 4d WZW term, such as in the Fragmentary GUT-Higgs Liquid model in Sec. 3.4; second, the Z_16 global anomaly captured by a 5d version of the Atiyah-Patodi-Singer (APS) η invariant. We can write that 5d APS invariant in terms of the 4d APS invariant of the Pin^+ structure from TP_4(Pin^+) = Z_16. The two combined invertible TQFTs, labeled by p ∈ Z_2 and ν ∈ Z_16, have a partition function Z on M^5, which together labels a deformation class of the SM [16]:

Z(M^5) = (−1)^{p ∫_{M^5} w_2(TM) w_3(TM)} · exp((2πi/16) ν η(PD(A_{Z_2}))), with p ∈ Z_2, the 4d Atiyah-Patodi-Singer η invariant ≡ η_{Pin^+} ∈ Z_16, and ν ∈ Z_16. (4.1)

The cohomology class of the background gauge field is A_{Z_2} ∈ H^1(M, Z_2). Inspired by recent developments in highly-entangled interacting quantum matter (see reviews in [77,78]), Refs. [36-38] proposed additional new sectors to cancel the anomalies, for example: 3. Symmetry-preserving anomalous gapped 4d TQFT.
7. Symmetry-preserving or symmetry-breaking gapless phase, e.g., extra massless theories, free or interacting conformal field theories (CFTs). The interacting CFT can also be related to unparticle physics [79] in the high-energy phenomenology community.
The heavy gapped new sectors above can be heavy Dark Matter candidates. The interesting constraints from the mod 2 and mod 16 global anomalies on our 4d DQC model are: • Z_16 anomaly constraints on the GG and SM with 15n Weyl fermions: on the Georgi-Glashow su(5) GUT and Standard Model SM_q=6 side, we can have 15n Weyl fermions, plus the additional new sectors listed above and in [36-38], to match the Z_16 anomaly.
• Z_2 w_2 w_3 anomaly constraints on the so(10) GUT and PS with 16n Weyl fermions: on the so(10) GUT and Pati-Salam model sides, there are various types of Z_2-class w_2 w_3 anomalies, of the SO(10), SO(6), or SO(4) bundles. The Z_2 w_2 w_3 anomaly is meant to be cancelled by our 4d WZW term.
• In the vicinity of the 4d DQC we have proposed, there can be another interplay between the 15n Weyl fermions (GG and SM) and the 16n Weyl fermions (the so(10) GUT and PS), such that the DQC becomes a topological quantum phase transition or topological quantum criticality.
4d boundary criticality to a 5d bulk criticality: compare with the phase diagram in Fig. 8. Notice that we can interpret the above 4d criticality as a boundary criticality: • On the modified so(10) GUT and PS model + WZW term side, with 16n Weyl fermions in Fig. 8: with the w_2 w_3 Z_2-class anomaly matched on the 5d bulk by a mod 2 class invertible TQFT.
• On the modified su(5) GUT and SM + WZW term side, with 15n Weyl fermions in Fig. 8: with the η(PD(A_{Z_2})) Z_16-class anomaly matched on the 5d bulk by a mod 16 class invertible TQFT.
Once the [Spin(10)] is dynamically gauged: • On the modified so(10) GUT + WZW term side, gauging [Z_4,X] turns the 5d fermionic bulk into a 5d bosonic bulk TQFT (with long-range entanglement and gapped topological order, described by gauged cohomology, gauged cobordism, or higher category theory). The 5d bulk can remain gapped.
Thus there is a phase transition between the deconfined and gapless 5d bulk on one side and the gapped 5d bulk on the other. This phase transition can be interpreted as a 5d bulk topological quantum criticality.
Acknowledgements
A Internal symmetry representation

Below we provide two tables, Table 3 and Table 4, to organize the internal symmetry representations of the particle contents of the SM, the su(5) GUT, the Pati-Salam model, and the so(10) GUT.
A.1 Embed the SM into the su(5) GUT, then into the so(10) GUT

There is a QFT embedding, the so(10) GUT ⊃ the su(5) GUT ⊃ the SM_6 (only for G_SM with q = 6), via the internal symmetry group embedding Spin(10) ⊃ SU(5) ⊃ (SU(3) × SU(2) × U(1)_Ỹ)/Z_6 ≡ G_SM_6. The representations of quarks and leptons for these models are organized in Table 3. There are two versions of the electroweak hypercharge normalization listed in Table 3, such that the charge of U(1)_Y is 1/6 of the charge of U(1)_Ỹ.
A.2 Embed the SM into the Left-Right and Pati-Salam models, into the so(10) GUT
There are two versions of the internal symmetry group for the Pati-Salam (PS) model [6], G_PS_q ≡ (SU(4) × SU(2)_L × SU(2)_R)/Z_q with q = 1, 2. There are also two versions of the internal symmetry group for Senjanovic-Mohapatra's Left-Right (LR) model [80], again with q = 1, 2. In general, there is a QFT embedding, the PS model ⊃ the LR model ⊃ the SM, for both q = 1, 2, via the corresponding internal symmetry group embedding. Table entries indicate the quantum numbers associated with the representation of the groups given in the top row. We show one generation of SM fermion matter fields in Table 3; there are 3 generations in the SM, triplicating Table 3. All fermions have fermion parity Z_2^F representation charge 1. In the su(5) GUT, by including the U(1)_X, we have the (SU(5) × U(1)_X)/Z_5 = U(5)_q̂=2 structure described in Refs. [60,61]. Here U(1)_X ⊃ Z_4,X ⊃ Z_2^F and SU(5) ⊃ U(1)_Y. Both U(1)_X and U(1)_{B−L} are outside the SU(5).
Namely, when q = 1, we have one such embedding; furthermore, only when q = 2 can the whole group be embedded into the Spin(10) for the so(10) GUT. The representations of quarks and leptons for these models are organized in Table 4.
(2) Second, in order to break G_PS_2 further down to G_SM_6, we take the representation whose branching rule in (B.2) contains the (1, 1)_0 of G_SM_6. This means that we can take the 15 of SU(4) as the second GUT-Higgs, called Φ_su(4),15. But if we want to obtain this second GUT-Higgs from a higher-energy so(10) GUT, it turns out that we can find Φ_su(4),15 from what we had named in (2.10), Φ_so(10),45 ≡ Φ_45, from (B.5) more naturally, as we will soon see.^38,39

^38 It may also be possible to introduce a second GUT-Higgs Φ′_so(10),45 ≡ Φ′_45 (different from Φ_45), which also contains the Φ_su(5),24 that can break SU(5) down to G_SM_6.
^39 Another possible choice, proposed in Georgi's textbook [41], is that in addition to the first GUT-Higgs Φ_so(10),54 ≡ Φ_54, one may also introduce a scalar Higgs in a 16 or a 126 of Spin(10) in order to Higgs down to G_SM. However, these choices are not ideal for us, due to the quantum criticality that we pursue later, which only requires Φ_so(10),45 ≡ Φ_45 and Φ_so(10),54 ≡ Φ_54, from (2.10) and (2.11).
(3) Third, consider the branching rules: the Standard Model (SM) electroweak Higgs, in the representation (1, 2)_3 of G_SM (in the U(1)_Ỹ normalization), does the job of breaking su(2)_L × u(1)_Y down to u(1)_EM. Then, next, we can ask how to find Φ_SM from a representation of su(5), or su(4) × su(2) × su(2), or so(10).
If we take into account the discrete Z_2 symmetry (a time-reversal or a spatial reflection symmetry), the above SO(2) symmetry becomes an O(2) = SO(2) ⋊ Z_2 symmetry, while the above SO(3) symmetry becomes an O(3) = SO(3) × Z_2 symmetry.
Below we write G for the original symmetry group (such as SO(3) × SO(2), valid up to the UV lattice scale), while G_sub is the remaining preserved unbroken symmetry in the corresponding order (Néel or VBS order). Then we have the fibration structure G_sub → G → G/G_sub, where the quotient space G/G_sub is the base manifold (i.e., the orbit), serving as the symmetry-breaking order parameter space; G is the total space obtained from fibering the G_sub fiber (i.e., the stabilizer) over the base G/G_sub. The homotopy groups π_k(G/G_sub) for the Néel and VBS orders can be computed systematically; for the Néel order the order parameter space is S^2, and for the VBS order it is S^1 (summarized compactly after this enumeration). To our knowledge, the most systematic, physically intuitive, and mathematically transparent construction of the 3d dQCP and its 3d WZW term can be based on the following arguments:

1. The Néel order breaks an SO(3) (iso)spin rotational symmetry down to a U(1) = SO(2) (iso)spin rotational symmetry, such as along the z axis, so that (3.16) in the Néel order reduces accordingly.

(i). Hedgehog core, instanton, and magnetic monopole: The SO(3)-symmetry-breaking hedgehog core has a 0d singularity in the spacetime. This 0d singularity of the hedgehog core in the 3d spacetime can also be regarded as an instanton in the 3d spacetime. We can couple this whole configuration to an SO(3) background gauge field; this means that we can use the w_2(V_SO(3)) to measure the magnetic charge of SO(3). Evaluating the w_2(V_SO(3)) over the Néel's SO(3) symmetry-breaking target space S^2, it turns out that there is a 2π-flux over S^2. Therefore, the hedgehog core is not only an instanton event but also an SO(3) magnetic monopole.

(ii). This SO(3)-symmetry-breaking hedgehog core traps a fractionalized charge-1/2 object charged under the preserved SO(2) symmetry (or the Z_4 symmetry at the lattice scale), namely in the projective representation of Z_4, which is in the unit integer linear representation of Z_8. Namely, the SO(3)-symmetry-breaking topological defect, the hedgehog core in the Néel phase, traps the 1/2-fractionalization of the unbroken SO(2)-, or Z_4-, charged object of the VBS order.
(iii). The winding number of such a Néel hedgehog configuration is classified by π_2(S^2) = Z. This says that an S^2, as a 2d surface in the 3d spacetime, wraps around the target S^2 of the Néel's SO(3) symmetry-breaking target space (the base manifold and stabilizer in (C.3)). The spatial S^2, as a homology class in H_2(M, Z) (call this 2d sphere σ^2), can be paired with a cohomology class B ∈ H^2(M, Z). To make sense of the unit generator of the winding Z class, the B evaluated on σ^2 (with σ^2 bounding a 3-disk Σ^3, so ∂Σ^3 = σ^2) must carry the unit flux.

(iv). Now, in a 3d spacetime picture, we can regard:
• the 0d hedgehog core ς^0_{Néel hedgehog} as the charged object, fractionally charged under the preserved SO(2) (a projective representation of Z_4, precisely a linear representation of Z_8);
• the 2d sphere σ^2 with B ∈ H^2(M, Z) on it, as the charge operator, or the symmetry generator of the SO(2). Then, following the higher-symmetry or generalized global symmetry language [57], the measurement of the symmetry is exactly performed by evaluating the linking between ς^0_{Néel hedgehog} and σ^2 in a 3d spacetime M^3. Precisely, the linking number Lk, manifested as a statistical Berry phase, is evaluated via the expectation value of a path integral, where ϕ_{ς^0_{Néel hedgehog}} is the 0d vertex operator evaluated around the 0d hedgehog core, which is again the 0d magnetic monopole at the open end of the SO(3) background-gauged 1d 't Hooft line. Related descriptions of link invariants of QFTs can be found in [58,59] and references therein.
2. The VBS order breaks an SO(2) spatial rotational symmetry in the continuum (or breaks a Z_4 rotational symmetry on a lattice), so that (3.16) in the VBS order reduces accordingly.

(i). The SO(2)-symmetry-breaking VBS vortex core has a 0d singularity in space trapping an (iso)spin-1/2 object called the (iso)spinon (famously popularized by Levin-Senthil [81]); in the spacetime this is in fact a 1d vortex loop (call this 1d loop ς^1_{VBS vortex}).

(ii). The (iso)spinon with (iso)spin-1/2 trapped at the VBS order parameter vortex core is a fractionalized charge-1/2 object charged under the preserved symmetry SO(3), namely in the projective representation of SO(3), which is in the fundamental representation 2 of SU(2). Namely, the SO(2)-symmetry-breaking topological defect, the vortex in the VBS phase, traps the 1/2-fractionalization of the SO(3)-charged object of the Néel order.

(iii). The winding number of such a VBS vortex configuration is classified by π_1(S^1) = Z. This says that a spatial S^1 wraps around the target S^1 of the VBS's SO(2) symmetry-breaking target space (the base manifold and stabilizer in (C.7)). The spatial S^1 circle, as a homology class in H_1(M, Z) (call this 1d circle σ^1), can be paired with a cohomology class A ∈ H^1(M, Z). To make sense of the unit generator of the winding Z class, the dA evaluated on a 2-disk Σ^2 (bounded by σ^1, so ∂Σ^2 = σ^1) must carry the unit flux via the Stokes theorem, ∮_{σ^1} A = ∫_{Σ^2} dA.

(iv). Now, in a 3d spacetime picture, we can regard:
• the 1d vortex loop ς^1_{VBS vortex} as the charged object, fractionally charged under the preserved SO(3) (a projective representation of SO(3), precisely a linear representation of SU(2));
• the 1d circle σ^1 with A ∈ H^1(M, Z) on the loop, as the charge operator, or the symmetry generator of the SO(3). Then, the measurement of the symmetry is exactly performed by evaluating the linking between ς^1_{VBS vortex} and σ^1 in the 3d spacetime. Precisely, the linking number Lk, manifested as a statistical Berry phase, is evaluated via the expectation value of a path integral, where a is a 1d background-gauged SO(2) connection evaluated around the 1d vortex loop. Related descriptions of link invariants of QFTs can be found in [58,59] and references therein.
3. Overall, combining the above data, we have learned that the 3d dQCP construction can be induced by the linking numbers Lk(σ^2, ς^0_{Néel hedgehog}) = 1 and Lk(σ^1, ς^1_{VBS vortex}) = 1 in the 3d spacetime. To furnish more physical intuition, we can deduce the following:

(i). If we extend the 3d spacetime t, x, y by an extra 4th dimension z, the trajectory of the previous 0d hedgehog core ς^0_{Néel hedgehog} can be a 1d pseudo-worldline ς^1_{Néel hedgehog} in the 4d spacetime M^4. Similarly, the trajectory of the previous 1d vortex loop ς^1_{VBS vortex} can be a 2d pseudo-worldsheet ς^2_{VBS vortex} in the 4d spacetime M^4. These two configurations can be linked in 4d, with a linking number Lk(ς^1_{Néel hedgehog}, ς^2_{VBS vortex})_{M^4}. This describes the link, in the extended 4d spacetime, of two charged objects, charged under SO(2) and SO(3) respectively.
(ii). In a parallel story, the charge operators (of the above charged objects) are the 1d SO(2)-background-gauged A line operator on σ^1, and the 2d SO(3)-background-gauged B surface operator on σ^2. These two configurations can be linked in 4d, with a linking number Lk(σ^1, σ^2)_{M^4}. This describes the link, in the extended 4d spacetime, of two charge operators, of SO(2) and SO(3) respectively.
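For quick reference, the identifications and the dimension counting used in this enumeration can be collected compactly (a restatement of data already quoted above, in our notation σ^1, σ^2 for the lost symbols):

```latex
\begin{align*}
&\text{N\'eel:}\quad \mathrm{SO}(3)/\mathrm{SO}(2)\simeq S^2,\qquad \pi_2(S^2)=\mathbb{Z}\quad\text{(hedgehogs)},\\
&\text{VBS:}\quad\; \mathrm{SO}(2)\simeq S^1,\qquad\qquad\, \pi_1(S^1)=\mathbb{Z}\quad\text{(vortices)},\\
&\text{linking in } M^d \text{ requires } p+q=d-1:\\
&\quad \underbrace{0+2}_{(\varsigma^0_{\text{N\'eel}},\,\sigma^2)\subset M^3}
 = \underbrace{1+1}_{(\varsigma^1_{\text{VBS}},\,\sigma^1)\subset M^3} = 3-1,
 \qquad
 \underbrace{1+2}_{(\varsigma^1_{\text{N\'eel}},\,\varsigma^2_{\text{VBS}})\subset M^4} = 4-1.
\end{align*}
```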
D Perturbative Local and Nonperturbative Global Anomalies via Cobordism: Without or With T or CP symmetry
Here we enlist the results of perturbative local and nonperturbative global anomalies via cobordism, mostly obtained from [22,24]. Some of these results are used in (2.5). For the various spacetime-internal symmetry groups Ḡ of the SM or GUT models, we denote the corresponding cobordism groups as follows.^42,43

^42 Here our differential form normalization follows footnote 19, so we send A/π → A and B/π → B. It can again be easily verified that this WZW term has two properties: (1) it is invertible, with |Z(M^4)| = 1 on a closed 4-manifold; (2) this WZW term really is a 3d boundary theory on M^3 of the extended M^4. This WZW term is meant to capture the 3d boundary anomaly of the 4d bulk invertible TQFT (−1)^{∫_{M^4} w_2(V_SO(3)) w_2(V_SO(2))}.

^43 The Z_2 classification of the WZW term also comes from another intuitive quantum matter argument: when two copies of the WZW terms are put together, the system can be trivialized by an interlayer large coupling without breaking the symmetry.
We apply a version of the cobordism group, Ω^d_G ≡ TP_d(Ḡ), from Freed-Hopkins [26]. Refs. [12,22,24,61] computed some of these 5th cobordism group TP_5 classifications of the 4d anomalies (via Thom-Madsen-Tillmann spectra [82,83], the Adams spectral sequence [84], and Freed-Hopkins's theorem [26]), obtaining:

TP_5(Spin ×_{Z_2^F} Z_4,X × G_SM_q) = Z^5 × Z_2 × Z_4 × Z_16 for q = 1, 3; Z^5 × Z_2^2 × Z_4 × Z_16 for q = 2, 6. TP_5(Spin ×_{Z_2^F} Z_4,X × SU(5)) = Z × Z_2 × Z_16.

For details about their 5d manifold generators and 5d invertible TQFTs, see Ref. [24]. Comments on these perturbative local and nonperturbative global anomalies are in order (see the sketch after this list for the free vs. torsion split):

• Perturbative local anomalies are classified by integer Z classes, detectable via infinitesimal (small) gauge or diffeomorphism transformations deformable to the identity element. Given the chiral fermion (quark and lepton) contents in Appendix A, we can check that all the perturbative local anomalies (all Z classes) are cancelled in the SMs and GUTs. These perturbative local anomaly cancellations are well known and verified in standard textbooks on SMs and GUTs.
• Nonperturbative global anomalies are classified by finite torsion Z n classes, detectable via the large gauge or diffeomorphism transformations, not deformable to the identity element.
- The Z_2 and Z_4 anomalies in TP_5(Spin ×_{Z_2^F} Z_4,X × G_SM_q) or TP_5(Spin ×_{Z_2^F} Z_4,X × SU(5)) include variants or mutated versions of the Witten anomaly [70], obtained by modifying the original SU(2) bundle to some principal SU(n) bundles. There is also a Z_4-class anomaly from the hypercharge U(1)_Y^2 paired with an X-background field with (X)^2 = (−1)^F. All these Z_2 and Z_4 anomalies are checked to be cancelled [36-38].
- The Z_16 anomaly in TP_5(Spin ×_{Z_2^F} Z_4,X × G_SM_q) or TP_5(Spin ×_{Z_2^F} Z_4,X × SU(5)) can be cancelled if there are 16n Weyl fermions, each charged under Z_4,X with (X)^2 = (−1)^F. Since we have only observed 15n Weyl fermions so far in experiments, Refs. [36-38] proposed alternative scenarios to cancel the Z_16 anomaly with 15n Weyl fermions at low energy; we revisit this issue separately in Sec. 4.2.

- Several Z_2 anomalies in TP_5(Spin ×_{Z_2^F} G_PS_{q=1,2}) or TP_5(Spin ×_{Z_2^F} Spin(10)) come from either variants of the Witten SU(2) anomaly [70] (modifying the SU(2) gauge bundle to other bundles) or variants of the new SU(2) anomaly [17] (modifying the w_2(TM) w_3(TM) = w_2(V_SO(3)) w_3(V_SO(3)) of the SO(3) bundle to other SO(n) bundles). Following [12,17], we can check that the chiral fermion sectors (of quarks and leptons) of the PS and so(10) GUTs do not suffer from any of these Z_2 global anomalies.
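The split between Z classes (perturbative local anomalies) and torsion Z_n classes (nonperturbative global anomalies) in these groups can be tabulated with a small sketch (ours; it assumes the reconstructed q = 1, 3 entry quoted above):

```python
# Split the quoted 5d cobordism classifications into a free part (perturbative
# local anomalies, Z classes) and a torsion part (nonperturbative global
# anomalies, Z_n classes). Group data transcribed from the text above.
classifications = {
    "Spin x_{Z2F} Z4,X x G_SMq (q=1,3)": 5 * ["Z"] + ["Z2", "Z4", "Z16"],
    "Spin x_{Z2F} Z4,X x G_SMq (q=2,6)": 5 * ["Z"] + ["Z2", "Z2", "Z4", "Z16"],
    "Spin x_{Z2F} Z4,X x SU(5)":         ["Z"] + ["Z2", "Z16"],
}
for name, factors in classifications.items():
    n_local = factors.count("Z")                # integer classes
    torsion = [f for f in factors if f != "Z"]  # finite Z_n classes
    print(f"{name}: local = Z^{n_local}, global = {' x '.join(torsion)}")
```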
However, the hallmark of our 4d WZW term, and of the Fragmentary GUT-Higgs Liquid model in Sec. 3.4, relies on matching them with the w_2 w_3 anomaly. So below we walk through the distinct properties of the various kinds of w_2 w_3 anomalies listed in (D.1), in more detail.
The η̃ is a mod 2 index of the 1d Dirac operator for a real massive 1d fermion, a 1d cobordism invariant of TP_1(Spin) = Z_2.
5. With a time-reversal T or CP symmetry, or a generic T such as a CT symmetry: If we hope to have the crossing term w_2(V_SO(6)) w_3(V_SO(4)) + w_2(V_SO(4)) w_3(V_SO(6)) (D.5) enter the anomaly constraint in the PS models, we need Sq^1(w_2(V_SO(6)) w_2(V_SO(4))) = w_1(TM)(w_2(V_SO(6)) w_2(V_SO(4))) ≠ 0; this means that we need to include the time-reversal T (or CP) symmetry, or a generic T such as a CT symmetry.
In the so(10) GUT, there are actually two kinds of time-reversal symmetry squares, T^2 = +1 and T^2 = (−1)^F. There are also two kinds of commutation relations between the time-reversal T and the Spin(10) generators: they either commute (direct product "×") or do not commute (semi-direct product "⋊").
So if we include the time-reversal T into the (Spin ×_{Z_2^F} Spin(10)) structure, there are in total (at least) four kinds of time-reversal symmetries for the so(10) GUT. Based on the computation in Ref. [61], we summarize the four versions of the so(10) GUT with time-reversal symmetries and their cobordism groups TP_5 in (D.9). The punchline in (D.9) is that, because the time-reversal T (or CP), or some T, is a valid global symmetry, we can put the theory on an unorientable manifold with w_1(TM) ≠ 0 and also w_1(V_O(10)) ≠ 0. Therefore, the crossing term in (D.5) can still contribute a potential anomaly. This crossing-term anomaly, w_2(V_SO(6)) w_3(V_SO(4)) + w_2(V_SO(4)) w_3(V_SO(6)), turns out to play a possibly crucial role in our construction of Sec. 3.4. See more discussions in a companion work.
Similar stories apply to a larger gauge group unifying three generations of fermions, such as the so(18) GUT with a Spin(18) gauge group. We simply replace so(10) with so(18), and Spin(10) with Spin(18), in all the discussions above.
E Fermionic Double Spin structure DSpin for a modified so(10) GUT-Higgs liquid model

Here are detailed comments about our construction of the spacetime-internal symmetry that involves the fermionic double spin structure DSpin given in Sec. 3.4.2.
1. First, we recall that we have introduced:
• the Weyl fermion ψ in the 16 of Spin(10) for the so(10) GUT;
• the Dirac fermion ξ in the 10 of SO(10) (also of Spin(10)) for the fermionic parton QED_4 theory.
2. The modified so(10) GUT requires a Spin ×_{Z_2^F} Spin(10) structure in order to manifest a w_2 w_3 anomaly. In this structure, the fermion ψ in the 16 is charged odd, with (−1)^F = −1, under the fermion parity Z_2^F. This meanwhile implies a constraint on the matter field spectrum under the Spin ×_{Z_2^F} Spin(10) structure. There is a short exact sequence: 1 → Z_2^F → Z(Spin(10)) = Z_4,X → Z(SO(10)) = Z_2 → 1. Given the Z_4,X charge state |X⟩ with X = 0, 1, 2, 3, we have its representation z^X with z ∈ U(1) and |z| = 1, where we embed the normal subgroup Z_2^F ⊂ Z_4,X ⊂ U(1).
• The Z_4,X symmetry generator U_{Z_4,X} acts on |X⟩ as U_{Z_4,X}|X⟩ = i^X |X⟩, with z = i.
• The subgroup Z_2^F symmetry generator U_{Z_2^F} = (U_{Z_4,X})^2 can also act on |X⟩: U_{Z_2^F}|X⟩ = (U_{Z_4,X})^2 |X⟩ = i^{2X} |X⟩ = (−1)^X |X⟩. Thus, reading off the fermion parity (−1)^F, the |1⟩ and |3⟩ are fermionic with −1 (odd in Z_2^F), while the |0⟩ and |2⟩ are bosonic with +1 (even in Z_2^F).
• Any fermion charged under Z_2^F must have (−1)^F = −1, also identified with the Z_2 normal subgroup of the center Z(Spin(10)) = Z_4,X. Thus these fermions must have a Z(Spin(10)) = Z_4,X charge of either 1 or 3. Here we can choose G_int = Z_2^F′, U(1)′, or SU(2)′ to reproduce the required structure in Sec. 3.4.2; in all cases, G_int ⊇ Z_2^F′ contains the new fermion parity as its normal subgroup.
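The Z_4,X charge assignments just described can be tabulated with a trivial sketch (ours, purely illustrative of the action quoted above):

```python
# Z_{4,X} action on charge states |X>: the generator U_{Z4,X} acts as i^X,
# and its square U_{Z2,F} = (U_{Z4,X})^2 is the fermion parity (-1)^X.
labels = ["+1", "+i", "-1", "-i"]        # i^X for X = 0, 1, 2, 3
for X in range(4):
    parity = (-1) ** X                   # (i^X)^2 = (-1)^X
    kind = "fermionic" if parity == -1 else "bosonic"
    print(f"|{X}>: U_Z4X -> {labels[X]}, (-1)^F -> {parity:+d}  ({kind})")
# Output: |1> and |3> are fermionic (odd Z_2^F charge), |0> and |2> bosonic,
# matching the statement that fermions carry Z_{4,X} charge 1 or 3.
```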
In addition to the DSpin structure, by including an extra discrete symmetry (such as a time-reversal symmetry), the literature has also discovered the structures known as DPin [68] and EPin [35].
• The DPin [68] is known for introducing two types of fermions (with Z_2), with the time-reversal symmetry acting differently on the fermions, T^2 = (−1)^{F_+} and T^2 = +1 respectively (via a group extension of the form 1 → Z_2 → ⋯).
"year": 2021,
"sha1": "ed9cb158d955ef742d641e5a4d1c49fddf5217c7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ed9cb158d955ef742d641e5a4d1c49fddf5217c7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The Use of Mobile Apps in Learning English Language
Language is one of the significant elements that affect international communication activities. In addition, Ahmadi stated that one of the important elements for learning is the method that instructors use in their classes to facilitate the language learning process. According to Bull and Ma (2001), technology offers unlimited resources to language learners. Harmer (2007) and Genç Ilter (2015) emphasized that teachers should encourage learners to find appropriate activities through using computer technology in order to be successful in language learning. Clements and Sarama (2003) declare that the use of suitable technological materials can be useful for learners. According to Harmer (2007), using computer-based language activities improves cooperative learning among learners. Technology has always been an important part of the teaching and learning environment. When we talk about technology in teaching and learning, the word 'integration' is used (Eady & Lockyer). Although learners have been born into a technologically rich world, they may not be skilful users of technology. In addition, just providing access to technology is not adequate. Meaningful development of technology-based knowledge is significant for all learners in order to maximize their learning. In this review paper, the researcher will review some of the significant issues pertinent to the use of technology in the learning and teaching of English language skills (Bennett, Maton & Kervin, 56-57). Nowadays, mobile technologies and mobile applications (apps) are becoming an indispensable part of learning, including foreign language learning. In fact, mobile learning research shows that the use of cell phones and their applications continues to be beneficial for learning a foreign language, thanks in particular to their special features (e.g., interactivity, ubiquity, and portability) and to encouragement and feedback from teachers. Klimova (2018), in Evaluation of the Effectiveness of the Use of a Mobile Application on Students' Study Achievements, mentions that the trend nowadays in using mobile phones in language learning is that they are mainly used in the service of language acquisition. The blended learning (BL) approach (a combination of face-to-face instruction and online learning) is therefore mainly implemented for their use. In addition, the BL approach is especially suitable for distant students who, due to their work commitments, cannot be involved in full-time English language study.

Abstract
The purpose of this review study is to explore whether mobile apps used in the learning of English as a foreign language are beneficial. Do mobile apps bring about important progress in college students' English ability?
II. Review of Literature
The study of mobile phone use in the classroom and in language learning is not novel. Many research papers have been conducted in this field supporting the use of mobile phones in learning. Thornton and Houser (2005), in their article entitled Using Mobile Phones in English Education in Japan, presented three studies of mobile phone learning with university students. Students felt that using the mobile phone in learning is "a valuable teaching method" and they highly rated its "educational effectiveness" in the classroom. In a further study by them, participants were subdivided into three groups: using cell phone text messages, using computer e-mail, and speaking. Attewell (2004), in his article Mobile Technologies and Learning, asserts that mobile phones have positively contributed to the field of learning in many different ways. First, mobile learning helps learners to improve their literacy and numeracy skills and to recognize their existing abilities. Second, it can be used to encourage both independent and collaborative learning experiences. Also, it helps learners to identify areas where they need assistance and support.
The Telegram app has an impact on English language skills such as reading, writing, listening, and speaking. Naderi and Akrami (2018) stated that using Telegram groups in instruction has a significant effect on learners' reading comprehension ability. They affirmed that it improved the learners' ability in reading comprehension. In the same context of implementing the Telegram app in the teaching of English skills, several previous studies (Abbasi & Behjat, 2016; Setiawan & Wahyuni, 2017; Xodabande, 2017) concentrated on improving speaking ability. Abbasi and Behjat (2016) investigated the effect of storytelling with Telegram on EFL students' speaking complexity. The result showed that the experimental group outperformed the control group regarding their speaking complexity.
Additionally, Setiawan and Wahyuni (2017) showed the role of the E-talk Castle model in improving students' speaking skill in English by using a recording tool in the Telegram application. The results revealed that implementing the E-talk Castle model supported by Telegram provided students with good aid to improve their English speaking skill.
Stepp-Greany (2002) used survey data from Spanish language classes which utilized a range of technological approaches and methods in order to determine the importance of the role of teachers, the relevance and availability of technology labs and individual components, and the effect of using technology on the process of learning a foreign language. The results confirmed student perceptions of the teacher as the primary learning facilitator, and stressed the importance of regularly scheduled language labs and the use of CD-ROMs. Stepp-Greany recommended a follow-up study to measure the effects of relevant technology on the process of foreign language acquisition. Shyamlee (2012) analyzed the use of multimedia technology in language teaching. The study found that such technology enhances students' learning motivation and attention, since it involves students in the practical processes of language learning via communication with each other. Shyamlee recommended the use of multimedia technology in classrooms, particularly as its positive impact on the learning process aligns with the ongoing efficacy of the teacher's role. Blattner and Lomicka (2012) investigated how social networking sites (SNSs) are used in a language course and how students responded to them. This study was also intended to examine the attitudes of language learners and teachers regarding the use of Facebook (FB) in an academic setting. Based on their findings, the researchers reported that students reacted positively to the use of FB in their language class, as they found many benefits such as a real audience. Participants also recognized FB as a new platform where they can put their developing language skills into practice and interact with native speakers in authentic and meaningful interaction. They also described FB as "casual" and "pressure free," which makes them comfortable practicing their written skills outside the classroom. On the other hand, participants of this study were less familiar with using FB in academia and tended to use it for group discussions and videos. Blattner and Fiori (2009) consider community building and the development of socio-pragmatic competence via FB as useful pedagogical practices and possibilities in technology-integrated classrooms.
According to Susikaran (2013), basic changes have come to classes beyond the teaching methods, because the chalk-and-talk teaching method is not sufficient to teach English effectively. Raihan and Lock (2012) state that with a well-planned classroom setting, learners learn how to learn efficiently. A technology-enhanced teaching environment is more effective than a lecture-based class. Teachers should find methods of applying technology as a useful learning instrument for their learners, even though they have not learnt technology and are not able to use it like a computer expert. Dawson, Cavanaugh, and Ritzhaupt (2008) and Pourhosein Gilakjani (2014) maintained that using technology can create a learning atmosphere centered around the learner rather than the teacher, which in turn creates positive changes. They emphasized that by using computer technology, the language class becomes an active place full of meaningful tasks where the learners are responsible for their learning. Drayton, Falk, Stroud, Hobbs, and Hammerman (2010) argued that using computer technology provides a true learning experience that enhances learners' responsibilities. Technology encourages learners to learn individually and to acquire responsible behaviors. The independent use of technologies gives learners self-direction. Warschauer (2000a) described two different views about how to integrate technology into the class. First, in the cognitive approach, learners get the opportunity to increase their exposure to language meaningfully and construct their own knowledge. Second, in the social approach, learners must be given opportunities for authentic social interactions to practice real-life skills. This objective can be obtained through the collaboration of learners in real activities.
The findings of the research support the documented ineffectiveness of traditional English teaching methods, and confirm that learners are more enthusiastic and interactive when modern technology is integrated into English learning.
III. Research Method
The methods are based on a literature review of available sources found on the research topic in two acknowledged databases: Web of Science and Scopus.
Use of Technology in English Language Class
Technology is an effective tool for learners. Teachers should model the use of technology to support the curriculum so that learners can increase the true use of technology in learning their language skills. Cooperation is one of the important tools for learning. Bennett, Culp, Honey, Tally, and Spielvogel (2000) asserted that the use of computer technology leads to the improvement of teachers' teaching and learners' learning in classes (33). We live in a world in which mobile technology is evolving at such a rapid pace that we have difficulty keeping up. With the advent of smartphones running the Android system and Apple products running the iOS system, such as the iPad and iPhone, the mobile market has changed dramatically in just a few years, and the number of people who own such devices is increasing rapidly, particularly among young people.
Students can easily and freely access these English learning apps based on their own interests. In addition, these apps are built in terms of the specific objectives of the learners. The use of apps on mobile devices to learn English also breaks time and place restrictions (Subian, 7). It means that students can learn English at any time and in any place. Mobile devices are becoming an important kind of tool for students to learn English.
According to the relevant research, Mobile-Assisted Language Learning can not only enhance students' English ability, but also increase students' learning motivation. With the rapid growth of apps for learning English and the popularization of mobile devices among college students, the learning advantages that apps on mobile devices offer to students have become increasingly important. Core features of mobile learning, such as personalized learning, independence of time and place, collaboration between peers and teachers in both formal and informal environments, interactivity, and ubiquity, make mobile devices well suited to m-learning. Mobile-Assisted Language Learning (MALL) focuses on language acquisition using mobile technology. There is no need for learners to sit in a classroom or in front of a computer to study in a MALL environment. MALL can actually be seen as an ideal solution to barriers to language learning in terms of time and place (Miangah & Nezarat, 309). MALL can be used to motivate and engage English language learners to develop their literacy and language skills by themselves. Softa expressed the opinion that MALL is important "as a motivational piece to encourage language learning." Softa conducted a questionnaire given to 230 students about student motivation in the learning environment and the use of mobile technology. When using apps to learn, students are more likely to try to complete the study task independently. It is important to embed learning supports within the MALL that the learner is in control of. The expansion of time, place, and pace allows students continual exposure to, and practice of, literacy skills (Beecher & Williams). The rapid growth of app technology has rendered these English learning apps capable of incorporating various media, such as text, images, animation, audio, and video, to create multimedia instructional materials, as well as prompting student interest in research. In addition, online interactions can enhance learner-to-content interactions and learning effects. These interactions include multimedia presentations, learners' contributions to learning materials, and links to related learning materials. Recent studies also show that learning apps have a positive impact on learning English. College students in Iran, for example, are skilled and passionate users of mobile devices, so they can rely on internet-based or internet-supported language learning for independent language learning and academic writing.
Mobile English Learning
Students use mobile phones in learning, but in a very limited way. According to MoLeNET, mobile learning can be broadly defined as 'the exploitation of ubiquitous handheld technologies, together with wireless and mobile phone networks, to facilitate, support, enhance and extend the reach of teaching and learning.' Mobile technologies include mobile phones, smartphones, mini notebooks or netbooks, handheld GPS or voting devices, and specialist portable technologies used in science labs, engineering workshops, or for environmental or agricultural study, as well as virtual learning environments and management information systems. It can be argued that the tools used by learners are of little relevance; what is relevant is the notion of mobility and building conversations on learning in that cycle. Hashemi (2011), in his article entitled Using Mobile Phones in Language Learning/Teaching, mentions that, first, ownership of the device makes a difference, since a tool that has only been borrowed may not be used in the same way as one that is owned and very familiar. Second, learners who have more than one device are likely to behave differently from those who only have one, because the former can more easily overcome common problems of short battery life and reliability. Third, particular mobile devices have strong associations with specific realms of activity, be it work-related or for leisure (2478-2479). Most mobile devices are valuable in education as teaching aids for practitioners, and also as learning support tools for learners. Here are some of the main benefits:
• Learners can interact with each other and with the practitioner instead of hiding behind large monitors.
• It is much easier to accommodate several mobile devices in a classroom than several desktop computers.
• PDAs or tablets holding notes and e-books are lighter and less bulky than bags full of files, paper and textbooks, or even laptops.
• Handwriting with the stylus pen is more intuitive than using a keyboard and mouse.
• It is possible to share assignments and work collaboratively; learners and practitioners can e-mail, cut, copy and paste text, and pass the device around a group.
• Mobile devices can be used anywhere, anytime, including at home, on the train, or in hotels; this is invaluable for work-based training.
• These devices engage learners, including young people who may have lost interest in education, since they resemble familiar mobile phones, gadgets, and games devices such as the Nintendo DS or PlayStation Portable.
• This technology may contribute to combating the digital divide, as this equipment (for example, PDAs) is generally cheaper than desktop computers. (2479)
The emergence of apps for education has changed the traditional learning mode, gradually shifting from teacher-centered instruction to self-regulated, learner-driven active knowledge construction (Yiping & Lei, 2010). Technology-enriched learning is designed to enhance students' self-regulation and motivation (Kramarski & Gutman, 2006).
According to James (2013), the easy availability of apps on mobile devices means that students are increasingly turning to online resources for learning. At the same time, it is also worth considering the benefits of apps that can help students to organize and compare different sources as part of projects and revision. The Internet, however, has its problems.
Merits of Mobile Learning
Regarding the advantages of English-learning apps, two main advantages are that these online resources are free to obtain and that students can download resources onto mobile devices and study without restrictions of time and place. Furthermore, the large number of relevant apps provides many choices for students to find the online resources they are genuinely interested in. It should be noted that the learning materials are regularly updated in most apps.
There are some benefits of mobile learning for English students, as follows:
• Quiz control and self-assessment, in the form of questions or games.
• Taking lessons and tutorials.
• Receiving archived or live broadcast lectures.
• Access to audio or video clips.
• Being part of virtual learning communities on the go.
• Student interaction with instructors and among each other.
• Enabling several students to work together on assignments even while at distant locations.
• The new generation likes mobile devices such as PDAs, phones, and games devices.
Demerits of Mobile Learning
The social restrictions can be seen in the fact that "students don't use the mobile phone seriously." Some students play video or music in the classroom, which makes it "noisy and out of control." In addition, some teachers "don't allow its use" in the classroom because students cannot focus and will not pay attention. To the contrary, other students have expressed the disadvantages of the mobile phone by saying they are "feeling bored" with it. Moreover, it is "not interesting to use the mobile phone to learn." Some still do not use it, since they are not motivated by teachers. The non-users of cell phones in the classroom reported their reasons for not using them, such as the high Internet prices, the small mobile screen, and health problems.
Cell phones and PDAs display on small screens. Devices can become out of date quickly. There are also difficulties with printing unless the device is connected to a network. Unfortunately, one drawback of mobile learning is that it increases a student's daily screen time. Although, on the one hand, we aggressively aim to reduce the amount of time students (particularly younger ones) spend in front of a computer, smartphone, tablet, or TV screen, mobile learning requires students to spend time in front of a screen to learn. Using mobile learning also causes a great deal of distraction: most students open the smartphone to learn something, and end up chatting, posting photos, or playing video games on social media websites. Such forms of distraction waste one's energy, which could have been used to do a successful job. Infrastructure can be a concern in rural areas and in areas where Internet and electricity use is not yet widespread: if you have a device but do not have the electricity or Internet needed, you cannot run the device or use the mobile learning facility. In the end, some respondents suggested the university ought to play a part in promoting and supporting the use of mobile education technology; as one student suggested, the "university will draw up some mobile learning programs in English."
V. Conclusion
In conclusion, the most important resource in the ICT world at the moment is mobile learning. Mobile learning is considered to be an important factor in keeping young people interested in learning, where more conventional approaches have struggled. With PDAs offering desktop functionality, the learning world is getting more mobile, more flexible, and more exciting. What makes mobile technology so interesting is its association with indoor and outdoor movement, through formal and informal environments, allowing learners to at least partly lead the way. Mobile technology takes learning out of the classroom, often beyond the teacher's reach. Owing to its hard work, the modern world is attaining milestones in all fields, and the same progress is observed in the field of science and technology: new and creative technology has replaced outdated technology. Because of the emergence of this modern technology and the Internet, the field of education is evolving a lot. In addition, teachers use some valuable smartphone devices when they teach English in their EFL/ESL classrooms. So, learners take this opportunity to use mobile apps to learn language skills and focus more on learning them both inside and outside the classroom.
"year": 2020,
"sha1": "24e82218521471e16084e2e6095229a8d64c2732",
"oa_license": "CCBYSA",
"oa_url": "https://bircu-journal.com/index.php/birle/article/download/1186/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "114f58231e7563a2aa285d760fae4a8b44cd3cf7",
"s2fieldsofstudy": [
"Education",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
SRV2 promotes mitochondrial fission and Mst1-Drp1 signaling in LPS-induced septic cardiomyopathy
Mitochondrial fission is associated with cardiomyocyte death and myocardial depression, and suppressor of ras val-2 (SRV2) is a newly discovered pro-fission protein. In this study, we examined the mechanisms of SRV2-mediated mitochondrial fission in septic cardiomyopathy. Western blotting, ELISA, and immunofluorescence were used to evaluate mitochondrial function, oxidative balance, energy metabolism, and caspase-related death, and siRNA and adenoviruses were used to perform loss- and gain-of-function assays. Our results demonstrated that increased SRV2 expression promotes, while SRV2 knockdown attenuates, cardiomyocyte death in LPS-induced septic cardiomyopathy. Mechanistically, SRV2 activation promoted mitochondrial fission and physiological abnormalities by promoting oxidative injury, ATP depletion, and caspase-9-related apoptosis. Our results also demonstrated that SRV2 promotes mitochondrial fission via an Mst1-Drp1 axis. SRV2 knockdown decreased Mst1 and Drp1 levels, while Mst1 overexpression abolished the mitochondrial protection and cardiomyocyte survival-promoting effects of SRV2 knockdown. SRV2 is thus a key novel promoter of mitochondrial fission and Mst1-Drp1 axis activity in septic cardiomyopathy.
Sepsis-induced cardiomyopathy, which is characterized by left ventricular dilation and decreased ejection fraction, significantly increases perioperative mortality [1]. However, there are few effective drugs and therapeutic approaches for patients with septic cardiomyopathy. Identification of the molecular mechanisms underlying the development of septic cardiomyopathy might help identify new therapeutic targets and improve the efficacy of septic cardiomyopathy treatments as well as its prognosis [2].
Mitochondria are primarily responsible for ATP generation in cardiac cells. Structurally, mitochondria are highly plastic organelles that undergo continuous fusion, fission, trafficking, and mitophagy [3]. Mitochondrial homeostasis is controlled by mitochondrial fission. For example, active mitochondrial fission has been associated with cardiomyocyte death in a myocardial reperfusion model [4,5]. During oxidative stress, endothelial cell survival rates are also impacted by fission-initiated, caspase-9-related apoptosis [6]. Moreover, fission also facilitates high-fat-mediated hepatic injury [7,8] and diabetic nephropathy. Therefore, fission may be a critical regulator of mitochondrial function as well as cell survival [9]. This has been observed in various cancers, such as gastric cancer, liver tumors, and thyroid carcinoma [10]. However, few studies have explored the downstream effects and key inducers of mitochondrial fission in septic cardiomyopathy.
Suppressor of ras val-2 (SRV2), a newly discovered pro-fission protein, affects mitochondrial shape and activates mitochondrial fission via multiple mechanisms [11]. First, SRV2 can promote interactions between Drp1 and mitochondria [12]. Subsequently, SRV2 promotes oligomerization of Drp1, which then forms a ring around mitochondria that constricts and cuts them into several fragments. SRV2 also increases the expression of stress fibers, such as F-actin [13], that facilitate Drp1-mediated mitochondrial division [14]. In this study, we conducted several experiments to understand the effects of SRV2 on fission in septic cardiomyopathy.
Macrophage stimulating 1 (Mst1), a key factor in the Hippo signaling pathway, is important for mitochondrial structural maintenance and functional preservation [15]. For example, Mst1 activation is associated with mitochondrial stress in LPS-treated hepatocytes. In fatty liver disease, inhibition of Mst1 reduces mitochondrial autophagy [16]. Interestingly, mitochondrial membrane potential and apoptosis are also affected by Mst1 in hyperglycemia-treated retinal epithelial cells. In contrast, loss of Mst1 attenuates renal ischemia reperfusion injury by maintaining mitochondrial homeostasis [17]. In addition, Mst1 knockdown enhances cardiomyocyte viability by improving mitochondrial performance through mitochondrial autophagy. The effects of Mst1 on mitochondrial fission have been widely reported in many kinds of cancers, such as gastric, lung, pancreatic, liver, and colorectal cancer [18]. In the present study, we explored whether SRV2-related mitochondrial fission is mediated by Mst1 in, and whether it contributes to the pathogenesis of, septic cardiomyopathy.
SRV2 is upregulated in septic cardiomyocytes and correlates with cardiac dysfunction
First, we measured alterations in SRV2 levels via qPCR and Western blotting in a mouse model of septic cardiomyopathy. As shown in Figure 1A and 1B, compared to the sham group, SRV2 transcript and protein levels were significantly elevated in mice with LPS-induced septic cardiomyopathy. Echocardiography was used to examine associations between SRV2 upregulation and sepsis-related myocardial damage. As shown in Figure 1C and 1D, compared to the sham group, left ventricular ejection fraction (LVEF) and left ventricular fractional shortening (LVFS) were significantly reduced after LPS treatment, suggesting a loss of cardiac contractile function. In addition, inflammatory factors such as IL-1β, IL-8, TNF-α, and MCP-1 were markedly increased in mice injected with LPS (Figure 1E-1H). Together, these results indicate that SRV2 is activated by LPS and is associated with heart failure in a mouse model of septic cardiomyopathy.
Loss of SRV2 attenuates cell death and sustains cardiomyocyte function
To determine whether SRV2 upregulation directly causes cardiac damage, a loss-of-function assay was performed by transfecting cardiomyocytes with siRNA against SRV2. Cardiomyocyte viability was then measured in an MTT assay. As shown in Figure 2A, compared to the control group, cardiomyocyte viability was reduced by LPS treatment; this effect was reversed by SRV2 siRNA transfection. Cardiomyocyte death was further analyzed with TUNEL staining and an LDH release assay. As shown in Figure 2B and 2C, compared to the control group, the number of apoptotic cells increased greatly after LPS treatment. SRV2 knockdown also reduced the ratio of apoptotic to normal cardiomyocytes. In accordance with these findings, LDH levels in the culture medium were markedly increased in response to LPS treatment and returned to normal levels after siRNA-induced silencing of SRV2 (Figure 2D). SRV2 knockdown also decreased the transcription of inflammatory factors. As shown in Figure 2E-2H, compared to the control group, LPS treatment upregulated the transcription of IL-1β, IL-8, and MCP-1 in cardiomyocytes, and inhibition of SRV2 reversed this effect. These results indicate that downregulation of SRV2 attenuates LPS-induced cardiomyocyte death and dysfunction.
SRV2 activation is associated with mitochondrial fission
Next, we examined mitochondrial fission, an early indicator of cardiomyocyte damage [19,20], to understand the molecular mechanism by which SRV2 decreases cardiomyocyte function and survival in LPS-mediated septic cardiomyopathy. First, an immunofluorescence assay was performed to quantify mitochondrial fission. As shown in Figure 3A-3C, compared to the control group, mitochondrial fission was activated by LPS in cardiomyocytes, as evidenced by decreased mitochondrial length and increased mitochondrial fragmentation. Interestingly, SRV2 knockdown inhibited LPS-mediated mitochondrial fission, as indicated by reversal of mitochondrial network alterations and increased mitochondrial length (Figure 3A-3C). In addition, transcription of mitochondrial fission-related proteins, including Drp1, Fis1, and Mff, increased rapidly after exposure to LPS (Figure 3D-3F). Furthermore, levels of anti-fission factors such as Mfn2 and Opa1 markedly decreased after LPS treatment (Figure 3G-3H). These data suggest that LPS stress triggers mitochondrial fission. In contrast, Drp1, Fis1, and Mff levels decreased (Figure 3D-3F), while Mfn2 and Opa1 levels increased (Figure 3G and 3H), after deletion of SRV2 in LPS-treated cardiomyocytes. These results indicate that LPS-mediated upregulation of SRV2 promotes mitochondrial fission.
Inhibition of SRV2-mediated mitochondrial fission promotes cell survival and sustains cardiomyocyte function
Next, we explored whether SRV2 induced cardiomyocyte damage through mitochondrial fission by measuring viability in SRV2-knockdown cardiomyocytes treated with FCCP, an agonist of mitochondrial fission [21]. As shown in Figure 4A, LPS-induced cardiomyocyte damage was reversed by SRV2 knockdown, and FCCP treatment blocked this effect. In addition, although cardiomyocyte death as indicated by TUNEL staining (Figure 4B-4C) was attenuated by SRV2 knockdown after LPS treatment, FCCP increased the proportion of apoptotic cardiomyocytes.
In addition to cardiomyocyte death, we also examined structural alterations in the cardiomyocyte cytoskeleton, which is vital for cellular contraction [22]. Interestingly, expression of the cytoskeletal protein F-actin decreased after exposure to LPS, and SRV2 knockdown reversed this effect (Figure 4D-4E). FCCP again blocked the effects of SRV2 knockdown after LPS treatment. Furthermore, SRV2 knockdown reduced the LPS-induced increase in the inflammatory response, as indicated by IL-1β, IL-8, and MCP-1 transcription, to near-normal levels in cardiomyocytes, and FCCP treatment again blocked this effect (Figure 4F-4H). Together, these results indicate that inhibition of SRV2 protects cardiomyocytes against LPS-induced stress by inhibiting mitochondrial fission.
SRV2-induced mitochondrial fission promotes mitochondrial damage
To further characterize the molecular mechanism by which SRV2-mediated mitochondrial fission promotes cardiomyocyte death, mitochondrial function and damage were measured [23]. Reactive oxygen species (ROS) are generated primarily by mitochondria, and excessive ROS production is a risk factor for myocardial depression [24]. Using an ROS probe, we found that ROS levels increased markedly in LPS-treated cardiomyocytes (Figure 5A-5B). SRV2 knockdown reduced ROS levels by inhibiting mitochondrial fission, and FCCP restored the elevated ROS levels in SRV2-knockdown cardiomyocytes (Figure 5A-5B). Antioxidant levels increased after SRV2 knockdown (Figure 5C-5E), and decreased after subsequent FCCP treatment, in LPS-treated cardiomyocytes, suggesting that SRV2 inhibition exerts antioxidative effects in cardiomyocytes by inhibiting mitochondrial fission.
Mitochondrial damage is also characterized by the opening of mitochondrial permeability transition pores (mPTP) [25]. As shown in Figure 5F, compared to the control group, LPS increased the proportion of cardiomyocytes with open mPTPs. SRV2 knockdown prevented LPS-mediated mPTP opening, and FCCP treatment reversed this effect (Figure 5F). Opening of mPTPs resulted in increased transcription of proapoptotic mitochondrial genes after LPS treatment (Figure 5G-5J), and this effect was reversed by SRV2 knockdown-mediated inhibition of mitochondrial fission (Figure 5G-5J). Together, these results demonstrate that SRV2-mediated mitochondrial fission promotes mitochondrial damage, which in turn leads to cardiomyocyte dysfunction and death.
Cardiomyocyte mitochondrial metabolism is disrupted by SRV2-induced mitochondrial fission
Mitochondrial energy metabolism is vital for cardiomyocyte survival and contraction [26]. ATP depletion and bioenergetic impairment have been observed in cardiomyocytes during septic cardiomyopathy [27]. Here, we investigated whether SRV2-mediated mitochondrial fission is also involved in cardiomyocyte metabolism dysregulation. As shown in Figure 6A, compared to the control group, ATP generation was decreased in LPS-treated cardiomyocytes. SRV2 knockdown increased ATP production, and FCCP attenuated this effect (Figure 6A). ATP is generated primarily at the mitochondrial electron transport chain (ETC) complexes. Transcription of ETC components was reduced by LPS and restored to control levels by SRV2 knockdown (Figure 6B-6D). FCCP-induced reactivation of mitochondrial fission decreased ETC transcription (Figure 6B-6D). LPS, SRV2 knockdown, and FCCP also had similar effects on mitochondrial ETC activity (Figure 6E-6G). Together, these findings indicate that SRV2-mediated mitochondrial fission leads to ETC dysfunction. As a result of this ETC dysfunction, mitochondrial membrane potential was reduced, as evidenced by increased green JC-1 fluorescence in LPS-treated cardiomyocytes (Figure 6H-6I). SRV2 knockdown reversed this decrease in mitochondrial membrane potential, and FCCP-induced mitochondrial fission again decreased mitochondrial potential (Figure 6H-6I). Taken together, these results demonstrate that SRV2 knockdown, and the resulting suppression of mitochondrial fission, prevents LPS-induced dysregulation of cardiomyocyte energy metabolism.
SRV2 promotes mitochondrial fission via the Mst1-Drp1 signaling pathway
Lastly, we examined the signal transduction mechanism by which SRV2 promotes mitochondrial fission in LPS-treated cardiomyocytes [28]. Previous studies have reported that mitochondrial fission is primarily regulated by Drp1, which is the downstream effector of the Mst1 pathway [29,30]. Mst1-induced mitochondrial apoptosis has also been identified as an important mechanism of mitochondrial damage. We therefore investigated whether SRV2 induced mitochondrial fission through the Mst1-Drp1 signaling pathway. Drp1 and Mst1 transcription increased rapidly in response to LPS treatment (Figure 7A-7B), and SRV2 knockdown prevented this upregulation (Figure 7A-7B).
To determine whether the Mst1-Drp1 pathway is required for SRV2-induced mitochondrial fission, mitochondrial fission was measured after SRV2-knockdown cardiomyocytes were transduced with an adenovirus expressing Mst1. SRV2 knockdown again inhibited LPS-mediated mitochondrial fission, and Mst1 overexpression reversed this effect (Figure 7A-7B). Drp1, Mff, and Fis1 transcription was also upregulated in response to Mst1 overexpression in SRV2-knockdown cardiomyocytes (Figure 7C-7D). These data indicate that mitochondrial fission is re-activated by Mst1 overexpression in SRV2-knockdown cells.
The SRV2 knockdown-induced increase in ATP generation was also abolished by Mst1 overexpression (Figure 7E). In addition, antioxidant levels in SRV2-knockdown cardiomyocytes were similar to controls, and Mst1 overexpression decreased SOD, GSH, and GPX levels (Figure 7F-7H). ROS content was also increased by Mst1 overexpression in SRV2-knockdown cells (Figure 7I-7J). Finally, caspase-9 activity increased after LPS treatment, and this effect was reversed by SRV2 knockdown. However, Mst1 overexpression increased caspase-9 activity in SRV2-knockdown cardiomyocytes (Figure 7K). Taken together, these data indicate that SRV2 promotes mitochondrial fission by activating the Mst1-Drp1 signaling pathway.
DISCUSSION
Septic cardiomyopathy is a transient left ventricular dysfunction triggered by an excessive inflammatory response. Although numerous theories have been developed to explain the pathogenesis of septic cardiomyopathy, the most common cause is disruption of mitochondrial structure and function [31]. Several studies have demonstrated that protection of mitochondria can attenuate decreases in myocardial activity during septic cardiomyopathy [32]. Excessive mitochondrial damage is characterized by oxidative stress and energy metabolism disorders that inhibit contraction and promote death in cardiomyocytes [33]. Although many studies have explored the pathological role of mitochondria in septic cardiomyopathy [34], the upstream mediators of inflammation-induced mitochondrial damage have not yet been identified [35]. In this study, mice received LPS injections to induce septic cardiomyopathy. Our experimental results confirmed that inflammation-induced myocardial damage increased the expression of SRV2, a novel regulator of mitochondrial structure. Additionally, increased cardiomyocyte death and decreased cardiac function were associated with elevated SRV2 expression in cardiomyocytes [36,37]. Loss of function assays were performed to further investigate the role of SRV2 in sepsis-related cardiac damage. Interestingly, SRV2 knockdown promoted cardiomyocyte survival and attenuated the LPS-induced inflammatory response, confirming that SRV2 is a novel promoter of myocardial damage in septic cardiomyopathy (Figure 8). Previous studies have reported that SRV2 is involved in mitochondrial damage [38]. Here, SRV2 activated mitochondrial fission, which in turn promoted mitochondrial-associated cardiomyocyte apoptosis as evidenced by mitochondrial membrane potential loss, mitochondrial ROS overloading, antioxidant system suppression, cellular ATP depletion, pro-apoptotic factor release, and caspase family activation [39].
Furthermore, we found that SRV2 affects mitochondrial fission via the Mst1-Drp1 signaling pathway [40]. Overexpression of Mst1 abolished SRV2 knockdown-induced increases in cardiomyocyte survival and mitochondrial protection [41,42]. This suggests that the SRV2-Mst1-Drp1 signaling pathway is a novel regulator of cardiomyocyte viability and mitochondrial homeostasis in the context of septic cardiomyopathy [43].
At the molecular level, accumulation of Drp1 and F-actin assembly initiate mitochondrial fission [44]. Drp1 forms a ring structure that causes mitochondrial contraction, and F-actin provides an adhesive force that helps Drp1 to complete mitochondrial contraction [45,46]. Notably, SRV2 promotes polarized actin cable assembly, facilitates actin turnover [47], and enhances F-actin synthesis. Moreover, Drp1 accumulation on the mitochondrial surface is also regulated by SRV2. In these ways, SRV2 plays crucial regulatory roles in mitochondrial fission [48]. Our findings in the septic cardiomyopathy model also support this conclusion. Drp1 expression was downregulated after SRV2 knockdown, and this was followed by decreases in the levels of other mitochondrial fission-related factors, such as Mff and Fis1 [49,50]. However, the mechanism by which SRV2 modulates these mitochondrial fission-related factors remains unknown [51]. Notably, we demonstrated that SRV2 regulated Drp1 expression via Mst1-Hippo signaling; re-activation of the Mst1-Hippo pathway abolished the inhibitory effects of SRV2 knockdown on Drp1 expression. The Mst1-Hippo pathway has also been identified as an upstream regulator of mitochondrial fission [52]. For example, Mst1 activates mitochondrial fission by upregulating Drp1 in renal ischemia-reperfusion injury. In postinfarction cardiac injury [53], Mst1 activation is associated with the initiation of mitochondrial fission via JNK-mediated posttranslational modification of Drp1. Moreover, in endometriosis, Drp1-related mitochondrial fission is also affected by Mst1 [54]. The Mst1-Hippo pathway has also been characterized as a cancer-killing pathway in several kinds of cancer, such as pancreatic, liver, gastric, and colorectal cancer [55,56]. In this study, we identified an important mechanism by which Mst1 promotes cardiomyocyte death and mitochondrial fission [57]. These findings improve our understanding of the roles that the Mst1-Hippo pathway and SRV2 play in acute cardiac injury.
Some limitations should be considered when interpreting the results of this study. First, the SRV2 knockdown assay was performed in vitro, and animal studies and human research are needed to verify our findings. Furthermore, although we found that SRV2 modulates Drp1 expression, it remains unknown whether the Mst1-Hippo pathway also regulates other mitochondrial fission-related factors.
Animals
Eight-week-old C57BL/6 mice (Oriental Bio Service Inc., Nanjing) were maintained in standard cages on a 12 h light/dark cycle at 22°C ± 2°C with 55-65% relative humidity and given food and water ad libitum. Animal care and experimental procedures were conducted in accordance with the guidelines established by the Institutional Animal Care and Use Committees at Fujian Medical University.
Mouse model and drug administration
The septic cardiomyopathy mouse model was established as previously described with minor adjustments. Thirty C57BL/6 mice were randomly divided between the normal saline control group (n=10) and an LPS-induced group (n=20). Mice in the LPS-induced group were injected intraperitoneally with LPS (10 mg/kg) purchased from Sigma-Aldrich (St. Louis, MO). Mice in the normal saline control group were injected with an equal volume of sterile saline [58].
Echocardiographic assessment
To evaluate left ventricular (LV) function in the mouse septic cardiomyopathy model, transthoracic echocardiography was performed according to previously described procedures [59], and left ventricular ejection fraction (LVEF) and left ventricular fractional shortening (LVFS) were measured. All analyses were performed by a single investigator who was blinded to the experimental groups [60].
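For reference, LVEF and LVFS follow directly from the standard echocardiographic measurements. The sketch below shows the conventional definitions only; the acquisition settings and any volume-estimation formula (e.g., Teichholz) are assumptions, since the paper simply cites its echo protocol:

```python
def lv_fractional_shortening(lvedd: float, lvesd: float) -> float:
    """LVFS (%) from LV end-diastolic and end-systolic internal diameters."""
    return 100.0 * (lvedd - lvesd) / lvedd

def lv_ejection_fraction(edv: float, esv: float) -> float:
    """LVEF (%) from LV end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

# Illustrative values: EDV = 70 uL, ESV = 40 uL gives LVEF ~ 42.9%;
# LVEDD = 4.0 mm, LVESD = 2.8 mm gives LVFS = 30%.
print(lv_ejection_fraction(70, 40), lv_fractional_shortening(4.0, 2.8))
```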
ELISA assay
Blood was collected from the mice via eyeball extraction 24 h after LPS injection, and serum was separated by centrifugation at 3500 rpm at 4°C for 15 min. ELISA kits (Invitrogen, Carlsbad, CA, USA) were used to measure levels of inflammatory cytokines (IL-1β, IL-8, TNF-α, and MCP-1) according to the manufacturer's protocols. Absorbance was read at 450 nm relative to a reference wavelength using a microplate reader (Bio-Rad, Hercules, CA, USA) [61].
Reverse transcription-quantitative polymerase chain reaction (RT-qPCR)
Total RNA was extracted from left atrial (LA) tissues using TRIzol Reagent (Invitrogen, Carlsbad, CA), and single-stranded cDNA was transcribed using the PrimeScript™ RT reagent Kit with gDNA Eraser (Takara, Dalian, China). RT-qPCR was performed on an ABI Prism 7500 Sequence Detection system (Applied Biosystems; Thermo Fisher Scientific, Inc.). The thermocycling conditions were as follows: 50°C for 2 min, then 40 cycles of 95°C for 30 sec and 60°C for 1 min. Transcript levels were measured relative to GAPDH using a calibration curve [62].
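As a concrete illustration of the calibration-curve quantification described above, the sketch below computes a GAPDH-normalized expression level from Ct values; the slopes and intercepts are hypothetical placeholders, not values from the paper:

```python
def quantity_from_curve(ct: float, slope: float, intercept: float) -> float:
    # Standard curve fitted from a dilution series: Ct = slope * log10(quantity) + intercept
    return 10 ** ((ct - intercept) / slope)

def relative_expression(ct_target, ct_gapdh, curve_target, curve_gapdh):
    """Target transcript level normalized to GAPDH via per-gene standard curves."""
    q_target = quantity_from_curve(ct_target, *curve_target)
    q_ref = quantity_from_curve(ct_gapdh, *curve_gapdh)
    return q_target / q_ref

# Hypothetical curves (a slope near -3.32 corresponds to ~100% PCR efficiency).
print(relative_expression(24.1, 18.7, (-3.32, 38.0), (-3.35, 36.5)))
```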
Western blot analysis
Total protein was isolated from samples with lysis buffer. Proteins of interest were separated on SDS-PAGE gels, transferred to PVDF membranes (Millipore, Hong Kong, China), and incubated with an Mst1 primary antibody (1:1000, Cell Signaling Technology, #3682) followed by a horseradish peroxidase (HRP)-conjugated secondary antibody. The protein bands were detected by enhanced chemiluminescence (ECL) and were visualized using a Kodak Image Station 4000 (Rochester, NY). Band densities were quantified using the Quantity One analysis system (Bio-Rad Laboratories, UK) [63].
Cell culture and transfection
H9C2 rat embryonic ventricular cardiomyocytes were purchased from the American Type Culture Collection (ATCC). The cells were maintained in DMEM (Hyclone) supplemented with 10% fetal bovine serum (FBS; Hyclone), 100 U/mL penicillin (Sigma), and 100 μg/mL streptomycin (Sigma) at 37°C in a humidified atmosphere with 5% CO2 [65]. Prior to experiments, cells were grown to 80-90% confluence and then transfected for 24 h with siRNA against SRV2 or transduced with the Mst1 adenovirus at a multiplicity of infection of 50, achieving 90% transduction efficiency. Cells were then subjected to serum starvation (0.4% FBS) for 24 h and treated with 20 μM LPS for 24 h [66].
Mitochondrial membrane potential assay
Mitochondrial membrane potential (MMP) was determined using a JC-1 probe (BD Biosciences, San Diego, CA, USA). Briefly, after incubation with 10 μg/mL JC-1 in the dark for 20 min at 37°C, the cells were washed with PBS and observed using a confocal microscope (Leica Microsystems, Heidelberg, Germany). MMP was quantified by measuring the 590/488 fluorescence intensity ratio [67].
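Since the readout above is a simple red-to-green intensity ratio, a minimal analysis sketch might look like the following; segmentation and background subtraction are omitted, and the array names are illustrative rather than part of the published pipeline:

```python
import numpy as np

def jc1_mmp_ratio(red_590: np.ndarray, green_488: np.ndarray) -> float:
    """Mean J-aggregate (590 nm) / monomer (488 nm) intensity ratio per image.
    A lower ratio indicates mitochondrial depolarization."""
    return float(red_590.mean() / green_488.mean())
```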
ATP concentration
Mitochondrial ATP concentration was measured using an ATP quantification kit according to the manufacturer's instructions (Invitrogen, USA) [62]. ATP concentrations were normalized to total protein levels [68].
TUNEL assay
Cell apoptosis was measured in a TUNEL assay using an Apoptosis In Situ Detection Kit (Abcam, Cambridge, MA, USA) according to the manufacturer's instructions. A Leica TCS-SP laser scanning confocal microscope (Leica Microsystems, Heidelberg, Germany) was used to take photomicrographs [71].
Statistical analysis
Statistical analysis was performed using SPSS 22.0 software.
Continuous variables with normal distributions were tested using one-way ANOVA followed by Tukey's post hoc test; values are expressed as means ± SEM. The Kruskal-Wallis test was used for non-normally distributed variables; values are expressed as medians and interquartile ranges. Differences between two groups were analyzed by Student's independent t-tests or Mann-Whitney U tests [72]. Incidence across groups was analyzed using Fisher's exact test; values are expressed as percentages. P-values < 0.05 were considered statistically significant.
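The decision rule described above (parametric versus nonparametric testing depending on normality) can be sketched as follows; this is an illustrative pipeline, not the authors' actual SPSS workflow:

```python
from scipy import stats

def compare_groups(*groups, alpha: float = 0.05):
    """Choose one-way ANOVA (+ Tukey HSD) or Kruskal-Wallis based on per-group normality."""
    if all(stats.shapiro(g).pvalue > alpha for g in groups):
        p = stats.f_oneway(*groups).pvalue
        posthoc = stats.tukey_hsd(*groups)  # pairwise post hoc comparisons
        return "one-way ANOVA", p, posthoc
    return "Kruskal-Wallis", stats.kruskal(*groups).pvalue, None
```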
AUTHOR CONTRIBUTIONS
XLS and YRZ designed the experiments and wrote and edited the manuscript. JQX, ML, and XTW performed the experiments. RGY analyzed the data. Public hospital research funds supported the experiments.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest. All authors have read and approved the final manuscript. | 2020-01-19T14:03:10.997Z | 2020-01-17T00:00:00.000 | {
"year": 2020,
"sha1": "825006c53569960ff6842d1fd46fcebf0085e31c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18632/aging.102691",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "98c40327828db6f857ac96c36341066e491b643d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
242219599 | pes2o/s2orc | v3-fos-license | Measuring E-Commerce Adoption Behavior of Z-generation in A Developing Country, Evidence from Mongolia
This study analyzes the e-commerce adoption behavior of Generation Z in Mongolia. The effects of personal innovativeness, perceived usefulness, social influence, and perceived risk on intention, through the mediating role of attitude, were specifically examined. To achieve the purpose of the research, a survey was conducted among consumers who had experienced online shopping malls from October 21, 2020 to November 2, 2020. A total of 332 effective questionnaires were collected, and the data were analyzed using the SPSS and Smart PLS 3.3 programs. The current study reveals that an individual's perceptions of usefulness, personal innovativeness, and social influence were important factors in the adoption of e-commerce. Individuals who have a higher degree of perceived risk have a higher preference to try products without a prior evaluation. Marketers who intend to expand into developing markets such as Mongolia are advised to consider the consumer's generation and attitudes towards adopting new technology. The study enhances our understanding of the young generation in Mongolia, whose behavior and actions are similar to those of young people living in developed countries.
The above stated facts make Mongolia a very interesting target for researchers studying consumer behavior. Even though several studies have been conducted on individuals' behavior and its influence on e-commerce adoption, these studies mainly focus on developed countries. Very few studies have been undertaken in developing countries like Mongolia.
By focusing on Mongolia, this study enriches the extant knowledge on technology acceptance and e-commerce adoption behavior in developing and emerging market economies. Basically, it offers fresh insights into how different factors enhance consumers' purchases of products in the emerging market of Mongolia. This would assist key stakeholders, such as e-retailers, associations, and policymakers, to develop and manage their strategies and initiatives to promote e-commerce. This study also provides insightful suggestions to decision-makers, such as governments, business owners, and individuals evaluating e-commerce behavior among Mongolian people.
Theoretical background and hypotheses
The world has become more global and the market is easily accessible. E-business is defined as a business model that digitalizes a conventional business model to suit the needs of consumers and the development of new technologies, making it more efficient and effective. Businesses are moving into this e-business era because of the evolution, change, and digital development of e-business. Michael Wade, director of the Global Center for Digital Business Transformation (Wade, 2015), explains that "the e-business evolution is an organizational change that aims to improve performance and efficiency through the use of digital technology and business models." If a company does not keep pace with the others and introduce digital technology to its business, it runs the danger of losing its competitiveness and market position. E-commerce is not only buying and selling products and services over the Internet, but also involves other activities, such as commercial transactions to support the sales process (Nawarathna and Banda, 2019).
A complete definition of e-commerce is the use of electronic communications and digital information processing technology in business transactions to create, transform, and redefine relationships for value creation between organizations and individuals (Emmanuel et al., 2000). Previous studies indicate that personal innovativeness, self-efficacy, and perceived usefulness influence an individual's eagerness to adopt technology (June, 2014; Delafrooz et al., 2011). Agarwal and Prasad (1998) first documented the concept of personal innovativeness, related to the motivation and desire to accept and adopt new products and services (Lu, 2014). Self-efficacy is associated with how people think, believe, and feel, and refers to beliefs in individual capabilities to successfully manage and accomplish tasks (Bandura, 1997). Self-efficacy studies aim to motivate individuals to accomplish effortful consumption tasks (Luszczynska and Schwarzer, 2005). Moreover, individuals perceive all information before decision making, and perceived usefulness refers to anything related to the advantages of e-commerce adoption (Delafrooz et al., 2011; Cela and Cazacu, 2016; Liu et al., 2013). Lu et al. (2005) found that perceived usefulness and ease of use were strong variables in consumer willingness to adopt technology. They concluded that variables such as personal innovativeness and social influence must also be considered in determining consumer acceptance. Innovativeness showed a direct effect on ease of use and usefulness, which in turn impacted consumers' repurchase intention to adopt wireless Internet services via mobile technology.
2.1 E-commerce in Mongolia
Mongolia has a very young population that adopts technology quickly and loves shopping. Mongolian youth are comfortable using technology and prefer to shop online. Statistical data indicate that about 30% of Mongolians belong to the 0-18 age group, and 45% of them live in the capital city, Ulaanbaatar. The number of Mongolian people using the Internet has increased due to advances in information technology and the increased activity of e-commerce at present. In 2019, the number of Internet users in Mongolia was 5.5 million (including duplicate counts), which is 2.9 times more than in 2014. This sharp increase in the number of Internet users followed the introduction of the 3G network in 2014 and the LTE network in 2016.
In general, consumers can order and purchase products from a diverse range of websites for supermarkets, restaurants, grocery stores, food intermediaries, and delivery businesses, such as shoppy.mn, toktok.mn, songo.mn and avlaa.mn. It is estimated that Mongolian consumers are likely to shop online. Given that many online shoppers visit virtual stores on social platforms, this inevitable trend has been sped up by a drastic increase in the number of social media users (e.g. Facebook) in Mongolia.
The Mongolian e-commerce market is growing fast and is expected to see tremendous growth over the next few years. Moreover, the World Health Organization (WHO) urged countries to take advantage of e-commerce in 2021. There is a tendency in Mongolia to opt for online shopping to avoid the spread of the COVID-19 virus. The current Mongolian e-commerce industry has experienced unprecedented growth, with its total revenue increasing by over 60 percent during the last year (Mongol Bank, 2020). According to an e-commerce market survey conducted by the Mongol Bank in May-June 2020, 95% or 3,331 out of 3,520 individuals purchased products online and 5% (189) bought products for resale. 45% ordered through intermediaries in Mongolia, and 23% made direct purchases from foreign websites, such as Amazon, Alibaba and G-market. In terms of age, 27% of the consumers are 16-25 years old, 46% are 26-35 years old, 22% are 36-45 years old, and the remaining 5% are over 46 years old. E-commerce is dominated by young women (66%). Even though e-commerce is well established in developed countries, it is still at an early stage in Mongolia; hence, there is a need to discover which factors contribute to its adoption.
Hypothesis Development
Across various disciplines, innovativeness is described differently. A widely accepted definition among researchers is the degree of early acceptance of innovation (Agarwal and Prasad, 1999). Rogers (1995) defined innovativeness as the degree to which an individual adopted an innovation before others did. Several studies found a direct and positive relationship between consumer innovativeness and purchase intention. The existing literature indicates that innovativeness has a positive relationship to customer repurchase intention with self-service technologies (Chen, 2008). Furthermore, Zhang et al. (2011) found a direct relationship between personal innovativeness and attitude toward using information technologies. Due to the novelty of the e-commerce atmosphere in Mongolia, innovators are expected to be the first individuals to purchase products online. The belief that e-commerce is a new way of buying goods that could offer benefits such as lower prices, convenience, and wider choice would tend to attract more people with a high level of innovativeness than people with a lower level. Agarwal and Karahanna (2000) added personal innovativeness as a new individual-difference construct to Davis' original TAM model and hypothesized that individuals with higher levels of personal innovativeness are expected to develop more positive perceptions about an innovation in terms of advantage, usefulness, and compatibility, and to have more positive intentions toward the utilization of a new information technology.
It is posited that individuals who have the ability to adopt new ideas and changes have a higher tendency to evaluate e-commerce systems favorably. This leads to the following hypotheses:

H1: Personal innovativeness has a positive effect on attitude in adopting an e-commerce platform.
H2: Personal innovativeness has a positive effect on the intention to adopt an e-commerce platform.
H3: Personal innovativeness has a positive effect on perceived usefulness in adopting an e-commerce platform.

Perceived usefulness is one of the cognitive factors that determine the acceptance of information technology, according to TAM.
Perceived usefulness is an individual's perception that using new technology will enhance or improve her or his performance (Davis, 1989; Davis, Bagozzi, and Warshaw, 1989). In the technology acceptance model, an individual's intention towards new technology is strongly affected by their perceived usefulness and perceived ease of use, which in turn lead to the intention to use new technology. Furthermore, Kurnia and Chien (2003) found that both perceived usefulness and attitude positively affect the intention to buy products online.
Social influences refer to perceived pressures from social networks to make or not to make a certain behavioral decision. In sociology, social network effects have been used to explain and understand a variety of organizational behavior phenomena, such as commitment and satisfaction (Krackhardt and Porter, 1985). Social influences have also been regarded as a critical element in the innovation diffusion literature (Cooper and Zmud, 1990; Klonglan and Coward, 1970; Laudon, 1985; Triandis, 1971). Support from influential others has an important impact on what action a potential adopter chooses to take, because individuals adapt their attitudes, behaviors, and beliefs to their social context (Salancik and Pfeffer, 1978). This study explores how perceived usefulness affects an individual's intention to adopt an e-commerce platform. Based on the above theoretical background, the following hypotheses are proposed:

H4: Social influence has a positive effect on perceived usefulness in adopting an e-commerce platform.
H5: Perceived usefulness has a positive effect on attitude in adopting an e-commerce platform.

Attitude has been one of the key variables for adopting e-commerce (Chen and Tan, 2004; Richard, 2005). Quevedo-Silva et al. (2015) discovered that attitude had a positive relationship with the intention to purchase products online among Brazilian shoppers. This finding is also confirmed by Loketkrawee and Bhatiasevi (2018), who claimed that attitude has a strong influence on consumers' intentions to use online grocery shopping.
Consequently, it can be considered that the more positive the attitude of an individual to e-commerce, the greater will be the willingness to adopt an e-commerce platform. Based on the above theoretical background, the following hypothesis is proposed:

H6: Attitude has a positive effect on the intention to adopt an e-commerce platform.

Perceived risk can be categorized into two significant types: behavioral risk and environmental risk (Park and John, 2010). Behavioral risk essentially happens as a consequence of online retailers' actions, as retailers generally aim to gain multiplied advantages from the online shopping technique. Product risks typically pertain to consumers' reflection on time spent online, attaining a pleasant feeling and contemplating the value of the products or services (Park and John, 2010). Environmental risk is the outcome of the emotional and spontaneous deliberation that takes place during engagement in a purchasing action, which usually involves online shopping between an online retailer and a consumer (Park and John, 2010). Online shopping implies financial and security risks, because it is difficult to control the transaction. It is imperative for decision makers to scrutinize consumer behavior in connection with risk due to the high uncertainty in online transactions. According to Brettman (1973), decision makers must develop strategies in order to decrease individual perceived risk. Perceived risks also influence perceived usefulness: for instance, when consumers perceive risks such as product failure, they perceive the product as not useful for them. According to Huotilainen and Tuorila (2005), people are suspicious of new technologies and have greater trust in production. Furthermore, Siegrist et al. (2007) suggest that perceived risk has a negative correlation with willingness to buy a product. This means that the higher the risk, the lower the acceptance.
This has to do with the usefulness of the technology people experience. Therefore, this leads to the following hypotheses:

H7: The higher the perceived risks, the lower the perceived usefulness.
H8: The higher the perceived risks, the lower the attitude to adopt an e-commerce platform.
H9: The higher the perceived risks, the lower the intention to adopt an e-commerce platform.
Research methodology 3.1 Objectives of the Study
This study investigates the e-commerce adoption behavior of Generation Z (gen Z) in Ulaanbaatar, Mongolia using 332 valid responses. The data were specifically examined and assessed to help the reader understand the influence of young people's innovativeness, social influence, perceived usefulness, and perceived risk on e-commerce adoption in Mongolia. The target population of this study was therefore gen Z living in Ulaanbaatar, Mongolia.
Research Design
This study was based on primary data collected using a quantitative approach. The quantitative part was based on standardized questionnaires and involved 332 participants.
Data Collection Procedure
This study was conducted between 21 October and 2 November 2020. A randomly selected sample of 400 Mongolian people participated in this research. A total of 400 questionnaires were distributed, and 332 were returned. Sixty-eight questionnaires were not fully answered and were thus excluded from the analysis. Table 3.1 shows the frequencies and percentages of the study sample characteristics. Consequently, a total of 332 samples, constituting an 83% return ratio, were used in this study.
Measurement
There were 18 items used to measure six constructs on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). Personal innovativeness was measured using the items developed by Agarwal and Prasad (1998) and Hong et al. (2013). Perceived usefulness was evaluated by adapting items developed by Thompson et al. (1994), Davis (1989), and Hong et al. (2013). Perceived risk was estimated by items adapted from Cela and Cazacu (2016), while social influence was assessed using items from Corbitt, Thanasankit and Yi (2003). Attitude was measured using items developed by Wang and Liu (2009), and intention to adopt was measured using items adapted from Davis (1989), Hong et al. (2013), and Hwang (2005). The items are grouped under each variable in the questionnaire.
Data Analysis Procedure
Partial Least Squares (PLS) was employed to test the model and hypotheses. The model estimation was performed with Smart PLS 3.0 (Ringle et al., 2013). T-values were calculated using a bootstrapping procedure with 1000 resamples (Chin, 1998). Smart PLS path models have two sets of linear equations: the inner model (structural model) and the outer model (measurement model). The inner model specifies the relationships between unobserved or latent variables, and the outer model identifies the relationships between a latent variable and its observed manifest variables (Henseler et al., 2009).
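The bootstrap t-value logic used for significance testing can be illustrated with a generic sketch; the `estimate_path` callback stands in for a full PLS estimation and is purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_t(estimate_path, data: np.ndarray, n_boot: int = 1000) -> float:
    """t-value of a path coefficient: original estimate / bootstrap standard error."""
    theta = estimate_path(data)
    n = len(data)
    boot = np.array([estimate_path(data[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    return theta / boot.std(ddof=1)  # |t| > 1.96 -> significant at the 5% level
```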
Measurement Model
The general approach recommended by Gefen et al. (2000) for evaluating validity and reliability was followed. Table 3.3 presents the discriminant validity test, which was performed by cross-loading the data among the variables; it shows that all items exhibit high loadings (>0.7) on their intended constructs and that no item loads higher on another construct, which indicates strong discriminant validity. The aim of the discriminant validity analysis is to provide a clear assessment of whether the proposed construct has the highest relationship with its own indicators compared to the other constructs.
Convergent and discriminant validity were examined for the assessment of validity. The average variance extracted (AVE) is used as a criterion of convergent validity (Fornell and Larcker, 1981); if the AVE is more than 0.5, the construct has sufficient convergent validity. To measure internal consistency, composite reliability (CR) is used, and the value of CR must be higher than 0.7. The data show that CR is more than 0.7 and AVE is more than 0.5, so all constructs have convergent validity. Furthermore, the Fornell and Larcker (1981) criterion was used to assess discriminant validity: the AVE of each latent variable should be higher than its squared correlations with all other latent variables (AVE > φ²). The data indicate that all AVEs exceed the squared correlations, so all constructs have discriminant validity. Cronbach's alpha and composite reliability are used to measure internal consistency and reliability based on the interrelationship of the observed item variables. Table 3.4 illustrates that the data are reliable because both Cronbach's alpha and the composite reliability are above 0.7 (Eisingerich and Rubera, 2010). The average variance extracted (AVE) measures convergent validity; the data have adequate convergent validity if the AVE score exceeds 0.5.
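For concreteness, AVE and CR follow directly from the standardized outer loadings; the sketch below uses the standard formulas, and the loading values are made-up examples rather than the study's estimates:

```python
import numpy as np

def ave(loadings) -> float:
    """Average variance extracted: mean of squared standardized loadings (should exceed 0.5)."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances) (should exceed 0.7)."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1.0 - lam ** 2)))

# Hypothetical loadings for a three-item construct:
print(ave([0.82, 0.78, 0.88]), composite_reliability([0.82, 0.78, 0.88]))
```

The Fornell-Larcker check then simply compares each construct's AVE against its squared correlations with every other construct.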
Structural Model
As the outer model shows that the data are reliable and valid, the inner model can be evaluated as well. The structural model (inner model) specifies the relations among latent constructs. The significance of the path coefficients was tested by bootstrapping with 1000 resamples.
Results indicate that most hypotheses are supported. Table 3.4 presents the results of the hypothesis testing and the PLS structural model; the hypotheses were tested by path coefficients and significance levels (***p < 0.01, **p < 0.05, *p < 0.10), with a path considered significant if its t-value exceeds 1.96. The t-value of the PI-ATT path (11.552) exceeded this threshold, whereas the PU-ATT (0.567) and PR-ATT (0.555) paths did not. The proposed conceptual model is therefore supported by the empirical data except for hypotheses 5, 7 and 8; the results of the hypothesis testing are summarized in Table 3.5.

Next, the indirect effects between the independent variables and the intervening variable were analyzed using the Sobel test, which compares the coefficient score and standard deviation to determine a t-value; an indirect relationship exists between the variables if the t-value is above 1.96. The t-value of the PI-ATT-INT path (5.135) exceeds 1.96, so it can be concluded that there is an indirect effect between these variables. The other paths are below 1.96; consequently, there is no indirect effect among those variables. Individuals who have a higher degree of perceived risk have a higher preference to try or test products without a prior judgement.

The path analysis provides support for several hypotheses in this research. Hypothesis 1 is supported, as personal innovativeness has a positive impact on attitude; this is strongly supported by Amoroso and Lim (2015), Lu (2014), and Turan et al. (2015), who state that personal innovativeness has a positive effect on attitude. This finding illustrates that young individuals who can embrace new ideas and practices, as well as accept change, have a higher tendency to assess e-commerce systems favorably. Due to their creativity, imagination, and new ideas, gen Z can be characterized as innovative persons; they therefore show relatively higher curiosity in trying new products and evaluate products more than those who have lower levels of personal innovativeness. According to Agarwal and Prasad (1998), individuals can be categorized as innovative if they are keen to adopt an innovation. Amoroso and Lim (2015) argue that personal innovativeness demonstrates a system's ability to form individuals' motive to assess a product or service. As a result, it can be suggested that innovative individuals become more involved in evaluating a platform.
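As a side note, the Sobel test used above for the indirect effects has a simple closed form; the sketch below uses illustrative path coefficients and standard errors, not the study's actual estimates:

```python
import math

def sobel_t(a: float, se_a: float, b: float, se_b: float) -> float:
    """t-value for the indirect effect a*b, where a is the IV->mediator path
    and b is the mediator->DV path."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Example: a = 0.55 (SE 0.08), b = 0.48 (SE 0.07) -> t ~ 4.9 (> 1.96, significant)
print(sobel_t(0.55, 0.08, 0.48, 0.07))
```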
Hypothesis 2 is also supported, suggesting that people who have a higher level of innovativeness have a behavioral intention to use an e-commerce platform. Consequently, individuals with a high degree of personal innovativeness are more likely to adopt new products or services. Based on these findings, more consumers, especially young individuals, could be attracted by creating e-commerce platforms with more creativity that provide more benefits to them. Hence, business owners should focus on providing a wider range of products or services by designing websites with attractive layouts and several options and by generating user-friendly e-commerce platforms, while governments may encourage citizens by strengthening the ICT infrastructure. Business owners could create better platforms that provide more advantages to consumers thanks to better government support for ICT infrastructure. With a better ICT infrastructure, individuals will be able to access the Internet and e-commerce platforms more easily and engage in online shopping.
Gen Z is fundamentally interested in adopting new ideas, including products and services. Hypothesis 3 is supported, as personal innovativeness has a positive effect on perceived usefulness in adopting an e-commerce platform: people who have a higher level of innovativeness show a strong impact on perceived usefulness, which is strongly supported by Agarwal and Prasad (1998). Hypothesis 4 is also supported, as social influence has a positive influence on perceived usefulness. Hypothesis 5, however, is not supported, suggesting that the perceived usefulness experienced by e-commerce users could not inspire individuals to assess and adopt such platforms. Consequently, perceived usefulness may not be a major aspect that influences individuals in the decision-making process.
Hypothesis 6 is supported, as attitude has a positive effect on the intention to adopt e-commerce platforms. This finding indicates that the higher an individual's likelihood to assess a platform, the higher the preference towards e-commerce adoption (Turan et al., 2015). Individuals could be triggered to adopt a platform thanks to a higher level of positive responses. This result is also consistent with previous studies investigating individual behavior theories, such as the TAM, TRA, and TPB.
Hypotheses 7 and 8 (the higher the perceived risks, the lower the perceived usefulness; and the higher the perceived risks, the lower the attitude to adopt an e-commerce platform) are rejected. The results show that perceived risk does not influence how individuals perceive the usefulness of, or their attitude toward, e-commerce platforms. This finding indicates that gen Z prefer to evaluate a platform first before adopting it; if the response is less favorable, they will not intend to adopt.
Decision makers should be aware that they must create compelling products that trigger gen Z's curiosity, which could drive young individuals to adopt without first evaluating the e-commerce platform (Saadé et al., 2012). Finally, hypothesis 9 is supported: the higher the perceived risk, the lower the intention to adopt e-commerce. Based on the findings, the young generation in Mongolia does not perceive e-commerce platforms as too risky and, in their opinion, it is safe enough to perform some tasks through e-commerce systems. Mongolia is a developing country whose young generation has attitudes and intentions to adopt e-commerce technology similar to those of young people in developed countries.

Table 3.5 The results of the hypothesis testing
H1: Personal innovativeness has a positive effect on attitude in adopting an e-commerce platform. (Confirmed)
H2: Personal innovativeness has a positive effect on the intention to adopt an e-commerce platform. (Confirmed)
H3: Personal innovativeness has a positive effect on perceived usefulness in adopting an e-commerce platform. (Confirmed)
H4: Social influence has a positive effect on perceived usefulness. (Confirmed)
H5: Perceived usefulness has a positive effect on attitude in adopting an e-commerce platform. (Rejected)
H6: Attitude has a positive effect on the intention to adopt an e-commerce platform. (Confirmed)
H7: The higher the perceived risks, the lower the perceived usefulness. (Rejected)
H8: The higher the perceived risks, the lower the attitude to adopt an e-commerce platform. (Rejected)
H9: The higher the perceived risks, the lower the intention to adopt an e-commerce platform. (Confirmed)
Conclusion
The e-commerce adoption behavior of individuals in Ulaanbaatar, Mongolia was investigated in this study. The data were specifically examined and assessed to help understand the influence of an individual's perceived usefulness, personal innovativeness, social influence and perceived risk on e-commerce adoption. The findings indicate that an individual's perceived usefulness, personal innovativeness, social influence and perceived risk shape the behavior of e-commerce adoption; however, perceived usefulness and perceived risk did not influence attitude. Although the dimensions of TAM have been studied in previous research, no known researchers have empirically studied the dimensions of TAM in the context of Ulaanbaatar, Mongolia. Therefore, this study adds to the growing body of research on TAM by using a series of tests to assess the validity and reliability of the constructs.

First, the conceptual model has a good fit with the sampling data, and most of the hypotheses were empirically supported. The results indicate that personal innovativeness has a positive influence on attitude, which is also supported by Amoroso and Lim (2015), Lu (2014), Turan et al. (2015) and Diyan Lestari (2019). These findings show that people who are able to adopt new ideas and practices are more likely to evaluate e-commerce platforms.

Additionally, the results reveal that self-efficacy has a positive influence on attitude, indicating that the higher the confidence level in using an e-commerce platform, the higher the tendency to evaluate the platform (Ayub et al., 2017). When people believe that they can use an e-commerce system, they prefer to adopt the system (Pihie and Bagheri, 2013; Campo, 2011). It can also be concluded that people in Mongolia are relatively confident in operating e-commerce platforms. Higher self-efficacy motivates individuals to assess a platform and try to adopt it. For decision-makers, enhancing self-efficacy is essential. Different strategies can be developed to motivate, attract and educate users, starting with positive e-commerce platform campaigns, clear user manual guides, and user-friendly platforms.

The results also indicate that higher perceived usefulness obtained by e-commerce users could motivate individuals to evaluate and adopt e-commerce platforms, a result supported by Hamid et al. (2016) and Dohan and Tan (2013). Perceived usefulness may become a major aspect that influences individuals in the decision-making process. Hsu and Bayarsaikhan (2012) found that online shopping is perceived as more convenient by Mongolian consumers and provides numerous advantages. A higher degree of intention could result from a positive response, and with the rapid growth of online shopping, intention will depend on favorable judgments of a product (Khan et al., 2015). The results also suggest that business owners should create e-commerce platforms that provide more benefits and become more creative in attracting consumers.

Finally, the results show that attitude has a positive influence on intention: a higher level of positive responses could trigger individuals to adopt a platform. Nowadays, especially in the business field, understanding individual behavior can be an appropriate tool for formulating long-term business strategies.
In addition, by regulating content, creating awareness, and providing information and communication technology infrastructure, governments can enable individuals to generate value through e-commerce systems, protect consumers through appropriate regulation, and support e-commerce companies.
Managerial implication
It is vital for all e-commerce companies to empirically test the effect of their customers' attitudes and adoption behavior over and above a mere consideration of their performance. For managers, this section discusses the relevance of the findings to the practice of business management and marketing and makes recommendations for managerial actions. Such results have a practical effect on both businesses and customers. Companies need to be trained, and consumers should be made aware of the advantages of e-commerce companies; some of the training sessions may include videos or television content for their organizations and the public. This is important because, at this level of development, behavioral concerns are believed to play a key role in the adoption of e-commerce. Moreover, the program should be in the Mongolian language, easy to use for those who want to trade online, and built with a simple structure.
Limitation and future studies
Like many other studies, this study also has its limitations. One of the circumstances that may have negatively influenced the results is the limited number of participants; due to limited time, only 332 participants were surveyed in Ulaanbaatar city in Mongolia. By increasing the sample size and testing this model extensively, future research could critically evaluate the proposed framework and allow the findings to be generalized. Data were collected only in the capital city of Mongolia, which may not represent the whole country's population; future studies are highly recommended to select a more diversified group of individuals. Moreover, culture also affects attitude and adoption behavior differently. The final policy implication is that the Mongolian government should pay more attention, provide better access, and create better regulation for managing e-businesses and protecting consumers, given the high economic potential of e-commerce. Consequently, these can be considered as potential limitations and directions for future study.
"year": 2021,
"sha1": "3a5e49f512669c9b00102488ceff89ed4719a621",
"oa_license": "CCBY",
"oa_url": "https://iiste.org/Journals/index.php/EJBM/article/download/56041/57880",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "e13dcc11a05565be5193dc003ee90a0dbfca6c0f",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
245101941 | pes2o/s2orc | v3-fos-license | When Should Premature Ventricular Contractions Be Considered as a Red Flag in Children with Cardiomyopathy?
Premature ventricular contractions (PVCs) are common and generally benign in childhood and tend to resolve spontaneously in most cases. When PVCs occur frequently, an arrhythmia-induced cardiomyopathy may develop, requiring medical therapy or catheter ablation. PVCs are only rarely the manifestation of a cardiomyopathy. The purpose of this review is to provide some tips and tricks to raise the suspicion of a cardiac disease based on the presence and characteristics of PVCs in children.
Introduction
Premature ventricular contractions (PVCs) are classically considered common findings among children. Their prevalence varies across reports and case series and with the age of the children. PVCs are found on electrocardiograms (ECG) and/or 24-h ECG Holter monitoring in approximately 40% of children [1]. Notably, isolated PVCs are detected in about 10-15% of infants with structurally normal hearts and usually disappear in the first three years of life. In contrast, PVCs persist in 20 to 35% of healthy adolescents [2]. Unfortunately, to date, the mechanism underlying the spontaneous resolution of PVCs in childhood remains unclear.
Despite being historically perceived as benign entities, PVCs can sometimes be related to cardiac dysfunction, being the cause or also the consequence. The challenge is to differentiate between benign PVCs, "potentially dangerous" PVCs and malignant PVCs.
Arrhythmia-Induced Cardiomyopathy
Although PVCs in structurally normal hearts are classically considered benign, arrhythmia-induced cardiomyopathy (CMP) has been widely described in the literature.
The role of PVCs as a cause of left ventricular (LV) dysfunction in the adult population has been extensively recognized, and a PVC burden > 24% has been suggested as the cut-off for predicting the occurrence of PVC-induced CMP. Conversely, this issue is still a matter of intense debate in the pediatric population [3].
In 2010, Kakavand and co-workers reported that the average PVC burden associated with LV dysfunction in children was 36%, with recovery of cardiac function in all patients after successful treatment or spontaneous resolution of PVCs [4]. More recently, Bartels demonstrated that a PVC burden > 30% was significantly associated with the development of LV dysfunction in children [5].
Differently from these results, Guerrier et al. did not report any significant relationship between the PVC burden and LV systolic function in a cohort of patients <21 years old, nor with the PVC morphology, coupling interval or complex PVCs (couplets, triplets or short runs of nonsustained ventricular tachycardia (nsVT)) [6]. These findings were, however, challenged by a later report [7]. This further supported the data of Sun and colleagues, who found a correlation between frequent PVCs (>10/min) and a short coupling interval (RR' ≤ 0.6), both causing a marked reduction of the ejection fraction and cardiac indices [8].
Although a strong correlation with the PVC burden or other factors has not been demonstrated as the underlying mechanism of PVC-induced CMP, possible explanations could be (i) transient changes in intracellular calcium, (ii) the maintenance of an ionic flux during the abnormal contraction, and (iii) LV dyssynchrony, which may lead to LV dilatation and dysfunction, reduced cardiac output and coronary perfusion [4,9,10].
Based on these considerations, Drago et al. recommend the treatment of idiopathic PVCs with antiarrhythmic drugs or a catheter ablation, only in the presence of a depressed LV function or to reduce symptoms [11].
Malignant PVCs
Despite being most often considered benign, PVCs in children may also be the epiphenomenon of a CMP, sometimes preceding its overt onset, or the manifestation of a channelopathy.
The risk of life-threatening arrhythmias is a well-established concept in CMPs such as hypertrophic cardiomyopathy (HCM) and arrhythmogenic cardiomyopathy (ACM) [12,13].
As for idiopathic dilated CMP, isolated or complex PVCs develop quite late, and therefore, to date, the only parameter considered for arrhythmic risk and primary prevention is a marked reduction in the ejection fraction. Indeed, in current guidelines the absolute benefit of the implantable cardioverter-defibrillator (ICD) in nonischemic dilated cardiomyopathy (non-IDCM) is considered lower than in IDCM as the class of recommendation for ICD implantation has changed from class I to class IIa for symptomatic patients with non-IDCM and an ejection fraction ≤ 35% despite at least three months of optimal medical therapy [14,15].
Very recently, a greater arrhythmogenic risk has been observed in patients with CMP such as arrhythmogenic ventricular CMP (right and left dominant) as well as in end-stage HCM and LV noncompaction. CMP with an arrhythmic phenotype is currently classified as "arrhythmogenic" cardiomyopathy (ACM). This arrhythmic phenotype can occur even in the absence of overt heart failure, and the prognosis is not related to the severity of right ventricular (RV)/LV dysfunction and dilatation [16].
The common hallmark of these forms of ACM is a large amount of fibrosis with a subsequent propensity for cardiac arrhythmias, even in the initial stages. The left dominant forms may present with a "hypokinetic, nondilated phenotype", which means that a mild or more than mild hypokinesia may be present even in the absence of dilatation. This differs from the classic idiopathic DCM, where dilatation and markedly impaired systolic function are the key features of the disease and where the absence of extended fibrosis confers a significant arrhythmic risk only during the advanced stages of heart failure [17].
ACM, aside from arrhythmogenic right ventricular dysplasia, can also be due to ACM with left dominance and desmosomal mutations, as well as to CMP with mutations of non-desmosomal genes that interact with the function of desmosomes, such as TMEM43 and TGFβ, or of other genes (i.e., LMNA/C, FLNC, PLN), conferring an elevated arrhythmic risk. Furthermore, secondary CMPs characterized by fibrosis and arrhythmic risk (e.g., chronic myocarditis, Chagas' disease, cardiac sarcoidosis and amyloidosis, and histiocytic, mitochondrial or metabolic forms of CMP) can be classified in the same way.
Notably, channelopathies can also cause ACM. Pathologic variants in genes encoding for ion channel proteins and causing inherited arrhythmic disorders (e.g., long QT syndrome, short QT syndrome, Brugada syndrome, catecholaminergic polymorphic ventricular tachycardia, etc.) have been described as having a possible role in the development of ACM [18].
In all these cases, the family medical history, symptoms and PVC characteristics are of paramount importance. For this reason, a family history of cardiomyopathy, channelopathy or sudden cardiac death should raise concerns when evaluating children with PVCs.
Symptoms such as palpitations, chest discomfort, fatigue and syncope, especially during effort, should be carefully evaluated in the diagnostic work-up, as they often suggest an underlying cardiac disease. Moreover, if the PVC burden and complexity tend to progress and increase during follow-up, this should raise the suspicion of a cardiac disease [19].
Evaluation of Pediatric Patients with PVCs: Tips and Tricks
Our approach to the evaluation of pediatric patients with PVCs includes: ECG, 24-h ECG Holter monitoring, exercise testing, echocardiogram, and, where needed, advanced imaging and genetic testing.
The ECG is useful for detecting pathological findings that may associate PVCs with a CMP, including T-wave inversion in the precordial leads, low voltages in the limb leads, an epsilon wave, prolonged terminal activation of the QRS, and signs of ventricular hypertrophy or remodeling.
24-h ECG Holter monitoring is useful for quantifying the PVC burden; for evaluating complex PVCs, nsVT or sustained ventricular tachycardia (sVT); for assessing the prevalence of PVCs during daytime or nighttime; and for detecting any polymorphism.
Exercise testing is crucial for evaluating the response to physical activity, i.e., whether PVCs are suppressed, increase on effort, or persist or appear during the recovery phase. This latter finding is still a matter of debate when trying to classify PVCs as benign or not. In fact, the response to exercise could help in differentiating between malignant and benign PVCs. Classically, PVCs that are suppressed or decrease with exercise are considered benign, while PVCs occurring or increasing at a high workload have been considered a warning sign of heart disease. As is well known, the appearance of bidirectional polymorphic ventricular contractions/VT during exercise should raise the suspicion of catecholaminergic polymorphic ventricular tachycardia [20,21]. However, in specific situations the role of the exercise test is questionable. In this regard, Sequeira reported that, in young patients with suspected ACM, the absence or suppression of PVCs during exercise should not necessarily be considered a benign sign and that the role of exercise testing in this setting remains to be clearly established [22]. Notwithstanding this, some authors suggest further investigations in the presence of an uncommon PVC pattern or of rare, repetitive, polymorphic, short-coupled and exercise-induced PVCs [23].
During exercise testing, it is also important to rule out variations of the ST-T segment as an ischemic sign due to coronary disease (coronary anomaly or intramyocardial bridge) or, in the case of HCM, due to sub-endocardial ischemia.
Regarding the site of origin of PVCs, the outflow tract is classically considered the most common site in idiopathic/benign PVCs (typical left bundle branch block (LBBB) morphology with inferior axis). Differentiating PVCs arising from the right ventricular outflow tract (RVOT) from those of the left ventricular outflow tract (LVOT) may be challenging. An early transition in the precordial leads suggests an LVOT origin.
PVCs with a wide QRS (>130 ms) and LBBB morphology and with a horizontal, intermediate or indeterminate axis suggest the right ventricular free wall as the site of origin. In this case, a more thorough investigation is needed, as such PVCs may be associated with structural heart muscle disease. Additionally, PVCs > 500/24 h with an LBBB non-RVOT morphology can be considered a major criterion for ACM diagnosis according to the new diagnostic criteria [24] (Figure 1). In children, a fascicular origin of PVCs (right bundle branch block (RBBB), narrow QRS, and superior axis when originating from the posterior fascicle or inferior axis when originating from the anterior fascicle) should be considered a reassuring feature. Conversely, PVCs with a wide QRS and atypical RBBB morphology and with a horizontal or indeterminate axis should trigger a thorough evaluation as they may be a sign of impaired LV function (fibrosis, cardiac masses, mitral valve prolapse).
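Taken together, the morphology criteria above amount to a small set of rule-based red flags. The sketch below encodes them schematically in Python; the field names, the reuse of the 130 ms cut-off for the RBBB pattern, and the function itself are hypothetical illustrations for orientation only, not a clinical decision tool.

```python
# Schematic encoding of the ECG "red flag" rules described in the text.
# Field names and the threshold reused for the RBBB pattern are assumptions;
# this is an illustration, not a diagnostic algorithm.

def pvc_red_flags(pvc):
    flags = []
    wide_qrs = pvc["qrs_ms"] > 130  # ">130 ms" is stated for the LBBB pattern
    if pvc["morphology"] == "LBBB" and wide_qrs \
            and pvc["axis"] in {"horizontal", "intermediate", "indeterminate"}:
        flags.append("suggests RV free-wall origin: investigate further")
    if pvc["morphology"] == "LBBB" and pvc["site"] != "RVOT" \
            and pvc["count_24h"] > 500:
        flags.append("major criterion for ACM per the updated criteria")
    if pvc["morphology"] == "atypical RBBB" and wide_qrs \
            and pvc["axis"] in {"horizontal", "indeterminate"}:
        flags.append("possible impaired LV function: thorough evaluation")
    return flags

example = {"morphology": "LBBB", "qrs_ms": 150, "axis": "indeterminate",
           "site": "free wall", "count_24h": 800}
print(pvc_red_flags(example))  # both LBBB-related flags fire
```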
As fascicular PVCs carry a benign prognosis and tend to disappear during childhood, some authors recommend the same follow-up approach as for PVCs of LBBB morphology, which conversely tend to remain unchanged with age, though CMP may develop over time in some cases despite a benign diagnosis at initial evaluation [28,29].
An "atypical" PVC morphology, less numerous PVCs, and repetitive, polymorphic and incessant VT can also be the manifestation of hamartomas, CMPs, cardiac tumors, myocarditis, and mitral valve prolapse [2,23] (Table 1). In children, a fascicular origin of PVCs (right bundle branch block (RBBB), narrow QRS, and superior axis when originating from the posterior fascicle or inferior axis when originating from the anterior fascicle) should be considered a reassuring feature. Conversely, PVCs with a wide QRS and atypical RBBB morphology and with a horizontal or indeterminate axis should trigger a thorough evaluation as they may be a sign of impaired LV function (fibrosis, cardiac masses, mitral valve prolapse).
As fascicular PVCs carry a benign prognosis and tend to disappear during childhood, some authors recommend the same follow-up approach as for PVCs of LBBB morphology, which conversely tend to remain unchanged with age, though CMP may developing over time in some cases despite a benign diagnosis at initial evaluation [28,29].
An "atypical" PVC morphology, less numerous PVCs, and repetitive, polymorphic and incessant VT can also be the manifestation of hamartomas, CMPs, cardiac tumors, myocarditis, and mitral valve prolapse [2,23] (Table 1). Two-dimensional transthoracic echocardiography is the first-line imaging modality for ruling out cardiac dysfunction, cardiac hypertrophy or other pathological conditions such as intracardiac masses (e.g., fibromas, rhabdomyomas, hamartomas, Purkinje cell tumors, etc.). Unfortunately, in several CMPs, including ACM, it is of very limited usefulness except for the initial stages.
Two-dimensional speckle tracking strain analysis may be useful for detecting a subtle reduction of the ejection fraction or segmental wall motion abnormalities, but further studies are warranted to validate these parameters as diagnostic tools [30,31].
If the suspicion of a malignant arrhythmia remains high, second-line tests are essential to rule out the presence of structural heart disease/CMP/myocarditis/cardiac masses.
Contrast-enhanced cardiac magnetic resonance (CMR) allows for the assessment of the RV/LV dimensions as well as global and regional systolic function, specifically looking at abnormal wall thinning, RVOT dilation, RV/LV enlargement, biventricular global dysfunction, regional wall motion abnormalities, and the presence of late gadolinium enhancement [32,33]. The importance of CMR is one of the crucial points of the updated "Padua criteria" for ACM [24], but its role is unquestionable in all other CMPs.
Finally, genetic testing is indicated in selected cases to confirm or refine the diagnosis and risk stratification of CMPs, especially if they are genetically determined, and to eventually enable an appropriate family screening [34] (Figure 2).
Treatment Options
As previously reported, benign PVCs should only be treated in the case of arrhythmia-induced CMP or to reduce symptoms [11]. Beta-blockers and class IC antiarrhythmic drugs are the preferred agents, whereas class III antiarrhythmic drugs may be used in selected cases. Catheter ablation may be considered in patients who are refractory to medical therapy or those with a progressive impairment of the LV function [11,25].
If PVCs are associated with channelopathies or CMPs, the treatment depends on the underlying disease mechanism and is mainly aimed at preventing sudden cardiac death. According to current guidelines, pharmacological treatment is the first-line approach, and catheter ablation may be indicated in selected cases with CMPs, taking into account the progressive nature of the disease. Finally, ICD implantation is often considered the only effective treatment for the prevention of sudden cardiac death in this particular patient population [35].
Conclusions
In children, PVCs are often a benign phenomenon and rarely the manifestation of a cardiac disease.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to Institutional and Research policies. | 2021-12-12T16:14:14.063Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "ada77debf1387e6de48339dc42ab537fab06470e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2308-3425/8/12/176/pdf?version=1639110794",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "adafff5ee9d4669a453b009042807cd68ca1196f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253020191 | pes2o/s2orc | v3-fos-license | Acetaminophen pharmacokinetics in infants and children with congenital heart disease
Abstract Background Acetaminophen is routinely used for perioperative analgesia in children undergoing major surgical procedures. There are few estimates of acetaminophen pharmacokinetic parameters in children with congenital heart disease, especially those with cyanotic heart disease. Aims The current study prospectively investigated differences in acetaminophen pharmacokinetics following surgery using cardiopulmonary bypass in children with cyanotic and acyanotic congenital heart disease. Methods Children (2-6 years, 9-23 kg) presenting for median sternotomy for Fontan palliation (cyanotic patients) or two ventricle surgical repair (acyanotic patients) were eligible for inclusion. A single intravenous dose of acetaminophen (15 mg/kg) was administered at the start of sternal closure after separation from cardiopulmonary bypass. The time-course of acetaminophen concentrations was described using non-linear mixed effects models. One- and two-compartment disposition models with first-order elimination were tested. Pharmacokinetic parameter estimates were scaled using allometry and standardized to a 70 kg person. Results There were 208 acetaminophen concentrations assayed from 30 children, 15 with cyanotic and 15 with acyanotic heart disease. A 2-compartment model best described acetaminophen PK. Parameter estimates (population parameter variability, PPV%; 95% confidence interval, CI) were clearance CL 15.3 L.h-1.70 kg-1 (22.2%; 13.8-16.7), intercompartment clearance Q 45.4 L.h-1.70 kg-1 (22.4%; 25.2-61.9), central volume of distribution V1 33.5 L.70 kg-1 (23.2%; 25.9-38.8), peripheral volume of distribution V2 32.1 L.70 kg-1 (21.7%; 25.9-38.8). Neither clearance nor volume parameters differed between cyanotic and acyanotic patients. Conclusions Acetaminophen pharmacokinetics were characterized using a 2-compartment model with first-order elimination following cardiac bypass surgery in children. Population pharmacokinetic parameter estimates were similar to other studies in children. No differences were detected between patients with cyanotic and acyanotic heart disease.
| INTRODUCTION
Opioid analgesics are a mainstay of therapy in the immediate postoperative period following congenital cardiac surgery in children. The benefits of multimodal analgesia with the administration of adjunctive agents include improved analgesia, a reduction in total opioid use, and decreased opioid-related adverse effects. 1,2 Acetaminophen is routinely used in many institutions for the perioperative management of pain in children undergoing major surgical procedures, including median sternotomy, cardiopulmonary bypass (CPB), and surgical repair of congenital heart disease (CHD). 1,3 There remain limited pharmacokinetic analyses for patients given acetaminophen who are undergoing repair of CHD, especially those with cyanotic congenital heart disease. [4][5][6] An understanding of acetaminophen pharmacokinetics (PK) in this population may be used to direct dosing using a target-concentration strategy that identifies a dose associated with the desired clinical effect while avoiding potential adverse effects. [7][8][9][10]

2 | METHODS

This study was approved by the Institutional Review Board at Nationwide Children's Hospital (STUDY00000766) and registered at clinicaltrials.gov (NCT04278625). Exclusion criteria included an allergy to acetaminophen, severe hepatic disease or other clinical contraindications to acetaminophen use, or the receipt of acetaminophen within 24 h of the procedure. Following written, informed consent, children between 2 and 6 years of age presenting for surgery for CHD requiring median sternotomy and CPB were enrolled. The patients were separated into cyanotic patients (Fontan palliation) or acyanotic patients (two ventricle repair).
After separation from CPB and completion of modified ultrafiltration, intravenous acetaminophen (15 mg/kg) was administered over 15-20 min. Following the administration of acetaminophen, blood samples were obtained at 7 time periods between 15 and 20 min, 30 and 40 min, 50 and 70 min, 80 and 100 min, 2, 4, and 6 h. No additional acetaminophen was administered during the six-hour study period.
For each sample, 1 ml of blood was obtained from the arterial cannula and placed in an EDTA (Ethylenediamine tetra-acetic acid) tube. The blood was promptly centrifuged at 3000 rpm for 10 min.
The plasma was then frozen at −80°C and assayed for acetaminophen at a later time. Acetaminophen assays were performed by NMS Labs (Horsham, PA 19044-2208) using high-performance liquid chromatography/tandem mass spectrometry. The lower limit of quantitation of the assay was 0.50 μg/ml. The precision of the assay was reported as an average of 1.8% with a range of 1.1%-2.5%.
| Pharmacokinetic modeling
Acetaminophen PK were investigated using 1- and 2-compartment models with first-order elimination. Models were parameterized in terms of elimination clearance (CL) from the central compartment, inter-compartment clearance (Q), and central and peripheral volumes (V1, V2). These parameter estimates were standardized using theory-based allometry to a typical 70 kg individual (Equation 1): [8][9][10]

P_i = P_STD × (W_i / W_STD)^EXP    (1)

where P_i is the parameter in the ith individual, W_i is the weight in the ith individual and P_STD (e.g., CL_STD, Q_STD, V1_STD, and V2_STD) is the parameter in an individual with a standard weight W_STD of 70 kg. The allometric theory-based exponent (EXP) was fixed at ¾ for clearance parameters and 1 for distribution volumes. 10 The influence of cyanosis on acetaminophen PK was assessed by adding a factor for cyanosis (F_CYAN) to clearance and volume. Population pharmacokinetic parameter estimates were obtained using nonlinear mixed effects models (NONMEM 7.5, ICON Development Solutions, Hanover, MD, USA).
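As a minimal numeric illustration of Equation 1, the sketch below scales the population estimates reported in the Results to an individual body weight. The 15 kg example child and the helper function are hypothetical; this is not the authors' NONMEM code.

```python
# Theory-based allometric scaling (Equation 1): P_i = P_STD * (W_i/W_STD)**EXP.
# Population estimates (per 70 kg) are taken from the Results section.

W_STD = 70.0  # kg, standard weight

POP = {            # name: (estimate per 70 kg, allometric exponent)
    "CL": (15.3, 0.75),   # elimination clearance, L/h
    "Q":  (45.4, 0.75),   # inter-compartment clearance, L/h
    "V1": (33.5, 1.00),   # central volume, L
    "V2": (32.1, 1.00),   # peripheral volume, L
}

def scale(weight_kg):
    """Return individual parameter values for a given body weight."""
    return {name: std * (weight_kg / W_STD) ** exp
            for name, (std, exp) in POP.items()}

params = scale(15.0)  # hypothetical 15 kg child
for name, value in params.items():
    print(f"{name} = {value:.2f}")
# CL comes out near 4.8 L/h and V1 near 7.2 L for a 15 kg child
```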
Keywords: acetaminophen, cardiopulmonary bypass, congenital heart disease, paracetamol, pharmacokinetics

What is already known about this subject:

Attainment and maintenance of therapeutic plasma concentrations of acetaminophen are required to provide supplemental analgesia following major surgical procedures.
Although the therapeutic margin of safety remains wide for acetaminophen, the pharmacokinetics and metabolism of medications may be altered in infants and children undergoing surgery for congenital heart disease using cardiopulmonary bypass.
What this study adds:
In patients with both cyanotic and acyanotic CHD, the pharmacokinetic parameters of acetaminophen were similar to those that have been previously reported when differences in size were accounted for using theory-based allometry.
| RESULTS
There were 208 acetaminophen concentrations available for analysis from 30 participants. The individual time-concentration profiles for those with cyanotic and acyanotic heart disease are shown in Figure 1. Demographic details for the study participants are shown in Table 1 with separation into patients with cyanotic and acyanotic congenital heart disease. A violin plot demonstrating the distribution of ages and weights is shown in Figure S1. Population pharmacokinetic parameter estimates are presented in Table 2, and between-subject variability in Table 3.
A factor for cyanosis on clearance and volume did not result in a significant decrease in the objective function value (OBJ) at the 0.05 level (i.e., ΔOBJ < 2.71, p > .05). A prediction-corrected visual predictive check for the final acetaminophen model is shown in Figure 2. Additional diagnostic plots are available in Figure S2. Individual model fits are shown in Figure S3. Prediction-corrected visual predictive checks demonstrating model performance for both cyanotic and acyanotic groups are presented in Figure S4. The NM-TRAN code for pharmacokinetic analysis is available in Appendix S1.
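To give a feel for what these estimates imply, the sketch below forward-simulates the fitted 2-compartment model for a hypothetical 15 kg child receiving 15 mg/kg over 15 min, using plain Euler integration. It ignores between-subject variability and is not the authors' NONMEM model file.

```python
# Forward simulation of the reported 2-compartment model (Euler method).
# Population estimates from Table 2, scaled with Equation 1; the 15 kg
# example weight and the reporting times are illustrative assumptions.

WT, W_STD = 15.0, 70.0
CL = 15.3 * (WT / W_STD) ** 0.75   # L/h
Q  = 45.4 * (WT / W_STD) ** 0.75   # L/h
V1 = 33.5 * (WT / W_STD)           # L
V2 = 32.1 * (WT / W_STD)           # L

dose, t_inf = 15.0 * WT, 0.25      # mg; infusion duration in h (15 min)
dt = 1e-4                          # h, Euler step
a1 = a2 = t = 0.0                  # compartment amounts, mg
report = [0.25, 1.0, 2.0, 4.0, 6.0]

while t < 6.0:
    rate_in = dose / t_inf if t < t_inf else 0.0
    da1 = rate_in - (CL / V1) * a1 - (Q / V1) * a1 + (Q / V2) * a2
    da2 = (Q / V1) * a1 - (Q / V2) * a2
    a1, a2, t = a1 + da1 * dt, a2 + da2 * dt, t + dt
    if report and t >= report[0]:
        print(f"t = {report.pop(0):g} h: C = {a1 / V1:.1f} mg/L")  # ~ug/mL
```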
| DISCUSSION
Acetaminophen is a common adjunct in the multimodal approach to perioperative pain management in infants and children, including those undergoing median sternotomy and surgery for congenital heart disease (CHD) using cardiopulmonary bypass (CPB). When therapeutic plasma concentrations are obtained, acetaminophen augments analgesia, and decreases opioid requirements without adverse effects on respiratory and gastrointestinal function.
Although dosing regimens have been explored in children, there are limited data regarding acetaminophen PK following surgery for CHD using CPB, especially in patients with cyanotic lesions. 16 The PK and metabolism of medications may be altered in infants and children undergoing surgery for CHD using CPB. Given the potential for covariates to alter acetaminophen PK following surgery for CHD using CPB, the current study focused on a single dose of acetaminophen in this patient population. Acetaminophen pharmacokinetics were characterized using a 2-compartment model with first-order elimination following CPB.
In patients with both cyanotic and acyanotic CHD, no difference was detected when compared with healthy patients without CHD.
Acetaminophen parameter estimates were similar to those that have been previously reported when differences in size were accounted for using theory-based allometry. 4,5,23 Patients in this study were greater than 2 years of age and therefore maturation of acetaminophen clearance was not investigated. Estimates of clearance were similar to those in adults, as expected based on previous studies in healthy cohorts of patients greater than 2 years of age, who were free of comorbidities.
The impact of acute or chronic comorbid conditions on drug pharmacokinetics should be considered in patients following surgery for congenital heart disease. Although half of our cohort had cyanotic heart disease, none had end-organ dysfunction that would alter hepatic metabolism of acetaminophen. In both groups (cyanotic and acyanotic), the postoperative course was uncomplicated, without the requirement for vasoactive support and with no evidence of end-organ effects from surgery, cardiopulmonary bypass, residual heart disease, hypoxemia, hemodynamic dysfunction, or systemic inflammatory response syndrome. In the general postoperative care of this patient population, consideration must be given to the development of severe hepatic dysfunction, which may lead to alterations in acetaminophen metabolism. 24,25 One previous study has evaluated the PK of acetaminophen following surgery for CHD using CPB. 16 The study cohort included 30 patients, 17 of whom had trisomy 21. Although seven patients had tetralogy of Fallot, no information was provided regarding their preoperative oxygen saturation and, in distinction to our study, no comparative analysis was performed evaluating patients with or without cyanosis. Comparison of our data with theirs is difficult because neither size (allometry) nor clearance maturation (age) was used as a covariate. 26 Although these authors noted that the clearance of acetaminophen was lower and the volume of distribution was higher in their cohort compared with those historically given acetaminophen after craniofacial surgery, comparison is again compromised by the lack of parameter and between-subject variability standardization in these studies. When these data are considered in the context of our study, we postulate that neither the type of CHD (cyanotic versus acyanotic) nor the perioperative impact of CPB and the usual post-CPB processes impact acetaminophen pharmacokinetics. Dosing regimens need not be altered from those commonly used during the postoperative period.
ACKNOWLEDGMENTS
This work was funded by the Heart Center Intramural Funding Program of Nationwide Children's Hospital. These data were presented at the 2022 Spring Meeting of the Society for Pediatric Anesthesia.
FIGURE 2
Prediction-corrected visual predictive check (pcVPC) for the acetaminophen pharmacokinetic model. Plots show median (solid) and 90% intervals (dashed lines). The left hand plot shows all prediction-corrected observed acetaminophen concentrations. Right hand plot shows prediction-corrected percentiles (10%, 50%, and 90%) for observations (gray dashed lines) and predictions (red dashed lines) with 95% confidence intervals for prediction percentiles (median, pink shading; 5th and 95th blue shading). | 2022-10-21T06:18:06.034Z | 2022-10-20T00:00:00.000 | {
"year": 2022,
"sha1": "34f1fb28c70cf3535a269f3ada1068d45f297de1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "2281cd86b1c6007da35df597a0e7f2169602eaac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201697718 | pes2o/s2orc | v3-fos-license | Simultaneous AdSV determination of Ga and In on the Hg(Ag)FE electrode in the presence of cupferron
A sensitive and selective method for the simultaneous determination of trace gallium and indium in natural water samples using adsorptive stripping voltammetry at the Hg(Ag)FE electrode was established. The optimum analytical conditions include 0.2 mol L−1 acetate buffer (pH = 5.3) and 4 × 10−4 mol L−1 cupferron. The calibration graph was linear from 5 × 10−9 to 5 × 10−7 mol L−1 for the simultaneous presence of indium and gallium. The detection limits for preconcentration time of 50 s were 1.6 × 10−9 mol L−1 and 1.4 × 10−9 mol L−1 for gallium and indium, respectively. Selectivity of the method was determined by investigating the influence of numerous different foreign ions. The interferences of surfactants and humic substances were minimized by preliminary mixing with resin. Analytical results of natural water samples analysis showed that the proposed procedure is suitable for direct environmental water analysis.
Introduction
The importance of the procedure for the simultaneous determination of Ga(III) and In(III) in environmental water samples results from an increased rate of production and utilization of these trace metals, which are used in high-tech applications. In the literature, indium and gallium are often referred to as technology critical elements [1]. Both of these elements have similar, often desirable, valuable properties and therefore play a great role in the high technology industry, mainly in the production of semiconductors and electronic devices. Gallium enjoys vast application in optoelectronics (e.g., LEDs), telecommunication, aviation, and many commercial and household items such as alloys, computers, and DVDs [2]. The biggest use of indium has been recorded in thin-film coatings in liquid crystal display screens (LCDs) used in computers and CD/DVD players, solar cells, electroluminescent lamps, and flat panel displays [3,4]. In recent years, gallium and indium alloys have been used for 3D printing with liquid metals; these alloys allow one to create structures by piling drops on top of each other and to create specific shapes [5]. The broad use of these metals leads to their continuous introduction into the environment. As their uses are similar, they often get into the environment from the same anthropogenic sources. The possible environmental and (eco)toxicological effects due to the application of In and Ga in high-tech are still being investigated. There is little information on the health effects of exposure to indium and gallium compounds in humans or animals. However, some cases of poisoning with these metals are known, and these metals are also carcinogenic agents [6,7]. Oral ingestion of GaAs and InAs can cause serious symptoms such as gastrointestinal discomfort, vomiting, coma, and even death in the case of acute poisoning. The consequences of chronic poisoning with these compounds include anemia, leukopenia, skin cancer, and other internal cancers [8]. In the near future, the demand for indium and gallium is expected to continue to increase, and thus the potential occupational exposure to the compounds of these metals attracts considerable attention [9]. Current toxicological data show that, with the exception of persons with heavy occupational exposure such as those employed in the electronics industry, problems for the general population are rather unlikely [10]. As concerns environmental problems, In(III) and Ga(III) can cause soil and water contamination and be harmful to living beings. The soil-plant system is largely dependent on the quality of environmental waters; therefore, it is necessary to obtain information about the concentration of these elements in environmental water samples. Thus, the demand for analytical techniques able to quantify these "technology critical elements" is still growing.
In the literature, various techniques for the simultaneous determination of Ga(III) and In(III) in environmental samples have been described. The vast majority of them were spectrometric methods such as electrothermal-atomization atomic absorption spectrometry [11,12], inductively coupled plasma mass spectrometry [13], spectrophotometry [14,15], and inductively coupled plasma optical emission spectrometry [16]. Because it is often advised to use two different procedures, preferably based on different techniques, to verify the correctness of the performed determinations, the purpose of our work was to develop a procedure for the simultaneous determination of Ga(III) and In(III) using stripping voltammetry, the most commonly used electrochemical method owing to its low cost, high sensitivity, and short measurement time. Among stripping voltammetric methods, one can distinguish anodic stripping voltammetry (ASV) and adsorptive stripping voltammetry (AdSV); AdSV was preferred here to obtain a lower detection limit. Up to now, multiple procedures, characterized by different sensitivities, for determining gallium and indium separately in environmental samples have been developed. The previously published adsorptive stripping voltammetric methods for the determination of gallium(III) and indium(III) in environmental samples are collected in Table 1 for gallium and Table 2 for indium, respectively.
Our group has previously described an adsorptive stripping voltammetric procedure for the simultaneous determination of Ga(III) and In(III) using bismuth as the working electrode [32]. However, the detection limits obtained with that procedure were unsatisfactory. Therefore, the purpose of our further research was to develop a procedure for the simultaneous determination of gallium and indium with lower detection limits. As can be seen in Tables 1 and 2, the lowest detection limits for both Ga(III) and In(III) determination procedures were obtained using a renewable mercury silver-based electrode (Hg(Ag)FE) as the working electrode. Therefore, this electrode was used in the presented work to develop a voltammetric procedure for the simultaneous determination of gallium and indium. The Hg(Ag)FE is a viable alternative to the hanging mercury drop electrode (HMDE) because it retains all the merits of a mercury electrode, such as a very low detection limit, while its construction significantly reduces toxicity, which is very important for the laboratory environment. Cupferron was chosen as the complexing agent for our experiments, as it is the most commonly used complexing agent in adsorptive stripping voltammetric determinations of gallium and indium separately (see Tables 1 and 2).
Apparatus
All electrochemical measurements were performed using a μAutolab analyzer (Utrecht, The Netherlands). A three-electrode configuration consisted of an Hg(Ag)FE working electrode, a platinum wire counter electrode, and an Ag/AgCl reference electrode (in saturated NaCl). The solutions were stirred with a magnetic stirring bar. The Pt electrode and the Ag/AgCl electrode were prepared in our laboratory. The Hg(Ag)FE electrode was purchased from MTM-ANKO, Cracow, Poland (the mercury film area was 7 mm2). All experiments were carried out at room temperature.
The Hg(Ag)FE construction is based on pulling up the silver wire electrode base into the mercury chamber placed in the electrode corpus and then pushing it back outside the electrode corpus into the analyzed solution just before voltammetric measurement. In this way, the silver wire base is covered with a new mercury film and in this form, it is ready for use. After measurement, the electrode must be refreshed and this is obtained through pulling up the silver wire with mercury film inside the electrode. During this step, the silver wire crosses over special O-ring seals and a precise wipe out takes place [33].
Reagents and solutions
Standard solutions of 1 g L −1 Ga(III) and 1 g L −1 In(III) were obtained from Merck (Darmstadt, Germany) and Fluka (Buchs, Switzerland), respectively. The solutions of Ga(III) and In(III) of lower concentrations were prepared every day by dilution of the stock solution as required. Cupferron (N-nitrosophenylhydroxylamine ammonium salt) was obtained from Merck (Darmstadt, Germany). A solution of 1 × 10 −2 mol L −1 of cupferron was prepared every day by dissolving 0.0155 g of the reagent in water in a 10-mL volumetric flask. The 1 mol L −1 acetate buffer (pH = 5.3) was prepared from Suprapur CH 3 COOH and NaOH obtained from Merck. Triton X-100, sodium dodecyl sulfate (SDS), and cetyltrimethylammonium bromide (CTAB) were purchased from Fluka (Buchs, Switzerland). Humic acid (HA) sodium salt was obtained from Aldrich. The river fulvic acid (FA) and natural organic matter (NOM) were obtained from the Suwannee River and purchased from the International Humic Substances Society. Rhamnolipids (biosurfactant) and Amberlite XAD-7 resin were obtained from Sigma (St. Louis, MO, USA). The resin was washed four times with triply distilled water and dried up at the temperature of 50°C. All solutions were prepared using triply distilled water.
Bystrzyca river sample preparation
Natural river water samples from the Bystrzyca river were collected with polypropylene bottles and then filtered through 0.45 μm Millipore membrane filters. The samples were kept at the temperature of 6°C and they were submitted to analysis without any pretreatment.
Standard procedure of voltammetric measurement
The standard voltammetric measurement was carried out in a solution containing 0.2 mol L−1 acetate buffer pH = 5.3 and 4 × 10−4 mol L−1 cupferron. The experiments were run in nondeaerated solutions with a volume of 10 mL.
The adsorptive voltammetric procedure consisted of the following main steps:
- Deposition step: two potentials were applied successively, −0.9 V for 20 s and −0.7 V for 30 s; during that time the Ga(III)–cupferron and In(III)–cupferron complexes were accumulated simultaneously on the Hg(Ag)FE working electrode as a result of adsorption, with the solution stirred.
- Registration of the voltammogram: after a rest period of 5 s, a differential pulse stripping voltammogram was recorded in quiescent solution while the potential was scanned from −0.5 to −1.2 V.
Intensities of the obtained peaks were directly proportional to the concentrations of Ga(III) and In(III) in the sample. The parameters of the differential pulse voltammetric measurement were as follows: scan rate 20 mV s−1 and pulse height 100 mV. The indium peak appeared at ~−0.65 V and the gallium peak at ~−1.05 V.
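For orientation, the settings above can be gathered into a single parameter set, as in the sketch below. The dictionary layout and the helper function are illustrative assumptions (they are not tied to the μAutolab control software); the values themselves come from the text.

```python
# Summary of the standard AdSV procedure as plain data (illustrative only).

ADSV_METHOD = {
    "supporting_electrolyte": "0.2 mol/L acetate buffer, pH 5.3",
    "cupferron_mol_per_L": 4e-4,
    "deposition_steps": [               # stirred solution
        {"potential_V": -0.9, "time_s": 20},
        {"potential_V": -0.7, "time_s": 30},
    ],
    "rest_time_s": 5,                   # quiescent solution
    "dp_scan": {"start_V": -0.5, "stop_V": -1.2,
                "rate_mV_per_s": 20, "pulse_mV": 100},
    "expected_peaks_V": {"In(III)": -0.65, "Ga(III)": -1.05},
}

def analysis_time_s(method):
    """Rough time budget per voltammogram: deposition + rest + scan."""
    deposition = sum(step["time_s"] for step in method["deposition_steps"])
    scan = method["dp_scan"]
    window_mV = abs(scan["stop_V"] - scan["start_V"]) * 1000
    return deposition + method["rest_time_s"] + window_mV / scan["rate_mV_per_s"]

print(f"~{analysis_time_s(ADSV_METHOD):.0f} s per measurement")  # ~90 s
```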
After a single voltammetric measurement, the mercury film was refreshed by pulling the silver wire, on which the mercury film is formed, back into the center of the electrode. During that stage, the silver wire crossed the special O-ring seals and was precisely wiped. Afterwards, before each subsequent measurement, the silver wire was pushed outside the electrode corpus through the mercury compartment and a new mercury film was created [33].
Procedure of mixing with resin
In the case of analysis of real samples whose matrix is rich in organic substances, such as surfactants and/or humic substances, it is advisable to remove the organic substances before the standard voltammetric measurement because they may interfere with it. To do this, the voltammetric analysis of real samples should be preceded by mixing the analyzed sample with Amberlite XAD-7 resin according to the following scheme. The analyzed sample solution, 4 mL of 1 mol L−1 acetate buffer pH = 5.3 and an adequate volume of triply distilled water (so that the final volume of the solution was 10 mL) were added to a glass vial, and 0.5 g of XAD-7 resin was inserted. Then, the prepared solution was stirred for 5 min using a magnetic stirring bar. During that time, the organic substances were adsorbed on the resin surface, while indium and gallium ions remained in the solution. After sedimentation of the resin, 5 mL of the solution was pipetted into the electrochemical cell. Next, 400 μL of 1 × 10−2 mol L−1 cupferron and 4.6 mL of triply distilled water were inserted consecutively into the electrochemical cell so that the desired concentrations were obtained (0.2 mol L−1 acetate buffer pH = 5.3 and 4 × 10−4 mol L−1 cupferron). Finally, the voltammetric measurement was performed as described in the previous chapter.
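As a quick check of the dilution arithmetic in this scheme, the short sketch below verifies that the stated volumes reproduce the target concentrations in the cell; the variable names are illustrative.

```python
# Dilution check for the resin-pretreatment scheme (values from the text).

c_buffer_vial = 1.0 * 4e-3 / 10e-3    # 4 mL of 1 mol/L buffer in 10 mL -> 0.4 mol/L
aliquot_L = 5e-3                      # 5 mL of supernatant into the cell
cell_L = aliquot_L + 0.4e-3 + 4.6e-3  # + 400 uL cupferron + 4.6 mL water = 10 mL

c_buffer_cell = c_buffer_vial * aliquot_L / cell_L   # 0.20 mol/L
c_cupferron_cell = 1e-2 * 0.4e-3 / cell_L            # 4.0e-4 mol/L

print(f"acetate buffer in cell: {c_buffer_cell:.2f} mol/L")
print(f"cupferron in cell:      {c_cupferron_cell:.1e} mol/L")
```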
Results and discussion
In order to develop a new procedure for the simultaneous determination of In(III) and Ga(III) by the AdSV method, the measurement conditions, such as:
- the pH and concentration of the supporting electrolyte,
- the concentration of the complexing agent,
- the potential and time of accumulation of the indium and gallium complexes at the working electrode surface,
had to be chosen so as to obtain optimal signals concomitantly for both analyzed elements, considering the height and shape of the peaks and their separation on the voltammogram. Previous studies showed that Ga(III) and In(III) form electrochemically active complexes with cupferron, which allow voltammetric determination of these elements with a low limit of detection [19,25,26,32]. The choice of the working electrode was directed to provide the best sensitivity, and as is known, this can be obtained using mercury electrodes. However, it is also important that the applied electrode should not cause toxic effects. The Hg(Ag)FE electrode is much less toxic than the HMDE as a consequence of the lower amount of mercury; besides, we are dealing with an amalgam, not pure mercury as in the case of the HMDE. Therefore, the current article presents the AdSV method applied for the simultaneous determination of In(III) and Ga(III) using cupferron as a complexing agent and a renewable mercury silver-based electrode as the working electrode.
Effect of pH and concentration of supporting electrolyte
In previously developed voltammetric procedures using adsorptive metal accumulation at the working electrode surface in the form of a complex with cupferron, an acidic environment was used. Thus, based on the available data concerning the separate determination of In(III) and Ga(III) by the AdSV method, an acetate buffer was chosen as the supporting electrolyte most suitable for the formation of both the Ga(III)–cupferron and In(III)–cupferron complexes. Then, the focus was on choosing the appropriate pH of the acetate buffer. The measurements were carried out at the same concentration of each of the tested buffers, equal to 0.2 mol L−1, in the pH range from 3.6 to 5.6. The obtained results showing the effect of pH on the height of the indium and gallium peaks are presented in Fig. 1. As can be seen, the highest signals for both indium and gallium were undoubtedly obtained for pH = 5.3 and above, so the acetate buffer of pH 5.3 was chosen as the optimum one. The effect of acetate buffer concentration on the indium and gallium peak currents was studied in the range from 0.1 to 0.4 mol L−1. It was observed that both peak currents increased with the increase of buffer concentration up to 0.2 mol L−1, whereas at higher concentrations they remained unchanged. Consequently, 0.2 mol L−1 acetate buffer (pH = 5.3) was used as the supporting electrolyte in further studies.
Effect of cupferron concentration
Changing the concentration of the cupferron chelating agent can also have an enormous influence on the sensitivity of indium and gallium determination. The effect of cupferron concentration on the values of AdSV indium and gallium peak currents was studied in the range from 1 × 10 −5 to 1 × 10 −3 mol L −1 . The indium peak appeared at the concentration of cupferron equal to 1 × 10 −5 mol L −1 and increased with the increase of cupferron concentration up to 2 × 10 −4 mol L −1 . At concentrations of cupferron higher than 6 × 10 −4 mol L −1 , the peak of indium slightly decreased. Whereas, the gallium peak appeared at the concentration of cupferron 5 × 10 −5 mol L −1 and increased upon increasing the concentration of cupferron to 4 × 10 −4 mol L −1 and then it remained unchanged. So the concentration of cupferron equal to 4 × 10 −4 mol L −1 was used as the optimum concentration for both determined elements. We also noticed that with increasing concentration of cupferron, both peaks were moving towards more negative potentials. The influence of cupferron concentration on the indium and gallium peak currents is presented in Fig. 2.
Conditions of accumulation potential and time for the Ga(III)–cupferron and In(III)–cupferron complexes
In order to assess the influence of accumulation potential directly on analytical results, the adsorptive stripping response of gallium and indium was studied in the solution containing 0.2 mol L−1 acetate buffer pH = 5.3, 4 × 10−4 mol L−1 cupferron, and 1 × 10−7 mol L−1 Ga(III) and In(III). The main goal was to obtain high and comparable gallium and indium signals at the same concentrations. The potential was examined in the range from −1.0 to −0.4 V with a fixed deposition time of 50 s. It was observed that upon changing the accumulation potential of Ga(III)–cupferron and In(III)–cupferron towards more positive values, both obtained peaks were higher. Considering that the gallium peak increased only up to the potential of −0.7 V and was stable thereafter, this potential was pre-selected. It was found that the accumulation potential did not influence the separation of the examined peaks in the entire examined range. Next, the total accumulation time was tested in the range from 10 to 70 s. The values of the voltammetric peak currents increased almost linearly with increased total accumulation time up to 50 s both for gallium and indium, and then they were constant. As our goal was to obtain comparable gallium and indium signals at the same concentrations, and at the accumulation potential of −0.7 V the signal of In(III) was bigger than the signal of Ga(III), the next step was to investigate whether running the accumulation step at two different potentials could allow us to obtain comparable gallium and indium peak heights for the same concentrations. Therefore, apart from the accumulation potential of −0.7 V, an additional potential applied to the working electrode was added. The optimization of these parameters was carried out by changing the first potential (starting from −1.0 V) and the duration of each step; the combination finally adopted in the standard procedure was −0.9 V for 20 s followed by −0.7 V for 30 s, which gave comparable gallium and indium peak heights.
Selectivity
The selectivity of the Hg(Ag)FE electrode for the determination of gallium and indium was evaluated by introducing other metal ions as interfering species at various concentrations into solutions with a constant concentration of Ga(III) and In(III) equal to 1 × 10−7 mol L−1. The tolerable limit was defined as the amount of foreign ions that produced an error not exceeding 5% in the peak currents of Ga(III) and In(III). The vast majority of ions in excess relative to gallium and indium did not affect their voltammetric signals; however, in some cases a different effect on the determined elements was observed, which is why the maximum tolerable concentrations of foreign ions for Ga(III) and In(III) are shown separately in Tables 3 and 4, respectively. A big advantage of the proposed procedure is that even a 20-fold excess of Cd(II) relative to In(III) does not affect the indium signal. This is particularly important because Cd(II) is a serious interferent in anodic stripping voltammetric determination of indium, as its reduction potential is very close to that of indium. In the adsorptive voltammetric procedure using cupferron as a complexing agent, the reduction potential of cadmium is at about −0.58 V [34], while the indium reduction potential is at about −0.65 V, which ensures satisfactory separation of the peaks at the determined concentrations of these elements.
Interference of organic compounds
The proposed procedure was developed to analyze environmental water samples that naturally have an organic matrix. Consequently, in the course of this procedure, the interference generated by various organic compounds was precisely investigated and minimized. Among the numerous organic substances commonly present in natural water are surface-active substances and humic substances. First, the influence of surface-active substances on the analytical signals of gallium and indium in the proposed procedure was studied. Triton X-100 (a nonionic surfactant), SDS (an anionic surfactant), CTAB (a cationic surfactant), and rhamnolipid (a biosurfactant) were selected for examination. Complete results of the impact of different types of surface-active substances using the standard procedure and the procedure with mixing with the resin are presented in Table 5. As can be seen, under the influence of even very small amounts of Triton X-100 (0.5 ppm), CTAB (0.5 ppm), and rhamnolipid (1 ppm), the voltammetric signals of both gallium and indium were completely suppressed using the standard procedure; these substances clearly reduced both voltammetric signals. The anionic surfactant SDS did not show such a large interference, because at a concentration of 5 ppm it decreased the indium signal by only 60% and the gallium peak by 48%. Nevertheless, the preliminary mixing with the Amberlite XAD-7 resin significantly increased the maximum concentration of surfactant in the analyzed sample that did not exert any effect on the indium and gallium signals (for Triton X-100, 1.5 ppm for In(III) and 2 ppm for Ga(III); for SDS and rhamnolipid, 5 ppm; for CTAB, 2 ppm). The elimination of Triton X-100 interference was the least effective in relation to the indium signal, whose reduction reached almost 70% at a Triton X-100 concentration of 5 ppm. The next step was to examine the impact of commercially available organic matter, such as HA, FA, and NOM, on the voltammetric signals of gallium and indium. The measurements were performed similarly to those for surface-active substances, using the standard procedure and the preliminary mixing of the sample with the resin. The obtained results are presented in Table 6. As concerns indium, in the presence of humic substances the observed interferences were not as big as in the presence of the surfactants using the standard procedure. However, it should be noted that the influence of humic substances on the gallium signal was more noticeable than on the indium signal. In the case of all examined humic substances, the elimination of their interferences by the Amberlite resin was very effective (Table 6).
Application of the proposed procedures
In order to validate the proposed procedure, recovery tests were carried out by taking a fresh natural water sample from the Bystrzyca river (eastern Poland). The voltammogram recorded for this sample did not show any signals of Ga(III) and In(III), which indicated that the concentrations of gallium and indium were below the detection limit of the proposed procedure. So the analyzed samples were spiked with Ga(III) and In(III) at different concentration levels and the contents of these elements were determined using the standard addition method. Three replicate determinations gave average recovery values between 97.6 and 102.3% for In(III), with relative standard deviations between 4.7 and 5.5%, and average recovery values between 96.2 and 98.7% for Ga(III), with relative standard deviations between 4.4 and 5.3%.

Table 5. The influence of different surfactants on 1 × 10−7 mol L−1 In(III) and Ga(III) analytical signals using the standard procedure and the procedure with mixing with Amberlite XAD-7 resin. The relative Ga and In signals were determined relative to those obtained in the absence of surfactants; the symbol "-" means there was no signal.

The typical voltammograms obtained during this analysis are presented in Fig. 3.
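For illustration, quantification by the standard addition method reduces to a linear extrapolation, as in the sketch below. Only the spike levels mirror those in Fig. 3; the peak currents are fabricated example numbers, not measured data.

```python
# Standard addition sketch: fit i = slope*c_added + intercept; the unknown
# concentration is the magnitude of the x-intercept. Signals are made up.

import numpy as np

added = np.array([0.0, 5e-8, 1e-7, 2e-7])   # spike levels, mol/L (as in Fig. 3)
signal = np.array([2.1, 3.2, 4.2, 6.3])     # hypothetical peak currents, a.u.

slope, intercept = np.polyfit(added, signal, 1)
c_unknown = intercept / slope               # |x-intercept| of the fit

print(f"slope = {slope:.2e} a.u. per mol/L")
print(f"estimated sample concentration = {c_unknown:.2e} mol/L")
```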
Conclusions
The renewable mercury film silver-based electrode (Hg(Ag)FE) and cupferron were successfully proposed as an alternative for the simultaneous quantification of traces of Ga(III) and In(III) in one measurement by adsorptive stripping voltammetry. The main advantage of the procedure is the use of the more environmentally friendly Hg(Ag)FE as the working electrode, which is less toxic than the HMDE electrode and allows one to obtain comparable parameters. The application of the renewable mercury film silver-based electrode shortens the total time of measurements because, in the case of this electrode, electrochemical cleaning is not necessary, in contrast to film electrodes created electrochemically on glassy carbon such as the PbFE or BiFE. Another advantage of the proposed procedure is the fact that no deaeration of the solution is necessary, which makes it easy to use under laboratory and field conditions. The application of Amberlite XAD-7 resin made it possible to elaborate a simple and fast voltammetric procedure in which interferences from surface-active compounds were minimized. To prove its practical applicability, the proposed procedure was successfully applied to the quantification of indium and gallium in environmental water samples. The above-described procedure looks promising and can be recommended for monitoring the quality of environmental waters.

Fig. 3. In(III) and Ga(III) determination in the Bystrzyca river sample: (a) diluted two times; (b) as (a) + 5 × 10−8 mol L−1 In(III) and Ga(III); (c) as (a) + 1 × 10−7 mol L−1 In(III) and Ga(III); (d) as (a) + 2 × 10−7 mol L−1 In(III) and Ga(III).
Open Access This article is distributed under the terms of the Creative Comm ons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2019-09-02T13:47:39.134Z | 2019-09-02T00:00:00.000 | {
"year": 2019,
"sha1": "2b7d97c343352776a9ee5712af56bdcde0c9dc80",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11581-019-03212-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "2b7d97c343352776a9ee5712af56bdcde0c9dc80",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
238731097 | pes2o/s2orc | v3-fos-license | Common Genitourinary Fistulas in Rural Practice: Treatment and Management
Acquired genitourinary fistulas are common in rural practice. They are pathological communications between the urinary and genital tracts, or between either of the tracts and the gastrointestinal tract or skin. Vesicovaginal fistula is the commonest and most devastating. They may result from prolonged and obstructed labor; injuries during obstetric, gynecologic, pelvic and urologic procedures; circumcision; falls from heights; road traffic accidents; and female genital mutilation. They present as urinary leakage with a characteristic odor. Diagnosis is mainly clinical and is confirmed by dye tests, contrast radiography and endoscopy. Treatment is individualized according to anatomic site and etiology. Timing of repair is of the essence: delayed repair for obstetric injuries and early repair for focal injuries. A multidisciplinary team approach and cooperation are encouraged in the management of some of these cases. The sustenance of the two-way referral system is emphasized in cases beyond the scope of rural practice. When repairs are undertaken by skilled, compassionate fistula surgeons with attention to the principles of fistula management and surgical treatment, the success rate can approach 90%. Interposition of vascularized grafts has improved the success rate. The burden of this condition will be reduced through integration of rural practitioners in the preventive strategies of health education of the public and the girl-child, and through improvement of healthcare, education and transportation infrastructures.
Introduction
Genitourinary fistulas are abnormal tracts between the genital and urinary tracts. Abnormal tracts connecting the urinary system to any structure of the pelvic floor [1], the gastrointestinal tract or the skin are also regarded as urinary fistulas [2]. An obstetric fistula is an abnormal hole connecting the vagina to the bladder (VVF), the rectum (RVF), the ureter (UVF) or a combination of these, which leads to uncontrollable leakage of urine or feces or both through the vagina, and usually results as a complication of difficult labor. Urinary fistulas are severely debilitating conditions physically, socially and psychologically [3]. They present as a surprise, taking the patient and caring physician unawares. The commonest type, vesicovaginal fistula (VVF), is still very common in rural areas, especially in Northern Nigeria [4] and Ethiopia [5]. Thus, this condition is basically a rural disease. Rural areas are characterized by meager earnings, low education and poor infrastructure [6]. In the developing countries the attending healthcare worker may be a Traditional Birth Attendant (TBA), traditional healer, quack, midwife, medical officer, obstetrician and gynecologist, surgeon or urologist. In the context of this work, the rural practitioner is a qualified medical doctor practising in the rural area, who is available and accessible to those who suffer from genitourinary fistulas.
The questions are: "Will the integration of rural practitioners in the efforts towards elimination of obstetric fistulas reduce the prevalence and burden of the condition?" and "What roles will the rural practitioner play in the treatment and management of genitourinary fistulas?"
The true incidence of genitourinary fistulas in the developing countries is not known [7], but some authors have put the rate for VVF at 1-3 per 1000 deliveries [8], 3.5 per 1000 births [9] or 5-10 per 1000 deliveries [10]. In contrast, VVF is no longer common in the developed countries as a result of improved obstetric care, and results there mainly as a complication of pelvic surgery, malignancy and radiotherapy [11].
This chapter will dwell on fistulas caused by trauma, including obstetric and iatrogenic injuries. Its aims are to highlight the strategic position of rural practitioners in the prevention of genitourinary fistulas, to outline the benefits that will be derived from their education and training on the subject matter, and to suggest a framework for their roles in the treatment and management of these conditions.
Objectives
The objectives of this work are to: a. Rekindle attention to the burden of genitourinary fistulas in the rural areas.
b. Emphasize the importance of preventive strategies and stratify them for easy identification of roles and levels of participation by rural practitioners and specialized centers.
c. Empower the rural practitioner with information to identify and specify complex fistula varieties that require referral to specialized centers.
d. Prepare the rural practitioner to initiate informed early treatment and care for the genitourinary fistula patient.
e. Rekindle advocacy and solicit for regular fistula missions to reduce the prevalence and number in the waiting list.
f. Engage and train interested rural practitioners on effective preventive strategies and efficient fistula surgery, as they are more available and accessible to these rural fistula patients.

Table 2 summarizes the etiology of genitourinary fistulas encountered by the author in rural practice from January 2000 to December 2020. Two of the VVF cases were associated with big vesical calculi; one of them also had a vesicocutaneous fistula. Urethrovaginal fistulas are not common, as noted in Table 2. They were complications of vaginal hysterectomy and consequences of vaginal procedures by quacks and homeopaths.
Rectovaginal fistulas and other urinary fistulas are less common. RVF resulted mostly from trauma, and when it occurred during obstructed labor, it was associated with VVF. Urethrocutaneous fistulas in infants resulting from circumcision mishaps were not rare. These procedures were performed by traditional health attendants, hospital attendants, nurses, midwives and medical officers. The surgical residents at the Federal Medical Center Owerri, Nigeria have performed circumcision under the supervision of team consultants since 2000. The less commonly occurring vesicouterine fistula (VUF) and vesicocervical fistula are complications of difficult cesarean sections (CS) [16] and uterine rupture. When a urinary fistula occurs as a complication of treatment, the effect is devastating to the trained care giver, even though the propensity for medicolegal litigation is very low in the rural areas. The patient often stays isolated, withdrawn, miserable and depressed. The husbands and relatives of patients in my experience have been supportive and cooperative, in contrast to other reports, especially from northern Nigeria [3-10,18].
Contributory factors to this burden are poor transport infrastructure, lack of skilled medical personnel and a collapsed public healthcare delivery system [6]. Specialists in surgery and in obstetrics and gynecology show little interest in fistula surgery, and rarely practice in rural areas. Bad roads prolong the interval between the onset of labor and arrival at hospital, or make the journey impossible [7,9]. In southern Nigeria many roads are not passable during the peak of the rainy season, July-September. Brain drain affects developing countries seriously, as their trained healthcare professionals relocate or emigrate to Europe, America, Canada or Saudi Arabia for greener pastures [29,30]. In this situation, these hapless young pregnant women turn for obstetric care to the familiar, available and accessible traditional healers, quacks, traditional birth attendants and poorly trained midwives, whose services they can afford.
The etiologies listed in Table 2 include:
• repeat cesarean section;
• inappropriate obstetric care (labor at home or in the church);
• urinary bladder stone;
• vaginal foreign body;
• retained gauze after vaginal surgery;
• urethral and bladder foreign body.
Classification and pathogenesis of genitourinary fistulas
The anatomic classification of urinary fistulas is given in Table 2, and Figure 1 shows it graphically.
The exact pathological mechanism in the formation of obstetric fistula is not clear. However, compression of the maternal soft tissues of the bladder base, urethra, cervix and vagina, and of the rectum posteriorly, against the unyielding pubis and sacral spine during prolonged obstructed labor, with the resultant ischemia, epithelial necrosis and subsequent sloughing, has been postulated as the pathophysiologic process in the formation of obstetric fistulas by many workers in the developing world [4-10, 14, 16, 20-22, 24].
Arrowsmith et al. described obstetric fistula formation within the spectrum of the "obstructed labor injury complex" [20]. Urinary fistulas arising from surgical complications and from wounding by accidents or stabbing are focal injuries [7]. Gunshots are more complex, as they are associated with the phenomena of "tract cavitation and expansion" injuries [31]. Fistulas resulting from obstetric injury and high-velocity gunshot injuries are larger. Ischemia, erosion and migration may be responsible for the formation of fistulas by foreign bodies in the vagina, bladder or urethra, or by gauze retained during vaginal surgery.
Clinical presentation
Leakage of urine is the usual complaint. Discharge of feces from the vagina indicates a rectovaginal fistula, alone or in association with VVF. Genitourinary fistulas are associated with an offensive urine odor. There may be leakage of urine from the vagina, the anus or through a hole in the skin, depending on the type and location of the fistula. The patient may give a history of prolonged or obstructed labor preceding the leakage by 3 to 10 days in the case of VVF. A history of assisted vaginal delivery before the leakage may indicate VVF [17]. Cesarean section, hysterectomy or any other pelvic surgery may precede the urinary leakage by 10-14 days. VVF, UVF, VCF, VUF, VCuF and RUF may result from these obstetric and pelvic surgeries. The differential diagnoses of VVF include stress, urge and overflow incontinence. Pain is not usually associated with VVF, and urinary leakage in VVF may commence immediately after the catheter is removed.
VVF may present many weeks after pelvic surgery. A 65-year-old woman presented to the author with offensive vaginal discharge and urinary retention 10 weeks after vaginal hysterectomy by a gynecologist. It turned out to be a VVF resulting from eroding infected gauze that had migrated into the bladder and was pointing at the tip of the urethra. The gauze, probably used to pack away the bladder, must have been forgotten in the wound during the surgery. The patient may present with a referral letter indicating the definitive or provisional diagnosis. In developing countries difficult urinary fistulas are referred to the urologist or to fistula centers. Frequency, urgency, dysuria, vaginal discharge, bleeding or pain during coitus may be present. There may be irritation, rash or dermatitis and whitish crystal formation on the skin surrounding the fistula (Figure 2).
A history of accidentally falling astride a sharp object, a stab, or a gunshot injury with a penetrating wound of the perineum or suprapubic region may be elicited; leakage of urine from the anus may suggest VRF or RUF.
History
The history, reflecting the clinical presentation noted above, will guide the clinician towards the likely fistula he or she is dealing with.
Physical examination
A general examination should be performed, noting the nutritional state of the patient and comorbidities. In rural practice nutritional anemia is common, and these need to be addressed to enhance wound healing.
Pelvic examination
Inspection of the perineum for sinuses, fistulas or associated tears is followed by digital bimanual and bivalve speculum examination, which assists in identifying the fistula and provides the opportunity to note its location, size and number, and whether it is simple or complex. An idea of the inflammation, fibrosis and pliability of the tissue surrounding the fistula, and of the introitus and vagina, is gained during these examinations. Stenosis and fibrosis of the introitus and vagina sometimes complicate VVF [7,32].
Ongoing inflammation, infection and induration around the fistula are contraindications for immediate repair.
Dye test in VVF management
Indications

i. Confirmation or identification of small and hidden fistulas that cannot be verified by direct vision examination.
ii. To differentiate between VVF and UVF.
iii. To differentiate between a urogenital fistula and urinary incontinence.
Method
It can be performed in the treatment room or theater. Methylene blue or indigo carmine is mixed with sterile water and instilled into the urinary bladder under gravity, without spillage. A sterile gauze or cotton ball is placed at the vault and at the mid and distal vagina. The patient is asked to walk about and return for inspection after 30 minutes.
Interpretations
• If the gauze at the vault is wet but not stained, a ureterovaginal fistula is suspected.
• If the gauze at the vault is stained, a high VVF is suspected.
• If the gauze at the mid vagina is stained, a mid-vesicovaginal fistula is suspected.
• Staining of the most distal part of the gauze in the distal vagina, near the introitus, suggests urinary incontinence.
• If the staining of the gauze at the distal vagina spares the most distal portion, a urethrovaginal fistula is suspected.
• Where UVF is strongly suspected, the vagina is carefully cleaned and the test is performed again with fresh gauze in the vagina and intravenous indigo carmine given. Blue staining of the proximal end of the gauze confirms UVF. An intravenous urogram can also be used to confirm it, where available.
Cystoscopy
Ideally, cystoscopy should be performed for patients presenting with VVF. However, in the setting of rural practice in the developing countries of Africa, such necessary services are not always available. The author uses a hand-held, battery-operated portable cystoscope (Figure 3) to scope urinary fistula patients whenever necessary in the rural setting. It is very cheap to operate. Apart from visualizing the fistula, it helps in assessing its location and size, whether it is simple or complex, and the location of the ureteric orifices in relation to the fistula. This is important in planning and choosing the approach for the repair [2,32].
Imaging
Imaging may be needed, but most hospitals in rural practice lack imaging facilities. Patients who can afford contrast studies are referred to facilities that have them, to access studies such as intravenous urogram with cystogram in UVF and VVF; retrograde urethrogram (RUG) (Figure 4) and micturating cystourethrogram (MCUG) in RUF, urethrovaginal, urethrocutaneous and vesicocutaneous fistulas; barium enema and vaginography in RVF; and contrast CT scan. Many of our patients are poor and cannot afford these tests. In the rare situation where the fistula cannot be identified with office procedures despite a suggestive history, Rony A. Adam [32] described a process in which the patient is given phenazopyridine (Pyridium) and wears a series of gauze pads at home over a long period. The gauze balls are placed separately in different plastic bags and brought for inspection later. Patients are instructed on the proper conduct of the test so as not to contaminate the gauze during insertion.
Prevention
Urinary fistulas, especially obstetric ones, are associated with misery and isolation when they occur, and are expensive and difficult to treat. Healthcare financing is low in many developing countries [33] and may not be able to accommodate the management of genitourinary fistulas. Nigeria is perceived to bear the world's heaviest burden of obstetric fistulas, followed by Ethiopia, Uganda and Sudan [34]. In Nigeria, 12,000 fresh cases occur annually, while 150,000 in the pool await repair [35]. Only 43% of births in Nigeria are attended by skilled medical personnel [36]. Thus, some of these common genitourinary fistulas are avoidable. Hence, some authors, the National Strategic Framework for the Elimination of Obstetric Fistula in Nigeria, the Fistula Foundation and professional groups have recommended preventive strategies for genitourinary fistulas [34,36]. The rural area is the veritable ground for this, and rural practice is one of the best channels to use.
Three perspectives can be recognized: primary, secondary and tertiary.
Primary prevention
The goal is to remove or stop the factors known to cause or contribute to urinary fistula formation, through health education and improvement of community health. Community healthcare stakeholders such as traditional rulers, village heads, women, youth and religious leaders, teachers, traditional birth attendants, traditional healers and heads of healthcare facilities should be involved in this program. Emphasis should be on discouraging girl-child marriage, early pregnancy, delivery at home or in the church, conducting labor for a long time before referring to a superior facility, and female genital mutilation. The community should be educated to embrace deliveries in suitable and efficient healthcare facilities. The girl-child should be encouraged to go to school and become able to comprehend the dangers of early marriage and pregnancy. Government should upscale health and transportation infrastructure to ensure timely, comprehensive emergency obstetric care for all women, as obtains in developed countries where the condition has been eradicated. There should be effective training of midwives to conduct safe vaginal deliveries, and of medical doctors to conduct safe vaginal deliveries, cesarean sections, and gynecologic and pelvic surgeries; regular workshops for public and private primary healthcare staff to monitor and recognize prolonged labor for quick referral; and a multidisciplinary team approach for anticipated difficult cases. It can be rewarding to invite an experienced specialist or expert to the local center; the author has been invited by gynecologists and medical officers to join their surgeries on more than 35 occasions. Part-time or visiting appointments can be offered to such experts.
Secondary prevention
The goal is to recognize and repair injuries caused to the urinary and genital tracts during surgery, and to offer early attention and treatment to genitourinary injuries from other causes. This includes the use of appropriate suture material and size in surgery on the urinary tract, and safe surgical conduct. Good operating light is very important; many theaters in rural practice use improvised theater lamps [6]. The author uses an LED headlight gear (Figure 5) to augment whatever light is available. It is pertinent for the pelvic surgeon to appreciate the applied anatomy of the pelvic structures, and to note that the trigone is situated at the anterior aspect of the upper third of the vagina, and the cervical os is at the base of the trigone (interureteric ridge).
Tertiary prevention
This involves interventions geared towards the prevention of complications from urinary fistulas: treating infections, skin care, nutritional support, correction of nutritional deficiencies and anemia, and social support and community reintegration to avert depression, abandonment and divorce. It also involves advocacy for bilateral cooperation and collaboration to sponsor obstetric fistula repairs and training for more fistula surgeons. Repairs should be undertaken by skilled fistula surgeons. The Fistula Foundation, in its recent report, stated that it has provided 9,464 fistula repair surgeries to Nigerian women since 2010 [36].
Principles of fistula management
In addition to thorough evaluation of the genitourinary fistula patient, the following management principles are important: adequate nutrition, successful treatment of infection, effective urinary drainage, removal or bypass of any distal obstruction, and exclusion of any associated malignancy [2,32,37]. Adherence to the principles of surgical repair of urogenital fistulas is paramount to a successful repair [2, 4, 5, 7-10, 14, 32, 37]. These include optimal operating light, adequate exposure of the fistula, excision of devitalized and ischemic tissue, removal of foreign bodies from the fistula, careful dissection keeping to the anatomical plane between organ cavities, use of small-sized delayed absorbable sutures on small atraumatic needles, watertight closure, use of well-vascularized flaps for repair and support, multilayer closure, non-overlapping tension-free suture lines, stenting of the urinary tract, adequate drainage after repair, prevention and treatment of infection, and adequate hemostasis.
Conservative method
Conservative treatment, though not popular, may be attempted when the patient presents early and while waiting for infection and inflammation to subside. The author has recorded success in a few cases with fistulas ranging from 0.5 cm to 1.5 cm (Table 4). Small fistulas with oblique tracts have been reported to be amenable to conservative management [2].
Fistula repairs should be undertaken by a "tutored and trained fistula surgeon" who has a passion to ameliorate the suffering of patients. Some medical officers belong to this group [38]. The best opportunity to achieve a successful repair is at the first attempt [2-8]. There should be no room for trial and error. The trainee surgeon should be assisted and monitored by experienced fistula surgeons. In rural surgery for VVF, the best outcomes do not always come from trained specialists such as obstetrician-gynecologists, general surgeons, urologists and plastic surgeons.
Timing of repair

In rural practice, obstetric fistula is the commonest, and patients arrive late. In the case of those who arrive early, we allow 8-12 weeks. If the fistula was iatrogenic or resulted from any other focal injury, we close the fistula as soon as infection is controlled. Controversies surround the timing of repair of VVF [4,8,14,37,38].
Approach

Whoever is undertaking VVF repair must be familiar with both vaginal and abdominal approaches, techniques and maneuvers. One approach may not be suitable for every case [40]. Most surgeons in the developing world use the vaginal approach [4-9, 12, 16, 37, 38].
Anesthesia
Anesthesia should be simple, safe and easy in rural practice. Heavy 0.5% bupivacaine spinal anesthesia with intravenous (iv) ketamine, or conscious sedation with diazepam and pentazocine injections combined with local infiltration anesthesia with 1 or 2% lidocaine (lignocaine), with or without adrenaline, are commonly used. Sometimes iv ketamine is used to supplement spinal anesthesia in lengthy surgical sessions. Ketamine is safe, at 1-2 mg/kg for induction and 25-50 mg iv boluses in titrated doses [41]. Atropine 0.6 mg and diazepam 5 mg are given stat, 30 minutes before the start of the operation. Atropine prevents secretions and bradycardia, while diazepam prevents dysphoria and psychotomimetic effects during recovery. Bupivacaine spinal anesthesia may last up to 3 hours and is superior to 2% heavy lidocaine spinal anesthesia, which may last for 90 minutes. Endotracheal intubation anesthesia is rarely used in rural practice [6].
Tools for VVF-repair
The tools for VVF repair are shown in Table 5. Two assistants are required in the prone position; one holds up the posterior vaginal wall with a Sims' speculum [37].
Preoperative counseling
Counseling is done, in a language the patient understands, when conservative management has failed. Expectations are discussed, especially that the repair may fail, but that hope will not be lost; the need for catheterization for 2-3 weeks; the length of hospital stay; and possible postoperative frequency, urgency and urgency incontinence for some time after removal of the catheter. The patient is counseled thoroughly for informed consent and reminded that challenges may warrant a change of plan intraoperatively.

Choice of suture materials

Small-sized delayed absorbable sutures, monofilament ranging from 5/0 to 4/0 and braided multifilament from 4/0 to 3/0, with 3/8- and 5/8-circle atraumatic needles, are recommended (Table 5). This minimizes the amount of suture material in the wound while still providing adequate closure of the wound edges [42].
Position for repair of VVF
This depends on the preference of the surgeon.
Prone position
The prone position is used in many fistula centers where skilled and experienced anesthetists perform general anesthesia with cuffed endotracheal intubation and inhalation. The specifics of the prone position are well illustrated in Primary Surgery, Volume One, edited by Maurice King et al. [37].
Lithotomy position
An exaggerated lithotomy position with a slight head-down tilt is used, with the buttocks just beyond the edge of the table.
Repairing technique of VVF
The principal steps are: dissecting out the fistula; mobilizing the vaginal skin from the bladder and precervical fascia; mobilizing the precervical (pubovesical) fascia, if possible; attending to the ureteric orifices; closing the bladder wall; creating a second layer with the precervical fascia over the first suture line; placing a vascularized graft when indicated; and closing the vaginal skin.
Steps in vaginal approach
i. After spinal anesthesia, antibiotic prophylaxis is given.
iii. Skin preparation and draping.
iv. Pass a size 16 two-way Foley catheter, inflate the balloon and connect it to a urine bag.
v. Infiltrate the layer between the vaginal wall and the bladder wall with adrenaline in normal saline 1:100,000. If the patient is hypertensive, use plain normal saline. The adrenaline-saline solution facilitates dissection and reduces bleeding.
vi. The fistula is dilated, and a size 14, 12, 10 or 8 two-way Foley catheter, depending on the size of the fistula, is inserted into the fistula tract and the balloon inflated with 5 mL of sterile water.
vii. Commensurate traction is applied distally on the catheter in the fistula to enhance access, purchase and exposure.
viii. The vaginal skin is incised elliptically around the fistula. Using sharp dissection with a knife and slender scissors, the vaginal skin is carefully dissected from the bladder wall for a distance of 0.5-1.5 cm, to eventually allow tension-free closure (Figure 6A). Some authors recommend 1 cm towards the cervix and 0.5 cm laterally [37].
ix. Where possible, separate the layer of tissue between the bladder and vagina (the precervical fascia) from the bladder wall. This may be difficult in large and fibrotic fistulas. Use suture ligation with 5/0 polyglactin to control bleeding.
x. The fistula collar (Figure 6B) may or may not be excised, depending on the size of the fistula. In large fistulas with repeated repair attempts, conservation is prudent. In the past some workers insisted on total excision of the fistulous tract and fibrous tissue [43].
xii. Extramucosal closure of the bladder is done, starting at each end and coming towards the center, with 5/0 poliglecaprone (Monocryl) on a 5/8-circle atraumatic needle, at 3-5 mm intervals. Through-and-through bladder mucosal closure can be done with good results, especially in large fibrotic fistulas [37] where the tissues are not very pliable or the mucosal edge is bleeding [32]. The ureters can be avoided by conserving the fistula collar in large fistulas and doing careful extramucosal closure. In high fistulas near the cervix the bladder is usually closed transversely, and in low fistulas near the urethra the first layer is sutured longitudinally [2,32,37]. There are no hard and fast rules about this; the bladder should be closed in the line of least tension [32].
xiii. The water-tightness of the repair is checked by instilling 200-300 mL of methylene blue in normal saline into the bladder. An additional stitch is placed at any leaking point, or the stitches are removed and the repair restarted afresh if the leakage is copious.
xiv. The precervical fascia is closed if possible, or the first layer is imbricated by suturing the bladder muscularis layer together with 4/0 or 3/0 polyglactin 910 (Figure 6C). The stitches of this layer are staggered between those of the first layer so that no stitch lies on top of another.
xv. If the fistula is large or significant dead space exists, a graft is indicated. The bladder peritoneum can be mobilized, or a Martius fat pad transpositional flap is raised and placed over the closed fistula [2,3].
xvi. The vaginal skin is closed perpendicular to the bladder closure line if possible; otherwise it is closed according to the easiest approximation of the edges (Figure 6D).
xvii. Repeat cystoscopy with intravenous indigo carmine to assess ureteric patency if available.
In the Latzko technique, the fistulous tract is not excised. It is imbricated into the bladder with interrupted extramucosal sutures on a small tapered needle [44,48,49]. It is well illustrated by Ganabathi K, Sirls L, Zimmern PE and Leach GC [50].
Abdominal approach in VVF repair
The extraperitoneal and intraperitoneal techniques of VVF repair have been well discussed by Ganabathi K et al. and Wein AJ et al. [51,52].
Post-operative management of VVF repair
i. Presumptive intravenous antibiotics with a third-generation cephalosporin in combination with metronidazole or tinidazole, continued for 5 days, are recommended because of the peculiar setting of rural practice. A routine presumptive antibiotic regimen is not practiced in developed countries [32].
ii. Efficient and effective bladder drainage. Urine bags should be emptied hourly and the output recorded on a chart [37].
iii. In the transperitoneal technique, nil orally until bowel function returns.
iv. The urethral catheter is removed when macroscopic hematuria has cleared, usually about the 3rd day in the case of double catheter drainage, and the suprapubic catheter is left for three weeks.
v. The catheter is spigoted on day 18 and bladder training is commenced: urine is released hourly for 3 hours, then 2-hourly for 6 hours, and thereafter 3-4-hourly from day 20. If all is well, the catheter is removed on day 21.
vi. The patient is observed for 2 days for normal micturition and dryness. If she leaks urine, examination in the left lateral position is done to ascertain whether the urine is coming from the fistula or the urethra.
vii. If she is leaking from the urethra, discharge and reassess at 6 weeks. If she is leaking from the fistula, recommence bladder drainage for 21 days. If the fistula does not close, remove the catheter and recommence salt (sitz) baths.
viii. Counsel and work her up for future repair.
Adjuncts
• Anticholinergics to control bladder spasms: oxybutynin 5 mg twice or three times daily, tolterodine 2 mg twice daily, and solifenacin 5 mg daily are useful.
• Loose vaginal gauze is used as a wick drain and changed daily. Some authors use vaginal packs after the abdominal approach [2], while others do not [32].
• Estrogen may be given to enhance the vaginal skin [2,32,50]. Estrogen is rarely used in rural practice.
Postoperative counseling
• Sexual intercourse is forbidden for 3 months.
• Subsequent pregnancies shall be delivered by cesarean section.
Failure of VVF repair
Failure after repair may result from:
a. Host factors, such as the presence of a foreign body, tissue ischemia, infection, metabolic diseases such as diabetes mellitus, peripheral vascular disease and, rarely, malignancy.
b. Surgical factors, such as undetected distal urinary obstruction and inadequate postoperative urinary drainage.
c. Surgical technique, including inexperience, inadequate excision of devitalized and scar tissue, use of inappropriate suture materials, and lack of adherence to the detailed measures in the principles of surgical repair of vesicovaginal fistula.
Complex vesicovaginal fistula
These include:
• Multiple vesicovaginal fistulas involving the urethra and intestine, associated with the trauma of falls from heights, anteroposterior compression fractures from road traffic accidents, and gunshot injuries.
• Giant vesicovaginal fistulas of more than 5 cm in diameter, and those associated with partial or complete loss of the urethra, stress incontinence, a narrow vagina or small bladder capacity.
• Those involving the cervix and lower uterine segments.
These complex fistulas are referred to fistula units in tertiary institutions and to fistula centers. Elsewhere the author has emphasized the importance of sustaining the two-way referral system in the practice of medicine [6]; it supports a good rural surgical practice.
Rectovaginal fistula (RVF)

Etiology and clinical presentation
This is an abnormal connection between the rectum and the vagina. The etiology, pathogenesis, clinical presentation and diagnosis of RVF have been discussed in the preceding sections and are highlighted in Tables 1 and 2. RVF can be classified as low, mid and high vaginal fistula: low from the vaginal opening to the hymenal ring, mid from the hymenal ring to the external cervical os, and high from the external cervical opening to the vault of the vagina (the area of the cul-de-sac) [32].
Management
Conservative management may be tried. Some fistulas resulting from penetrating and stab wounds respond to antibiotics, salt baths and a fluid diet. A defunctioning colostomy has been performed in some cases. Obstetric RVF will require surgical correction after treatment of infection and resolution of inflammation.
Time of repair:
A waiting period of 3-6 months is allowed, and salt baths are continued until repair.
Surgical repair of RVF
A defunctioning sigmoid colostomy may be done. Assessment under anesthesia is performed as soon as possible to ascertain the location, size and state of the fistula, and the presence of sloughs and edema. If the fistula is more than 8 cm from the fourchette, refer to a higher center for repair from above. For mid and low fistulas, repair can be undertaken from below. If there is an associated VVF, it should be repaired first [37].
Low fistula
Spinal anesthesia, prophylactic antibiotics, the supine lithotomy position and aseptic technique are used; a transperineal, transvaginal or transanal approach may be chosen [32,37]. The tissue around the fistula is infiltrated with adrenaline-normal saline solution as in VVF. An incision along the anterior anal sphincter border, or transversely along the posterior fourchette, is deepened and dissected proximally, separating the vaginal wall from the perineal body, anal sphincter, and anal and rectal walls, developing a reasonable dissection of the rectovaginal space proximally, distally and laterally. The fistula is excised and hemostasis achieved; extraluminal closure of the rectum is done using interrupted 3/0 polyglactin and imbricated with a seromuscular layer incorporating the internal anal sphincter using interrupted 2/0 polyglactin. The vaginal wall is closed with 3/0 polyglactin. The external anal sphincter, if disrupted, is repaired end-to-end with interrupted polyglactin 0.
Mid fistula
The transvaginal approach is preferred. The principles and techniques are the same: the fistula tract is dissected and excised, a wide dissection of the rectovaginal space is done, the rectum is closed in layers avoiding the lumen, and the vaginal wall is closed with interrupted 3/0 delayed absorbable suture.
Postoperative care
• Presumptive antibiotics for 5 days, since the wound is contaminated.
• Pain is controlled with pentazocine injection 30 mg 4-6 hourly for about 72 hours.
• Liquid diet for about 5 days, then low residue diet.
• Urethral catheter is left for 7 days.
Ureterovaginal fistula (UVF)

Etiology and clinical presentation
This is a pathological communication between the ureter and the vagina. The etiology includes surgical injuries, especially hysterectomy [2,56]. More cases of UVF are appearing in rural practice due to increasing rates of cesarean sections performed by unsupervised medical officers working alone. Other causes of UVF have been discussed by Payne CK and Raz S [56].
Vaginal urinary leakage after gynecologic or obstetric surgery is the commonest symptom. Urine may drain from incision wounds and wound drains. When urine collects in the abdomen or retroperitoneum, nonspecific symptoms of flank and abdominal pain, hiccups, fever, abdominal distension, ileus, and localized fluctuance and tenderness may occur.
Diagnosis
The first step is confirmation that the leakage is urine. Oral phenazopyridine hydrochloride (Pyridium) is given; brown coloration of the leakage confirms that it is urine. Intravenous indigo carmine can also be used. The dye test described under VVF can be done: staining of the gauze at the vault confirms UVF. Intravenous urogram (IVU) and micturating cystourethrogram (MCUG) can also be used. The MCUG will diagnose a bladder fistula and confirm or rule out ureteric reflux, while the IVU shows the excretory function of the kidneys, the site of contrast extravasation, dilatation of the upper tract and contrast in the vagina. A post-void film is needed to assess for a distal fistula. Once the diagnosis is made or suspected, refer the patient to a urologist.
Vesicouterine fistula (VUF)

Etiology and clinical presentation
This is an abnormal communication between the uterus or cervix and the urinary bladder. It is uncommon. The commonest cause is lower segment cesarean section [2,5,57]. Other causes include myomectomy [17], operative vaginal delivery, induced abortion, and dilatation and curettage. The presentation is the classical "Youssef's syndrome" symptom complex: menouria, cyclic hematuria associated with amenorrhea, secondary infertility and urinary continence [58]. The diagnosis can be made by a combination of contrast cystogram with voiding cystogram and cystoscopy. Refer to a tertiary healthcare institution for multidisciplinary team management.
Urethrovaginal fistula (UrVF)

Etiology and clinical presentation
UrVF is an abnormal connection between the urethra and the vagina. The commonest cause in the developing world is obstructed labor, followed by female genital mutilation such as the 'gishiri cut' in Northern Nigeria [8,15,37]. In the developed world it occurs as a result of vaginal surgery for incontinence, anterior colporrhaphy, vaginal prolapse or urethral diverticulum [2]. It is often associated with VVF [37]. It presents as urinary leakage from the vagina. A small fistula may produce minimal discomfort, while a large one leaks copiously. Distal small fistulas may be asymptomatic.
Diagnosis
The diagnosis is made clinically and confirmed by urethrocystoscopy if available or by micturating cystourethrogram.
Treatment
Treatment is by surgical repair. However, some workers recommend that distal urethral fistulas can be observed or managed with an extended meatotomy [59].
Operative repair
Spinal anesthesia, the lithotomy position and aseptic technique are used. A size 16 urethral catheter is passed. The tissue around the fistula is infiltrated with adrenaline-normal saline solution 1:100,000 or plain saline. The fistula tract is encircled with an incision. The vaginal skin is dissected free from the urethra all around the fistula to about 5 mm. An inverted U-shaped incision is marked out on the anterior wall of the vagina with its base at the proximal margin of the encircled fistula. The area within the incision is infiltrated with the adrenaline-saline solution and dissected off the periurethral fascia as a vaginal wall flap, to a reasonable distance of not less than 2 cm. The edges of the fistula are mobilized and reflected over the fistula but not excised, and the fistula is closed with interrupted 5/0 Monocryl (poliglecaprone) or Vicryl in the line of least tension. The periurethral fascia is closed perpendicular to the first layer as a second layer when possible. A Martius flap is raised and tunneled to the repair as an additional layer. The anterior vaginal wall flap is advanced over the closure and sutured with 4/0 Vicryl to the distal margin of the wound. This repair technique is well illustrated by Rovner ES and Leach GE et al. [2,60]. The repair of UrVF may be very difficult due to the relative lack of connective tissue in the mid and distal urethra; an interposition tissue flap is often indicated. Multiple and complex urethrovaginal fistulas should be referred to higher centers for a multidisciplinary team approach.
Vesicoenteric fistula (VEF)

Etiology and clinical presentation
This is a rare connection between the lumen of the small bowel and the urinary bladder. The etiology in the rural areas includes penetrating and gunshot injuries to the lower abdomen and pelvis, and iatrogenic trauma. In the developed world it is caused by diverticulitis, malignancy, Crohn's disease, trauma, foreign bodies and infection [2,61].
Once suspected, the patient should be referred to a higher center for multidisciplinary team management.
Enterovaginal fistula (EVF)

Etiology and clinical presentation
This is a rare abnormal connection between the small bowel and the vagina. It was a complication of hysterectomy in the author's experience (Table 2). Elsewhere, cases arising from Crohn's disease have been reported [62].
Treatment
Refer promptly and accordingly once diagnosed or suspected in rural practice.
Rectourethral fistula (RUF)

Etiology and clinical presentation
This distressing acquired abnormal communication between the urethra and the rectum is seen in males. The author has encountered only seven cases in 28 years: four from gunshot injuries (Figure 7), two from stab injuries, and one iatrogenic endoscopic injury during an endourology procedure (Table 2). Other causes in the literature are iatrogenic trauma during prostatectomy, cryotherapy, anorectal surgery, pelvic irradiation, urethral instrumentation, infection and Crohn's disease [2,63]. The symptoms may include fecaluria, hematuria, lower urinary tract symptoms (LUTS), fever, malaise, urinary tract infection (UTI), nausea and vomiting [64].
Diagnosis
Diagnosis is by history, physical examination, urine microscopy and culture, and a high index of suspicion; it is confirmed by retrograde urethrogram (RUG) and MCUG. Urethrocystoscopy and sigmoidoscopy may visualize the fistula.
Conservative
Some will heal on conservative management [63,64]. The author managed the RUF that resulted from iatrogenic trauma during a direct vision internal urethrotomy (DVIU) procedure with urethral catheterization for continuous bladder drainage for 3 weeks, a low-residue diet and appropriate antibiotic cover.
Surgical treatment

The York-Mason procedure is a transrectal approach requiring the jack-knife prone position and skilled anesthesia. It has been found to be effective, with low morbidity [70].
Vesicocutaneous fistula

Etiology and clinical presentation
This is an abnormal communication between the urinary bladder and the skin. The commonest variety connects the bladder and the skin of the lower abdomen or suprapubic region (Figure 2); this commonly follows prolonged or neglected suprapubic catheterization. Other sites encountered are the perineum and upper thigh. Males are commonly affected. Other causes include gunshot and stab injuries, falls from heights and pelvic surgery (Table 2).
It presents as urinary leakage through the skin.
Diagnosis
Diagnosis is clinical and confirmed by MCUG.
Conservative
Removal or bypass of a distal urethral obstruction will allow some to heal.
Surgical treatment
Others will require surgical excision of the fistulous tract and closure of the urinary bladder in layers; wound closure may be primary or delayed, depending on the degree of cleanliness or contamination.
Urethrocutaneous fistula

Etiology and clinical presentation
This is an acquired connection between the urethra and the skin. It commonly occurs on the penis (Figure 8).
In rural practice, it commonly results from circumcision mishaps [71]. There are reported cases following surgery for urethral stricture and diverticulum, and hypospadias repair [72]. Other causes include paraurethral abscess, gunshot wounds and chronic inflammatory disease.
Diagnosis
Diagnosis is clinical.
Treatment
There is no standardized surgical repair technique for this condition. Each case should be individualized and treated according to its merit. Urethrocutaneous fistulas should be referred to the urologist.
Role of the rural practitioner and future research
The roles of the rural practitioner in the treatment and management of the genitourinary fistula patient have not been clearly defined. The following roles are suggested from this study. They should:
i. Participate in the three preventive strategies mentioned in Section 5, and participate in the treatment of the fistula from the beginning.
ii. Resuscitate and refer complex and recurrent fistulas promptly to centers with a good fistula repair record. Sophisticated ones such as UVF, VUF, VEF, EVF, RUF, vesicocutaneous and urethrocutaneous fistulas are beyond the scope of rural practice, and should be referred appropriately once the diagnosis is suspected.
iii. They may undertake the repair of simple fistulas after undergoing adequate training and exposure.
It will be worthwhile to determine the degree of involvement of rural practitioners in the treatment and management of genitourinary fistulas at present, and the impact on the burden of the disease when they are fully integrated.
Conclusion
Genitourinary fistulas, which occur often in rural practice, embarrass both the patient and the practitioner. The dearth of skilled medical personnel and trained fistula surgeons in the rural areas, made worse by brain drain and by poor transport, education and health infrastructure, compounds the burden of genitourinary disease. Thus, the patient will be most grateful to the rural practitioner who promptly guides and refers her to a good fistula surgeon who repairs her fistula successfully. The rural clinician should participate effectively in the preventive strategies, initiate treatment and care as soon as a fistula occurs, refer complex and sophisticated cases, and may undertake the repair of simple fistulas after adequate training and exposure. Good skill, dedication with passion, and attention to the principles of fistula management and surgical treatment will achieve a high repair success rate. More effort in training the rural medical practitioner in fistula surgery, education of the girl-child and the public, deployment of more resources to improve social welfare infrastructure, the treatment and rehabilitation of victims, and regular, frequent fistula treatment missions will reduce the prevalence of this condition. It is believed that the realization of these objectives will reduce the burden of genitourinary fistulas. | 2021-09-25T16:21:32.766Z | 2021-08-23T00:00:00.000 | {
"year": 2021,
"sha1": "afcbc554bbf11bad1a98a285654769d73a4b5e95",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/78204",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c3b01f65d0b76affee150ce3240815edcb9a101e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55957661 | pes2o/s2orc | v3-fos-license | The estimation of transition curves geometry in railway engineering from measured data
Abstract. The position of the track is defined as the course of the track in its ground plan [1]. In the simplest terms, the course of the track can be defined as an axis constructed not only from straight sections but also from arcs and transition curves. In view of the normative changes, it is necessary to change the approach to measuring track geometry parameters and, hand in hand with this, to adjust the way the results are processed and interpreted.
1 The current status of geodesy in railway engineering

Normative changes affecting track geometry measurements, which define the course of the track, have entered into force. Since 1 July 2015, the norm STN 73 6360-1 (Railway applications. Track. Part 1: Geometrical position arrangement of 1 435 mm gauge railways) and the norm STN 73 6360-2 (Railway applications. Track. Part 2: Acceptance of construction works, maintenance works and assessment of service condition of 1 435 mm gauge tracks) are in force, replacing the former norm STN 73 6360. These norms cover the requirements for the design, construction, reconstruction and modernization of normal-gauge track for speeds up to 300 km/h, the technical parameters of the structural and geometrical setting of the track, rail branches and their spatial position, and the acceptance of construction and maintenance works. For the assessment of track condition, the geometrical position of the track must be checked with a geodetic instrument with continual recording. The position coordinates of the track axis and the elevation of the roof of the non-overridden rail are measured. In special, justified cases the position may be controlled by a conventional geodetic method, and in other justified cases the track gauge and the override may be checked by a manual measuring method without continual recording [2]. The use of these assumptions requires the measurement and calculation technology to be adjusted according to the design, distance and location of the object [3]. One possibility for estimating movements is to analyze the measurements as parametrically defined geometrical parts. After measuring displacement and deformation stages, these parts can be compared between stages. The biggest benefit of this method is the nearly total elimination of errors arising from the centration and signalization of points. In railway engineering, discrete deformations are not expected, whereas deformations of whole parts are.
Hand in hand with the modernization of the trans-European corridors, the Railways of the Slovak Republic carried out the reconstruction of the Nové Mesto nad Váhom-Púchov line, where in one part, Turecký vrch, the ballastless track system RHEDA 2000® was designed for the first time in Slovakia [3]. The whole length of the Turecký vrch tunnel along the axis is 1775 m; the length of the tunnel tube is 1738.5 m, plus a 25 m continuing section on the south side and a 10 m continuing section on the north. The double-track line in the tunnel is designed for a speed of 200 km/h with opposite arcs of radius 2000 m [4]. The measurement was performed not only in the ballastless track section but also on the north side of the tunnel. In the transition part, a new type of construction was used for the first time, over a total length of 20 m, which uses standard components of the rail superstructure without its stabilization [5].
The estimation of the track positioning from the measured data
The easiest task is to estimate the parameters of a straight part. For the most precise interpretation of the rails' course it is sufficient to fit a regression line through the measured data (it is preferable to estimate the rail axis, but it is possible to estimate one of the rails as well). This regression line can be described algebraically by the linear equation [6]

$$y = ax + b. \qquad (1)$$

Through the estimation of the arc parameters, the coordinates of the arc centre and its radius are estimated. In a Cartesian coordinate system with the arc centre $S[Y_0, X_0]$ and radius $R$, the set of points is defined by the equation

$$(Y - Y_0)^2 + (X - X_0)^2 = R^2. \qquad (2)$$
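The two fits above can be illustrated with a short sketch. This is not code from the paper: the function names, the use of NumPy, and the assumption that the measured coordinates are given as arrays are ours. The line is fitted by ordinary least squares, and the circle by the algebraic (Kasa) linearization of equation (2), which turns the circle condition into a linear system.

```python
# Minimal sketch: estimating straight-part and arc parameters from measured
# track-axis coordinates. Illustrative only; assumes NumPy arrays as input.
import numpy as np

def fit_line(x, y):
    """Fit y = a*x + b (equation (1)) by ordinary least squares."""
    A = np.column_stack([x, np.ones_like(x)])       # design matrix
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def fit_circle(Y, X):
    """Fit (Y-Y0)^2 + (X-X0)^2 = R^2 (equation (2)) in its algebraic (Kasa)
    form: Y^2 + X^2 = 2*Y0*Y + 2*X0*X + (R^2 - Y0^2 - X0^2)."""
    A = np.column_stack([2 * Y, 2 * X, np.ones_like(Y)])
    b = Y**2 + X**2
    (Y0, X0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    R = np.sqrt(c + Y0**2 + X0**2)
    return Y0, X0, R
```

A geometric (orthogonal-distance) circle fit would weight the residuals differently; the algebraic form is used here only because it keeps the sketch linear.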
The comparison of measured data with the different types of transition curves
The troubles in estimation arise from the parametric definition of the transition curve. The basic functions of transition curves are:
• to provide a smooth transition between the straight section and the circular arc, in which the curvature gradually changes from zero to its final value;
• to provide space for a smooth change when creating or changing an override;
• to ensure a smooth curvature change in complex arcs or between opposite arcs.
Only curves that meet the following conditions can serve as transition curves:
• one end of the curve has a common junction with the adjoining straight line;
• the other end of the curve has a common junction with the adjoining arc;
• the curvature of the transition curve at the point adjoining the circular arc has to be the same as the curvature of the circular arc;
• the curvature of the transition curve at the point adjoining the straight section has to be equal to zero;
• the course of the curvature change over the entire length of the transition curve corresponds to the course of the override change.
Parabola
A parabola is a two-dimensional, mirror-symmetrical curve. The parabola is described as the locus of points that are equidistant from a directrix (a line) and a focus (a point). The focus does not lie on the directrix.
The parabola can be algebraically described by the equation

$$y = ax^2 + bx + c. \qquad (3)$$

The comparative calculation was done by the least squares method. It is necessary to create a design matrix containing the partial derivatives with respect to the parameters a, b, c, with dimensions n × r (n is the number of measured points, r the number of unknown parameters); the weight matrix could be neglected, since every point was measured with the same precision. From the results of the calculation, the parameters are a = 0.000508, b = -0.866551 and c = 0.102152. The mean error $m_0$ is calculated from the equation

$$m_0 = \sqrt{\frac{v^{T}v}{n-k}}, \qquad (4)$$

where k is the number of needed measurements (here 3, the number of unknown parameters) and v is the vector of residuals. The value of $m_0$ is the best comparative criterion. The parabola's mean error is 0.147 m.
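A sketch of the parabola comparison (again illustrative, not the authors' code): it builds the n × 3 design matrix of partial derivatives of equation (3) with respect to a, b and c, solves the least squares problem with unit weights, and evaluates the mean error m0 from equation (4).

```python
# Illustrative sketch of the parabola fit and its mean error m0.
import numpy as np

def fit_parabola(x, y):
    # Design matrix: partial derivatives of y = a*x^2 + b*x + c w.r.t. (a, b, c).
    A = np.column_stack([x**2, x, np.ones_like(x)])
    params, *_ = np.linalg.lstsq(A, y, rcond=None)  # unit weight matrix assumed
    v = A @ params - y                              # vector of residuals
    k = 3                                           # number of unknown parameters
    m0 = np.sqrt(v @ v / (len(x) - k))              # mean error, equation (4)
    return params, m0
```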
Klothoide
Whereas the klothoide is not permitted for railway engineering in Slovakia, it is used in other countries of the European Union and is often used worldwide. The klothoide is also known as the curve with a linear increase of curvature. From Figure 1 it is possible to see that the growth along the x and y axes, marked by dx and dy, is

$$dx = \cos\!\left(\frac{l^2}{2RL_K}\right)dl, \qquad (6)$$

$$dy = \sin\!\left(\frac{l^2}{2RL_K}\right)dl, \qquad (7)$$

where l is the distance from the start of the transition curve, $L_K$ is the length of the whole curve, and R is the radius of curvature of the klothoide, which is equal to the radius of the arc to which the transition curve is connected.
To make the least squares method usable, it is necessary to differentiate equations (6) and (7) with respect to the unknown parameters; since these functions are nonlinear, they must be linearized by a Taylor series.
After integration along the distance l we obtain the equations

$$x = l - \frac{l^5}{40R^2L_K^2} + \frac{l^9}{3456R^4L_K^4} - \dots \qquad (8)$$

$$y = \frac{l^3}{6RL_K} - \frac{l^7}{336R^3L_K^3} + \frac{l^{11}}{42240R^5L_K^5} - \dots \qquad (9)$$
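The truncated series (8) and (9) are simple to evaluate numerically. The sketch below is illustrative (not from the paper); with terms=3 it reproduces the three-term expansions above, and it assumes the arc distance l is measured from the start of the transition curve.

```python
# Illustrative evaluation of klothoide coordinates from the series (8)-(9).
from math import factorial

def klothoide_xy(l, R, L_K, terms=3):
    """Return (x, y) for arc distance l on a klothoide that reaches curvature
    1/R after the transition length L_K."""
    x = sum((-1)**k * l**(4*k + 1)
            / (factorial(2*k) * (4*k + 1) * (2*R*L_K)**(2*k))
            for k in range(terms))
    y = sum((-1)**k * l**(4*k + 3)
            / (factorial(2*k + 1) * (4*k + 3) * (2*R*L_K)**(2*k + 1))
            for k in range(terms))
    return x, y
```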
For the adjustment by the least squares method it is necessary to estimate the values $l_i$, the distance of every point from the beginning of the curve; the other unknowns are $L_K$, the length of the whole curve, and the radius of curvature R. The number of unknowns is therefore equal to the number of measured points n plus 2. The number of equations used in the method is 2n, since every coordinate has its own equation.
The design matrix consists of the partial derivatives of equations (8) and (9) with respect to R, $L_K$ and each $l_i$, and has dimensions 2n × (n + 2). To simplify the calculation, it was not necessary to use more than the first three terms of the Taylor series, because the further terms influence the calculations only at the level of thousandths of a millimetre. Through the calculations the values of R, $L_K$ and $l_i$ were estimated, and only the corrections to them had to be calculated. The initial values were $R_0$ = 1997 m and $L_{K0}$ = 450 m, and $l_{i0}$ was calculated for each point from its distance from the beginning of the curve. The klothoide's mean error is 0.220 m.
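The adjustment itself can be sketched as a Gauss-Newton iteration. The code below is a hedged illustration, not the authors' implementation: it assumes the measured coordinates have already been transformed into the local system of the transition curve (origin at its start, x along the initial tangent), it approximates the initial l_i0 by cumulative chord lengths, and it fills the 2n × (n + 2) design matrix with numeric partial derivatives where the paper uses analytic Taylor-series derivatives.

```python
# Gauss-Newton sketch for the klothoide adjustment (uses klothoide_xy above).
import numpy as np

def adjust_klothoide(X, Y, R0, LK0, iters=10, eps=1e-6):
    n = len(X)
    # Initial l_i0: cumulative chord length from the first measured point.
    l0 = np.concatenate([[0.0], np.cumsum(np.hypot(np.diff(X), np.diff(Y)))])
    p = np.concatenate([[R0, LK0], l0])             # unknowns: R, L_K, l_1..l_n
    obs = np.concatenate([X, Y])                    # 2n observation equations

    def model(p):
        R, LK, l = p[0], p[1], p[2:]
        xy = [klothoide_xy(li, R, LK) for li in l]
        return np.concatenate([[c[0] for c in xy], [c[1] for c in xy]])

    for _ in range(iters):
        f0 = model(p)
        J = np.empty((2 * n, n + 2))                # design matrix
        for j in range(n + 2):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (model(p + dp) - f0) / eps    # numeric partial derivatives
        dx, *_ = np.linalg.lstsq(J, obs - f0, rcond=None)
        p = p + dx                                  # apply the corrections
    v = model(p) - obs                              # residuals
    m0 = np.sqrt(v @ v / (2 * n - (n + 2)))         # mean error m0
    return p, m0
```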
Conclusion
After the recent legislative changes, it is necessary to change the approach to measurement adjustment. It is necessary not only to change the measurement methods from discrete to continual, but to change the method of adjustment as well. One possibility is to divide the track into parts with the same directional guidance (line, transition curve and arc) and to calculate their parameters. These can then be compared between epochs and yield better conclusions, without the impact of pointing errors, etc.
Fig. 1. The growth of the x and y coordinates in a klothoide with R = 1997.362 m and L_K = 432.233 m; the coordinates are calculated from equations (8) and (9). | 2018-12-13T11:11:26.850Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "23dac34b88d63c79823eed36e27f406a8c3f8f23",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/31/matecconf_rsp2017_00029.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "23dac34b88d63c79823eed36e27f406a8c3f8f23",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Engineering"
]
} |
211064486 | pes2o/s2orc | v3-fos-license | DEAD-Box Helicases: Sensors, Regulators, and Effectors for Antiviral Defense
DEAD-box helicases are a large family of conserved RNA-binding proteins that belong to the broader group of cellular DExD/H helicases. Members of the DEAD-box helicase family have roles throughout cellular RNA metabolism from biogenesis to decay. Moreover, there is emerging evidence that cellular RNA helicases, including DEAD-box helicases, play roles in the recognition of foreign nucleic acids and the modulation of viral infection. As intracellular parasites, viruses must evade detection by innate immune sensing mechanisms and degradation by cellular machinery while also manipulating host cell processes to facilitate replication. The ability of DEAD-box helicases to recognize RNA in a sequence-independent manner, as well as the breadth of cellular functions carried out by members of this family, lead them to influence innate recognition and viral infections in multiple ways. Indeed, DEAD-box helicases have been shown to contribute to intracellular immune sensing, act as antiviral effectors, and even to be coopted by viruses to promote their replication. However, our understanding of the mechanisms underlying these interactions, as well as the cellular roles of DEAD-box helicases themselves, is limited in many cases. We will discuss the diverse roles that members of the DEAD-box helicase family play during viral infections.
The DEAD-box Helicase Family of RNA-Binding Proteins
The broad family of DExD/H helicases, including DEAD-box helicases and the related DEAH-box helicases, belongs to superfamily 2 of the RNA helicase family. This family is characterized by a series of amino acid motifs that form the RNA and ATP binding sites of the helicase core [1,2]; DEAD-box helicases contain the amino acid sequence DEAD in motif II as well as an additional upstream Q motif [3]. DEAD-box helicases are conserved from bacteria to mammals [4]. Structurally, the DEAD-box helicase core contains two domains resembling bacterial RecA, which cooperatively bind RNA and ATP, interacting with RNA along the sugar-phosphate backbone [4]. Because of the lack of interactions with the RNA nucleotide bases, the binding of DEAD-box helicases to RNA is thought to be generally sequence-independent but structure-dependent [5,6].
In vitro studies of DEAD-box helicase unwinding activity have revealed ATP-dependent RNA helicase function, but these proteins tend to have only weakly processive helicase activity, and instead are thought to promote local RNA:RNA rearrangements [7]. Beyond their activity on RNA duplexes, studies have also revealed that some members of this family have other activities regulating nucleic acid binding, including the modulation of RNA:DNA interactions, DNA:DNA interactions, and RNA-protein (RNP) complex remodeling [8,9].
Although DEAD-box helicases share a conserved core domain, variable N- and C-terminal regions allow members of this protein family to act on diverse targets. These flanking domains target the conserved helicase core to specific substrates and interaction partners.

While the innate immune roles of some DEAD-box proteins involve nucleic acid sensing, other innate immune functions involve protein-protein interactions with DEAD-box helicases and are nucleic acid-independent. DDX3X may serve as a scaffold for protein-protein interactions to promote the transduction of innate immune signaling cascades. The binding and phosphorylation of DDX3X by TBK1 facilitates IFN-β induction in response to diverse stimuli including poly(I:C), poly(dA:dT), and Listeria monocytogenes infection [26]. DDX3X also interacts with IKKε to mediate the phosphorylation of IRF3, and with IKKα to support its activation downstream of TLR7 [27,28].
Some DEAD-box helicases are themselves ISGs, further reinforcing their antiviral activity via positive feedback. RIG-I, MDA5, DDX60, and DDX60L are canonical ISGs [29][30][31]. DDX60 and DDX60L share approximately 70% amino acid identity and both have antiviral functions; however, their roles are distinct [29,31]. DDX60 binds vesicular stomatitis virus (VSV) ssRNA and dsRNA in vitro and promotes RLR activation [31]. DDX60 also contributes to RIG-I-independent degradation of HCV RNA, although its role in this process is unclear [32]. Activation of the EGF growth factor receptor attenuates DDX60 activity, so viruses known to activate EGF including influenza A virus, HCV, and VSV may be able to counteract DDX60 functions [32]. A study contrasting the responses of hepatocyte cell lines to IFN-γ uncovered the importance of DDX60L expression for controlling HCV infection, and ectopic expression of DDX60L was also found to restrict HCV infection independent of interferon signaling, suggesting an additional direct effector mechanism [29].
Negative Regulation of Innate Immune Responses by DEAD-Box Helicases and Viral Antagonism
Some DEAD-box helicase family members are negative regulators of interferon responses. This can occur via competition for RNA substrates or protein-protein interactions. For example, DDX24 binds dsRNA and ssRNA and sequesters it away from RLRs, thus interfering with IRF7 activation [33].
In the virus-host arms race, viruses have evolved diverse countermeasures to evade detection by the host and evade host immune responses. The existence of such countermeasures targeting DEAD-box helicase proteins further underscores these proteins' importance as antiviral factors. A variety of viruses hijack or target DDX3X to promote infection. The binding of DDX3X by the vaccinia virus K7 protein interferes with DDX3X-dependent TBK1/IKKε activation [34]. Moreover, DDX3X binding to hepatitis B virus (HBV) polymerase also interferes with DDX3X-dependent TBK1/IKKε activation early in infection, while at later stages the helicase function of DDX3X is required to limit HBV transcription [35,36]. Hepatitis C virus (HCV) takes advantage of the immune modulatory function of DDX3X to support its own replication: the binding of DDX3X to the HCV 3′ UTR activates IKKα-dependent noncanonical NF-κB transcription, promoting lipogenesis, which aids in viral replication [37]. However, DDX3X negatively regulates type I interferon production during arenavirus infection, and interactions between DDX3X and arenavirus nucleoprotein promote viral replication [38].
The relocalization of nuclear DDX21 to the cytoplasm during dengue virus infection contributes to interferon responses; this is consistent with previous reports of DDX21 involvement in innate immune sensing [20][21][22]. In turn, DENV NS2B/3 protease counteracts DDX21, leading to its proteasome-dependent degradation [20].
Conserved Interferon-Independent Antiviral Functions of DEAD-Box Helicases
DEAD-box helicases can also contribute to antiviral effector functions independent of interferon signaling ( Figure 2) [39]. Studies in invertebrates and plants have allowed the identification of several antiviral helicases which are necessarily interferon-independent, as these taxa lack an interferon system. Moreover, many of these antiviral functions are conserved in mammals. The screening of DEAD-box helicases in a Drosophila cell culture model of arthropod-borne RNA virus infection has revealed antiviral functions for several DEAD-box helicases that also have human homologs, including DDX6, DDX17, DDX24, and DDX56 [40,41].
DDX17 and DDX5 are nuclear resident paralogs that have diverse functions in transcription, splicing, miRNA biogenesis, mRNA export, and ribosome biogenesis [42]. DDX17 but not DDX5 was found to be antiviral against Rift Valley Fever Virus (RVFV). Moreover, DDX17 but not DDX5 relocalized to the cytoplasm during RVFV infection [41]. DDX17 activity against RVFV depends on the binding of a stem-loop structure on the small segment of the tripartite RNA genome, and the addition of that stem-loop region to an unrelated virus rendered the chimeric virus sensitive to DDX17's antiviral effect [41]. DDX17 may also have an antiviral function as a cofactor for the zinc-finger antiviral protein (ZAP), unwinding ZAP-bound viral mRNA to promote viral RNA decay by the RNA exosome [43,44]. The Arabidopsis homolog of DDX17, RH30, has also been shown to be antiviral against tombus viruses such as tomato bushy stunt virus and cucumber necrosis virus, both in plant hosts and a yeast surrogate host model [45]. RH30 relocalizes from the nucleus to the sites of viral replication and interacts with both viral proteins and viral RNA, particularly with structured cis-acting elements, thus interfering with template recruitment to the replicase complex [45]. In contrast to the antiviral roles of DDX17, knockdown of its paralog DDX5 attenuates replication of Japanese encephalitis virus (JEV), and helicase activity of DDX5 is required for this function [46]. DDX5 is recruited to the replication sites of JEV in the cytoplasm, and binds to the viral 3′ UTR to promote infection [46]. DDX3 is also recruited to JEV replication foci and directly promotes JEV replication by binding the untranslated regions [47].
DDX23 is a nuclear resident helicase with known roles in splicing. Studies in the invertebrate chordate amphioxus identified DDX23 as a dsRNA binding protein, and found that dsRNA binding was conserved in human cells [48]. How DDX23 controls viral infection in amphioxus is unclear, and whether DDX23 binding directly to viral RNA also impacts infection has not been explored. In humans, DDX23 binding to dsRNA potentiates antiviral signaling downstream of TRIF and MAVS [48]. Moreover, after dsRNA treatment or VSV infection, DDX23 translocates from the nucleus to the cytoplasm, suggesting that the relocalization of DDXs may be a common strategy to repurpose these RNA binding proteins for antiviral defense [48].
Nucleolar Helicases and Viral Infection
The nucleolus is the site of ribosomal RNA (rRNA) transcription, rRNA biogenesis, and assembly of ribosomal subunits. Nucleoli form as sites of concentrated processing factors around clusters of rDNA genes, and liquid-liquid phase separation drives the organization of subcompartments in which rRNA is cleaved and modified [49,50]. Many DEAD-box helicases localize to the nucleolus and participate in ribosome biogenesis; their precise functions are generally poorly understood, but they are thought to be involved in the remodeling of pre-ribosomal RNP complexes to promote rRNA biogenesis and in the release of snoRNAs after rRNA modification [51].
There is a growing appreciation for the role of the nucleolus in cellular stress responses, including responses to viral infections [49,52]. Known roles for nucleolar proteins in viral infection can involve interactions with RNA or proteins of nuclear-replicating viruses, the relocalization of nucleolar proteins to other cellular compartments, or targeting by viral proteins that localize to the nucleolus. For example, the nucleolar helicase DDX56, which normally functions in rRNA biogenesis [53], promotes West Nile virus infection [54][55][56]. The West Nile virus capsid biochemically interacts with DDX56 during infection, both in nucleoli and in the cytoplasm, and DDX56 helicase activity is required to promote the packaging of viral RNA into particles [54][55][56]. In the case of HIV-1, a DDX56-Gag interaction facilitates HIV-1 particle assembly, suggesting a broader role for DDX56 in viral assembly [57].
DDX21 is another nucleolar helicase with cellular functions in ribosomal RNA production [58] that also has a variety of moonlighting functions. DDX21 has been shown to be antiviral against both nuclear- and cytoplasmic-replicating RNA viruses through a variety of mechanisms. In the nucleus, DDX21 interacts with the Borna disease virus (BDV) RNA, binding the 5′ UTR of the BDV X/P mRNA and decreasing its translation [59]. In vitro RNA folding assays suggested that DDX21 binding causes structural alterations in the 5′ UTR of the viral mRNA, thus interfering with the reinitiation of translation by ribosomes on this polycistronic message [59]. Influenza also replicates in the nucleus, but the activity of DDX21 against influenza appears not to require nucleic acid binding [60]; instead, a protein-protein interaction between DDX21 and the influenza PB1 polymerase inhibits the assembly of the viral replicase complex until it is disrupted later in infection by increasing levels of the influenza NS1 protein [60]. During infection with DENV, DDX21 relocates to the cytoplasm and promotes interferon responses, as described above [20].
Taken together, these examples paint a picture where nucleolar proteins are influenced by, and exert effects on, cytoplasmic processes.
P-Body and Stress Granule Helicases in Viral Infection
P bodies are nonmembranous organelles in the cytosol that contribute to the regulation of cellular RNAs: resident RNAs can be stored, translationally repressed, and/or degraded [61]. Many viruses interface with P bodies in diverse ways (reviewed in [62]). DDX6 resides in P bodies and facilitates P body assembly, as well as promoting the decapping and turnover of cellular mRNAs [63]. DDX6 also has cellular roles in the regulation of gene expression via translational repression [64]. DDX6 has been shown to have pro- or anti-viral roles in a variety of viral infections, including several caused by arthropod-borne viruses. DDX6 enhances RIG-I signaling, interacting with both RIG-I and influenza viral RNA in infected cells [65]. However, DDX6 was also identified in a screen for suppressors of aberrant ISG activity; the deletion of DDX6 activates innate immune signaling pathways and primes cells to be more responsive to IFN, likely via the disruption of cytoplasmic RNA turnover and the accumulation of self RNA in P bodies [66]. Bunyaviruses snatch mRNA caps from host transcripts in P bodies, and thus cellular factors such as DDX6, which promote the decapping of host mRNAs and thereby deplete the pool of capped RNAs in P bodies, attenuate infection by bunyaviruses including La Crosse virus (LACV) and RVFV [40]. In mosquitoes, DDX6 is antiviral against the flaviviruses West Nile virus and Zika virus and is counteracted by the noncoding sfRNA derived from the flavivirus 3′ UTR, which binds to DDX6 and sequesters it [67]. Human DDX6 is similarly antiviral against ZIKV and is sequestered by sfRNA [68].
Viruses can also hijack P-body components to aid in replication. DDX6 and other P body components have been found by mass spectrometry to bind the dengue virus 3′ UTR RNA, but in contrast to other flaviviruses, DENV infection is attenuated by DDX6 knockdown [69]. West Nile virus also subverts P body function by disrupting P bodies and recruiting component proteins, including LSM1, GW182, DDX3, and XRN1, to viral replication sites, where they contribute positively to replication by an unknown mechanism [70].
Stress granules are another type of nonmembranous organelle, which form when translation is impaired [71]. The stress granule resident helicase DDX3X has diverse roles in interferon signaling (described in other sections of this review) as well as in other aspects of cell biology. DDX3X is antiviral against influenza A virus infection via its role in nucleating stress granules: its C-terminal domain interacts with influenza NS1 to sequester the viral protein in stress granules and limit the amount available to carry out viral replication [72]. It is also possible that many of the other roles of DDX3X in interferon signaling and viral replication take place in stress granules; however, this has not been explored. In plants and yeast, the helicases RH20/Ded1p (DDX3) are co-opted to form part of the tombusvirus replicase complex, and their presence helps to maintain full-length genome integrity and prevent recombination [73]. DDX3 was also shown to contribute to the export of unspliced HIV mRNA, supporting HIV replication [74].
DDX1 is another stress granule resident protein [12] and has been found to inhibit or facilitate diverse viral infections, including HIV, foot and mouth disease virus (FMDV), and transmissible gastroenteritis virus [18,71,75,76]. DDX1 was identified in a two-hybrid screen for HIV Rev interactors and was shown to modulate the localization of Rev and the splicing of HIV mRNA [77]. DDX1 acts as a chaperone to remodel the structure of the Rev-responsive element (RRE) RNA sequence, promoting increased Rev binding [75,78] and acting as a cofactor for Rev oligomerization [79]. The RRE can adopt several distinct structural conformations with differing affinities for Rev and functional consequences for HIV replication [80]. Another yeast two-hybrid screen led to the discovery that DDX1 supports the replication of the coronavirus infectious bronchitis virus (IBV), relocalizing to viral replication centers and interacting with the viral nonstructural protein nsP14; the ability of nsP14 to interact with DDX1 is conserved in severe acute respiratory syndrome coronavirus (SARS-CoV) [81]. Moreover, in a yeast model of tombusvirus infection, Ded1 and Dbp2 enhance viral replication by unwinding 3′ secondary structure to promote plus-strand RNA synthesis [82]. DDX1 inhibits FMDV and increases IFNβ production in infected cells [76]. DDX1 also acts as a coactivator of NF-κB-mediated transcription via interaction with RelA, a function which requires intact helicase activity [19].
These interactions highlight the ways that both viruses and hosts can use RNP interactions to promote or antagonize one another's function (Figure 2). The ability of some viruses to use DEAD-box helicase remodeling functions on their own RNA suggests that there may be other settings in which DEAD-box helicase remodeling of viral RNA structure is detrimental to replication.
Summary and Discussion
The anti-viral and pro-viral DEAD-box helicase relationships described so far in the literature likely represent only the tip of the iceberg in terms of the multitude of roles that DEAD-box helicases play in innate immunity and the modulation of viral infection (Table 1). These relationships were discovered through diverse approaches and have revealed interactions of viral infection with many aspects of RNA biology. Several examples have been identified through mass spectrometry approaches, arising either from studies investigating the binding of viral proteins to host factors [35,60,83,84] or to nucleic acids [21,69,85]. Interactome studies have provided additional evidence for the importance of DEAD-box helicases in HIV and other viral infections [77,79,[84][85][86][87]. Biochemical studies of HIV Rev, alone or in combination with RRE RNA, have also identified positive roles for DDX3X, DDX5, DDX17, and DDX21 in HIV infection, although some of these interactions could be indirect [84]. Other DEAD-box helicases have been found genetically, by screening to identify genes that affect infection [41,88]. The genetic screening of DEAD-box helicases in poxvirus infection revealed both positive and negative regulation of infection by DEAD-box helicases [88]. Still other studies have focused on changes in gene expression that indicate either cellular innate immune responses or viral manipulation of the host cell environment [31,89,90]. DDX3 was found to be upregulated in response to HIV Tat expression, and contributes to the export of unspliced HIV mRNA, supporting HIV replication [74]. Microarray studies of HIV latency and reactivation found that DDX18 and DDX39 are upregulated during HIV latency and early reactivation and that the expression of additional DEAD-box helicases, DDX10, DDX21, DDX23, and DDX52, is induced immediately following reactivation [89]. The biochemical characterization of these interactions provides crucial mechanistic insight into how these proteins impact infection. Confirming and defining protein-protein interactions with methods such as coimmunoprecipitation, and defining regions of RNA binding by CLIP-seq, have elucidated the mechanisms of action of some of these proteins during viral infection.
Although much remains to be defined about the antiviral roles of the DEAD-box helicase family, some broad themes have emerged. First, the functions of DEAD-box helicases in viral infection appear to often, but not always, rely on direct interactions between the host DEAD-box protein and viral RNAs. Some DEAD/DEAH-box helicases, such as the RLRs RIG-I and MDA-5, are specialized innate immune sensors, and others, such as DDX3X, have cellular roles but also contribute to innate immune signaling pathways. Still others bind viral RNA as antiviral effectors, whether induced by interferon signaling in the case of DDX60L or independently of known innate immune pathways, as is the case for DDX17 and DDX56. Continuing to biochemically define the RNA features recognized by antiviral helicases, and the ways those RNA structures may be altered by helicase activity, could help identify functionally important RNA structures in viral replication. Additionally, in recent years, the application of RNA probing technologies such as SHAPE to viral RNAs has begun to enable detailed analysis of secondary and tertiary structures that may be functionally relevant [30,91,92]. A comparison of SHAPE data to DEAD-box helicase binding sites may enhance our understanding of the structural motifs that stimulate innate immunity.
Second, DEAD-box helicase/virus interactions are highly context-dependent: different viruses are affected in different ways by the same DEAD-box helicases, and although RNA binding is important in many cases, there also exist examples where helicase function is dispensable for the antiviral activities that have been characterized. Detailed mechanistic studies naturally lag far behind the high-throughput genomic and proteomic identification of virus-host interactions, but a deeper investigation of the virus and host determinants underlying antiviral DEAD-box helicase function may illuminate key characteristics governing the outcome of a virus-helicase interaction.

Table 1. DEAD-box helicase relationships with viral infections. Interactions are categorized as pro-viral if they support viral replication, and as anti-viral if they counteract viral infection either directly or by activating innate immune signaling. Viral countermeasures are listed where a viral component disrupts an otherwise antiviral interaction. Other categories of interaction, including gene expression changes, are listed in the "otherwise implicated" column. [Table columns: Gene | Pro-Viral | Anti-Viral | Viral Countermeasures | Otherwise Implicated; the table body was not recoverable from the extraction.]
Conflicts of Interest:
The authors declare no conflict of interest.
Beef quality assessment of local and imported sources illustrating a contrary view that freezing is the best way of beef preservation using morphometric analysis
Histological analysis of local and imported beef samples was performed throughout storage at various intervals at 4 °C, before and after freezing at − 18 °C, to detect changes in the microstructure of muscle fibers and thereby evaluate the nutritive properties of the meat, as a step toward rapid evaluation of meat quality. The results showed that the freezing–thawing of beef leads to loss of muscle fiber structure owing to the failure to retain its high moisture content, and highlighted that imported beef shows significant shrinkage of its muscle fibers from the moment of purchase, as it appears to be imported frozen and thawed just before being displayed and sold as fresh. A consumption survey showed that, although consumers prefer local meat, 67% of the population eats imported beef, with 39.4% doing so more than twice per week. Therefore, consumers should be encouraged to rely on locally slaughtered beef to meet the recommended daily intake of protein.
Introduction
Meat is a main source of many nutrients necessary to sustain the normal functioning of the human body. Meat is a concentrated source of high biological value protein, providing the nine essential amino acids, as well as micronutrients such as iron, selenium, zinc and vitamin B12 needed by the human body. Beef, which is categorized as red meat, is highly favored by consumers for its nutritive value, pleasant palatability and high quality. Therefore, beef intake in rational amounts should be recommended as part of a healthy diet [1]. Beef consumption in Egypt reached 720,000 metric tons (MT), up 3.5% from the previous year, as confirmed by official United States Department of Agriculture (USDA) data for 2019 [2], and it is expected to reach 1,354,000 tonnes by 2029 according to the OECD-FAO Agricultural Outlook (https://data.oecd.org/agroutput/meat-consumption.htm). Egypt's meat-centric food culture remains unchanged, but owing to beef affordability and low Egyptian incomes, consumers have replaced local meat slaughtered in the country with imported meat (Brazilian and Indian slaughter). Brazil, Australia, Sudan, and Ethiopia are Egypt's main suppliers of imported beef; Egypt is considered the 11th largest beef importer in the world due to the shortage of meat production in the country [3].
Meat is a perishable food that can easily be spoiled by the growth of different kinds of spoilage bacteria, which then pose a potential danger of disease. It has been estimated that, globally, 25% of food of plant and animal origin is wasted post-harvest or post-slaughter due to microbial spoilage, which is actually the most common cause of alterations in food quality [4,5]. Storage and handling of meat, besides the origin of the meat, are considered critical factors in the deteriorative changes and quality problems of meat. For instance, during freezing and thawing of meat, muscle fibers are damaged, which adversely affects the physico-chemical properties of the product and consequently changes its nutritive value for the consumer [6]. Therefore, it is crucial not only to preserve the initial properties of meat but also to improve them during technological processing.
Meat quality can be detected by many methods; one such approach is the histological analysis of meat components. Histological examination of meat is performed to determine the degree of aging and freshness by detecting changes in the microstructure of muscle tissue. Today, different bioimaging techniques are available for the microscopic determination of food elements. The most commonly used method is optical microscopy, which identifies all the structures of the meat via their morphological characteristics [7]. The topographic histology of meat allows evaluation of meat quality by detecting and measuring the content of tissues (muscle, connective and adipose). In addition, the histological image of the meat can be used to identify and evaluate its composition. In this regard, several histological studies have been published on the quality evaluation of hotdogs [8], hamburgers [9], and other meat products [10,11]. It has been recorded that 25% of studies on meat and meat products relied on histology as a discipline [12] because histological examination provides valid information at a low cost [13]. In addition, histologic methods provide an accurate tool for detecting specific tissue components and architectural alterations induced by various types of meat processing [10,14].
Freezing is among the most used and important methods of food preservation, especially for meat and meat products, to maintain their quality until they reach consumers [15]. However, freezing followed by thawing adversely affects the nutritive value of meat because it causes structural changes in the meat tissue. In addition, freezing usually results in a decrease in the diameter of the muscle fibers and in the length of the sarcomeres [16,17]. Besides changes in muscle structure, the freezing-thawing process results in chemical modifications, thus interfering with the overall organoleptic quality of the beef [18]. Further, freezing and thawing meat more than once might result in changes in meat color, loss of moisture and increased oxidation of meat protein, in which electrons are transferred from one atom to another. When this occurs in meat, it can lead to a significant deterioration in quality [19][20][21]. Consequently, any change in the capability of meat to hold and sustain moisture has a significant effect on its tenderness and juiciness [21,22].
The purpose of the current study is to assess the degree of cellular damage to beef caused by the freezing and thawing process and to evaluate the histologic alterations relevant to muscle quality using microscopy, a rapid, easy, inexpensive and accurate histological tool that provides clear information without interpretation bias, while comparing the quality of local and imported beef samples marketed to consumers in Egypt. The data may change consumers' habit of relying on imported beef as a source of protein.
Sample collection
Beef samples from locally slaughtered and imported carcasses were purchased from local markets in Mansoura City, Egypt, to evaluate the histological changes occurring during storage at refrigerator temperature (4 °C) before and after freezing at − 18 °C, in order to assess the nutritive value of the preserved meat and to detect the effect of freezing on beef of different sources as a measure of quality.
From each source (local and imported beef), six different batches were used for histological analysis at the following intervals: directly after purchase; after 6 h and 24 h at refrigerator temperature (4 °C); after being completely frozen, with a sample taken after complete thawing; and then after 6 h and 24 h of further storage at refrigerator temperature (4 °C). From each batch, each sample taken for analysis weighed approximately 5 g.
Histological technique
The samples were fixed overnight in 10% neutral buffered formalin. Fixed samples were processed through ascending concentrations of alcohol (from 70 to 100%) and xylene, and embedded in paraffin wax to produce blocks [23]. The blocks were cut into 5 μm thick sections, stained with hematoxylin-eosin (HE) [24], and examined using a camera (Olympus C7070WZ) attached to a light microscope (Olympus CX41); images were captured and analyzed using ImageJ software (version 1.32J).
Histometric analysis
To evaluate the myofiber area and the area in between myofibers, histometric analysis was performed using ImageJ software (version 1.32J) [25]. Each sample was represented by three HE sections, and each HE section was examined for the percentage (%) of extracellular space located in between the red myofibers. In addition, a fixed number of myofibers was examined in each section and the myofiber area was measured (µm) (Supplementary Fig. 1).
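For illustration only, the following minimal Python sketch shows how such measurements could be computed from a binary segmentation mask of an HE image; the function name, the use of numpy/scipy, and the pixel calibration are our assumptions and not part of the original study protocol (the sketch reports fiber areas in µm², whereas the tables below quote a single µm figure):

```python
import numpy as np
from scipy import ndimage

def histometrics(fiber_mask: np.ndarray, um_per_px: float):
    """Compute myofiber areas and extracellular-space percentage.

    fiber_mask : 2-D boolean array, True where pixels belong to myofibers
                 (e.g., from thresholding the eosinophilic signal).
    um_per_px  : side length of one pixel in micrometers (calibration).
    """
    # Percentage of the field NOT occupied by fibers = extracellular space
    extracellular_pct = 100.0 * (1.0 - fiber_mask.mean())

    # Label connected components so each myofiber gets an integer ID
    labels, n_fibers = ndimage.label(fiber_mask)

    # Pixel counts per fiber, converted to areas in square micrometers
    px_counts = np.bincount(labels.ravel())[1:]  # skip background label 0
    fiber_areas_um2 = px_counts * (um_per_px ** 2)

    return fiber_areas_um2, extracellular_pct

# Hypothetical usage with a fake 4-pixel "fiber" in a 6x6 field
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:3] = True
areas, ecs = histometrics(mask, um_per_px=2.0)
print(areas, f"{ecs:.1f}% extracellular")  # [16.] 88.9% extracellular
```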
Statistical analysis
The data obtained were subjected to statistical analysis and expressed as mean ± standard deviation to determine differences in myofiber area and in the percentage of extracellular space in between myofibers among the different intervals, using the Kruskal-Wallis test followed by Dunnett's test (p < 0.05). The data were analyzed using Predictive Analytics Software (PASW Statistics 18).
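Outside PASW, the same two-step comparison can be sketched in Python; this is a minimal illustration with hypothetical group values, assuming SciPy ≥ 1.11 for scipy.stats.dunnett, and is not the original analysis script:

```python
import numpy as np
from scipy import stats

# Hypothetical myofiber diameters (um) at three storage intervals;
# the first group (0 h) is treated as the control/reference.
h0 = np.array([270.1, 281.4, 275.9, 283.0, 278.2])
h6 = np.array([152.3, 160.8, 149.5, 158.1, 155.0])
h24 = np.array([128.7, 135.2, 131.9, 127.4, 133.6])

# Kruskal-Wallis: is there any difference among the intervals?
kw = stats.kruskal(h0, h6, h24)
print(f"Kruskal-Wallis H = {kw.statistic:.2f}, p = {kw.pvalue:.4g}")

# Dunnett's test: compare each later interval against the 0 h control
dn = stats.dunnett(h6, h24, control=h0)
for name, p in zip(["6 h", "24 h"], dn.pvalue):
    print(f"{name} vs 0 h: p = {p:.4g}", "significant" if p < 0.05 else "ns")
```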
Population survey
A total of 200 individuals of various ages were asked questions about their beef sources and their habits of consuming fast food containing beef from restaurants and street vendors in Egypt.
Results and discussion
The current study revealed the histological changes that occurred in beef samples from two different sources (local and imported) when stored for certain intervals under refrigerated conditions (4 °C), with and without freezing, highlighting the effect of freezing and comparing the quality of beef from both sources.
The morphological features of the muscle fibers of local beef samples during the different intervals (0, 6 h and 24 h) from purchase and after applying freezing
Histological sections of local beef taken directly after purchase showed a skeletal muscle structure consisting of bundles of regular eosinophilic muscle fibers separated by extracellular spaces (Fig. 1a). After 6 h of storage at refrigerator temperature (4 °C), meat autolysis began, indicated by a decrease of 120 µm in myofiber diameter as measured by histometric analysis (Fig. 1b, Table 1). After 24 h in the fridge, sections of local beef showed a continued significant (P < 0.05) decrease in myofiber area, with a shrinkage of 147 µm (Table 1), and a significant (P < 0.05) increase in the extracellular spaces (Fig. 1c, Table 1), with an apparent loss of the white lines in between myofibrils.
These results agree with a previous study describing the effect of storage on the histological structure of meat, which stated that shortly after slaughter meat had few extracellular spaces and more muscle fibers, while after 12 days at 4 °C the muscle fibers shrank and the extracellular spaces increased due to the expulsion of intracellular water into the extracellular spaces [26]. Another study showed degradation of muscle fibers 72 h postmortem, reflected in the histological structure as uneven staining with fragmented myofibers [27,28]. These data reveal that autolysis of muscle fibers begins directly after slaughter due to the postmortem glycolysis process, which is of temporary benefit for meat quality development [29].
Histological examination of the local beef sample after freezing at − 18 °C and complete thawing (Fig. 1d) revealed a myofiber diameter of 148.83 µm, smaller by 130 µm than its counterpart examined before freezing (Table 1). After the thawed local beef sample had been held for a further 6 h at 4 °C, myofiber shrinkage continued, reaching a diameter of 108.35 µm (Fig. 1e; Table 1), followed by slight further shrinkage at 24 h post-thawing, reaching 100.69 µm and leaving irregular-shaped myofibers with the smallest extracellular spaces detected for the local beef sample (Fig. 1f, Table 1). Interestingly, the extracellular spaces in between the muscle fibers showed a significant decrease (P < 0.05) as detected by histometric analysis (Figs. 2 and 3), which could indicate the loss of water volume retained during freezing; as explained previously, along with myofiber shrinkage, the fluid oozes directly into the extracellular space and is easily lost as drip [30].
The morphological features of the muscle fibers of imported beef samples during the different intervals (0, 6 h and 24 h) from purchase and after applying freezing
Imported beef taken directly after purchase, without laboratory freezing, showed small myofibers with an irregular shape (Fig. 2a). After 6 h of storage at 4 °C, there was a slight increase in myofiber area (Fig. 2b), which continued throughout the 24 h storage period at refrigerator temperature (Fig. 2c). When the sample was analyzed after being frozen, the defrosted muscle fibers shrank, leaving wide extracellular spaces in between after thawing and within the 6 h interval at refrigerator temperature, creating irregular (deformed) myofibers. However, after the thawed sample had been held for 24 h at refrigerator temperature, the regular myofibers totally disappeared and were replaced by large irregular eosinophilic myofibers with smaller extracellular spaces in between (Fig. 2f). The morphological changes in muscle structure observed under the microscope were confirmed by histometric analysis of the myofiber area (µm) and extracellular space (%) measurements (Table 2). The myofiber area (µm) was smallest when the imported beef sample was defrosted directly after storage at − 18 °C, measuring 88.72 µm, significantly (P < 0.05) smaller than the myofiber areas at all other intervals. The extracellular space (%) of the imported beef sample varied throughout the different intervals, showing the lowest percentage (23.727%) in the sample kept for 24 h at 4 °C, a significant (P < 0.05) difference compared with the other storage intervals for the same sample with and without freezing. The largest extracellular space (P < 0.05) was detected in the imported beef examined directly after purchase without any laboratory freezing (39.181%), and there was no difference in the histometric measurement of extracellular space at the 0 h interval between the directly purchased and the defrosted samples. These data suggest that the imported beef had already been subjected to several degrees of freezing-thawing before being offered to consumers, making it a low-quality protein source. Researchers have suggested that drip loss and denaturation of muscle protein depend on storage temperature, being lowest at − 18 °C, which confirms that the undesirable changes in samples arise when they are left under unsuitable storage conditions rather than at the optimal storage temperature [31].
The statistical significance of differences in myofiber diameter and extracellular space measurements between local and imported beef samples at the different intervals (0, 6 h and 24 h) from purchase, before and after applying freezing
Comparing the morphological changes among the different storage intervals for both local and imported beef samples, there was a significant (P < 0.05) change in myofiber diameter among all the readings, with the freshly purchased local beef samples having the largest myofiber diameter (278.35 µm). However, the readings of myofiber diameter for the corresponding samples after defrosting were totally changed: the diameters of the local beef sample at the different storage intervals (0, 6, 24 h) significantly decreased, while the diameters of the myofibers of imported beef were markedly increased (P < 0.05) compared with the local beef samples (Fig. 3). Such shrinkage of muscle fibers in the local beef sample after thawing, due to denaturation, supports the phenomenon of freezing-induced protein denaturation documented by various analytical parameters, and evidence from the literature shows the relationship between freezing-thawing and the denaturation of myofibrillar and sarcoplasmic proteins, respectively [27,28,32]. The drastic changes in the imported sample across the different storage intervals at 4 °C resulted from repeated freezing-thawing, which led to total destruction of the muscle fibers [33], destroying the regular muscle structure and increasing their diameters to 183.19 µm (Table 2). Analyzing the extracellular space results (Fig. 4), there was a significant (P < 0.05) difference between the local and imported beef samples at all intervals before and after freezing (0, 6 and 24 h), except after being frozen and thawed for 24 h at 4 °C, where there was no significant difference between the samples.
Overall, the freezing-thawing process gradually affected the quality of the local beef, which was frozen for the first time in our experiment, through the natural denaturation of muscle protein and changes in the water-holding capacity of the tissue. However, the initial histometric data for the imported beef sample, examined without laboratory freezing, indicate that the imported beef had already been subjected to repeated freezing-thawing. Furthermore, the destruction of the muscle fibers seen in the histometric analysis of the imported beef sample after freezing suggests that the imported beef is of low quality and low nutritive value compared with our local beef sample. Similar histological changes attributable to alterations in tissue microstructure and protein properties resulting from multiple freezing and thawing of meat have been studied previously [19,22].
The survey results
The consumer survey showed no significant difference in the choice between local and imported beef: 52.9% of respondents prefer local beef versus 47.1% who choose imported beef. Regardless of the percentage who usually choose local beef, around 65.2% of the population eats fast food, including imported beef, at least once per week, followed by 23.9% who are not accustomed to fast food and 11% who never eat fast food at all. These data indicate that most people eat imported beef even if they do not deliberately choose it, because the majority of restaurants and street vendors in Egypt rely on imported beef in their dishes.
Conclusion
Through simple histological analysis, beef quality can be determined by assessing the morphological structure of the muscle fibers and the extracellular space surrounding them, where decreased muscle fiber diameter, dramatic changes in fiber shape and increased intermuscular space indicate deterioration. The current study confirmed that although freezing of meat, including beef, is used as a method of preservation, the freezing-thawing step shortens shelf-life and produces low-quality beef, especially in the case of imported beef, which appears to be frozen and then sold to consumers as fresh. Therefore, the public should be educated that consumption of locally slaughtered beef is a better source of the daily intake of protein than imported beef, which seems to be thawed before being sold as fresh. Further analytical methods are needed to strengthen the conclusion that most imported meat purchased in Egypt is of low quality.
Fig. 1 Photomicrograph of local beef sample before and after being frozen at − 18 °C. The sample was stained with HE and examined under the microscope directly after purchase (a), after 6 h of storage at 4 °C (b), after 24 h of storage at 4 °C (c), after complete freezing followed by complete thawing (d), after 6 h of thawing stored at 4 °C (e), and after 24 h of thawing stored at 4 °C (f); MF = regular myofiber, E = extracellular space, * = irregular myofiber; Scale bars = 50 µm
Fig. 2 Photomicrograph of imported beef sample before and after being frozen at − 18 °C. The sample was stained with HE and examined directly after purchase (a), after 6 h of storage at 4 °C (b), after 24 h of storage at 4 °C (c), after complete freezing followed by complete thawing (d), after 6 h of thawing stored at 4 °C (e), and after 24 h of thawing stored at 4 °C (f); * = irregular myofiber; Scale bars = 50 µm

Fig. 3 [caption not recoverable from the extraction]
Fig. 4 Graph showing the percentage of extracellular space (%) for both samples (local and imported beef) at three different intervals (0, 6, 24 h) before freezing (A) and after being frozen (B); a and b indicate significant differences (p < 0.05) for each interval
Table 1 Myofiber area diameter (µm) and extracellular space percentage (%) of local beef after purchase and after being frozen at three different intervals (0, 6, 24 h) stored at 4 °C. Mean values with different superscripts from "a" to "e" within the same row are significantly different at P < 0.05. Mean values with different superscripts from "w" to "z" within the same row are significantly different at P < 0.05. Columns compare the sample examined directly after purchase with the sample examined directly after being frozen at − 18 °C. [Table body not recoverable from the extraction.]
Table 2 Myofiber area diameter (µm) and extracellular space percentage (%) of imported beef after purchase and after being frozen at three different intervals (0, 6, 24 h) stored at 4 °C. Mean values with different superscripts from "a" to "e" within the same row are significantly different at P < 0.05. Mean values with different superscripts from "w" to "z" within the same row are significantly different at P < 0.05. [Table body not recoverable from the extraction.]
"year": 2023,
"sha1": "2e46dc58258e77624ec0ba54bfb73c930194172a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s44187-023-00050-y.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "649e79943866e95e768c8a2088cbb67d6e88bfad",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
MicroRNAs in idiopathic pulmonary fibrosis: involvement in pathogenesis and potential use in diagnosis and therapeutics
MicroRNAs (miRNAs) are a class of phylogenetically conserved, non-coding short RNAs, 19–22 nt in length, which suppress protein expression through base-pairing with the 3′-untranslated region of target mRNAs. miRNAs have been found to participate in cell proliferation, differentiation and apoptosis. Idiopathic pulmonary fibrosis (IPF) is a chronic, progressive and highly lethal fibrotic lung disease for which there is currently no effective treatment. Some miRNAs have been reported to be involved in the pathogenesis of pulmonary fibrosis. In this review, we discuss the role of miRNAs in the pathogenesis, diagnosis and treatment of IPF.
Introduction
Idiopathic pulmonary fibrosis (IPF) is an interstitial lung disease with unknown cause and unclear pathogenesis. Of the idiopathic interstitial pneumonia family of diseases, it is the most common and has the highest morbidity and worst prognosis. Currently, there is no effective treatment for IPF 1 .
The formation of fibroblastic foci and the excessive deposition of extracellular matrix (ECM) are regarded as factors that directly induce IPF 2 . Myofibroblasts, which have features of both fibroblasts and smooth muscle cells, overexpress α-smooth muscle actin (α-SMA) and extensively synthesize and secrete ECM, ultimately leading to the lung tissue remodeling observed in IPF patients [3][4][5] . Previous research has shown that myofibroblasts in the lung arise mainly from fibroblasts and epithelial cells and, to a lesser extent, from circulating fibroblasts derived from bone marrow cells; fibroblasts and epithelial cells are therefore considered the main sources of myofibroblasts [6][7][8] .
MicroRNAs (miRNAs) are a class of non-coding single-stranded RNAs, 19-22 nt in length, which can base-pair with complementary sequences in the 3′-untranslated region (UTR) of targets and repress the translation of target genes or trigger degradation of target mRNAs 9 .
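To make the base-pairing mechanism concrete, the following minimal Python sketch scans a 3′-UTR for canonical "seed" matches (the reverse complement of miRNA nucleotides 2-8); the UTR fragment is entirely hypothetical, and the miR-21-5p sequence shown is as commonly reported and should be verified against miRBase:

```python
# A canonical miRNA target site in a 3'-UTR is (roughly) the reverse
# complement of the miRNA "seed", nucleotides 2-8 of the mature sequence.
# Sequences are RNA (A/C/G/U).

RC = str.maketrans("ACGU", "UGCA")

def seed_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based positions in `utr` matching the seed of `mirna`."""
    seed = mirna[1:8]                # nucleotides 2-8 (0-based 1..7)
    site = seed.translate(RC)[::-1]  # reverse complement, as read on the mRNA
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

# miR-21-5p mature sequence as commonly reported (verify against miRBase),
# scanned against a short, entirely hypothetical 3'-UTR fragment.
mir21 = "UAGCUUAUCAGACUGAUGUUGA"
utr = "CCAACAUAAGCUACGGGAUAAGCUAUU"
print(seed_sites(mir21, utr))  # [5, 17]
```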
Recently, as a result of in-depth studies of miRNAs, deregulation of miRNAs has been found to participate in the progression of fibrosis in different tissues, including liver, kidney and myocardium. Here, we review the key roles played by miRNAs in the pathogenesis of IPF and their significance in its diagnosis and treatment (Fig. 1 and Table 1).
The role of miRNAs in alveolar epithelial cells of IPF
Normal epithelial cells are closely linked to each other through intercellular adhesion mechanisms. E-cadherin is a key component of the tight junctions of epithelial cells, where it maintains their integrity and polarity. The ability of epithelial cells to change into mesenchymal cells through the epithelial-mesenchymal transition (EMT) plays an important role in the development of IPF. As a result of the EMT, alveolar epithelial cells (AECs) lose their intrinsic polarity and intercellular adhesion and gain the ability to migrate. AECs produce a large amount of ECM, which eventually leads to the development of fibrosis 7 . Current studies confirm that many miRNAs, including let-7d, miR-200, miR-26a and miR-375, participate in IPF by regulating EMT 19,[23][24][25][26]29,[40][41][42] . These miRNAs show reduced expression in IPF patients, whereas the expression of their target gene, high-mobility group A protein 2 (HMGA2), is markedly up-regulated; this leads to a change in epithelial cell phenotype, the deposition of collagen and the development of IPF. Let-7 was originally discovered in Caenorhabditis elegans, where its role is to regulate cell differentiation and proliferation, a role conserved across species. The human let-7 family includes 12 members (let-7a1, -a2, -a3, -b, -c, -d, -e, -f1, -f2, -g, -i and miR-98), located on 8 different chromosomes 43 . Through miRNA microarray analysis of lung tissue from healthy controls and IPF patients, Pandit et al. 19 found that the levels of 18 miRNAs, including let-7d, were reduced in IPF patients. They also found that the expression of let-7d was decreased and the expression of HMGA2 increased in A549 AECs stimulated by TGF-β1; electrophoretic mobility shift assays, chromatin immunoprecipitation and luciferase assays subsequently showed that this resulted from binding between Smad3 and the let-7d promoter. In addition, specific inhibition of let-7d in mouse lung tissue was shown to induce the EMT, thereby increasing the thickness of the alveolar walls and eventually causing pulmonary fibrosis.
It has been reported that miR-26a plays an important role in the regulation of many diseases. Harada et al. 44 confirmed that miR-26a can inhibit cardiac fibroblast proliferation and differentiation by down-regulating the expression of TRPC3, thereby controlling atrial fibrillation. Wei et al. 45 showed that miR-26a decreased the angiotensin II (Ang-II)-induced expression of collagen I and of connective tissue growth factor (CTGF). All these results indicate that miR-26a has the ability to prevent fibrosis.
In our study, differential expression and cluster analysis using bioinformatics methods showed that genes participating in EMT were differentially expressed in the lung tissue of IPF patients, a result validated by immunofluorescence in mice with pulmonary fibrosis. Moreover, we confirmed that miR-26a regulates EMT by binding to the HMGA2 transcript and inhibiting its expression. Furthermore, forced expression of miR-26a inhibited the occurrence of EMT and the expression of EMT-related genes. Taken together, our study confirms that miR-26a inhibits EMT and thereby reduces the occurrence of IPF 23 .
The TGF-β and Wnt pathways are the most well-known pathways involved in lung fibrosis. Besides participating in the proliferation and differentiation of fibroblasts, they also promote EMT. A recent study by Das et al. 40 showed that miR-326 is reduced in lung fibrosis, allowing induction of TGF-β1; conversely, enhanced expression of miR-326 dampens lung fibrosis through post-transcriptional regulation of TGF-β1. Alternatively, Stolzenburg et al. 26 found that miR-1343 attenuates EMT and fibrogenesis by directly targeting TGF-β receptors 1 and 2. In another study, Wang et al. 24 found that miR-375 was decreased during the trans-differentiation of AECs and that ectopic expression of miR-375 inhibited EMT by binding directly to the 3′-UTR of Frizzled 8, thereby blocking the Wnt/β-catenin pathway.
Previous studies have shown that the expression of the miR-30 family (miR-30a, miR-30c, miR-30d and miR-30e) is down-regulated in patients with IPF 19 . miR-30c and miR-30e are located in an intron of nuclear transcription factor Y subunit γ (NFYC) and can inhibit the expression of Smad3. In the lung tissue of IPF patients, the level of NFYC mRNA is significantly reduced. Studies found that miR-30 is located in AECs and that down-regulation of miR-30 increases the expression of endothelin receptor A and HMGA2, leading to EMT and the deposition of collagen 41 . In addition, the miR-200 family, whose overexpression can inhibit the EMT, is down-regulated in IPF patients. Injection of miR-200 into mice clearly enables them to resist pulmonary fibrosis 25 .
The miR-21 host gene in humans is located on chromosome 17q23 and has independent promoters for transcription. miR-21 is widely expressed in tissues and is not essential for normal tissue development, as verified by knockout of miR-21 in mice 47 . Liu et al. 10 found that miR-21 is up-regulated in IPF patients and that only a small amount of miR-21 is expressed in the normal lung tissue of mice. However, after stimulation with bleomycin, the expression of miR-21 was clearly up-regulated, which promoted the accumulation of myofibroblasts. That work showed that, even at 5-7 days after lung injury, the expression of miR-21 could be inhibited by an miR-21 antisense probe sufficiently to reduce or eliminate fibrosis. TGF-β1 is the most important pro-fibrogenic cytokine and increases the expression of miR-21 in lung fibroblasts. Further studies showed that Smad7 is a direct target gene of miR-21. Thus, miR-21 causes activation of the TGF-β1 pathway and ultimately promotes the occurrence and development of IPF by targeting Smad7. Taken together, TGF-β1 promotes IPF by regulating an miR-21/Smad7 feedback loop 11,48 .
Our research has revealed that the expression of miR-26a is significantly reduced in lung tissues of mice and patients with IPF accompanied by activation of the TGF-β1 pathway and increased expression of the miR-26a target protein CTGF. Inhibition of miR-26a promotes collagen deposition in the lungs of mice. In contrast, overexpression of miR-26a inhibits experimental pulmonary fibrosis in mice. Further studies confirmed that miR-26 inhibits lung fibrosis through its ability to regulate the expression of CTGF and thereby inhibit the differentiation and proliferation of fibroblasts.
We also found that Smad3, a downstream gene of TGF-β1, inhibits the expression of miR-26a and that miR-26a affects the nuclear translocation of Smad3 by regulating Smad4. The TGF-β1 pathway is activated by external stimulation to phosphorylate Smad3 which translocates into the nucleus and inhibits the expression of miR-26a. Subsequently post-transcriptional expression of CTGF promotes the differentiation and proliferation of fibroblasts in lung and further increases the collagen content. In addition, down-regulation of miR-26a increases the expression of Smad4 and promotes the translocation of Smad3 to the nucleus to inhibit the expression of miR-26a. This loop repeats and aggravates pulmonary fibrosis. Furthermore, treatment with exogenous miR-26a leads to inhibition of Smad3 translocation such that Smad3 inhibition of miR-26a vanishes, further strengthening the therapeutic effect of miR-26a. The above results indicate that miR-26a inhibits the proliferation and differentiation of fibroblasts by targeting CTGF and then reduces collagen secretion to ultimately reduce pulmonary fibrosis. Further evidence of the potential of miR-26a to prevent and treat pulmonary fibrosis 21 comes from Li et al. 22 who confirmed that it regulates cyclin D2 (CCND2) and inhibits the proliferation of fibroblasts induced by activation of TGF-β1.
The miR-155 gene, located on chromosome 21q21, generates miR-155 in hematopoietic cells, where it plays an important role in inflammatory and immunological reactions 49 . Marshall et al. 50 found that it participates in pulmonary fibrosis by targeting the Ang-II type 1 receptor (AT1R), which is located in stromal fibroblasts and shows increased expression in the lungs of IPF patients and of mice treated with bleomycin. This increased expression enhances collagen synthesis in fibroblasts and promotes the development of pulmonary fibrosis. Furthermore, Pottier et al. 15 showed that miR-155 is up-regulated in fibrotic mice. Functional studies have demonstrated that the keratinocyte growth factor gene (KGF) is a direct target of miR-155, whose up-regulation inhibits KGF expression. After transfection of miR-155, the migration ability of fibroblasts was significantly increased 15 . In contrast, miR-31 is a negative regulator of pulmonary fibrosis: miR-31 expression is reduced in the lungs of mice with experimental pulmonary fibrosis and in IPF fibroblasts, and over-expression of miR-31 inhibits the fibrogenic, contractile and migratory activities of fibroblasts in vivo, alleviating bleomycin-induced pulmonary fibrosis 27 .
An increasing number of studies [51][52][53] have shown that the generation of reactive oxygen species (ROS) contributes to the pathogenesis of fibrotic diseases, including IPF. A recent study showed that hydrogen peroxide (H2O2) causes the dysregulation of many miRNAs in human fetal lung fibroblasts (HFL-1) 32 . Among them, miR-9-5p was identified as being anti-fibrotic because of its reduced expression in response to H2O2 and because many genes involved in the TGF-β pathway are its predicted targets. Another study showed that miR-9-5p was down-regulated in a mouse model of lung fibrosis and in IPF patients. Moreover, forced expression of miR-9-5p attenuated the TGF-β1-induced fibrogenic pathway in HFL cells and prevented experimental lung fibrosis in mice by regulating the expression of TGF-β receptor type II (TGFBR2) and NADPH oxidase 4 (NOX4) 32 .
miRNAs inhibit collagen deposition and regulate the synthesis of ECM in IPF
IPF is induced by the necrosis of parenchymal cells resulting from inflammation, together with the deposition of ECM. If the synthesis of ECM and the deposition of collagen are inhibited, the development of IPF can be greatly attenuated or even prevented. Current studies confirm that miRNAs can participate in pulmonary fibrosis by directly regulating the generation of collagen 34,36,41,[54][55][56] .
Cushing et al. 51 found that the expression of miR-29 was significantly reduced in mice with bleomycin-induced pulmonary fibrosis and down-regulated in human embryonic lung fibroblasts (IMR-90) stimulated by TGF-β1, suggesting that miR-29 may be involved in pulmonary fibrosis. A further study showed that down-regulation of miR-29 was inversely related to the up-regulation of fibrosis-promoting genes, such as the collagen genes of the ECM and basement membrane 57 . In fact, many genes which regulate the ECM, such as ELN, FBN1, COL1A1, COL1A2 and COL3A1, are target genes of miR-29 57 . In addition, miR-29 inhibits TGF-β1-induced ECM synthesis in human lung fibroblasts through activating the PI3K/AKT pathway 37 . Furthermore, studies have shown 34,35,55 that over-expression of miR-29 can inhibit bleomycin-induced pulmonary fibrosis in mice. More importantly, a recent study by Khalil et al. 38 showed that the interaction of IPF fibroblasts with collagen 1 resulted in decreased protein phosphatase (PP) 2A and histone deacetylase (HDAC) 4 phosphorylation, leading to decreased nuclear translocation of HDAC4 and, finally, a reduction of miR-29 and a pathological increase in type I collagen expression.
The TargetScan database (http://www.targetscan.org/) predicts that Col1a2 (collagen, type I, α2) is a potential target of miR-26a. Wei et al. 45 confirmed that miR-26a directly regulates Col1a2 and inhibits cardiac fibrosis. Whether Col1a2 also mediates the anti-fibrotic effects of miR-26a in the lung, and whether other miRNAs directly regulate collagen synthesis in pulmonary fibrosis, warrant further research.
miRNAs participate in pulmonary fibrosis through other mechanisms
miRNAs participate in IPF by multiple mechanisms. Methylation, including DNA methylation and histone methylation, is one of the important ways in which genes are regulated and is closely related to embryonic development, aging, cancer and many other physiological and pathological processes [58][59][60] . Some recent studies suggest that deregulation of methylation may be involved in the fibrotic process [61][62][63] . For example, in IPF patients, 80% of the miR-17-92 cluster promoter was found to be occupied by cytosine-guanine dinucleotide (CpG) regions and was significantly hypermethylated compared with normal lung tissue. A further study showed that the introduction of miR-17-92 into lung fibroblasts of IPF patients reduced the expression of many fibrotic genes, such as CTGF, COL1A1 and COL13A1, by direct regulation of DNA methyltransferase-1 (DNMT-1) 39 .
On this basis, we hypothesize that miRNAs are involved in IPF through complex pathways. For instance, miRNAs can participate in pulmonary fibrosis by regulating the early inflammation of lung injury, indicating that they may be therapeutic targets in IPF 17,64 . Zhang et al. 64 found that miR-199a-5p was increased in cystic fibrosis (CF) macrophages and lung tissue, where it induced a hyper-inflammatory response in CF macrophages by targeting caveolin-1 (CAV1) to activate toll-like receptor 4 (TLR4) signaling. Furthermore, inhibition of miR-199a-5p restored CAV1 expression and alleviated the hyper-inflammation in CF macrophages. In addition, Su et al. 17 confirmed that miR-142-5p and miR-130a-3p regulate macrophage fibrogenesis in liver fibrosis and lung fibrosis. They found up-regulation of miR-142-5p and down-regulation of miR-130a-3p in macrophages responding to interleukin (IL)-13 in tissue samples from patients with liver cirrhosis and IPF, acting through direct regulation of their respective targets, suppressor of cytokine signaling 1 (SOCS1) and peroxisome proliferator-activated receptor γ (PPARγ). More importantly, inhibition of miR-142-5p or over-expression of miR-130a-3p attenuated liver fibrosis and lung fibrosis in mice.
Deregulated miRNA network in IPF
It is known that one miRNA can regulate several target genes and that one gene can be regulated by several miRNAs at the same time. In addition, many fibrosis-related factors, such as TGF-β1 and HIF-1, can regulate the expression of miRNAs. Thus, miRNAs and their targets form a complex network in the process of IPF.
Smad3, a transcription factor, can regulate the expression of many miRNAs, including let-7d and miR-154. Using electrophoretic mobility shift assays, chromatin immunoprecipitation and luciferase assays, Pandit et al. 19 confirmed that TGF-β1 inhibits the expression of let-7d by promoting the binding of Smad3 to its promoter. Milosevic et al. 14 found that Smad3 binds to the 322 bp site of the miR-154 precursor and regulates its expression. Our group found that p-Smad3 influenced the down-regulation of miR-26a and, moreover, that miR-26a inhibited the nuclear translocation of p-Smad3 by regulating Smad4 21 . One can speculate that miR-26a affects the generation of other miRNAs by regulating Smad3 and that miRNAs interact with each other and exert synergistic roles in pulmonary fibrosis.
Some studies have shown that miRNAs can regulate the generation of other miRNAs. Chen et al. 65 found that miR-107, by binding to the let-7 sequence, inhibited the expression of let-7 and induced the initiation and metastasis of breast cancer. Guo et al. 66 constructed an miRNA-miRNA interaction network involving mutual regulatory patterns in different species. In recent work in our laboratory, we showed that miR-26a increased the expression of let-7d by regulating Lin28B, suggesting that miR-26a and let-7d act synergistically to ameliorate pulmonary fibrosis 42 . Furthermore, miR-26a reduces the expression of miR-21, possibly through the mediation of Smad3 or another transcription factor. Based on these results, we constructed an miRNAs-transcription factor (TF)-miRNAs regulatory network in IPF, which warrants further validation in future experiments.
We also analyzed the miRNA expression profile in IPF using the microarray dataset (GSE32538) 67 . Surprisingly, we found that more than 80% of miRNAs were down-regulated in IPF patients. This is consistent with the results of previous studies showing miRNAs are decreased in the lungs of mice with experimental pulmonary fibrosis and in IPF and that they all exert potential antifibrotic effects in the progression of IPF (Table 1). Thus, further studies are needed to investigate what causes the global downregulation of miRNAs in IPF.
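As an illustration of how such a profile-level summary could be computed, the sketch below counts the fraction of significantly down-regulated miRNAs in a differential-expression table; the file name and column layout are hypothetical, and this is not the pipeline used for GSE32538:

```python
import csv

def fraction_downregulated(path: str, alpha: float = 0.05) -> float:
    """Fraction of significant miRNAs with negative log2 fold change.

    Expects a CSV with columns 'mirna', 'log2fc' (IPF vs. control) and
    'adj_p' (multiple-testing-adjusted p-value) -- a hypothetical layout.
    """
    down = total = 0
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if float(row["adj_p"]) >= alpha:
                continue  # skip miRNAs that are not significantly changed
            total += 1
            if float(row["log2fc"]) < 0:
                down += 1
    return down / total if total else float("nan")

# Hypothetical usage:
# print(f"{100 * fraction_downregulated('gse32538_diffexpr.csv'):.1f}% down")
```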
miRNAs act as biomarkers for early diagnosis of IPF
miRNAs that are differentially expressed in respiratory diseases may serve as biomarkers for diagnosis, molecular classification and prognosis. The existing literature indicates that (1) miR-21 and miR-126 are up-regulated and miR-672 and miR-143 down-regulated in an asthmatic rat model 68 and (2) miR-155, miR-21, miR-17-92 and miR-221/222 are up-regulated and let-7, miR-1, miR-29 and miR-126 down-regulated in lung cancer 69 . In fact, studies 70,71 have shown that different sub-types of lung cancer and non-cancer diseases can be distinguished by their miRNA expression profiles. For example, Chen et al. 70 reported that eight miRNAs are differentially expressed in the serum of lung cancer patients compared with normal controls.
Lam et al. 71 showed that the expression of miR-26a was significantly decreased in rats with experimental silicosis and patients with lung cancer. Furthermore, miR-26a was significantly down-regulated in lung tissues and sputum of rats exposed to cigarette smoke 72,73 . van Pottelberge et al. 72 reported that miR-26a was clearly down-regulated in the plasma of patients with chronic obstructive pulmonary disease (COPD) compared with normal smokers. However, a study by Ezzie et al. 74 found no obvious differential expression of miR-26a in COPD patients compared to normal controls. Therefore, there is still debate about the role of miR-26a in COPD.
Yang et al. 75 found 47 differentially expressed miRNAs in the serum of IPF patients, including 21 that were up-regulated and 26 that were down-regulated. Quantitative RT-PCR confirmed that the expression of miR-21, miR-199a-5p and miR-200c was significantly increased in the serum of IPF patients, while the opposite was true for miR-31, let-7a and let-7d. These results suggest that miRNAs may be useful markers for the diagnosis, prognosis and treatment of IPF 76 .
miRNAs as therapeutic targets in IPF
The fact that abnormal expression or mutation of miRNAs leads to disease suggests that specific miRNAs can be used as potential targets for disease treatment and control. Theoretically, the expression of down-regulated miRNAs in IPF can be restored by importing an adenovirus vector that contains the target miRNA. Conversely, up-regulated miRNAs can be down-regulated using antisense oligonucleotides, such as 2′-O-methyl, 2′-O-methoxyethyl and locked nucleic acid (LNA) antisense oligonucleotides, which directly bind to miRNAs and block their activity 77 .
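At the sequence level, the design logic of such an anti-miR is simply the reverse complement of the mature miRNA; the chemistry (2′-O-methyl, LNA, etc.) is layered on afterwards. A minimal sketch, with a placeholder mature-miRNA sequence and no modeling of chemical modifications:

```python
# Sketch: base sequence of an antisense (anti-miR) oligonucleotide as the
# reverse complement of a mature miRNA. Chemical modifications are not
# modeled, and the example sequence is a placeholder.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antimir(mirna_seq: str) -> str:
    """Return the reverse-complement RNA sequence that base-pairs the miRNA."""
    return "".join(RNA_COMPLEMENT[b] for b in reversed(mirna_seq.upper()))

print(antimir("UAGCUUAUCAGACUGAUGUUGA"))  # placeholder sequence
```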
Lanford et al. 78 found that treatment of chronically infected chimpanzees with an LNA-modified oligonucleotide (SPC3649) complementary to miR-122 suppressed hepatitis C virus (HCV) viremia with no evidence of viral resistance or side effects. Phase I research on SPC3649 has now been completed and a phase IIA clinical trial begun. SPC3649 may be the first drug that targets miRNAs to be used in the treatment of a human disease.
In a mouse model of myocardial hypertrophy, inhibiting miR-21 with an antagonist was found to decrease the activity of ERK/MAPK, block myocardial interstitial fibrosis and reduce myocardial dysfunction 79 . Similarly, in renal fibrotic mice, inhibiting the expression of miR-21 reduced renal damage 80 . Meng et al. 81 were the first to report that chemotherapeutic drugs can affect miRNA expression in human cancer cells. They found that treating tumor cell xenografts with systemic gemcitabine altered the expression of miRNAs. They also found that miR-21 was up-regulated in bile duct cancer cells, and that inhibiting it increased their sensitivity to chemotherapeutic drugs 81 . Kota et al. 82 successfully delivered miR-26a to mice with liver cancer using an adeno-associated virus (AAV) and found that the AAV did not integrate into the host genome although it was clearly present in liver cells. These findings may provide new hope for the treatment of liver cancer.
Although extensive research has revealed the mechanisms of miRNAs in IPF and shown their therapeutic potential, the clinical application of miRNAs is confronted with many problems. One is the lack of targeted miRNA delivery technology to avoid off-target effects and improve the safety of miRNAs in vivo.
Long non-coding RNAs (lncRNAs) in pulmonary fibrosis
Long non-coding RNAs (lncRNAs) are a class of non-coding RNAs of more than 200 nucleotides (nt) without protein-coding function 83 . There is a great deal of evidence showing that they play a major role in various diseases, including cancer, cardiovascular disease and lung disorders 84,85 . Recently, lncRNAs have been recognized as pivotal mediators in the initiation and maintenance of various cancers and heart diseases through competitively binding miRNAs [86][87][88] . However, the roles and mechanisms of lncRNAs in pulmonary fibrosis remain largely elusive.
Cao and colleagues 89 were the first to identify the differential expression profile of lncRNAs in bleomycin-induced lung fibrosis in mice. Of the many lncRNAs, 210 were up-regulated and 358 down-regulated. In this study, they also validated two up-regulated lncRNAs, AJ005396 and S69206, in fibrotic lung tissue by in-situ hybridization 89 . In addition, Sun et al. 90 identified the differential expression of lncRNAs in paraquat-driven experimental lung fibrosis in mice, and also found that forced expression of the lncRNAs uc.77 and 2700086A05Rik caused epithelial-mesenchymal transition (EMT) by regulating Zeb2 and Hoxa3 and contributed to lung fibrosis.
Song et al. 91 identified two other up-regulated lncRNAs, MRAK088388 and MRAK081523, in lung fibrosis and found that MRAK088388 regulates N4bp2 by sponging miR-29b-3p, whereas MRAK081523 regulates Plxna4 by binding to let-7i-5p. This indicates that MRAK088388 and MRAK081523 display regulatory functions as competing endogenous RNAs (ceRNAs) and contribute to pulmonary fibrosis. In addition, a recent study by Huang et al. 92 found 34 lncRNAs containing potential binding sites for several well-known lung fibrosis-related miRNAs, including miR-21, miR-31, miR-101, miR-29, miR-199 and let-7d. They then tested and confirmed four lncRNAs whose expression was inversely correlated with miRNA expression in IPF. Further study revealed that silencing the lncRNA CD99 molecule pseudogene 1 (CD99P1) inhibited fibrogenesis in lung fibroblasts, whereas knockdown of lncRNA n341773 promoted it 92 .
At the present time, detailed insight into the regulation and biological roles of lncRNAs in lung fibrosis is just beginning to emerge. A more detailed and integrated understanding of their actions and mechanisms in pulmonary fibrosis could help pave the way for effective treatment options for fibrosis-related lung disease.
Conclusions and perspectives
Targeting specific miRNAs has great potential in the treatment of pulmonary fibrosis. In theory, molecular target therapy has highly specific effects on target cells and can efficiently reduce damage to normal tissue. However, there are many problems to solve before miRNA pharmacotherapy of IPF can be introduced. These include: (1) How do we accurately confirm the target miRNA and its target gene, experimentally and in clinical practice, and accurately control the targeting of miRNAs? (2) Are the in vivo metabolic pathways of nucleic acid drugs clearly known on the basis of the relevant pharmacokinetic knowledge? (3) Is it feasible to interfere with miRNAs in the human body, and will such interference bring unexpected adverse reactions? (4) How are miRNAs or antisense nucleotide inhibitors introduced into target cells safely and effectively? And (5) if we develop effective treatments based on interfering with miRNAs, will the cost be too high?
Although treatment of IPF with miRNAs may have defects, miRNAs probably represent the most exciting intervention target of the last ten years. In the future, we have reason to hope miRNAs or their inhibitors will form the basis of an effective treatment to alleviate the suffering of IPF patients. | 2018-04-03T00:42:47.206Z | 2016-07-27T00:00:00.000 | {
"year": 2016,
"sha1": "76bb5836b0026e63389d2795bad42253ba5ed76d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.apsb.2016.06.010",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76bb5836b0026e63389d2795bad42253ba5ed76d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209487971 | pes2o/s2orc | v3-fos-license | Understanding the mechanism of bronchial thermoplasty using airway volume assessed by computed tomography
Bronchial thermoplasty (BT) is a recent treatment for moderate-to-severe asthma in which the airway smooth muscle (ASM) layer is targeted directly using thermal energy delivered during bronchoscopy. Although direct targeting of the ASM is appealing because of its role in bronchoconstriction in asthma, BT is not widely used because direct physiological effects after treatment (e.g. changes in forced expiratory volume in 1 s (FEV1) or in the concentration of methacholine required to decrease FEV1 by 20%) have not been shown consistently [1–3]. Instead, clinical response is demonstrated through indirect measurements, such as improved Asthma Control Questionnaire (ACQ) and Asthma Quality of Life Questionnaire scores, and reduced use of rescue medication [1, 2], acknowledging that there is also a considerable placebo component [3]. These findings leave doubt about the efficacy and mechanism of action of BT. Recent work, however, has demonstrated a direct change in a new physiological measure, namely airway volume assessed by high-resolution computed tomography (CT) [4].
We have also previously suggested, based on model predictions, that the principal mechanism of BT is a redistribution of flow patterns due solely to structural changes in the treated airways [5]. The model incorporates extensive post mortem structural data from human subjects with different degrees of asthma severity, and key aspects of the model include airway-parenchymal interactions whereby inflated alveoli distend bronchial passages. Regional flow relationships are maintained within the model such that obstruction in proximal airway segments disrupts flow to the lung periphery. Importantly, the model only "treated" the same large airways that are targeted in normal BT practice, by mathematically reducing the thickness of the ASM layer to the level reported in biopsy studies. Using this approach, we predict that the effects of treatment propagate functionally to the peripheral airways via these flow patterns, but this does not necessarily involve structural changes to the peripheral airways. These functional effects are difficult to demonstrate clinically because they are very small at baseline but increase with the degree of ASM activation and disease severity [5]. Safety considerations preclude inducing these situations in the clinic, but they should still occur in uncontrolled situations outside the clinic, and subsequently manifest in indirect measures like the ACQ and rescue medication use.
In this letter, we show that changes in airway volumes, assessed in patients by high-resolution CT at both functional residual capacity (FRC) and total lung capacity (TLC), agree with model predictions for the changes in the volume of BT-treated airways. All patients met the definition of severe asthma, despite high-dose inhaled corticosteroids and dual long-acting bronchodilators; detailed subject characteristics are given in the figure 1 caption. Data were acquired and analysed using the same methodology as used in our previous study [4] but with new data from eight additional patients (now 18 in total). High-resolution CT imaging studies in this protocol were performed at baseline and then again 4 weeks after the left lung underwent BT treatment, but prior to any treatment of the right lung, which therefore served as a control.
[Figure 1 caption, reconstructed in part:] Note that the conventional treatment order was altered to allow the untreated right lung to serve as a control mid-treatment [4]; model predictions are taken from [5]. The response threshold is defined as an increase in airway volume that exceeds half of the interquartile range of the intervisit variability, as assessed on the untreated right side (∼8.5% at FRC and ∼17% at TLC). For the left upper lobe (LUL) and left lower lobe (LLL), the left column (figure 1a and d) gives CT data at FRC; the centre column (figure 1b and e), the model predictions (tidal average); and the right column (figure 1c and f), the CT data at TLC. As in the study by LANGTON et al. [4], reported airway luminal volumes are summed from the lobar and segmental airways, and branches down to 2 mm in diameter, as reported on an independent, commercial basis by FLUIDDA (Kontich, Belgium). These airways are assumed to be BT-treated. In the model, BT-treated airway volumes are computed directly.
In both the LUL and LLL, treated airway volumes show significant increases at FRC and TLC, and in model predictions, as well as consistent response rates. Model predictions agree very well with the CT-acquired volumes. For comparison, the airway volumes in the untreated right lung are shown in figure 1g and h at FRC and TLC, respectively; as in our previous study [4], no significant changes were observed in the untreated right lung. Treatment responses were not significantly different between the LUL and LLL. It is also worth noting that the high-resolution CT data were acquired before and after an inspiratory capacity manoeuvre, while the model assumed tidal breathing; this precludes direct, quantitative comparison of the two, but it is reasonable to assume that the airway volume changes observed in tidal breathing should lie between those observed at FRC and at TLC. The relationship between the change in airway volume at TLC and the change in ACQ score was also assessed but did not reach statistical significance (p=0.14, consistent with [4]). However, a global measure such as the ACQ may respond more strongly to treatment of both sides, and so firm conclusions await the availability of the full data set with both lungs treated.
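The response-rate logic described above lends itself to a short numerical sketch: the threshold is half the interquartile range of the intervisit variability on the untreated side, and a treated lobe counts as a responder when its percentage volume increase exceeds that threshold. All numbers below are illustrative placeholders, not study data.

```python
# Sketch of the response threshold and response rate described above.
import numpy as np

def pct_change(pre, post):
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return 100.0 * (post - pre) / pre

# Intervisit variability of the untreated right side (hypothetical volumes, mL)
right_var = pct_change([210, 195, 240], [205, 210, 236])
threshold = 0.5 * (np.percentile(right_var, 75) - np.percentile(right_var, 25))

# Treated left-side airway volumes before and after BT (hypothetical, mL)
left_change = pct_change([180, 160, 150], [205, 162, 178])
print(f"threshold = {threshold:.1f}%, "
      f"response rate = {np.mean(left_change > threshold):.0%}")
```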
What does this tell us about the underlying mechanism of BT? First, this is evidence of a direct, physiological effect of BT, to complement previous reports of changes in total lung volumes [6]. Unlike indirect measurements (e.g. ACQ), no placebo component is likely. Second, the characteristics of the response agree extremely well between predictions and observations, supporting the model predictions for post-BT airway behaviour and therefore consistent with the hypothesis that treatment of a relatively small number of large airways can modulate downstream flow patterns, resulting in subsequent improvements in global function. Some untreated airways with diameter >2 mm may be measured by CT, but downstream changes in smaller airways are not directly assessed by CT; they could, however, be assessed by hyperpolarised gas magnetic resonance imaging [7], and indeed we may hope to see this done in the near term [8, 9]. Establishing the mechanism of BT provides an opportunity for better patient selection and/or predicting response to therapy [10].
Further understanding might also be obtained by analysing airway volumes not just on a lobe-by-lobe basis, but on a more detailed airway-by-airway basis. A mixed response is perhaps to be expected, with some airways showing dilation and increased flow in response to BT, while others exhibit no change or even a reduction in calibre. This might be thought of as a kind of paradoxical constriction, as compared with expected dilation, akin to paradoxical dilation in response to a contractile stimulus [11]. Model predictions show just such a response: a mean 15.9% increase in treated airway volume, with 60% of treated airways showing a >5% increase and 51% of airways increasing >10%. The mixed response is evident in the airways predicted to decrease in volume: 20% decreasing >5% and 11% decreasing >10%. This mixed response is broadly consistent with the limited data available from BT on a segmental basis [7]. Comparable airway-by-airway data from CT remain to be tested in future studies, pending improvements in image registration or perhaps using optical coherence tomography [12, 13]; when available, such data will further aid our understanding of the processes behind BT.
In closing, this report extends our previous findings [4] and those of DONOVAN et al. [5], and clearly demonstrates improvement in lung physiology after BT. It is the opinion of the authors that the field is moving in the right direction in considering methods of assessment beyond conventional lung function. | 2019-12-28T17:34:14.575Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "ce728fa03fa416c492a75bc4b3f30a12e6fcb827",
"oa_license": "CCBYNC",
"oa_url": "https://openres.ersjournals.com/content/erjor/5/4/00272-2019.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce728fa03fa416c492a75bc4b3f30a12e6fcb827",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
20724387 | pes2o/s2orc | v3-fos-license | Radiographic sarcopenia predicts postoperative infectious complications in patients undergoing pancreaticoduodenectomy
Background Recently, skeletal muscle depletion (sarcopenia) has been reported to influence postoperative outcomes after certain procedures. This study investigated the impact of sarcopenia on postoperative outcomes following pancreaticoduodenectomy (PD). Methods We performed a retrospective study of consecutive patients (n = 219) who underwent PD at our institution between January 2007 and May 2013. Sarcopenia was evaluated using preoperative computed tomography. We evaluated postoperative outcomes and the influence of sarcopenia on short-term outcomes, especially infectious complications. Subsequently, multivariate analysis was used to assess the impact of prognostic factors (including sarcopenia) on postoperative infections. Results The mortality, major complication, and infectious complication rates for all patients were 1.4%, 16.4%, and 47.0%, respectively. Fifty-five patients met the criteria for sarcopenia. Sarcopenia was significantly associated with a higher incidence of in-hospital mortality (P = 0.004) and infectious complications (P < 0.001). In multivariate analyses, sarcopenia (odds ratio = 3.43; P < 0.001), preoperative biliary drainage (odds ratio = 2.20; P = 0.014), blood loss (odds ratio = 1.92; P = 0.048), and soft pancreatic texture (odds ratio = 3.71; P < 0.001) were independent predictors of postoperative infections. Conclusions Sarcopenia is an independent preoperative predictor of infectious complications after PD. Clinical assessment combined with sarcopenia may be helpful for understanding the risk of postoperative outcomes and determining perioperative management strategies.
Background
Pancreaticoduodenectomy (PD) is one of the most complicated procedures in the field of gastroenterological surgery. As a result, the postoperative mortality and morbidity rates after PD remain high (2.8-3.5% and 40%, respectively) according to nationwide surveys performed in Japan [1,2]. Furthermore, infectious complications after pancreatic surgery are common during the postoperative course, and can lead to fatal outcomes [3]. High morbidity rates are associated with the need for further treatment and extended hospital stays. Thus, a precise method of predicting postoperative complications is urgently required to ensure patient safety following PD.
Recent studies have shown that computed tomography (CT)-assessed sarcopenia (radiographic sarcopenia), which is characterized by skeletal muscle depletion and is an objective predictor of frailty, is associated with poor outcomes in gastrointestinal and hepatopancreatobiliary malignancies [4]. Previous studies have also shown that sarcopenia is associated with short-term outcomes, especially postoperative pancreatic fistula (POPF), in patients undergoing PD [5][6][7][8]. However, the influence of sarcopenia on postoperative infectious complications has not been assessed in detail. We hypothesized that sarcopenia is associated with postoperative infections in patients undergoing PD.
With the above in mind, the aim of this retrospective study was to investigate postoperative outcomes following PD and to assess the influence of sarcopenia on short-term outcomes. In particular, we focused on the relationship between sarcopenia and infectious postoperative complications in patients following PD.
Patients
We retrospectively reviewed the medical records of 241 consecutive patients who underwent PD at the Okayama University Hospital between January 2007 and May 2013. This study was approved by the Ethics Committee of the Okayama University Graduate School of Medicine, Dentistry, and Pharmaceutical Sciences and Okayama University Hospital, and was conducted in accordance with the tenets of the Declaration of Helsinki. Due to the retrospective nature of the study, the need for informed consent was waived.
Clinical data
For all enrolled patients, the following demographic and clinical data were evaluated as preoperative factors: sex, age, height, weight, body mass index (BMI), body surface area (BSA), American Society of Anesthesiologists (ASA) physical status, laboratory values (albumin level and total lymphocyte count), liver function according to the Child-Pugh score, comorbidities, etiology of disease, and preoperative biliary drainage. ASA physical status was preoperatively evaluated by an anesthesiologist. Preoperative biliary drainage included endoscopic biliary drainage and percutaneous transhepatic biliary drainage. Data regarding operative time, amount of blood loss, portal vein reconstruction, pancreatic texture (soft or hard) assessed by the surgeon intraoperatively, and main pancreatic duct diameter, were recorded as intraoperative factors.
Surgical procedures and perioperative management
The standard surgical procedure was subtotal stomach-preserving PD. The basic reconstruction of the digestive system was performed by means of a modification of the method described by Child [9]. Pancreatojejunostomy was performed with a duct-to-mucosa anastomosis. Hepatojejunostomy was performed 10 cm distal to the pancreatojejunostomy. Gastrojejunostomy was performed by means of a two-layer anastomosis 50 cm distal to the hepatojejunostomy. A Braun anastomosis was also added.
The details of these surgical techniques have been reported previously [10,11]. In most cases, three drains were placed around the pancreatic and biliary anastomoses. All patients received prophylactic antibiotics every 3 h intraoperatively and for 3 d postoperatively.
Postoperative care was performed in a specialized surgical unit. Patients did not routinely receive somatostatin analogues and nutritional supplementation during the perioperative period. The drains were removed after postoperative day 5 if the drainage fluid was clear and no bacterial contamination was detected.
Image analysis and definition of sarcopenia
Diagnostic CT images taken within 3 months prior to surgery were chosen and evaluated using a CT image analysis system (Synapse Vincent; Fujifilm Medical, Tokyo, Japan). The total cross-sectional skeletal muscle area (SMA) at the level of the third lumbar vertebra was calculated in the study population using Hounsfield unit thresholds of −29 to +150 for skeletal muscle [12]. The details of the measuring method are as described previously [13,14]. In this study, the SMA (cm²) was divided by the BSA (m²) to obtain the SMA/BSA index (SBI; cm²/m²), which is our original method of defining sarcopenia [14]. In the present study, patients with values less than the gender-specific lowest quartile of SBI were considered to have radiographic sarcopenia.
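The SBI computation reduces to a few lines once the L3 slice and a muscle segmentation are available, as in the sketch below. The Du Bois formula for BSA is an assumption here (the paper does not state which formula was used), and the inputs are hypothetical: a 2-D array of Hounsfield units and a boolean mask of the muscle compartment.

```python
# Sketch: skeletal muscle area (SMA) at L3 and the SMA/BSA index (SBI).
# Inputs are hypothetical; the Du Bois BSA formula is an assumption.
import numpy as np

def sma_cm2(hu_slice, muscle_mask, pixel_spacing_mm):
    """Muscle area (cm^2): masked voxels within -29..+150 HU."""
    in_range = (hu_slice >= -29) & (hu_slice <= 150)
    n_voxels = np.count_nonzero(in_range & muscle_mask)
    return n_voxels * pixel_spacing_mm[0] * pixel_spacing_mm[1] / 100.0  # mm^2 -> cm^2

def bsa_m2(height_cm, weight_kg):
    """Body surface area by the Du Bois formula (an assumption here)."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def sbi(hu_slice, muscle_mask, pixel_spacing_mm, height_cm, weight_kg):
    """SMA/BSA index in cm^2/m^2."""
    return sma_cm2(hu_slice, muscle_mask, pixel_spacing_mm) / bsa_m2(height_cm, weight_kg)
```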
Postoperative outcomes
For each patient, data regarding the following parameters were collected: postoperative mortality, morbidity including infectious complications, and postoperative length of hospital stay. Postoperative mortality included all in-hospital deaths before discharge. To analyze morbidity severity, each postoperative event was assessed and graded according to the Clavien-Dindo classification [15]. The major postoperative complications were defined as Clavien grade ≥ 3. POPF and delayed gastric emptying (DGE) were defined according to the International Study Group of Pancreatic Surgery guidelines [16,17].
Infectious complications
In the present study, we defined infectious complications as all postoperative infectious diseases including wound infections, infected abdominal fluid, intra-abdominal abscess, bacteremia, catheter-related infections, pneumonia, cholangitis, enteritis, and urinary tract infections. The definition of these infectious complications conformed to those of the American College of Surgeons National Surgical Quality Improvement Program criteria (NSQIP) [18]. Infected abdominal fluid was defined as drainage fluid with a positive culture from surgically replaced drains. Intra-abdominal abscess was defined as intra-abdominal fluid collection with positive cultures identified by ultrasonography or CT with clinical signs. A positive culture was not necessarily required in cases in which the NSQIP criteria were met and clinical signs were consistent with infectious complications.
Risk factors for postoperative infections
Univariate and multivariate analyses were performed to identify the predictors closely related to infectious complications after PD among preoperative and intraoperative factors.
Simple scoring system using perioperative risk factors
A simple scoring system was constructed according to the results of the multivariate analysis of risk factors for postoperative infections. Patients were divided into three groups according to these risk factors, and the short-term outcomes of each group were then examined.
Statistical analysis
JMP version 11 software (SAS Institute, Cary, NC) was used for all statistical analyses. Data were presented as means, medians, and standard deviations for continuous variables. Categorical data were presented as proportions. Differences between groups were assessed using the Mann-Whitney U-test for continuous variables, and Fisher's exact test or chi-square test for categorical variables. To investigate the impact of prognostic factors associated with postoperative infections, we used a logistic regression model for univariate and multivariate analyses; odds ratios and 95% confidence intervals were calculated. A P-value <0.05 was considered statistically significant.
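As an illustration of the multivariate step, the sketch below fits a logistic-regression model with statsmodels and derives odds ratios with 95% confidence intervals, analogous to (but not identical with) the JMP analysis; the file and the 0/1-coded column names are hypothetical placeholders.

```python
# Sketch: odds ratios with 95% CIs for postoperative infection from a
# multivariate logistic regression. Data file and columns are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pd_cohort.csv")  # one row per patient, 0/1-coded factors
fit = smf.logit(
    "infection ~ sarcopenia + biliary_drainage + high_blood_loss + soft_pancreas",
    data=df,
).fit()

ci = fit.conf_int()
print(pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p": fit.pvalues,
}).round(3))
```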
Study population
Of the 241 patients who underwent PD, 22 were excluded for the following reasons: unavailable preoperative CT images (n = 16) and emergency surgery (n = 6). The demographic characteristics of the 219 patients (143 men [65.3%]; mean age, 65.9 years) are shown in Table 1. Pancreatic adenocarcinoma was the most common disease, occurring in 86 patients (39.3%). Preoperative biliary drainage was performed in 101 patients (46.1%). The mean operative time was 448 min (230-733 min) and the mean blood loss was 563 mL (10-3130 mL).
Measurement of body composition
The mean SBI values were 74.7 ± 9.9 cm²/m² for men and 58.3 ± 8.3 cm²/m² for women. The mean SBI was significantly lower for women than for men (P < 0.001). The cut-off values for the lowest quartiles of SBI were 68.5 cm²/m² for men and 52.5 cm²/m² for women. Accordingly, 52 patients were categorized as having sarcopenia.
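A minimal sketch of the quartile-based definition, using a hypothetical cohort table with columns "sex" and "sbi":

```python
# Sketch: gender-specific lowest-quartile SBI cut-offs and sarcopenia flag.
import pandas as pd

df = pd.read_csv("pd_cohort.csv")                  # hypothetical cohort table
cutoffs = df.groupby("sex")["sbi"].quantile(0.25)  # e.g. ~68.5 (men), ~52.5 (women)
df["sarcopenia"] = df["sbi"] < df["sex"].map(cutoffs)
print(cutoffs)
print(f"Sarcopenia prevalence: {df['sarcopenia'].mean():.1%}")
```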
Postoperative outcomes
The mortality and major complication rates for all 219 patients were 1.4% and 16.4%, respectively (Table 2). All three cases of mortality were caused by severe infectious complications. Of the 219 patients, 103 (47.0%) had at least one infectious complication.
The impact of sarcopenia
The clinical demographic characteristics of patients with and without sarcopenia are shown in Table 1. Patients in the sarcopenia group had significantly lower albumin levels and a higher rate of preoperative biliary drainage, but other factors, including intraoperative factors, were not significantly different. Postoperative outcomes associated with the presence or absence of sarcopenia are presented in Table 2. The mortality rate was significantly higher in the sarcopenia group (5.5% vs. 0%, P = 0.004). Although the incidences of major complications, POPF, and DGE were not significantly different between the groups, the sarcopenia group had a significantly higher infectious complication rate (67.3% vs. 40.2%, P < 0.001).
The length of postoperative hospital stay did not significantly differ between the groups.
Comparison between patients with and without postoperative infections
Patient characteristics are shown in Table 3. There were no differences between patients with and without infections with respect to sex, age, BMI, ASA physical status, laboratory values, liver function, comorbidities, etiology of disease, blood loss, vascular reconstruction, and pancreatic duct diameter. Sarcopenia, preoperative biliary drainage, operative time, and soft pancreatic texture were more common (or longer, in the case of operative time) in patients with infections. Table 3 shows the results of univariate and multivariate analyses used to identify the predictors closely related to infectious complications after PD. In univariate analysis, four variables (sarcopenia, preoperative biliary drainage, operative time, and soft pancreatic texture) were found to be significant risk factors. Multivariate analysis showed that sarcopenia (odds ratio = 3.43; P < 0.001), preoperative biliary drainage (odds ratio = 2.20; P = 0.014), blood loss (odds ratio = 1.92; P = 0.048), and soft pancreatic texture (odds ratio = 3.71; P < 0.001) were significant risk factors for infectious complications after PD.
Simple scoring system using perioperative risk factors
According to the number of significant risk (R) factors (sarcopenia, preoperative biliary drainage, blood loss, and soft pancreatic texture) in the multivariate analysis, patients were divided into three groups: the R0/1 group (n = 91), the R2 group (n = 89), and the R3/4 group (n = 39). Table 4 shows the short-term outcomes after PD determined using the risk-scoring system. The incidence rates of mortality and major complications were not different between the groups. However, the infectious complication rates after PD were 28.6% for the R0/1 group, 49.4% for the R2 group, and 84.6% for the R3/4 group (P < 0.001). In a logistic regression model, all differences between these groups were statistically significant.
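The scoring step itself amounts to counting risk factors and binning, as in this sketch (hypothetical 0/1-coded columns):

```python
# Sketch: risk score from four binary factors, binned into R0/1, R2 and R3/4.
import pandas as pd

df = pd.read_csv("pd_cohort.csv")  # hypothetical cohort table
risk = df[["sarcopenia", "biliary_drainage", "high_blood_loss", "soft_pancreas"]].sum(axis=1)
df["risk_group"] = pd.cut(risk, bins=[-1, 1, 2, 4], labels=["R0/1", "R2", "R3/4"])
print(df.groupby("risk_group", observed=True)["infection"].mean())  # rate per group
```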
Discussion
This retrospective study demonstrated that sarcopenia is an independent prognostic factor for infectious complications after PD. To the best of our knowledge, this is the first study to identify the prognostic significance of sarcopenia for postoperative infections following PD. Concerning surgical procedures, the standard procedure at our institution was changed to subtotal stomach-preserving PD in 2007. In addition, the reconstruction of the gastrojejunostomy was changed to an antecolic route in 2007 [11]. However, no major changes in surgical procedures have been introduced since 2007, so the results of this study should be valid.
In the entire cohort, the mortality, major complication, and infectious complication rates after PD were 1.4%, 16.4%, and 47.0%, respectively. The results obtained from our institution were better than those reported in previous papers [1,2,19]. However, the median length of stay was 30 d in the present study, which was much longer than the mean value reported from studies performed in Western countries.
Further improvements to surgical procedures and perioperative management are needed in order to improve postoperative outcomes after PD.
Sarcopenia is a syndrome defined by progressive and generalized loss of skeletal muscle mass and strength that occurs with aging or secondary to diseases [20,21]. In the present study, we used preoperative CT to evaluate sarcopenia. CT is considered to be an objective and precise method for assessing sarcopenia [22][23][24]. Regarding the definition of sarcopenia, we used SBI to evaluate skeletal muscle mass. We considered that SBI would more precisely evaluate skeletal muscle mass in patients with different physiques, and would be a superior index for evaluating skeletal muscle mass [14]. In the present study, demographic characteristics, including age and body parameters, were not significantly different between the two groups.
The effect of sarcopenia on postoperative complications, especially POPF, has been reported previously [5][6][7][8]. In the present study, the incidence rates of major complications and POPF were not significantly different between the two groups. This finding differs from the results of previous studies. However, the sarcopenia group had a significantly higher incidence of in-hospital mortality and infectious complications. All cases of mortality were also the result of severe infectious complications. Concerning infectious complications, the incidence rates of intra-abdominal abscess and infected abdominal fluid were significantly higher in the sarcopenia group. Sarcopenia might negatively affect the healing process at the pancreatic anastomosis [6].
Our multivariate analysis revealed that sarcopenia, preoperative biliary drainage, blood loss, and soft pancreatic texture were perioperative risk factors related to postoperative infections after PD. Preoperative biliary drainage has been reported to be a risk factor for postoperative infections following PD [25]. Operative factors and pancreatic factors such as blood loss and soft pancreatic texture have been reported to be associated with POPF after PD, which may also be related to postoperative infections [6]. However, this represents a new finding demonstrating the association between sarcopenia and infectious complications after PD.
In this study, we established a simple and comprehensive scoring system that predicted postoperative infections after PD. Thirty-nine patients (17.8%) had three or four risk factors (the R3/4 group), and 84.6% of these patients developed postoperative infectious complications. Assessing and managing these risk factors may improve postoperative outcomes, especially in high-risk patients. Nutritional intervention combined with physical exercise appears to be effective for the management of sarcopenia [26,27]. Furthermore, perioperative antibiotic strategies to prevent bile contamination could prevent infectious complications after PD [28]. Finally, the development of specialized surgical procedures and techniques might contribute to reducing the rate of infectious complications.
Despite the important findings reported in this study, several limitations should be discussed. First, this was a small, single-center, retrospective study. Because of this, there may be some selection bias with respect to the patients who underwent PD. Second, we did not evaluate functional muscle status in terms of grip strength, walking speed, or exhaustion because of the retrospective nature of the study. The evaluation of both muscle mass and muscle function has been recommended in the diagnosis of sarcopenia [21], and future studies will be required to assess not only muscle mass but also muscle function in sarcopenia. Third, we used the SBI for evaluating sarcopenia; although it appears to be a useful modality for assessing skeletal muscle mass, further studies are needed to assess the efficiency of the SBI for diagnosing sarcopenia. Fourth, it remains unclear which nutritional interventions and physical exercise regimens would be valid for patients undergoing PD, because few studies have dealt with the effects of such interventions on sarcopenic patients undergoing PD. Future studies are needed to examine the effects of perioperative interventions focusing on sarcopenia. Finally, there is insufficient evidence on the pathophysiology underlying the interaction between sarcopenia and infections. The depletion of skeletal muscle, a secretory organ of cytokines and peptides, and the increase in adipose tissue, a key component of the immune system, lead to the synthesis and secretion of several proinflammatory adipocytokines [29]. Thereby, decreased interleukin-15 and increased adipokine levels might mediate the interaction between sarcopenia and immune depression [30]. Accordingly, we hypothesize that sarcopenia reflects the patients' frailty, including impaired immune function, which ultimately leads to infections [14]. However, further research should investigate the molecular mechanism of sarcopenia's effect on outcomes.
Conclusions
In conclusion, the results of the present study indicate that sarcopenia is an objective and independent preoperative predictor of infectious complications after PD. Furthermore, assessing sarcopenia is easy and practicable. Accordingly, we propose that clinical assessment combined with sarcopenia may help clinicians to understand the risk of postoperative outcomes and determine perioperative management strategies. | 2017-06-28T08:20:08.491Z | 2017-05-26T00:00:00.000 | {
"year": 2017,
"sha1": "65180b422e3be8cf412378898b01c0f5062038f6",
"oa_license": "CCBY",
"oa_url": "https://bmcsurg.biomedcentral.com/track/pdf/10.1186/s12893-017-0261-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65180b422e3be8cf412378898b01c0f5062038f6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238999700 | pes2o/s2orc | v3-fos-license | Comparing environmental impacts of alien plants, insects and pathogens in protected riparian forests
The prioritization of alien species according to the magnitude of their environmental impacts has become increasingly important for the management of invasive alien species. In this study, we applied the Environmental Impact Classification of Alien Taxa (EICAT) to classify alien taxa from three different taxonomic groups to facilitate the prioritisation of management actions for the threatened riparian forests of the Mura-Drava-Danube Biosphere Reserve, South East Europe. With local experts we collated a list of 198 alien species (115 plants, 45 insects, and 38 fungi) with populations reported in southeast European forest ecosystems and included them in the EICAT. We found impact reports for 114 species. Eleven of these species caused local extinctions of a native species, 35 led to a population decrease, 51 to a reduction in performance in at least one native species, and for 17 alien species no effects on the individual fitness of native species were reported.
Introduction
Invasive alien species are a major threat to European forest ecosystems (CBD 2001;FAO 2009;Europe and Unece 2015). Globally, they have become the second most common extinction threat to endangered species due to the increasing human-mediated transportation of species far beyond their native range (Bellard et al. 2016). Previous studies on individual or multiple alien species have revealed severe impacts of alien species on ecosystem functions, ecosystem services, and biodiversity in forest ecosystems (Seidl et al. 2018); these impacts are linked to a multitude of impact mechanisms: parasitism, competition with native species, physical changes to the environment, and pathogen transfer (Kenis and Branco 2010;Pyšek et al. 2012;Ricciardi et al. 2013;Langmaier and Lapin 2020).
As a result of the rapidly increasing impact of biological invasions, the control of invasive alien species, i.e. any species or lower taxon of animals, plants, fungi, and other microorganisms whose occurrence in a region outside its natural range has negative impacts on an ecosystem and its services (CBD 2002), has been implemented in international, national, and regional policies and legislation such as the EU Biodiversity Strategy or EU Regulation No. 1143/2014 on invasive alien species. Their aim is to mitigate the ecological and socioeconomic effects of alien species. The few cross-taxon assessments performed have shown that terrestrial invertebrates, and terrestrial plants in particular, are associated with ecological and economic impacts in Europe (Vilà et al. 2010; Kumschick et al. 2015).
Riparian forests are highly vulnerable to biological invasion (Marinšek and Kutnar 2017; Medvecká et al. 2018). Their high nutrient levels and frequent natural and man-made disturbances facilitate invasions, and the rivers themselves serve as effective corridors for the spread of alien species (Kowarik 1992; Pyšek and Prach 1993; Schmiedel et al. 2013; Lapin et al. 2019). Management of alien species in riparian areas is therefore essential for preserving and restoring the biodiversity and ecosystem services of these endangered ecosystems (Rivers et al. 2019). However, the resources for conservation management in protected riparian forests are limited and require effective prioritization. A cross-taxon impact assessment of the alien species present in a protected area, or likely to be present in the near future because they have been observed in neighboring areas, could be useful for the prioritization of management actions and facilitate the evaluation of management methods (Roy et al. 2019; IUCN 2020b).
Besides horizon scanning frameworks (Roy et al. 2019) and risk assessment protocols, scoring systems for impact assessments have gained considerable importance, not only for policy makers and the scientific community, but also for conservation managers of protected areas. Several tools have been developed to quantify, compare, and prioritize the impact of alien species (Vilà et al. 2019). The generic impact scoring system (GISS), for example, focuses on the environmental and socio-economic impacts of alien species (Nentwig et al. 2016). Here, we follow the scoring system of the Environmental Impact Classification of Alien Taxa (EICAT), which classifies alien taxa in terms of the magnitude of their highest observed environmental impacts in recipient areas, based on the level of biological organisation affected in native species and the reversibility of the impact (Blackburn et al. 2014; Hawkins et al. 2015). Recently, the International Union for Conservation of Nature adopted EICAT as a global standard, similar to the IUCN Red List for extinction threat (IUCN 2020d).
In the past few years, EICAT has been widely applied and discussed (Kumschick et al. 2017; Kumschick et al. 2020). However, most impact assessments have focused primarily on EICAT classification within single taxonomic groups, such as global impact assessments of birds (Evans et al. 2016), ungulates (Volery et al. 2021), bamboos (Canavan et al. 2019), or amphibians (Kumschick et al. 2017), while only a few studies have performed cross-taxon assessments. Even fewer studies have undertaken cross-taxon assessments for a specific habitat or geographic region (Shivambu et al. 2020). This study investigates the cross-taxon impacts of alien species in order to facilitate the prioritization of management actions for the endangered riparian forests of the transboundary UNESCO Mura-Drava-Danube Biosphere Reserve in Southeast Europe. The riparian forest of the Biosphere Reserve was selected as a representative protected area for the European challenge of combating the spread of invasive alien species.
The objectives of the study are (1) to provide a cross-taxon impact assessment of alien taxa, in the Mura-Drava-Danube Biosphere Reserve, in terms of the magnitude of their highest observed environmental impacts in riparian temperate forests in Europe, (2) to determine differences in the impact severity and impact mechanisms of fungi, insects, and plants, with consideration for the time period since their introduction (residence time), (3) to identify knowledge gaps and the availability of data on alien taxa for application of the cross-taxon impact assessment. With our work we wish to support the prioritization of taxa for control and management within this vulnerable riparian ecosystem. Additionally, we quantify environmental impacts on forest ecosystems, thereby supporting forest management decisions.
Area description
The Mura-Drava-Danube Biosphere Reserve covers an area of nearly 850,000 ha in the countries of Austria, Slovenia, Hungary, Croatia and Serbia. The entire core zone of this important ecological corridor -a belt of riparian forests along the three rivers -has been designated as part of the Natura 2000 framework and contains protected areas of various categories. New parts of the Biosphere Reserve were recently nominated and now it is the largest protected river area in Europe and the only UNESCO Biosphere Reserve spanning across five countries. A share of 27% of the Biosphere Reserve is covered by forest. This portion increases to 61% within the core zone. Between the countries, there are remarkable differences regarding the ownership structure and forest management practices. The annual mean temperature ranges from 9.3 °C in the north-western part of the study area to 11.7 °C in the area between Đurđevac (Croatia) and Barcs (Hungary). The whole Biosphere Reserve shows strong variation of annual precipitation ranging from sites with nearly 1000 mm in the West to almost 500 mm in the North-Eastern Hungarian part of the Biosphere Reserve. The Biosphere Reserve is characterized by highly fertile plains along the rivers with an intense agricultural use for cereal, maize and pasture cropping on the one hand, and forestry on the other. The rivers are embedded in eutric Fluvisols (33%), surrounded by Luvisols (14%) and Cambisols (5%). Phaeozems (35%) are the dominant soil type.
Data collection
A list of 390 alien species (165 fungal species, including species of pseudo-fungi; 48 insect species; and 177 plant species) with reported populations in Southeast European forest ecosystems was extracted from the Global Invasive Species Compendium database using the invasive species Horizon Scanning Tool (beta) (incorporating data up to March 2019; CABI 2018). Additional information on alien species from the observations of Austrian, Slovenian, Croatian, Serbian, and Hungarian national experts and from the alien species alert and observation list of the "Life Artemis project" (DeGroot et al. 2017; Marinšek and Kutnar 2017) was included. In total, 188 alien species were excluded by the expert panel of assessors before the beginning of the assessment process, because these species do not generally occur in riparian forest ecosystems and exhibit a very low potential occurrence in the riparian forests of the Biosphere Reserve. Ultimately, 198 species (115 plants, 45 insects, and 38 fungi) were included in the list of alien species (Appendices 1, 2).
The 198 species were distributed among the assessors. All assessors and reviewers were invited to a workshop in September 2019 during which the EICAT assessment protocol was demonstrated and practiced. The assessors had different backgrounds and levels of expertise, e.g. in genetics, biodiversity conservation and forest science, and included junior staff/technicians. The applied assessment protocol followed the Guidelines for using the IUCN Environmental Impact Classification for Alien Taxa (EICAT) Categories and Criteria (IUCN 2020b, c; Volery et al. 2020). The assessors undertook a review of published literature and local reports to identify the environmental impacts of the selected 198 alien species in forests. The databases Google Scholar and Scopus were used along with Google web searches to collate publications. We adapted the EICAT protocol search string in order to focus only on impacts observed in forest ecosystems, using the following search terms: "forest" AND "Europe" AND ("introduced species" OR "invasive species" OR "invasive alien species" OR "IAS" OR "alien" OR "non-native" OR "non-indigenous" OR "invasive" OR "pest" OR "feral" OR "exotic"). Publications describing an environmental impact in other ecosystem types or in climatic regions other than temperate were not included. Each record was assessed separately. The impacts identified in the literature were classified according to their magnitude into five categories: minimal concern (MC), minor (MN), moderate (MO), major (MR) or massive (MV). Following the EICAT protocol, each alien taxon was assigned an EICAT category based on its highest observed impact across all recorded impacts. The impact mechanisms for each alien species were also identified from the assessed publications and categorized into one of 12 impact mechanism categories as defined in the EICAT guidelines (IUCN 2020b, c; Volery et al. 2020). Insect herbivory was included in the impact mechanism "Parasitism", because the insects concerned do not kill the trees but parasitize them. All assessments were independently cross-validated for consistency by an assigned reviewer in three review loops. The final scores were agreed upon by consensus among all authors, which was reached in constructive discussions in several online meetings.
Data analysis
Microsoft Excel 2010 was used for the data management, and R version 3.4.2 (R Core Team 2017), with the libraries "ordinal" (Christensen 2019), "stats" (R Core Team 2017) and "ggplot2" (Villanuev et al. 2016) for data analysis together with Python version 3.7 (Van Rossum and Drake 2009). For analysis of the respective alien species' native region, we categorized the area of geographic origin by continents (Africa, Asia, Australia, Europe, North (including Central) America, and South America). The time of the first record in the wild in Europe was included to analyze the influence of residence time on a species' impact. This information was obtained by reviewing scientific literature on the first records of each species.
We calculated the concurrence (Con) to analyze whether the obtained EICAT impact categories vary among impact reports, as well as the variance in impact magnitudes (Var) of the impact reports of each alien taxon across impact mechanisms and taxonomic groups. For both the concurrence and the variance, only alien species with two or more assessed impact reports were included. In total, 59 species with multiple impact reports were analyzed regarding the dissimilarity in the consensus on the impact category. For the concurrence we used the percentage of references within the most frequent category (the category with the most references assigned to the species' assessments). In the next step, we calculated the average percentage for (a) each mechanism and (b) each taxonomic group individually. The calculation of concurrence implied dividing, for each species $i$ individually, the number of references in the most frequent impact category ($n_i^{freq}$) by the total number of references for that species ($n_i^{total}$). We then summed these ratios over all species within each mechanism, respectively each taxonomic group, and divided the resulting sum by the number of species ($N$) for that mechanism or taxonomic group:

$$\mathrm{Con} = \frac{1}{N} \sum_{i=1}^{N} \frac{n_i^{freq}}{n_i^{total}} \times 100\%$$

A high percentage indicates high consensus, whereas a low percentage indicates low consensus. For the variance in impact magnitudes, we investigated the statistical variance of the different EICAT impact categories, calculating the average percentage for (a) each mechanism and (b) each taxonomic group individually. A high variance score indicates high dissent.
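The concurrence calculation can be expressed compactly; the sketch below computes it per taxonomic group from a hypothetical table of impact reports with columns "species", "group" and "category".

```python
# Sketch of the concurrence (Con) calculation described above.
import pandas as pd

reports = pd.read_csv("eicat_impact_reports.csv")  # one row per impact report
multi = reports.groupby("species").filter(lambda s: len(s) >= 2)  # >= 2 reports

def species_concurrence(s):
    """Share of a species' reports that fall in its most frequent category."""
    return s["category"].value_counts(normalize=True).iloc[0]

per_species = multi.groupby(["group", "species"]).apply(species_concurrence)
print((per_species.groupby(level="group").mean() * 100).round(1))  # Con in %
```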
We modelled the effect of the explanatory variables taxonomic group, geographic origin (southern or northern hemisphere), and years since the first record in the wild in Europe on the maximum EICAT impact category per species. As the response variable (impact category) was ordinal, we used cumulative link models (CLM). For the model selection, the Akaike Information Criterion (AIC) was used, and all models within 2 AIC units of the lowest AIC were chosen as the best models (Anderson and Burnham 2002).
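The authors fitted CLMs with the R package "ordinal"; an analogous proportional-odds model can be sketched in Python with statsmodels' OrderedModel (available from statsmodels 0.13), with hypothetical file and column names:

```python
# Sketch: cumulative link (proportional-odds) model for the ordinal EICAT
# impact category. Data file and columns are hypothetical placeholders.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("eicat_species.csv")
df["impact"] = pd.Categorical(
    df["impact"], categories=["MC", "MN", "MO", "MR", "MV"], ordered=True
)
X = pd.get_dummies(df[["group", "hemisphere"]], drop_first=True).astype(float)

res = OrderedModel(df["impact"], X, distr="logit").fit(method="bfgs")
print(res.summary())
print("AIC:", res.aic)
```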
Residence time was analyzed with respect to taxonomic group and impact category. An ANOVA was used with residence time as the response and taxonomic group, impact category and their interaction as explanatory variables. In the model selection, all models within 2 AIC units of the lowest AIC were chosen as the best models.
To analyze the data deficiency of the impact reports per species, we used a generalized linear model (GLM) with binomial error structure. The dependent variable was the presence or absence of an impact description. The independent variables were taxonomic group, years since the first recorded introduction to Europe, and geographic origin. We used backward stepwise model selection to arrive at the best model on the basis of the AIC (Burnham and Anderson 2002). All models within 2 AIC units of the lowest AIC were conditionally averaged.
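The data-deficiency model reduces to a binomial GLM on a per-species table; a minimal statsmodels sketch with hypothetical column names:

```python
# Sketch: binomial GLM for whether an impact description exists per species.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

species = pd.read_csv("alien_species_list.csv")  # one row per alien species
fit = smf.glm(
    "impact_described ~ C(group) + years_since_first_record + C(hemisphere)",
    data=species,
    family=sm.families.Binomial(),
).fit()
print(fit.summary())
print("AIC:", fit.aic)
```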
Results
In total, 303 references with information on 114 alien species were used, with an average of 2.7 ± 0.14 (mean ± SE) references per species. The average number of references for plants was 2.8 ± 0.06, which was lower than the average of 3.2 ± 0.06 for insects but higher than the average of 1.89 ± 0.05 for fungi. It is important to note that for most species only a single reference was available, as the mode for each taxonomic group was equal to 1. The references used extended across a time span of 39 years, with the oldest published in 1981 and the most recent in 2020. The results show that, in total, 11 alien species (plants: n = 6, fungi: n = 5) were assessed as having caused, on at least one occasion, a Major impact, which led to the naturally reversible local extinction of a native taxon (i.e. a change in community structure). Major was the most harmful impact category assigned among the 114 alien species assessed (Table 1); no alien species was assigned to the highest and most harmful impact category, Massive (naturally irreversible local or global extinction of a native taxon). Thirty-five alien species were assigned to the impact category Moderate (population decline of a native taxon), 51 to Minor (reduction in individual performance) and 17 to Minimal Concern (no or negligible impact on native species), across the taxonomic groups of plants, insects, and fungi, as shown in Figure 1. The full list of EICAT assessment results is provided in the Appendix 1: Table A1.
Most of the assessed alien species originate from North America (56.1%), followed by Asia (36.0%), Australia (1.3%), South America (0.69%), and Africa (0.6%), while 3.0% are native to Europe but non-native to the study area. The distribution of impact categories differed between taxonomic groups as well as with the number of years elapsed since the first introduction to Europe, i.e. residence time (Figure 1). Residence time differed only between taxonomic groups (LR Chisq = 95.52, df = 2, P < 0.001). Plants exhibited the longest residence time (years since the first recorded introduction to Europe), while fungi and insects were recorded as arriving in Europe more recently (Figure 2).
We classified nine different impact mechanisms for the 114 alien species through which environmental impacts were caused (Table 2). Overall, the most frequent impact mechanisms were Parasitism (49 alien species, or 43.0%), Competition (29 alien species, or 25.4%), and Structural impact on ecosystems (8 alien species, or 7.0%). This order varied among the different taxonomic groups: for fungi, the most frequent impact mechanism was Parasitism (87%), followed by Competition (11%) and, lastly, Hybridisation (1%). For insects, Parasitism occurred most frequently (90%), followed by Structural impact on the ecosystem (6%) and Predation (2%), whereas for plants, Competition (50%) occurred most frequently, followed by Parasitism (22%) and Structural impact on the ecosystem (9%). The impact category with the most references found was Moderate (MO) for plants, and Minor (MN) for fungi and insects (Figure 3). Furthermore, we identified differences in the variability of impact magnitudes (concurrence) across taxonomic groups (Appendix 2: Table A2): assessments of alien species from the taxonomic group insects varied the least (highest concurrence, 87.5%, SD = 0.1), followed by fungi (concurrence = 82.2%, SD = 2.9) and plants (concurrence = 65.9%, SD = 15.2). The concurrence on impact categories across impact mechanisms was the lowest for Competition (concurrence = 66.6%, SD = 4.3) and the highest for Transmission of diseases (concurrence = 100%, SD = 0.0) (Table 2).
The best model explaining the impacts of the invasive alien species included the explanatory variables taxonomic group and geographic origin (hemisphere) (Table 3). Parameter estimates were provided with likelihood confidence intervals. Insects had a significantly lower impact on native forests than fungi, while plants had a similar impact to fungi (Table 3). Alien species from the Southern hemisphere had a lower impact than species from the Northern hemisphere, although the difference was not significant (Table 3). We were unable to conduct an EICAT impact assessment for 84 alien species due to data deficiency. For data deficiency, the averaged model included the year of introduction, the taxonomic group and the geographic origin (Table 4, Figure 4). The averaged model showed that, for all taxonomic groups, impact descriptions were more likely to be found for recently introduced species (Table 4). Furthermore, fungi had a higher probability of having a described impact than insects and plants (Table 4). There was no difference in data deficiency between alien species from the two hemispheres.
Discussion
The management of harmful invasive alien species has become one of the greatest technical and financial challenges for the management of protected areas (Foxcroft et al. 2019; Mill et al. 2020). The prioritization of alien taxa is essential for setting cost-effective management goals for high-priority species that have a severe negative impact. This is particularly important when a large pool of alien species is present (Campagnaro et al. 2018; Fogliata et al. 2021), as in the riparian forest of the UNESCO Mura-Drava-Danube Biosphere Reserve. As with many other protected areas in Europe, the Mura-Drava-Danube Biosphere Reserve also relies on transnational cooperation to face the common cross-border challenge of adapting forest management to climate change, as well as for the conservation of riparian forest ecosystems (Turnock 2002; Sallmannshofer et al. 2021). A prioritization of alien species is especially important to combat the spread of the most harmful invasive alien species by harmonizing the management efforts of the various administrations in the transboundary protected area.

Table 3. Results from the cumulative link model (CLM) demonstrating the relationship between the impact category of the EICAT impact assessments and the explanatory variables taxonomic group and native geographic origin, showing the parameter estimates for the minimum adequate CLM; * P < 0.05, ** P < 0.01. The taxonomic groups were compared to plants, and the Southern hemisphere is compared to the Northern hemisphere. The estimate shows the slope, or the estimated difference from the reference level.
Using the EICAT assessment, this study successfully categorized the impacts on European forest ecosystems caused by 114 alien species of three taxonomic groups (plants, insects, and fungi) with reported populations in Southeast European forest ecosystems, all of which might pose a threat to the UNESCO Mura-Drava-Danube Biosphere Reserve. Information on environmental impacts was available for 90% of the fungi, 52% of the plants and 44% of the insects. The fact that more information was available for fungi is likely due to the small number of fungi included on the list of potentially occurring alien species in the assessment area (only 19% of the 189 alien species were fungi). Moreover, although the tools and methods to identify fungal species have benefited from advances in molecular biology, proper identification, as well as the invasion biology of fungi and fungal-like organisms, has not yet been sufficiently explored. This is of particular importance as control measures depend on proper identification of diseases and their causal agents (Chetana et al. 2021). In addition, in this study we specifically assessed the impact of alien taxa on European forest ecosystems, which are highly affected by invasive alien species (Seidl et al. 2014). Therefore, impact reports were limited to observed impacts on European forest ecosystems; well-described impacts on agriculture and horticulture (DiTommaso et al. 2016; Aneva et al. 2018) were not included in the assessment and are not covered by EICAT. This focus on impacts on forest ecosystems allowed us to provide a cross-taxon classification for the protected riparian forests of the Biosphere Reserve, as well as to identify reported impact mechanisms and knowledge gaps, and to facilitate discussions among local experts and stakeholders in the assessment area. Furthermore, our study shows that many invasive alien species particularly affect riparian forest ecosystems. For instance, the fungus Hymenoscyphus fraxineus caused a population decline of the tree species Fraxinus excelsior, which is an important target tree species of the habitat type 91F0 (riparian mixed forests of Quercus robur, Ulmus laevis and Ulmus minor, Fraxinus excelsior or Fraxinus angustifolia, along the great rivers (Ulmenion minoris)) under the EU Habitats Directive. It has been shown that Fallopia spp. change the chemistry of the litter layer and outcompete native species; this especially affects the herb layer but also the growth of saplings, and hence the regeneration of riparian forests (Lavoie et al. 2018).
The assessment of the current impact information showed that none of the 114 alien species were categorized in the EICAT impact category Massive (MV), because the reported impacts are unlikely to result in irreversible extinctions of native species populations in the context of EICAT (IUCN 2020a). However, six alien plants and five alien fungi were found at the top of the ranking list of harmful alien species, classified in the EICAT category Major (MR), leading to local extinctions of native species in European forest ecosystems. For example, the Himalayan balsam (Impatiens glandulifera Royle) has been observed to have negative impacts on herbaceous native plant species diversity due to shading, which led to local extinctions (Čuda et al. 2017; Tanner and Gange 2020). The impacts of I. glandulifera are recognized across Europe, and this species is therefore also included on the list of invasive alien species of Union concern (Regulation (EU) 1143/2014). In total, five alien plants in the upper ranking of this study (Major impact: Impatiens glandulifera, Humulus scandens; Moderate impact: Heracleum mantegazzianum, Asclepias syriaca, Ailanthus altissima) are considered invasive species on the Union List and are therefore subject to the restrictions and measures set out in Regulation (EU) 1143/2014. Other alien species at the top of the ranking list of harmful alien species in this paper, such as the False indigo (Amorpha fruticosa L.), showed severe and well-documented impacts on the native species composition of invertebrates, plant diversity and forest regeneration in riparian areas of South-East Europe (Nagy et al. 2018; Kiss et al. 2019), which are challenging to control (Szigetvári 2002; Brigić et al. 2014). Based on these results, we suggest considering the inclusion of Amorpha fruticosa as an invasive species on the EU Union List to facilitate an effective early warning system and rapid eradication measures throughout Europe, as it has so far established mainly in southern EU member states. Furthermore, only one invasive plant species causing Major impacts in this study, Heracleum mantegazzianum (rank 22), is ranked among the "more than 100 worst" alien species list for Europe, while two top-ranked fungi, Ophiostoma novo-ulmi (rank 29) and Hymenoscyphus fraxineus (rank 18), were identified as species of the greatest concern in Europe (Nentwig et al. 2018). The other identified alien species with high impacts were missed by Nentwig et al. (2018), which indicates that the policy-relevant listing approach lacks some of the more harmful alien species.
The invasive fungi at the top of the ranking in this study include globally recognized forest pathogens that parasitize native trees, such as Ophiostoma novo-ulmi, which causes the vascular wilt disease of elms known as Dutch elm disease. The disease has resulted in a massive, destructive pandemic in which most of the native elms (Ulmus spp.) have died (Alford and Backhaus 2005; Brunet et al. 2013). Breeding of several resistant clones and the reintroduction of resistant native elms mitigated the threat of extinction (Brasier and Webber 2019; Jürisoo et al. 2019; Martín et al. 2019). Another invasive ascomycete fungus among the high-ranked alien species, Hymenoscyphus fraxineus, causes ash dieback, a lethal disease of ash trees (Fraxinus spp.) in Europe since the early 1990s (Cross et al. 2017; Enderle et al. 2019). The observed impacts on the forests of South-East Europe, including a riparian zone, and the generalist nature of the pathogen led to a 'Major' classification of the regionally fast-spreading invasive fungus Botryosphaeria dothidea, which causes disease on both native (e.g. Populus spp.) and introduced forest tree species (Jurc et al. 2006; Karadzic et al. 2020; Zlatković et al. 2018). Practical management options for B. dothidea and other members of the Botryosphaeriaceae family are limited. Biological control methods against the diseases caused by these fungi are being developed, but Botryosphaeriaceae invade xylem vessels, making the application of pesticides or biological control products difficult or even inefficient (Aćimović et al. 2019; Karličić et al. 2020).
Invasive alien insects on average showed the lowest impacts. This is similar to the only other quantitative cross-taxa comparison (based on the Generic Impact Scoring System, GISS), which also included non-forest animal and plant species. Most of the insect species in the study area feed on leaves at levels that do not detrimentally affect the performance of the affected trees, and only a few references report damage to native trees. For example, the fruit- and nut-breeding Nearctic insect Chymomyza amoena was assigned to the lowest impact category, Minimal Concern (MC), because no negative impact on native host species was observed despite its rapid spread since its arrival in Europe in 1975. However, the impact classification of alien insects may increase over time if more research is conducted on other mechanisms, such as competition with native species, which was recently discussed by Paulin et al. (2020) for the North American oak lace bug (Corythucha arcuata). Feeding by C. arcuata can lead to a shortage of food for specialized oak-associated species and can cause larger negative impacts than previously expected (Paulin et al. 2020). Further, some invasive alien insects with a high negative environmental impact, such as the emerald ash borer (Agrilus planipennis), were not included in the EICAT assessment of this study, as the species has not yet been found and is not currently expected to occur in the Biosphere Reserve.
Alien species from the Northern hemisphere have higher environmental impacts than alien species from the Southern hemisphere. Residence time, measured as the time since an alien species was first recorded in Europe, was linked to origin, especially for plants: alien plants showed an average residence time of 242 years, followed by 62 years for fungi and 60 years for insects. Alien species from the Northern hemisphere have been present in Europe for a longer period than alien species from the Southern hemisphere. They also occur more frequently, as only 2.5% of the alien species in the study area originate from the Southern hemisphere.
The EICAT classification revealed the impact mechanisms of 85% of the assessed alien species. Two impact mechanisms accounted for 68% of impacts across taxonomic groups: Parasitism for fungi and insects, and Competition for plants. This may partly be due to the different focus of the assessed studies; most references on insects and fungi studied their impact on the health of host trees. The impact reports assessed for this study were mostly published by experts in forest protection for fungi and insects, and by experts in invasion biology for plants. This may explain the different focus between alien species that impact tree species of economic interest (insects and fungi) and alien species that impact species richness (plants). However, indirect impact mechanisms are more difficult to analyse; impact reports therefore usually focus on studying the direct impact mechanisms rather than the indirect ones. Especially for insects, indirect impacts are chronically underestimated, because research is mainly focussed on the effects of insects on individual trees.
The EICAT classification identified knowledge gaps for 84 alien species, which were assigned to the category Data Deficient (DD). We had to assign species to the category DD for three reasons: first, no references were found on the species; second, references were found, but no impact was described or observed that could be assigned under EICAT; third, references describing impacts were found, but these impacts were not reported from European forest ecosystems. We suggest prioritizing research efforts on alien species with a commonly known impact outside of forests to investigate their potential impact on European forest ecosystems. For example, the invasive alien cicada Stictocephala bisonia caused plant damage and crop losses in Europe, but its impact on forest ecosystems has not been studied, although the species has been spreading in European forests (Walczak et al. 2018; Hörren et al. 2019). Furthermore, the risk of hybridization and competition of the Asian weeping willow (Salix babylonica L.) with native species has been reported for forest ecosystems outside Europe, but these impacts have not yet been investigated for European forest ecosystems (Amy and Robertson 2001; Richardson and Rejmánek 2011; Thomas and Leyer 2014). For some alien species, valuable references for forests on other continents that are similar to European temperate forests in ecological conditions were not included in this study but could provide interesting results for the prioritization of alien species in forest ecosystems. Paap et al. (2020) encourage the collaboration of the two disciplines, invasion biology and plant pathology, to increase the success and efficiency of global biosecurity (Hulme 2021). In this study, we found that the interdisciplinary knowledge of the team of assessors is beneficial for cross-taxa EICAT assessments, as it increased the understanding of the magnitude of the environmental impacts of alien species from different taxonomic groups. The classification of alien species into impact categories is needed for both forest health and invasive species management, as harmful alien species can cause great socio-economic impacts through decreased timber production and increased management expenses (Hauer et al. 2020). We therefore strongly suggest performing a socio-economic impact assessment with SEICAT (Bacher et al. 2018) so that it can be included in further management considerations.
This study has several implications for forests and forestry. Traditionally, forest management in the context of invasive alien species has focused on pests and diseases (Liebhold 2012). Many of these are also invasive alien species with a huge impact on forests, and the potentially harmful ones are listed in EU regulations as quarantine species (Schrader and Unger 2003). Our study shows that fungi have a very high environmental impact in forests, but plants are also represented among the highest-impacting invasive alien species in the riparian forests of the transboundary Mura-Drava-Danube Biosphere Reserve in Southeast Europe. Therefore, more attention should be paid to invasive plants and the ground-layer vegetation.
Conclusions
We see the classification of alien species according to the magnitude of their environmental impact as an important tool for prioritizing the species on which conservationists and forest managers should focus their immediate attention, and for policy makers to ensure funding for protecting our forests from invasions. Especially with respect to the high biodiversity and heritage value of riparian forest ecosystems (Richardson et al. 2007; Ellison et al. 2017), as well as their numerous abiotic and biotic threats, the ranking approach should be considered complementary to a site-led management approach, where prioritization is driven by the urgency of control relative to the extinction of native species (Downey et al. 2010).
We demonstrated that EICAT assessments were useful to prioritize alien species in the local assessment area and to refocus research efforts on recent knowledge gaps. More research on the impacts and impact mechanisms of more recently introduced alien species, especially insects and fungi, is needed to implement effective management measures in the early stage of the invasion. Additionally, analysis of available control methods is another prerequisite for planning conservation activities.
We join the recommendation that EICAT assessments should be performed as transparently as possible, which allows an open discussion of the results. This study is only the second, after Volery et al. (2021), to publish the original impact data that led to the EICAT classifications. The EICAT assessment can also be repeated after some time, as updated impact evidence becomes available or new alien species occur in the region of the assessment area (IUCN 2020a). In conclusion, we recommend applying the EICAT protocol when planning conservation activities, because it decreases the danger of overlooking potential high-risk alien species. Although we are aware that the assessments reported here are a snapshot in time and space and that impact magnitudes might change over time, a repeated application of EICAT will be very useful to study spatio-temporal trends in impact magnitudes.
"year": 2021,
"sha1": "7adbc16a4f2b88e35d81179ac9d6a7224c48c9ff",
"oa_license": "CCBY",
"oa_url": "https://neobiota.pensoft.net/article/71651/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d2aee02055e0eae82a90969eb3a1fabf2d3207a3",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Bilingual Generation of Weather Forecasts in an Operations Environment
In 1986 the first experiments in text generation applied to weather forecasts resulted in a prototype system (RAREAS [6,3]) for producing English marine bulletins from forecast data. Subsequent work in 1987 added French output to make the initial system bilingual (RAREAS-2 [11]). During 1988-1989 a full-scale operational system was created to meet the needs of daily marine forecast production for three regional centres in the Canadian Atmospheric Environment Service. In contrast to the earlier systems, the most recent one uses general models for both text planning and sentence realization (see sections 4 and 5 below).
Introduction
This new implementation, dubbed FoG for Forecast Generator, may constitute one of the first "industrial" uses of text generation. FoG is of interest to computational linguists for three additional reasons:
• the conceptual input to the text generation process is derived from data that also drive a graphic display on a workstation for forecasters; this determination of text from a selected subset of graphically displayed data represents an important paradigm for the transformation of information;
• conceptual processing results in an "interlingual" representation, a kind of deep syntactic structure for both English and French in this sublanguage;
• sentence generation is carried out using a "streamlined" version of the Meaning-Text linguistic model; this may represent the first time that such a general model has been adapted to the descriptive problems arising in telegraphic sublanguages.
The Graphical Environment of Weather Forecasting
Operational meteorologists normally work with graphical representations of the information available to them. "Charts" are used to display the large volumes of observational data and also the results of global simulations of the atmosphere. The graphical entities displayed on these charts (such as weather fronts and low pressure systems) are manipulated to adjust for more recent data, and for perceived errors in the simulations. This results in manually created weather depictions which are valid at some future time (24 to 36 hours in the future).
The weather situation is always being monitored and updated as new information is received and assimilated. During the normal course of events, much of the communication between forecasters is done using these charts. When it is time to write a forecast for some user community, the forecaster has to extract the pertinent information from these charts and recast it into a structured text. In addition, the 'primary' information taken from the charts has to be modified for local geographic effects. The forecaster appears to do this while the text is being composed. This mental transposition of meteorological information from graphical to text form is believed to be open to a number of subjective errors. In addition, the pressure to compose text often conflicts with the scientific demands of analyzing an emerging weather situation.
FoG is part of the recently implemented Forecast Production Assistant (FPA) [7], which uses interactive computer graphics to allow the meteorologist to view and edit a display of the weather situation. All of the fields produced by the large scale computation are directly available on the FPA together with any manually produced products. This makes it possible to obtain numerical values directly from the charts and to use them in other applications such as FoG.
All of the fields required to produce forecast text can be obtained from the computer graphics.
From Data to Concepts
A sampling procedure is used to determine values of these fields at specific latitudes and longitudes which have been pre-selected as being representative of weather conditions over a specified geographic area.
Computer animation techniques are used to interpolate between the standard chart times (normally every 12 hours) to whatever time resolution is required for the text product. Currently, charts are available at intervals of three hours through the forecast period. The problem is that this yields nine values for a 24 hour forecast. Practical considerations limit the number of events (e.g., shifts in wind speed or direction) in a forecast to three or four, depending on the severity of the weather. The conceptual phase of the processing treats the sampled data so that only the significant events in time and space are passed on to textual encoding.
Conceptual processing involves several stages: 1) events requiring "weather warnings" are identified and stored before any data smoothing is done; 2) sampled data is smoothed with respect to time so that only the significant weather changes are retained; 3) spatial smoothing is done so that areas sharing similar weather conditions can be grouped together in the text. We have noticed, however, that the notion of "significance" is partly dependent on the ability of the lexicon of the forecast language to make semantic distinctions. Thus, a wind change of 30 degrees is more likely to be judged significant when it crosses the boundary between, say, northeasterly and easterly, than when it stays entirely within the range of one of these terms. The semantic granularity of temporal adverbs has a similar "anticipatory" effect on the way generalizations are made over time. This constitutes a kind of filter on content determination that precedes formal text planning.
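As an illustration of the temporal smoothing stage, the Python sketch below keeps a sampled value only when it differs significantly from the last retained one. This is a simplified stand-in for FoG's actual algorithm (which, as noted above, also accounts for lexical boundaries); the thresholds and function name are invented for the example.

```python
def smooth_samples(samples, speed_threshold=5, direction_threshold=45):
    """Keep only samples that differ significantly from the last kept one.

    `samples` is a list of (hour, wind_speed_knots, wind_direction_degrees)
    tuples at 3-hour intervals; the thresholds are illustrative, not FoG's.
    """
    if not samples:
        return []
    kept = [samples[0]]
    for hour, speed, direction in samples[1:]:
        _, last_speed, last_dir = kept[-1]
        # Smallest angular difference between two compass directions.
        dir_change = abs((direction - last_dir + 180) % 360 - 180)
        if abs(speed - last_speed) >= speed_threshold or dir_change >= direction_threshold:
            kept.append((hour, speed, direction))
    return kept

# Nine 3-hourly samples collapse to the few significant wind events.
samples = [(0, 10, 90), (3, 11, 92), (6, 12, 95), (9, 18, 100),
           (12, 19, 100), (15, 20, 150), (18, 20, 152), (21, 21, 150), (24, 22, 155)]
print(smooth_samples(samples))  # [(0, 10, 90), (9, 18, 100), (15, 20, 150)]
```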
Text Planning

Text planning in FoG consists of three stages: content determination, text structuring and interlingua production. Content determination covers the problems of (1) converting the smoothed data on significant meteorological events into complex objects appropriate for inferencing, one object for each meteorological event of interest, and (2) using the structured data objects to compute additional concepts needed to talk about transitions between weather events. The output of content determination is, for each forecast area, an enriched data object called a "text content representation".

Text structuring consists basically of finding the optimal way of cutting each text content representation (the conceptual representation corresponding to one future text) into sentence-sized chunks of information ("sentence partitioning") within the complex text structure. The chunks are then linearly ordered according to principles that are sometimes domain-specific, but often more general (e.g., temporal sequence). There is a subsidiary problem of making full or partial copies of certain concepts to assure continuity of reference between consecutive sentences. The output of the text structuring process gives, for each forecast area, a partitioned and possibly enriched structure called the "text representation".

The final stage in text planning involves converting the single partitioned text representation into an actual sequence of conceptual representations for individual sentences. The strong similarity between forecast styles and structures used in Canadian marine forecasts in the two official languages makes it possible to formulate a single interlingual structure, which can map quite directly to the "deep" syntactic structure of the corresponding sentence in either English or French. The primary issue here is the identity of information conveyed in the two parallel sublanguages, and the fact that sentence scoping may be performed in identical ways on the text content representations used for English/French. There is no guarantee that such an interlingua would suffice for a language using a very different conceptual system or communication style for weather phenomena (e.g., Inuktitut).

Meaning-Text Realization Component

The last part of forecast generation involves the relatively well-developed technique of sentence realization. By this we mean the conversion of interlingual representations of English/French sentences into acceptable word strings in one or the other of these two languages. To guarantee generality and long-term flexibility of linguistic modelling, we have chosen to use the Meaning-Text linguistic theory of Mel'čuk [8], which has also served as the framework for text generation in other technical sublanguages [5,4]. Because of the lack of semantic paraphrase in the forecasting sublanguage considered, however, we have eliminated the semantic net representations from the processing stages, passing directly from interlingual representations to deep syntactic dependency trees [10]. We have implemented a fragment of an existing Meaning-Text model for English [9] and adapted this model for French.
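To make the interlingua idea concrete, here is a deliberately simplified Python sketch in which a single language-neutral event structure is rendered into parallel English and French telegraphic forecast sentences. FoG's actual realization passes through Meaning-Text deep syntactic dependency trees rather than string templates, so this is only an analogy, and the lexicon entries are invented (French accents are omitted for simplicity).

```python
# One interlingual event; both realizers consume the same structure.
event = {"type": "wind", "direction": "E", "speed": (15, 20), "time": "evening"}

LEX = {
    "en": {"dir": {"E": "easterly"}, "time": {"evening": "this evening"},
           "tmpl": "Winds {dir} {lo} to {hi} knots {time}."},
    "fr": {"dir": {"E": "de l'est"}, "time": {"evening": "ce soir"},
           "tmpl": "Vents {dir} de {lo} a {hi} noeuds {time}."},
}

def realize(event, lang):
    """Map the shared interlingual event onto one language's surface form."""
    lex = LEX[lang]
    lo, hi = event["speed"]
    return lex["tmpl"].format(dir=lex["dir"][event["direction"]],
                              lo=lo, hi=hi, time=lex["time"][event["time"]])

print(realize(event, "en"))  # Winds easterly 15 to 20 knots this evening.
print(realize(event, "fr"))  # Vents de l'est de 15 a 20 noeuds ce soir.
```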
Implementation
FoG is written in Quintus Prolog and runs on a Hewlett-Packard 9000 workstation as part of the FPA system. The graphics software on the FPA workstation is programmed mostly in C. As of April 1990, the entire FPA is undergoing testing by three regional weather centres in Eastern Canada and is being phased into daily production, initially during one of the three daily work shifts.
Future Plans
Since FoG is now configured only to produce marine forecasts for the Halifax, Gander and Great Lakes regions of Canada, an early priority is to adapt the software (mostly the text planner) to the different content and style of forecasting found in Pacific Canada and other marine regions, and to specialized marine forecasts (e.g., for small craft). Concurrently, investigation should continue into extending the system to other forecast types, including agricultural and general public forecasts. We expect that our linguistic model will also facilitate the addition of high-quality voice output as an option at some future time.
"year": 1990,
"sha1": "87dcdf02a4782ae1b43e8519b718a24b738ef232",
"oa_license": null,
"oa_url": "https://doi.org/10.3115/991146.991205",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "177ff88bbcebd9ae1c107d85536c90755d5e411f",
"s2fieldsofstudy": [
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Natural product pectolinarigenin inhibits osteosarcoma growth and metastasis via SHP-1-mediated STAT3 signaling inhibition
Signal transducer and activator of transcription 3 (STAT3) has important roles in cancer aggressiveness and has been confirmed as an attractive target for cancer therapy. In this study, we used a dual-luciferase assay to identify that pectolinarigenin inhibited STAT3 activity. Further studies showed pectolinarigenin inhibited constitutive and interleukin-6-induced STAT3 signaling, diminished the accumulation of STAT3 in the nucleus and blocked STAT3 DNA-binding activity in osteosarcoma cells. Mechanism investigations indicated that pectolinarigenin disturbed the formation of the STAT3/DNA methyltransferase 1 (DNMT1)/histone deacetylase 1 (HDAC1) complex in the promoter region of SHP-1, which reversely mediates STAT3 signaling, leading to the upregulation of SHP-1 expression in osteosarcoma. We also found pectolinarigenin significantly suppressed osteosarcoma cell proliferation, induced apoptosis and reduced the levels of the STAT3 downstream proteins cyclin D1, Survivin, B-cell lymphoma 2 (Bcl-2), B-cell lymphoma extra-large (Bcl-xl) and myeloid cell leukemia 1 (Mcl-1). In addition, pectolinarigenin inhibited migration and invasion and reversed the epithelial-mesenchymal transition (EMT) phenotype in osteosarcoma cells. In spontaneous and patient-derived xenograft models of osteosarcoma, we found that administration (intraperitoneal) of pectolinarigenin (20 mg/kg/2 days and 50 mg/kg/2 days) blocked STAT3 activation and impaired tumor growth and metastasis with superior pharmacodynamic properties. Taken together, our findings demonstrate that pectolinarigenin may be a candidate for osteosarcoma intervention linked to its STAT3 signaling inhibitory activity.
Osteosarcoma is the most common malignant bone tumor in children and adolescents and arises from cells of mesenchymal osteoblast origin. 1,2 Despite advances in surgery and multiagent chemotherapy, nearly 30% of patients still die from osteosarcoma, 2 and survival rates for osteosarcoma have remained relatively low over the past two decades. 3 Therefore, it is necessary to develop novel therapeutic approaches for osteosarcoma treatment.
Signal transducer and activator of transcription 3 (STAT3) is an important transcription factor involved in proliferation, survival, apoptosis, angiogenesis and metastasis. 4,5 Upon stimulation by cytokines (interleukin-6 (IL-6), IL-11, etc.) and growth factors (EGF, PDGF, etc.), STAT3 can be phosphorylated at tyrosine residue 705. STAT3 phosphorylation facilitates its homo- and heterodimerization, and the dimer then enters the nucleus where it regulates transcription, leading to increased transcription of downstream genes such as Vegf, Bcl-2, BcL-xL, Survivin, XIAP and MMPs. 6 Src homology region 2 (SH2) domain-containing phosphatase 1 (SHP-1) belongs to a family of non-receptor protein tyrosine phosphatases (PTPs) and acts as a negative regulator of numerous signaling pathways. 7 Previous studies reported that SHP-1 tyrosine phosphatase inhibited JAK/STAT3 signaling and contributed to antitumor activity in a wide variety of tumors. 8,9 Recent studies have indicated that STAT3 is constitutively activated in many cancers, including, but not limited to, head and neck squamous cell carcinoma (HNSCC), 10 breast cancer, 11 ovarian cancer, 12 lung cancer 13 and leukemia. 14 With respect to osteosarcoma, the expression level of p-STAT3 is strongly associated with prognosis, and approximately 20% of osteosarcomas were shown to express high levels of p-STAT3 Tyr705 . 15 The activated STAT3 pathway is vital for cell growth and metastasis of human sarcoma. 16 Consequently, the STAT3 pathway may represent a target for therapeutic intervention in osteosarcoma.
A variety of STAT3 inhibitors have been shown to inhibit tumor cell growth and metastasis both in vitro and in vivo. 17,18 Agents derived from natural sources have gained considerable attention from researchers and clinicians because of their safety, efficacy and immediate availability, and they are among the best sources of drugs and drug leads for novel drug discovery. Natural agents such as Cucurbitacin E, 19 Galiellalactone, 20 Atiprimod 21 and betulinic acid 9 have shown significant efficacy in blocking STAT3 activation. Pectolinarigenin, a flavonoid compound that can be isolated from the aerial parts of C. chanroenicum, has been shown to possess numerous biologic activities, such as anti-inflammatory and anti-allergic effects. 22,23 Some studies have also reported that pectolinarigenin represses cancer growth in vitro, including in lung cancer, hepatocellular carcinoma, melanoma and colorectal adenocarcinoma. 23 However, the function and regulatory mechanism of pectolinarigenin in osteosarcoma growth and metastasis are still not well understood.
In our current study, we used a dual-luciferase assay to reveal that the natural product pectolinarigenin counteracts STAT3 activity. We found pectolinarigenin inhibited constitutive and IL-6-induced STAT3 phosphorylation, blocked STAT3 DNA-binding activity and blocked STAT3 cytoplasmic-to-nuclear translocation in osteosarcoma cells. We also showed pectolinarigenin blocked a transcription repression program composed of STAT3/DNA methyltransferase 1 (DNMT1)/histone deacetylase 1 (HDAC1), thus restoring the expression of the STAT3-negative mediator SHP-1. Functional assays and western blot analyses indicated pectolinarigenin suppressed osteosarcoma cell growth and motility and reduced the expression of STAT3-related proteins. We further demonstrated the inhibitory efficacy of pectolinarigenin on osteosarcoma growth and metastasis using preclinical animal models. In conclusion, these findings imply pectolinarigenin can act as an anticancer agent in osteosarcoma via inhibiting STAT3 signaling.
Results
Pectolinarigenin inhibits STAT3 signaling in osteosarcoma. STAT3 is constitutively activated in osteosarcoma, and prognostic value has been associated with phosphorylated STAT3 signatures. As such, targeting STAT3 signaling with small-molecule inhibitors is an emerging therapeutic strategy for osteosarcoma. Screening with a dual-luciferase reporter assay, we identified in our internal Chinese medicine chemical library a flavonoid compound, pectolinarigenin (MW: 314.29), with dose-dependent STAT3 inhibitory activity (Figure 1a). The chemical structure of pectolinarigenin is shown in Figure 1b. Immunoblotting with an antibody recognizing the p-Tyr705 residue of STAT3 showed the constitutive activation of STAT3 was blocked by pectolinarigenin (Figure 1c). In response to growth factor or cytokine stimulation, the Tyr705 residue of STAT3 can also be phosphorylated. IL-6 represents one of the most important inflammatory factors inducing STAT3 phosphorylation at Tyr705. 24 Our results indicated pectolinarigenin significantly suppressed IL-6-induced STAT3 phosphorylation (Figure 1d). Intriguingly, Janus kinase 2 (JAK2), the known upstream regulatory signal of STAT3, was inactivated by pectolinarigenin (Figures 1c and d). Constitutive or inducible activation of STAT3 Tyr705 is critical for its biologic function, as it facilitates STAT3 dimerization, further promoting STAT3 cytoplasmic-to-nuclear translocation. 25 We found IL-6-induced STAT3 nuclear accumulation was largely impaired after pectolinarigenin treatment (Figure 1e). Similar results were observed when immunoblotting with an anti-STAT3 antibody to detect STAT3 distribution in both cytoplasm and nucleus (Figure 1f). In addition, the results of an electrophoretic mobility shift assay (EMSA) confirmed that treatment with pectolinarigenin led to a dose-dependent inhibition of STAT3 DNA-binding activity in 143B cells (Figure 1g). These results showed pectolinarigenin is a potent inhibitor of STAT3 signaling in osteosarcoma.
SHP-1 is essential for pectolinarigenin-mediated repression of STAT3 Tyr705 phosphorylation. PTPs have been implicated in STAT3 signaling activation, 26 and we sought to investigate whether PTPs are involved in the blockade of STAT3 signaling by pectolinarigenin in osteosarcoma cells. Sodium vanadate, a nonspecific phosphatase inhibitor, could reverse pectolinarigenin-induced inhibition of STAT3 activity (Figure 2a), implying the involvement of tyrosine phosphatases. We thus detected the protein levels of several protein phosphatases (SHP-1, SHP-2 and phosphatase and tensin homolog (PTEN)) after pectolinarigenin exposure. We found pectolinarigenin specifically increased SHP-1 expression, whereas it had no effect on the expression of SHP-2 and PTEN (Figure 2b). This result suggested SHP-1 has an important role in pectolinarigenin-induced inhibition of STAT3 activity. Next, we queried whether pectolinarigenin treatment could induce SHP-1 at the transcriptional level. As anticipated, SHP-1 mRNA was significantly increased upon treatment with pectolinarigenin (Figure 2c). These data suggested that the upregulated SHP-1 protein expression may be caused by an increase at the transcriptional level. Previous studies reported STAT3 nucleates a transcriptional repressive complex composed of DNMT1 and HDAC1 at the SHP-1 promoter site, thus leading to the silencing of SHP-1 in cancers. 27 Therefore, we explored the effect of pectolinarigenin on STAT3/DNMT1/HDAC1 complex formation in 143B nuclear lysates. As shown in Figure 2d, after immunoprecipitating STAT3, we detected reduced associated DNMT1 and HDAC1 upon treatment with pectolinarigenin. Similarly, after immunoprecipitating DNMT1, the associated STAT3 and HDAC1 decreased. Quantitative ChIP (qChIP) analysis in 143B cells using specific antibodies against STAT3 and DNMT1 showed a release of STAT3 and DNMT1 from the SHP-1 promoter after pectolinarigenin treatment (Figure 2e). These data demonstrated that pectolinarigenin induced SHP-1 expression by reducing the STAT3/DNMT1/HDAC1 complex on the SHP-1 promoter in osteosarcoma. To validate the important effect of SHP-1 on pectolinarigenin-induced inhibition of STAT3 activity, we silenced SHP-1 with small interfering RNA (siRNA) duplexes in 143B cells (Supplementary Figure 1). We used siRNA-1 to perform the following experiments, as the knockdown efficiency was similar for the two siRNAs. As expected, downregulation of SHP-1 by siRNA-1 abolished the inhibitory effects of pectolinarigenin on STAT3 p-Tyr705 (Figure 2f, upper panel). The viability of tumor cells was also partly increased when SHP-1 was silenced.
Pectolinarigenin inhibits osteosarcoma cell proliferation and colony formation and induces apoptosis in osteosarcoma cell lines. The activated STAT3 pathway has key roles in cell growth, survival and apoptosis in human cancers. 6 To evaluate the anti-proliferative effect of pectolinarigenin, we performed an MTS cell proliferation assay using a panel of osteosarcoma cells. Pectolinarigenin effectively decreased the viability of 143B, MG63.2, HOS and MG63 cells in a concentration-dependent manner (Figure 3a). Colony formation is considered to simulate well the pathological process of tumor development in vivo. We analyzed the clonogenicity of various osteosarcoma cell lines after treatment with pectolinarigenin. As shown in Figure 3b, pectolinarigenin treatment resulted in a marked decrease in colony numbers. In addition, we examined the pro-apoptotic propensity of pectolinarigenin. Flow cytometry analysis showed that a large percentage of 143B cells underwent apoptosis after pectolinarigenin exposure (Figure 3c). We then investigated the effect of pectolinarigenin on STAT3 downstream target genes, which are closely related to tumor cell growth, survival and apoptosis. An immunoblotting assay revealed that the protein levels of the STAT3 downstream targets cyclin D1, Survivin, B-cell lymphoma 2 (Bcl-2), B-cell lymphoma extra-large (Bcl-XL) and myeloid cell leukemia 1 (Mcl-1) were significantly reduced by pectolinarigenin (Figure 3d). Collectively, these results showed pectolinarigenin inhibits osteosarcoma cell growth and survival and induces apoptosis via suppressing STAT3 signaling.
Pectolinarigenin inhibits adhesion, migration and invasion and reverses the EMT phenotype in osteosarcoma cells.
Tumor metastasis requires precisely orchestrated regulation of multiple cellular processes that involve cell adhesion, migration and invasion. To determine whether pectolinarigenin inhibits osteosarcoma cell adhesion, migration and invasion, we used 143B and MG63.2 cells with highly invasive properties to perform the experiments. As shown in Figure 4a (left panel), pectolinarigenin effectively impaired osteosarcoma cell adhesion to the matrix in a dose-dependent manner. In addition, osteosarcoma cell migration and invasion were markedly blocked by pectolinarigenin (Figures 4a and b). To mimic three-dimensional (3D) conditions similar to those observed in vivo during tumor cell invasion, we developed a 3D culture model. In the control group, osteosarcoma cells formed 3D clusters with cells protruding into the surrounding matrix, whereas treatment with pectolinarigenin resulted in the opposite phenotype (Figure 4c). Epithelial-mesenchymal transition (EMT) is considered a critical mechanism regulating the initial steps in metastatic progression. 28 Previous studies reported STAT3 may directly mediate EMT in cancer progression. 29 To investigate the effect of pectolinarigenin on osteosarcoma EMT, we examined EMT-associated markers. We found pectolinarigenin could significantly downregulate the expression of the mesenchymal markers N-cadherin, fibronectin and zinc-finger E-box binding homeobox 1 (ZEB1) and upregulate the epithelial cell marker E-cadherin (Figure 4d). In line with this result, an immunofluorescence (IF) assay indicated that exposure to pectolinarigenin resulted in a reversal of EMT, as indicated by decreased membrane-located N-cadherin and increased E-cadherin (Figure 4e). These results suggested that pectolinarigenin shows metastasis inhibitory effects in vitro, further supporting the testing of the in vivo anti-metastasis efficacy of pectolinarigenin in osteosarcoma.
Pectolinarigenin inhibits tumor growth and metastasis and prolongs the survival of mice in a spontaneous animal model. To assess whether the biologic effect of pectolinarigenin on osteosarcoma is potentially clinically relevant, we tested the in vivo efficacy of pectolinarigenin against tumor growth and metastasis in orthotopic osteosarcoma-implanted mice. Discernible differences in tumor growth between pectolinarigenin-treated and control tumors were observed, as tumor weight was markedly reduced in the pectolinarigenin treatment groups compared with the control group (Figure 5a). We also found the lung weight of control mice was drastically increased because of metastasis burden (Figure 5b). In the high-dose group, metastasis nodules were hardly observed in the lungs (Figure 5c, left panel). The number of lung metastases was significantly reduced in mice that received pectolinarigenin (Figure 5c, right panel). Approximately 90% of cancer mortality is attributable to metastases. To determine whether the metastasis-suppressive effect of pectolinarigenin could yield a survival benefit, the survival rate was calculated. Our data showed pectolinarigenin remarkably improved the overall survival of tumor-bearing mice. On day 32, all the mice in the control group had died, whereas only one mouse had died in the high-dose pectolinarigenin treatment group (Figure 5d). Moreover, in agreement with our in vitro results, xenografts treated with pectolinarigenin displayed a lower level of STAT3 p-Tyr705 in comparison with the control group (Figures 5e and f). We also found pectolinarigenin induced SHP-1 expression and downregulated the expression of the STAT3 downstream genes Survivin, Bcl-2 and Bcl-XL (Figure 5f). Altogether, these in vivo results showed that pectolinarigenin suppresses osteosarcoma growth and metastasis by blocking STAT3 signaling.
Pectolinarigenin inhibits tumor growth in a patient-derived osteosarcoma xenograft animal model. Patient-derived xenograft (PDX) models may be superior to traditional cell line xenograft models of cancer because they maintain more similarities to the parental tumors. 30 We subcutaneously transplanted the second generation of patient-derived osteosarcoma into nude mice. We detected a significant difference in tumor growth between the pectolinarigenin-treated and control groups. Grafts treated with pectolinarigenin had an average volume of 480.44 mm³ (20 mg/kg/2 days) and 182.84 mm³ (50 mg/kg/2 days) (Figure 6a). In line with this, tumor weight was significantly reduced after pectolinarigenin administration in comparison with the solvent control (Figure 6b). Immunohistochemistry and immunoblotting analyses of tumor tissue indicated the STAT3 p-Tyr705 level decreased in the pectolinarigenin treatment group compared with the control group (Figures 6c and d). Furthermore, we found pectolinarigenin induced SHP-1 expression and downregulated the expression of the STAT3 downstream genes Survivin, Bcl-2 and Bcl-XL (Figure 6d). These data implied that the growth inhibitory effect of pectolinarigenin correlated with suppression of STAT3 signaling in patient-derived tumors. Altogether, these results solidly showed that pectolinarigenin possesses antitumor activity in osteosarcoma.
The potential toxicity of pectolinarigenin in mice. To investigate the potential systemic toxicity of pectolinarigenin, male BALB/c mice received intraperitoneal (i.p.) injections of pectolinarigenin (50 mg/kg/2 days) for 28 days. Body weight was measured once a week. Mice were killed on day 29, and the major organs were weighed and paraffin-embedded for hematoxylin and eosin (H&E) staining. No significant changes in mouse body or organ weight were observed after treatment with pectolinarigenin (Figures 7a and b). H&E staining revealed that pectolinarigenin caused no obvious damage to major organs, including heart, lung, liver, spleen and kidney (Figure 7c). This implies that pectolinarigenin has few side effects in mice at our therapeutic dose.
Discussion
Constitutive activation of STAT3 has been detected in a wide range of tumor types, and pharmacological inhibition of STAT3 has shown vast potential for anticancer therapy in vitro and in vivo. In our current study, we showed that pectolinarigenin is a potent STAT3 inhibitor that suppresses osteosarcoma growth and metastasis. We found that pectolinarigenin disturbed DNMT1/HDAC1/STAT3 complex formation at the SHP-1 promoter site, thus releasing the transcriptional repression of SHP-1. Our results indicated that the antitumor action of pectolinarigenin depends mainly on SHP-1-mediated suppression of STAT3 signaling. In addition, we used cell line-based and patient-derived osteosarcoma animal models to show that pectolinarigenin inhibited tumor growth and metastasis with no obvious side effects in vivo. Our findings provide solid evidence for the anti-osteosarcoma action of pectolinarigenin and new mechanistic insight that may aid its application in osteosarcoma intervention. Our findings clearly showed that pectolinarigenin inhibited STAT3 signaling. Previous studies reported that inhibition of STAT3 signaling by RNA interference (RNAi), peptides and small-molecule inhibitors leads to successful suppression of tumor cell growth and metastasis. 31 In addition, a series of downstream target genes of STAT3 signaling have been identified, including those encoding anti-apoptotic and proliferation-associated proteins (such as Bcl-xL, Bcl-2, cyclin D1 and Survivin). 32 Such small molecules inhibit STAT3-mediated gene regulation, block tumor cell proliferation and selectively induce apoptosis of tumor cells with activated STAT3. In this study, pectolinarigenin suppressed osteosarcoma cell proliferation and induced apoptosis; meanwhile, we also found that pectolinarigenin downregulated STAT3 downstream proteins such as Bcl-xL, Bcl-2, cyclin D1 and Survivin. These findings support that the anticancer function of pectolinarigenin stems mainly from its STAT3 signaling inhibitory activity.
Our results showed pectolinarigenin induced SHP-1 expression by promoting its transcription. SHP-1 is a tyrosine phosphatase that has been proposed as a candidate tumor-suppressor gene in various cancers, and it functions as an antagonist of the tumor growth- and metastasis-related tyrosine kinases. 33,34 SHP-1 binds to JAK2 and regulates the activity of JAK2 and STAT3; it is deemed a negative regulator of the JAK2/STAT3 signaling pathway. 35 In our results, silencing SHP-1 rescued the pectolinarigenin-induced reduction of p-STAT3 (Figure 2f). A previous study indicated that STAT3, DNMT1 and histone deacetylase 1 form a transcriptional repressive complex, which can silence the expression of SHP-1. 27 We speculated that the accumulation of SHP-1 by pectolinarigenin may be partially due to the disruption of this complex. As expected, pectolinarigenin disturbed this complex formation at the SHP-1 promoter site. STAT3 is often considered a transcription activator; however, transcription repression by STAT3 has also been reported. 36 Our chromatin immunoprecipitation (ChIP) analysis showed STAT3 diminished in the SHP-1 promoter region after pectolinarigenin treatment. These data may imply STAT3 is a transcription repressor when binding to the promoter of this tumor suppressor. SHP-1 promoter hypermethylation would also lead to its downregulation with consequently activated phosphorylation of STAT3. 37 We speculate that combined STAT3 and DNMT inhibition is a reasonable treatment strategy in STAT3-activated cancers.
Figure 4. Pectolinarigenin inhibits adhesion, migration and invasion and reverses the EMT phenotype in osteosarcoma cells. (a) Left panel, adhesion assay: 143B and MG63.2 cells were pretreated with various concentrations of pectolinarigenin for 12 h, trypsinized and seeded on a fibronectin-coated 96-well plate; after 15 min, non-adherent cells were removed and adherent cells were stained with 0.1% crystal violet; the precipitates were dissolved in 30% acetic acid, and the absorption at 590 nm was acquired. Middle panel, wound-healing migration assay: 143B and MG63.2 cells seeded into six-well plates and grown to full confluence were scratched to create a wound and exposed to different concentrations of pectolinarigenin; images were acquired after 12 h, and cell migration was quantified manually. Right panel, invasion assay: 143B and MG63.2 cells were resuspended in serum-free medium and seeded into the upper chamber of transwell inserts precoated with Matrigel, with complete medium containing different concentrations of pectolinarigenin added to the bottom well; after 12 h of incubation, images were obtained, and cell invasion was quantified manually.
During the process of EMT, carcinoma cells lose their epithelial characteristics, including polarity and cell-cell adhesion, and acquire a mesenchymal cell phenotype to gain invasion capacity. 38 EMT is a critical step for epithelial-derived malignancies to metastasize; however, it also has vital roles in the metastasis of mesenchymal-derived tumors such as osteosarcoma. 39,40 The highly metastatic propensity of osteosarcoma may be partly due to its mesenchymal origin, and osteosarcoma could be considered a tumor that has undergone EMT. The STAT3 signaling pathway has been validated to be involved in tumor EMT. STAT3 promotes ZEB1 expression and downregulates E-cadherin, and therefore directly mediates EMT progression in colorectal carcinoma. 41 Indeed, we found pectolinarigenin reduced the level of the EMT driver ZEB1 in osteosarcoma cells. Reversing osteosarcoma cell EMT behavior may partly explain the reduced tumor invasion and metastasis by pectolinarigenin. These results support that pectolinarigenin serves as a novel STAT3 inhibitor that antagonizes EMT and thereby prevents osteosarcoma metastasis. An important finding in this study is that pectolinarigenin displayed satisfactory therapeutic efficacy in animal models. Approximately 40-50% of osteosarcoma patients will develop pulmonary metastasis, and the 5-year survival rate of patients with metastases is even lower than 30%. 42 In our orthotopic implantation xenograft animal model, we found that the metastasis of tumor cells to the lungs was significantly inhibited and the survival of the mice was improved. Recent studies have suggested that the phenotype of cultured cell lines has diverged substantially from the clinical patient tumors from which they derived. 30 Cell lines may lose their heterogeneity under laboratory culture conditions. 43 However, PDXs are based on the transfer of tumors directly from the patient into an immunodeficient mouse and are of high value in the translation of cancer therapeutics into clinical settings. 30,43 A patient-derived human osteosarcoma xenograft animal model was applied to test the effect of pectolinarigenin in our research. Remarkably, mice treated with pectolinarigenin showed a robust inhibition of tumor growth during the course of the experiment compared with the control mice. We also found pectolinarigenin suppressed the expression of p-STAT3 Tyr705 in tumor tissue, thereby mirroring our in vitro data. These results showed that pectolinarigenin may provide significant clinical benefits in the treatment of osteosarcoma.
Our studies suggest that pectolinarigenin possesses inhibitory potential against osteosarcoma growth and metastasis via SHP-1-mediated STAT3 signaling inhibition. However, it remains plausible that pectolinarigenin may exhibit its anti-osteosarcoma activity through impairing or activating other signaling pathways. Further investigations are needed to comprehensively explore the molecular mechanism of pectolinarigenin, which will help us better understand its function in osteosarcoma. In addition, STAT3 inhibitors also have beneficial clinical therapeutic effects on several types of cancer (breast, ovarian, prostate, pancreatic, etc.), and it will be essential to determine the efficacy of pectolinarigenin against other cancer types.

Materials and Methods

Cell lines. 143B, HOS and MG63 were purchased from ATCC (Manassas, VA, USA). The MG63.2 cell line was established by serially passaging the parental MG63 cells. 44 All cells were maintained in DMEM supplemented with 10% FBS and 1% penicillin/streptomycin. Cells were maintained at 37°C in a humidified 5% CO2 incubator.
STAT3 luciferase reporter assay. The STAT3 luciferase reporter plasmid (pGMSTAT3-Luc), used to detect STAT3 activation, was obtained from Shanghai Yi Sheng Biotechnology Co. Ltd. (Shanghai, China), and procedures were carried out as previously described. 45 143B cells were seeded in 24-well plates 24 h before transfection. The cells were co-transfected with pGMSTAT3-Luc and pRL-SV40 (a plasmid encoding Renilla luciferase) using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA). After 24 h, cells were treated with the indicated concentrations of pectolinarigenin for 24 h. Luciferase activity was assessed with the dual-luciferase reporter assay system (Promega, Madison, WI, USA) using a luminometer (Thermo Scientific, Waltham, MA, USA). The inhibition of STAT3 activation by pectolinarigenin was calculated as the ratio between the values of firefly and Renilla luciferase activity. Three independent experiments were carried out in triplicate.
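As a worked example of the calculation described above, relative STAT3 reporter activity can be computed as the firefly/Renilla ratio of a treated well normalized to that of the vehicle control; the Python sketch below and its readings are purely illustrative.

```python
def relative_stat3_activity(firefly, renilla, firefly_ctrl, renilla_ctrl):
    """Firefly/Renilla ratio normalized to the vehicle control (control = 1.0)."""
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

# Hypothetical readings: treatment halves STAT3 reporter output.
print(relative_stat3_activity(12000, 8000, 30000, 10000))  # -> 0.5
```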
Immunofluorescence assay. Cells grown on coverslips were exposed to different concentrations of pectolinarigenin for 24 h (for detecting STAT3 cytoplasmic-to-nuclear translocation) or 72 h (for detecting EMT-related protein expression), fixed with 4% paraformaldehyde and permeabilized with 0.1% Triton X-100 in PBS. Samples were blocked with 1% BSA for 30 min, followed by incubation with the indicated primary antibodies at 4°C overnight. After three washes, cells were probed with an Alexa Fluor 488 secondary antibody for 1 h at room temperature. The nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI). Images were acquired with a confocal microscope (Leica, Wetzlar, Germany).
Electrophoretic mobility shift assay. EMSA was performed using the Odyssey Infrared STAT3 EMSA Kit (LI-COR Biosciences, Lincoln, NE, USA) following the manufacturer's protocol. In brief, 143B cells were pretreated with pectolinarigenin and stimulated with IL-6. Nuclear extracts were prepared and incubated in reaction buffer for 30 min at 37°C with STAT3 IRDye 700 infrared dye-labeled oligonucleotides: 5′-GATCCTTCTGGGAATTCCTAGATC-3′ and 3′-CTAGGAAGACCCTTAAGGATCTAG-5′ (boldface indicates the STAT3-binding sites). The protein-DNA complexes were resolved on native polyacrylamide gels, and the gels were visualized with the Odyssey infrared imaging system.
RT-PCR. RNA samples were prepared from cells using Trizol (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocols. Total RNA (1 μg) was converted to cDNA using an oligo(dT) primer. The relative expression of SHP-1 was analyzed by RT-PCR with actin as an internal control. The primer sequences used for SHP-1 were 5′-GAGAACGCTAAGACCTACATCG-3′ and 5′-CAGTATGGGACGCATTTGTT-3′. PCR products were separated on a 1.5% agarose gel and then stained with GelRed. Three independent experiments were carried out in triplicate.
Co-immunoprecipitation. Co-immunoprecipitation was performed as previously reported. 46 143B cells were treated with or without pectolinarigenin at the indicated concentrations for 24 h. Equal amounts of protein were incubated with anti-STAT3 or anti-DNMT1 antibodies overnight at 4°C. The immunoprecipitated pellets were then incubated with protein A/G agarose beads, followed by five washes with wash buffer. The eluted proteins were resolved on 8% SDS-PAGE. Three independent experiments were carried out in triplicate.
ChIP assay. The ChIP assay was performed as previously described. 46 143B cells were cross-linked with 1% formaldehyde in PBS for 10 min, followed by the addition of glycine to quench unreacted formaldehyde. Cell lysates were then collected with cold ChIP lysis buffer and sonicated to obtain chromatin with an average fragment size of 500 bp. The chromatin samples were precleared with protein A/G agarose/salmon sperm DNA beads for 1 h and then immunoprecipitated with the indicated antibodies. The immunoprecipitates were then incubated with protein A/G agarose beads for 2 h. After five sequential washes, the protein-DNA complexes were eluted with elution buffer plus proteinase K, and the cross-links were reversed at 65°C for 12 h. DNA was extracted with phenol-chloroform. Immunoprecipitated DNA was analyzed by real-time PCR, and the PCR products were separated on a 1.5% agarose gel and stained with GelRed. The primers used in the ChIP assay were 5′-AGGGTTACTTCCTGGTCTGTTC-3′ and 5′-ACGTCGGAGTGAGCATCAAC-3′.
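When ChIP-qPCR is read out quantitatively rather than on a gel, enrichment is commonly reported as percent of input. The sketch below shows that standard calculation; the 1% input aliquot and the Ct values are illustrative assumptions, not values from this study.

```python
# Sketch of the standard percent-of-input calculation for ChIP-qPCR.
# Assumes a 1% input aliquot; Ct values below are illustrative only.

import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    # Adjust the input Ct so that it represents 100% of the chromatin.
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

print(f"IgG control: {percent_input(ct_ip=32.1, ct_input=28.0):.3f}% of input")
print(f"STAT3 ChIP:  {percent_input(ct_ip=27.4, ct_input=28.0):.3f}% of input")
```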
MTS cell viability assay. The MTS cell viability assay was performed according to the manufacturer's instructions (Promega). In brief, osteosarcoma cells (5 × 10³ per well) were seeded into 96-well plates 24 h before pectolinarigenin treatment. Forty-eight hours after pectolinarigenin exposure, Aqueous One solution was added, and the absorbance was read at 490 nm on a microplate spectrophotometer (Thermo Scientific). Three independent experiments were carried out in triplicate.
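As a minimal illustration of how MTS readings are typically converted to viability, the following sketch normalizes blank-subtracted A490 values to the vehicle control; all absorbances and doses are invented placeholders, not data from this study.

```python
# Sketch: cell viability from MTS absorbance at 490 nm, expressed as a
# percentage of the vehicle control after blank subtraction.

def viability(a_treated, a_control, a_blank):
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

a_blank = 0.08            # medium + MTS, no cells (placeholder)
a_control = 1.42          # vehicle-treated cells (placeholder)
doses_um = [5, 10, 20, 40]
readings = [1.31, 1.05, 0.66, 0.31]   # invented A490 values

for dose, a in zip(doses_um, readings):
    print(f"{dose:>3} uM pectolinarigenin: {viability(a, a_control, a_blank):5.1f}% viable")
```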
Wound-healing migration assay. The wound-healing migration assay was performed as previously described. 47 Osteosarcoma cells were seeded into six-well plates, and once they had grown to full confluence, a 'wound' was created with a sterile 100 μl pipette tip. Fresh medium containing different concentrations of pectolinarigenin was subsequently added. After 12 h, cells were fixed with 4% paraformaldehyde, and images were obtained with an inverted microscope (Olympus, Tokyo, Japan). Migrated cells were counted manually. Three independent experiments were carried out in triplicate.
Transwell invasion assay. The transwell invasion assay was conducted using a modified Boyden chamber coated with Matrigel, as previously described. 48 Osteosarcoma cells were resuspended at 5 × 10⁴ cells in 100 μl of medium with or without the indicated concentrations of pectolinarigenin and added to each transwell insert. In all, 500 μl of growth medium was placed in each bottom well. Ten hours after seeding, cells that had invaded to the lower side of the insert were fixed with 4% paraformaldehyde and stained with 0.1% crystal violet. Images were acquired with an inverted microscope.

Three-dimensional on-top assay. The three-dimensional on-top assay was conducted as previously described. 46 Briefly, 80 μl of Matrigel solution per well was added to a 48-well plate and left at 37°C for 30 min to solidify. In all, 1.5 × 10⁴ 143B cells were resuspended in 100 μl DMEM and seeded on the solidified Matrigel. After 15 min, 100 μl DMEM containing 10% Matrigel as well as the indicated concentrations of pectolinarigenin was added on top of the plated culture. The on-top Matrigel-medium mixture was replaced every 2 days. Three independent experiments were carried out in triplicate.
siRNA-mediated knockdown. 143B cells were seeded in six-well plates 24 h before transfection. siRNA duplexes targeting SHP-1 were transfected using Lipofectamine 2000 (Invitrogen Life Technologies) according to the manufacturer's protocols. The sequences targeting SHP-1 were as follows: 5′-GCAGGAGGUGAAGAACUUG-3′ (siRNA-1) and 5′-CCAGUUCAUUGAAACCAUTAA-3′ (siRNA-2).

For the spontaneous growth and metastasis model, 143B tumor cells (1 × 10⁶) were suspended in 20 μl of sterile PBS and implanted into the medullary cavity of the tibia of each mouse. One week after cell inoculation, the mice were randomly divided into three groups (n = 6 per group) and received i.p. injections of pectolinarigenin (20 mg/kg/2 days or 50 mg/kg/2 days), with DMSO-injected mice serving as the control group. After 24 days, all mice were killed. The posterior limbs bearing tumors and the lungs were carefully excised for further study. Tumor weight was measured, and the numbers of lung metastatic nodules were counted under a dissecting microscope by three individuals blinded to the group assignments. Tumor tissues were snap-frozen in liquid nitrogen for western blotting. Another independent animal experiment was performed to determine the survival curve.
The patient-derived human osteosarcoma xenograft (PDX) animal model was established according to previously described procedures. 49 Briefly, surgical specimens from patients undergoing removal of primary osteosarcoma tumors at Shanghai First People's Hospital were implanted s.c. into nude mice. Once the tumors had successfully engrafted, tumor samples were passaged into subsequent generations of nude mice for the following studies. On day 14, the mice were randomized into three groups and given i.p. injections of pectolinarigenin (20 mg/kg/2 days or 50 mg/kg/2 days), with DMSO-injected mice serving as the control group. Tumor volume was measured with a digital caliper once per week and calculated using the following formula: (length × width²) × 0.52. After treatment for 28 days, all the mice were killed, and the tumors were removed and prepared for western blotting.
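The tumor volume formula quoted above is the modified ellipsoid approximation; the short sketch below applies it to invented caliper readings to show the weekly calculation.

```python
# Sketch: tumor volume from caliper measurements using the modified
# ellipsoid formula quoted in the text, V = (length x width^2) x 0.52.
# The weekly (length, width) pairs below are invented for illustration.

def tumor_volume_mm3(length_mm, width_mm):
    return length_mm * width_mm ** 2 * 0.52

weekly = [(6.0, 4.5), (8.2, 6.0), (10.5, 7.8), (13.0, 9.4)]
for week, (l, w) in enumerate(weekly, start=1):
    print(f"week {week}: {tumor_volume_mm3(l, w):7.1f} mm^3")
```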
H&E staining. Hearts, livers and other organs were freshly collected from the mice when the experiments terminated and fixed in 4% paraformaldehyde overnight before paraffin embedding. Sections of 4 μm were then deparaffinized for H&E staining, and representative images were acquired with a Leica microscope.
Statistical analysis. Data are presented as mean ± S.D. A Student's t-test was used to compare two groups (P < 0.05 was considered significant) unless otherwise indicated. All experiments were performed at least three times.
Conflict of Interest
The authors declare no conflict of interest.
"year": 2016,
"sha1": "b24fa2e53cf84bdc40efa97762215ebacd55e1ca",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/cddis2016305.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c852d8ed52419801b9d346e16b00029c7aa29e86",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Development and Evaluation of Minocycline Hydrochloride-Loaded In Situ Cubic Liquid Crystal for Intra-Periodontal Pocket Administration
In the present study, an injectable in situ liquid crystal formulation was developed for the local delivery of minocycline hydrochloride (MH) in the treatment of chronic periodontitis. The physicochemical properties, phase structures, in vitro drug release and pharmacodynamics of the in situ liquid crystals were investigated. The optimal formulation (phytantriol (PT)/propylene glycol (PG)/water, 63/27/10, w/w/w) loaded with 20 mg/g MH proved to be injectable. The precursor formulation formed a cubic phase gel in excess water within 6.97 ± 0.10 s. The in vitro release results showed that MH was released in a sustained manner over 4 days. The liquid crystal precursor formulation significantly reduced the gingival index, probing depth and alveolar bone loss compared to the model group (p < 0.01), and the pathological characteristics of the model rats were improved. These results suggest that the MH-loaded in situ cubic liquid crystal provides sustained release and improves periodontal clinical symptoms, and that the developed in situ cubic liquid crystal may be a potential carrier for the local delivery of MH in periodontal diseases.
Introduction
Periodontitis is a plaque-induced inflammatory condition that affects the periodontium; it is caused by the adherence to tooth surfaces of pathogenic bacterial species organized in complex communities that form biofilms [1]. Pathogens confirmed to participate in the onset and progression of periodontitis include Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis, Treponema denticola, Tannerella forsythia, Prevotella intermedia, Parvimonas micra, Fusobacterium nucleatum, Selenomonas sputigena and Eubacterium nodatum [2]. Periodontitis lesions usually harbour a constellation of putative pathogens rather than a single pathogenic species [3]. Periodontitis is also a risk factor for systemic health problems such as vascular inflammation [4], diabetes [5], rheumatoid arthritis [6] and hyperlipidemia [7].
The effective treatment of periodontitis is removing the calculus and plaque by scaling and root planing (SRP). However, due to poor access to the base of deep pockets, the anatomical complexity of teeth and furcation involvement, SRP alone may not always result in the complete elimination of pathogens, which can lead to exacerbation of the disease. This has encouraged the use of antibiotics as an adjunct to mechanical therapy [8]. Orally administered antibiotics suffer from drawbacks due to their systemic effects and the lack of an effective drug concentration at the site of action, resulting in poor patient acceptance. This necessitates the development of alternative localized drug delivery [9]. There are multiple antimicrobials that can be locally delivered into the mucosa, such as metronidazole, chlorhexidine, minocycline, doxycycline and tetracycline. They act on both gram-negative and gram-positive organisms. Used in periodontal pockets, these drugs can inhibit or eliminate the periodontopathogenic microorganisms as well as modulate the inflammatory response of the tissues [10].
Minocycline hydrochloride (MH, Figure 1) is a broad-spectrum tetracycline antibiotic and one of the most active antibiotics against the microorganisms associated with periodontal disease. Compared to the other members of the group, it has the most marked substantivity and shows greater solubility in lipids [11]. The minimum inhibitory concentrations (MICs) of MH against Enterobacteriaceae, Pseudomonas, Staphylococcus and Candida isolates from periodontal pockets were 16, 128, 8 and 16 µg/mL, respectively [12]. MH shows several advantages in the rehabilitation of periodontitis, such as inhibition of collagenase activity, inhibition of bone resorption, promotion of periodontal fibroblast proliferation and adhesion of periodontal connective tissue [13,14]. Periocline®, a bio-absorbable sustained local drug delivery system consisting of 20 mg/g MH in a matrix of hydroxyethyl-cellulose, aminoalkylmethacrylate, triacetine and glycerine, is commercially available.
The antibiotic release systems used so far in the treatment of periodontitis include in situ gels [15], fibers [16], microparticles [17], nanoparticles [18] and films [19]. Recently, in-situ forming implants (ISFI) based on poly(lactic-co-glycolic acid) (PLGA) have been proposed for local periodontitis treatment. These are liquid formulations that can be easily injected into periodontal pockets and then (e.g., following solvent exchange) harden to form solid implants with customized geometry. These systems have been loaded with antibiotic drugs, namely doxycycline hyclate, metronidazole and minocycline hydrochloride [20]. With Atridox®, an ISFI developed by Atrix Laboratories, it has been possible to overcome many drawbacks of the available marketed formulations. The product showed a significant improvement in patient compliance, being a biodegradable implant that does not require a surgical procedure to place or remove [21]. However, some problems remain in ISFI systems, such as solvent safety.
Lyotropic liquid crystals (LLCs), formed by the self-assembly of amphiphilic molecules in a solvent (usually water), have attracted increasing attention in the last few decades. Compared with polymer-based ISFI systems, LLC systems have many advantages. The dual polar/apolar structure of LLC systems allows for the encapsulation of a wide range of active drugs (i.e., hydrophilic, hydrophobic and amphiphilic) and protects them from hydrolysis and enzymolysis. Phytantriol (PT) contains a saturated aliphatic chain and no ester functional group, which results in a more stable liquid crystal structure because ester hydrolysis is avoided [22,23]. Furthermore, PT has been reported to be biodegradable, stable and nontoxic, and it is easily available in highly pure form [24].
The in situ liquid crystal systems are low-viscosity precursors, which are good candidates for injectable administration. The precursor can transform into a viscous cubic or hexagonal phase gel in the presence of excess water [25], which facilitates its retention in the periodontal pocket. Researchers have tried to use liquid crystals as drug carriers in the treatment of periodontal disease. For example, precursor systems of liquid crystalline phases containing propolis microparticles [26] and metronidazole [27] have been prepared and characterized. The rheology of the systems revealed properties that favored easy injection into the periodontal pocket and subsequent stable retention therein. Furthermore, these systems have been extensively investigated for their ability to sustain the release of bioactives [28]. The purpose of this work was to develop a PT-based in situ cubic liquid crystal system containing MH and to evaluate its effectiveness on experimental chronic periodontitis when administered as a periodontal pocket topical delivery system.
Development of Precursor Formulations
A pseudo-ternary phase diagram was constructed by first mixing PT with propylene glycol (PG) at ratios ranging from 1:9 to 9:1 (w/w). A predetermined amount of each mixture was then transferred into a centrifuge tube and mixed with water at ratios of 1:9, 2:8, 3:7, 4:6, 5:5, 6:4, 7:3, 8:2 and 9:1 (w/w) to a total weight of 0.2 g. Each phase was characterized by visual analysis and polarizing light microscopy (PLM). As shown in Figure 2, cubic phase samples appeared as gels and gave a dark field in the photomicrographs, which is a typical characteristic of the cubic phase. Maltese crosses were observed in the photomicrographs of some samples, indicating the existence of a lamellar phase. All mesophases, including the lamellar phase, the cubic phase and their mixture, were observed predominantly when PG was present below 30% (w/w). Samples showing phase separation after 48 h were characterized as emulsions. Solution samples that showed a dark field under PLM were characterized as isotropic solutions. The isotropic solution formed at a water content of less than 30% and a PG percentage of more than 30%.
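For readers reproducing the phase diagram, the sketch below enumerates the component masses implied by the mixing scheme for one example premix (PT/PG 8:2); the code and the chosen premix ratio are illustrative only.

```python
# Sketch: component masses for each 0.2 g point of the pseudo-ternary
# diagram. A PT/PG premix (8:2 here, as one example) is combined with
# water at ratios from 1:9 to 9:1, mirroring the mixing scheme above.

TOTAL_G = 0.2

def point_masses(pt_pg_ratio, mix_water_ratio):
    pt_frac = pt_pg_ratio / (pt_pg_ratio + 1)           # PT share of the premix
    mix_frac = mix_water_ratio / (mix_water_ratio + 1)  # premix share of sample
    premix = TOTAL_G * mix_frac
    water = TOTAL_G - premix
    return premix * pt_frac, premix * (1 - pt_frac), water

for mw in range(1, 10):                     # premix:water from 1:9 to 9:1
    pt, pg, w = point_masses(8 / 2, mw / (10 - mw))
    print(f"premix:water {mw}:{10-mw} -> PT {pt:.3f} g, PG {pg:.3f} g, water {w:.3f} g")
```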
Zhang et al. [29] explored the addition of drugs and organic solvents into unsaturated monoglyceride lipid cubic phases; their results showed that the monoglyceride lipid matrix combined with alcohol, polyethylene glycol, PG or N-methyl-2-pyrrolidone can form low-viscosity liquid crystal precursors. In this study, the solubility of MH in PG was higher than in the other solvents. Thus, we developed fluid precursor formulations composed of PT, PG and water. After injection, the precursor formulation needs to absorb gingival crevicular fluid (GCF) to transform into a viscous cubic phase gel. GCF is limited in the intra-periodontal pocket. Hence, we chose flowing isotropic solution and lamellar phases, which are injectable and can achieve phase transformation with little additional water, for further investigation. The composition of the selected formulations F1-F4 is shown in Table 1.
Physicochemical Characterization
Syringeability, pH value and the ability to form a cubic liquid crystalline gel in situ are very important for the evaluation of an injectable formulation, so the physicochemical properties of formulations F1-F4 are summarized in Table 2. The acceptable pH range for parenteral preparations is 4-9 [30]. The influence of the different compositions on phase transformation was also evaluated through the minimum volume of water for gelation (V min) and the gelation time (T g). V min and T g decreased with increasing water content (formulations F3 and F4) and with an increasing PT/PG ratio (formulations F1, F2 and F3). Formulation F1 presented the lowest values of V min and T g of all the formulations. In terms of physicochemical properties, formulation F1 was therefore optimal; however, the release behaviour required further investigation.
In Vitro Drug Release Studies
In vitro release studies were conducted to investigate the influence of formulation composition on release behaviour. The formulations in Table 1 were chosen for the drug release studies, and the release profiles are illustrated in Figure 3. Figure 3a presents the percentage of MH released as a function of time from F1, F2 and F3, which contain different ratios of PT/PG (8:2, 7:3 and 6:4, w/w). The release from F1 was significantly slower than that from formulation F3, while no significant difference was observed between F1 and F2. Around 50.0% of the drug was released from F1-F3 within the first 12 h, and release was sustained up to 72 h. After 96 h, drug release was essentially complete, with a cumulative release of more than 95%. The MH released from F3 and F4, which differ in water content, is illustrated in Figure 3b. When the water content of the formulation decreased from 20% to 10%, the release of MH was slightly lower; the drug was nevertheless completely released after 96 h. This evidence suggests that the delayed release is due to lower water content and higher PT/PG ratios. Thus, we hypothesized that MH may be distributed in the water domains of the liquid crystal structure.
We further investigated whether the drug loading has an effect on the release behaviour. In this study, drug precipitation was observed in F1 at elevated drug loading; in terms of drug loading, F1 was therefore not suitable as the optimal formulation. Instead, F2 formulations, which have similar physicochemical properties, were loaded with 10 mg/g, 15 mg/g and 20 mg/g of MH and chosen for in vitro release studies. As shown in Figure 4, the release profile of the formulation loaded with 10 mg/g of MH was highly similar to those of the formulations loaded with 15 mg/g and 20 mg/g of MH. There was no significant difference in release rate or cumulative release amount among the profiles. These results indicate that the proportion of drug released was not influenced by the amount of drug loaded into the system. In conclusion, F2 loaded with 20 mg/g MH was the most suitable formulation. No firm conclusion concerning the effect of drug loading on in vitro drug release is available in the literature. The results of this study were consistent with those of Marilisa et al., who investigated the release behaviour of a salicylic cubic phase [31], as well as those of Jessica et al., who investigated a naltrexone-loaded in situ hexagonal liquid crystal [32]. Chen et al. [33] considered that the release of a sinomenine hydrochloride-loaded in situ cubic phase increased with increasing drug loading, whereas Qin et al. [34] suggested that the release of a hydroxycamptothecin-loaded in situ cubic phase decreased with increasing drug loading. We therefore infer that the relationship between in vitro release behaviour and drug loading cannot be generalized; release behaviour may be related to the polarity and solubility of the particular drug.
A comparative study of the in situ cubic liquid crystal and Periocline® was carried out. Figure 5 shows that the in situ cubic phase sustained drug release for 4 days, and the daily drug release was higher than the MIC of MH. The release rate of the cubic phase was significantly faster than that of Periocline®. The cumulative release from Periocline® was about 70%, whereas the cubic liquid crystal released more than 90% of the drug. The release rate of Periocline® depends on the degradation of the matrix material, while release from the liquid crystal occurs mainly through its unique internal structural features. There are two water channels in the cubic phase structure, and drugs are released from the water channels into the environment [35]. Different internal structures lead to different drug release behaviours.
Evaluation of Phase Behavior
The phase behaviour of the optimal formulation was characterized by PLM, small-angle X-ray scattering (SAXS) and rheological methods. Figure 6 shows the SAXS spectra of scattered intensity versus scattering vector q. The same structure was observed in both mesophases, as revealed by SAXS diffraction peaks with spacing ratios of √2:√3:√4:√6. The results demonstrate that both the blank cubic phase and the MH-loaded cubic phase are reversed double-diamond bicontinuous cubic phases of Pn3m symmetry. Consequently, the addition of MH did not alter the phase behaviour.
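As a minimal illustration of how a Pn3m assignment follows from peak positions, the sketch below checks observed q ratios against √2:√3:√4:√6 and estimates the lattice parameter from q = 2π√(h²+k²+l²)/a; the q values are invented, not digitized from Figure 6.

```python
# Sketch: verifying a Pn3m assignment from SAXS peak positions and
# extracting the lattice parameter. For Pn3m the allowed reflections
# give q ratios sqrt(2):sqrt(3):sqrt(4):sqrt(6); the q values below
# are illustrative placeholders.

import math

q_peaks = [1.31, 1.60, 1.85, 2.27]   # nm^-1, illustrative
hkl_sum = [2, 3, 4, 6]               # h^2 + k^2 + l^2 for Pn3m

ratios = [q / q_peaks[0] for q in q_peaks]
expected = [math.sqrt(s / hkl_sum[0]) for s in hkl_sum]
print("observed ratios:", [f"{r:.3f}" for r in ratios])
print("expected ratios:", [f"{r:.3f}" for r in expected])

# Each peak independently estimates the lattice parameter a = 2*pi*sqrt(s)/q.
a_values = [2 * math.pi * math.sqrt(s) / q for s, q in zip(hkl_sum, q_peaks)]
print(f"lattice parameter a ~ {sum(a_values) / len(a_values):.2f} nm")
```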
Strain-sweep measurements of the precursor formulation showed a linear relation between shear stress and shear rate, which is characteristic Newtonian behaviour. The liquid crystal precursors therefore have a flow property that renders them easy to apply to the required site. Oscillatory frequency sweeps were carried out on the precursor formulation and on the in situ liquid crystal formed in excess water. The storage modulus G′ and the loss modulus G″ were plotted against frequency, and representative rheograms are presented in Figure 7.
The precursor formulation was found to be more viscous than elastic (G″ > G′), indicating "liquid-like" behaviour. The in situ liquid crystal in excess water was more viscous than elastic (G″ > G′) at low frequency and more elastic than viscous (G′ > G″) at high frequency. These results show that the cubic phase is a viscoelastic system with "gel-like" behaviour. Hence, the "liquid-like" behaviour of the precursor formulation is beneficial for injection into the periodontal pocket, while the "gel-like" behaviour after phase transition keeps the drug in the periodontal pocket and sustains its release.
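The liquid-like/gel-like classification above reduces to comparing G′ and G″ point by point across the frequency sweep; the sketch below illustrates this with invented moduli that mimic the described crossover.

```python
# Sketch: reading a frequency sweep as "liquid-like" (G'' > G') or
# "gel-like" (G' > G'') and locating the crossover frequency. The
# moduli below are invented to mimic the trend described for the gel.

freqs = [0.01, 0.1, 1.0, 10.0, 100.0]           # rad/s
g_prime = [5.0, 40.0, 300.0, 2000.0, 9000.0]    # storage modulus G', Pa
g_double = [20.0, 90.0, 350.0, 1200.0, 4000.0]  # loss modulus G'', Pa

crossover = None
for f, gp, gd in zip(freqs, g_prime, g_double):
    behaviour = "gel-like (G' > G'')" if gp > gd else "liquid-like (G'' > G')"
    print(f"{f:>7.2f} rad/s: {behaviour}")
    if crossover is None and gp > gd:
        crossover = f

print(f"crossover at ~{crossover} rad/s" if crossover else "no crossover observed")
```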
In Vivo Pharmacodynamics Studies
High glucose feeding [36], silk ligation of the animal's teeth [37] and local periodontal inoculation of suspected pathogens [38] can all be used to induce periodontitis. The latter two methods produce the local pathological manifestations of periodontitis by adding local periodontal stimulants. High glucose can weaken cell migration, impair the healing ability of periodontal tissues and affect the inflammatory secretion of gingival epithelial cells; meanwhile, the expression of Toll-like receptor 4 (TLR4) and interleukin 6 (IL-6) in human gingival epithelial cells can be upregulated. Periodontitis is a multifactor-induced disease, and current rat models of periodontitis often combine two or more methods. In this study, silk ligation combined with high-sugar feeding was used, and after 10 weeks the typical symptoms of periodontitis were observed, including gingival edema, gingival softening and depressed alveolar bone resorption (Figure 8). This indicated that the periodontitis model was constructed successfully. After 4 weeks of treatment with Periocline® or the in situ cubic liquid crystal, the scores of gingival index (GI), probing depth (PD) and alveolar bone loss (ABL) of each group are graphically illustrated in Figure 9. The model group showed significantly higher GI, PD and ABL levels compared to the normal group (p < 0.01). There was a significant reduction of inflammatory symptoms in the Periocline® group and the in situ cubic liquid crystal group compared with the model group (p < 0.01). Likewise, the values of GI, PD and ABL all approached those of the normal group within the first four weeks.
Histopathological results of the periodontal tissue are depicted in Figure 10. The normal group showed conical gingival papillae, neatly arranged gingival collagen fibers and a complete gingival epithelium. The junctional epithelium was attached at the cemento-enamel junction (CEJ), and the alveolar bone crest had a smooth morphology without resorption. In the model group, the periodontal tissue showed signs of chronic inflammation: epithelial erosion, gingival papilla depression, depressed-type bone resorption, destruction of the alveolar bone, collagen fiber derangement and resorption of cementum were found. The junctional epithelium detached from the CEJ and shifted proliferatively toward the root, and more osteoclasts appeared at the alveolar bone crest. After 4 weeks of treatment, the two medicated groups showed different degrees of repair. In the Periocline® group, the gingival epithelium was slightly repaired and re-attached to the CEJ, the neatly arranged gingival collagen fibers and smooth alveolar bone crest were largely restored, and the depressed alveolar bone resorption had nearly disappeared. The in situ cubic liquid crystal presented a similar effect to the Periocline® group. With Periocline® as the positive control, it can be confirmed that the MH in situ cubic liquid crystal has a therapeutic effect on the restoration of the gingival epithelium, the gingival collagen fibers and the alveolar bone. The liquid crystal system presented in this work exerts effects on periodontitis similar to those of other systems while exhibiting its own unique advantages: it attains properties suited to periodontal administration, such as a sensitive solution-gel phase transition, unique nanostructures, gel strength, good adhesiveness and suitable mechanical properties, without using various additives and toxic solvents. These properties can overcome the particular problem of poor retention at the application site that affects many drug delivery systems for periodontal pockets.
Materials
Phytantriol (3,7,11,15-tetramethyl-1,2,3-hexadecanetriol) was obtained from a commercial supplier in China. Purified water used in all experiments was processed using a Milli-Q system (Millipore, Bedford, MA, USA). All other reagents were of analytical or pharmaceutical grade.
Preparation of Precursor Formulations
The precursor formulations were prepared by mixing PT, PG and water. PT was gently melted at 60 ± 0.5 °C, followed by the addition of the required amount of PG at the same temperature. The MH was dissolved in the PG. The mixture was vortex-mixed until homogeneous, and the appropriate quantity of prewarmed water at the same temperature was then added and the mixture vortex-mixed again until homogeneous. The formulations were finally sterilized by filtration through a 0.22 µm filter and sealed in ampoules to equilibrate for 72 h before any experiments.
PLM
The precursor formulations and the gel obtained in excess water were macroscopically characterized by visual observation and examined microscopically under a polarized light microscope (XP-330C, Cai Kang Optical Instrument Co., Ltd., Shanghai, China) at room temperature.
SAXS Measurements
Unloaded and MH-loaded precursor fluid formulations and the gels obtained in excess water were evaluated with an SAXSess mc2 SAXS instrument (Anton Paar, Graz, Austria) equipped with a sealed X-ray tube (Cu-anode target type) producing Ni-filtered Cu Kα radiation with a wavelength of λ = 0.15418 nm. The voltage was set to U = 40 kV with an anode current of I = 50 mA. The optics and sample chamber were kept under vacuum to minimize air scatter. Measurements were performed at 37 °C with a measurement time of t = 15 min, and samples were equilibrated for 10 min prior to measurement.
Rheological Measurements
Rheological measurements were carried out with a stress-controlled rheometer AR-2000ex (TA Instruments, New Castle, DE, USA) in the flow and oscillatory modes. A cone-plate sensor with a diameter of 20 mm and a cone angle of 1° was used. Measurements were performed after a period of 2 min to allow for stress relaxation. The linear viscoelastic domain of a material was determined via an oscillatory stress sweep at a fixed frequency (1 Hz) before carrying out the oscillatory measurements. Strain-sweep measurements were carried out over a range of strain (0.01-100%).

A constant strain was chosen within the linear viscoelastic domain, and the samples were subjected to frequency-sweep measurements at 25 ± 0.1 °C for the MH-loaded precursor formulations and at 37 ± 0.1 °C for the gels obtained in excess water, over a frequency range of 0.01-100 rad·s⁻¹. The viscoelasticity of the samples before and after phase transition was characterized in terms of the storage modulus G′ and the loss modulus G″. Flow-sweep measurements were performed on MH-loaded precursor formulations over a range of shear rates (1-200 s⁻¹) at 25 ± 0.1 °C.
Evaluation of Syringeability and Determination of pH Value
In this work, syringes equipped with a modified plastic pipette tip were used to evaluate the syringeability of the formulations at room temperature. The inner diameter of the injection tip was about 0.5 mm. The pH values of the chosen formulations were determined with a SevenMulti multi-parameter meter (Mettler Toledo, Shanghai, China).
Determination of the V min
The V min of the chosen formulations was determined by the magnetic stirring method [39]. First, 0.1 g of the MH-loaded precursor formulation was aliquoted into a 5 mL centrifuge tube, and a magnetic bar (10 × 6 mm) was added. The centrifuge tube was incubated in a water bath at 37.0 ± 0.5 °C for 5 min with magnetic stirring at 30 rpm. Then, 10 µL of water was added to the centrifuge tube every 1 min until the magnetic bar completely stopped moving due to gelation. The total volume of water added was taken as the V min of the sample [24].
Determination of the T g
Substances inside and outside the periodontal pocket are easily removed by GCF flow, and rapid phase transition should occur when the in situ cubic liquid crystal combines with the available amount of GCF; therefore, the T g of the chosen formulations was also determined by the magnetic stirring method. The procedure was the same as for the determination of V min, except that excess water was added to the centrifuge tube. The time at which the magnetic bar completely stopped moving due to gelation was taken as the T g of the sample.
In Vitro Drug Release
In vitro release of MH was determined in triplicate using a dialysis membrane diffusion method [40]. Briefly, the formulations (0.5 g) were placed separately into 6 cm dialysis bags. The dialysis bags were then closed and immersed in 6 mL PBS (pH 7.2-7.4) in centrifuge tubes, which were placed in a horizontal shaker (37.0 ± 0.5 °C, 60 rpm). Sodium azide (0.02% w/w) was added to the dissolution medium to prevent bacterial contamination. After 0.5, 1, 2, 4, 6, 8, 12, 24, 48, 72, 96, 120, 144, 168, 192, 216 and 240 h, all of the dissolution medium was withdrawn from each vessel and immediately replaced with 6 mL of fresh dissolution medium. The amount of MH released was analysed by UV spectrophotometry at 274 nm (UV-L5S, Precision Science Instrument Co., Ltd., Shanghai, China). The release amount and cumulative release rate were calculated from the absorbance at the predetermined time points.
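Because the entire 6 mL of medium is replaced at every sampling point, the cumulative release is simply the running sum of the amounts recovered in each interval. The sketch below illustrates that bookkeeping; the linear calibration curve and the absorbances are hypothetical, not the study's data.

```python
# Sketch: cumulative MH release when the whole 6 mL of medium is
# replaced at every sampling point. Concentration is obtained from
# A274 via a hypothetical linear calibration; absorbances are invented.

VOLUME_ML = 6.0
DOSE_MG = 10.0                      # 0.5 g gel x 20 mg/g MH
SLOPE, INTERCEPT = 0.0045, 0.002    # hypothetical calibration: A = m*c + b

def conc_ug_per_ml(absorbance):
    return (absorbance - INTERCEPT) / SLOPE

times_h = [0.5, 1, 2, 4, 8, 12, 24]
a274 = [0.55, 0.48, 0.52, 0.58, 0.50, 0.42, 0.36]   # invented readings

released_mg = 0.0
for t, a in zip(times_h, a274):
    released_mg += conc_ug_per_ml(a) * VOLUME_ML / 1000.0   # ug -> mg
    print(f"{t:>5.1f} h: cumulative release {100 * released_mg / DOSE_MG:5.1f}%")
```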
In Vivo Pharmacodynamics Studies
All the animal studies were approved by the Animal Ethical Committee of Anhui University of Chinese Medicine (ethics approval Nos. KC: 027-15 and KC: 027-16) and conducted in accordance with the guidelines of the Laboratory Animal Center of Anhui University of Chinese Medicine. SPF rats (3 months old, 300 ± 20 g) were divided into four groups of five rats each: a normal group, a model group, an MH-loaded in situ cubic liquid crystal group and a Periocline® group. A rat model of chronic periodontitis was established by combining the thread ligation method [41] with high glucose feeding. The animals were anesthetized with 5% chloral hydrate (350 mg/kg) by intraperitoneal injection before surgery. First, the crevice between the first and second molars was opened by slowly tugging with a 4-0 surgical suture. Then, the neck of the tooth was ligated with a double loop of silk thread, and the ligature was placed into the gingival sulcus. Sucrose solution (100 g/L) was administered as drinking water during the modeling period. The thread was checked three times a week, and the whole process was maintained for 10 weeks.
The two treated groups of rats received intra-pocket administration of either the in situ cubic liquid crystal or Periocline® once a week for 4 weeks. Food and water were supplied normally to all four groups. GI, PD and ABL were measured every week. GI was scored as follows: 0 = healthy, 1 = slight, 2 = moderate and 3 = severe. PD was monitored with a periodontal probe at the distal, middle and mesial sites on both the lingual and buccal sides. Rats were then sacrificed, and the maxillary third-molar alveolar bone tissue was excised to determine ABL. The tissue was immersed in 1 mol/L NaOH solution for 24 h; the soft tissue was then removed, and Loeffler's methylene blue was applied up to the CEJ. The tissue was placed under a stereomicroscope at 12.5× magnification, and the distance from the CEJ to the alveolar crest was measured. The mean of the distal, middle and mesial measurements on the lingual and buccal sides was taken as the ABL. Histopathological examination was also conducted on the periodontal tissue of the upper and lower jaws bearing the three molars. The tissue was fixed in 4% polyformaldehyde solution for 24 h, followed by immersion in decalcifying fluid for 4 weeks; the decalcifying fluid was replaced the next day. Finally, the tissue was observed by light microscopy after hematoxylin and eosin (H&E) staining.
Statistical Analysis
Statistical analysis was carried out with SPSS statistical software (version 23.0, SPSS Inc., Chicago, IL, USA). Each experiment was performed in triplicate, and all data are expressed as mean ± standard deviation (SD). One-way analysis of variance was performed to evaluate the results. p-values below 5% (p < 0.05) were considered statistically significant.
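As a minimal illustration of the group comparison described above, the sketch below runs a one-way ANOVA on invented probing-depth values; it assumes SciPy is available and is not an analysis of the study's data.

```python
# Sketch: the one-way ANOVA used for group comparisons, applied to
# invented probing-depth values (mm). Assumes SciPy; not study data.

from statistics import mean, stdev
from scipy import stats

groups = {
    "normal":  [2.1, 2.3, 2.0, 2.2, 2.4],
    "model":   [5.8, 6.1, 5.5, 6.4, 5.9],
    "treated": [3.2, 3.6, 2.9, 3.4, 3.1],
}

for name, values in groups.items():
    print(f"{name:>8}: {mean(values):.2f} +/- {stdev(values):.2f} mm")

f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.2g} "
      f"({'significant' if p < 0.05 else 'not significant'} at p < 0.05)")
```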
Conclusions
The in situ liquid crystal delivery system presented in this work was found to possess the required low viscosity, a sensitive solution-gel phase transition and favourable physicochemical properties. The formulation showed the typical characteristics of a cubic phase in excess water by PLM, SAXS and rheological measurements. The in vitro release experiments showed that the MH-loaded in situ liquid crystal presented a higher cumulative release than Periocline®, and the formulation was able to sustain drug release for 4 days. The pharmacodynamic results indicated that the MH-loaded in situ cubic liquid crystal had therapeutic effects on periodontitis. This system provides a successful and effective drug delivery method and may be used as a potential carrier for the local delivery of MH in periodontal diseases.
"year": 2018,
"sha1": "ceb4b773ce650b90b5aee31caa7230c7734a1bff",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/23/9/2275/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ceb4b773ce650b90b5aee31caa7230c7734a1bff",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
A single gene (tts) located outside the cap locus directs the formation of Streptococcus pneumoniae type 37 capsular polysaccharide. Type 37 pneumococci are natural, genetically binary strains.
The molecular aspects of type 37 pneumococcal capsular biosynthesis, a homopolysaccharide composed of sophorosyl units (β-D-Glc-(1→2)-β-D-Glc) linked by β-1,3 bonds, have been studied. Remarkably, the biosynthesis of the type 37 capsule is driven by a single gene (tts) located far apart from the cap locus responsible for capsular formation in all of the types characterized to date in Streptococcus pneumoniae. However, a cap37 locus virtually identical to the cap33f cluster has been found in type 37 strains, although some of its genes are inactivated by mutations. The tts gene has been sequenced and its transcription start point determined. Tts shows sequence motifs characteristic of cellulose synthases and other β-glycosyltransferases. Insertion of the tts gene into the pneumococcal DNA causes a noticeable genome reorganization, in such a way that genes normally separated by more than 350 kb in the chromosome are located together in clinical isolates of type 37. Encapsulated pneumococcal strains belonging to 10 different serotypes (or serogroups) transformed with tts synthesized type 37 polysaccharide, leading to the formation of strains that display the binary type of capsule. Type 37 pneumococcus constitutes the first case of a natural, genetically binary strain and represents a novel alternative to the mechanisms of intertype transformation.
Microbial pathogens have developed a great variety of strategies to overcome host cell defenses and ensure their own survival and expansion. These strategies have become extremely accurate in the case of pathogens that have kept a close association with their host (1). Streptococcus pneumoniae (pneumococcus) has evolved as a microorganism highly adapted to and dependent on its human host and is currently considered a most dangerous pathogen, causing conditions ranging from otitis media and sinusitis to pneumonia, septicemia, and meningitis (2). Pneumococcal disease accounts for more deaths than any other vaccine-preventable bacterial disease (3). The capsular polysaccharide has been identified as the main virulence factor of pneumococcus. There are at least 90 different capsular types, although only a subset of 23 types causes more than 90% of invasive disease worldwide (2). The use of a 23-valent polysaccharide-based vaccine has turned out to be quite limited in protecting those segments of the population that are extremely sensitive to invasion by pneumococcus (e.g., children under three years old and the elderly).
Recent studies have provided insights into the gene cluster (cap) involved in capsular formation in S. pneumoniae. This cluster has been characterized at a molecular level for types 1, 3, 14, 19F, 19B, 23F, and 33F (4-11). All of the cap clusters characterized so far are placed between the dexB and aliA genes, with a central region embracing the genes responsible for synthesis of the type-specific capsule, flanked by open reading frames (ORFs) that share, in most cases, homology among all of the types described so far. The number of genes involved in type-specific capsule formation varies according to the chemical complexity of the capsule, whereas the biological role of the ORFs flanking the specific genes remains to be determined (12). More recently, we have found that galU, a gene located outside of the cap locus and encoding a uridine diphosphoglucose pyrophosphorylase, is essential for capsular polysaccharide biosynthesis, at least in type 1 and 3 pneumococci (13).
Shifting from one capsular type to another (intertype transformation) was suggested to happen in nature and has been repeatedly demonstrated in the laboratory (for a review, see reference 14). More recently, detailed molecular analysis of the cap locus has revealed that capsular changes are quite frequent among the most virulent clinical isolates of pneumococci (15,16). The strategy used to carry out intertype transformation is based on the complete interchange of large DNA fragments (from 14 to 22 kb long) between different capsular types, taking advantage of the similarity found in the ORFs flanking the capsular-specific genes. The frequent presence of insertion sequence (IS) elements in the flanking regions might also promote this type of interchange and suggests that the capsular cluster could behave as a pathogenicity island. In other bacterial pathogens, it has been suggested that ISs might facilitate the evolution and adaptation of microorganisms to their host's environment by means of a kind of 'quantum leap' evolution that leads to rapid changes (17), as could be the case for the pneumococcal capsule.
In this paper, we describe a novel strategy used by S. pneumoniae to synthesize the type 37 capsule. This strategy implies the participation of a single gene (tts) to direct the formation of an abundant capsular envelope that is composed of sophorosyl units linked by β-1,3 bonds (18). The tts gene responsible for the formation of this capsule was located outside of the cap cluster and characterized. Our work also illustrates an extremely simplified strategy that pneumococcus has developed to direct the formation of its main virulence factor, which contributes in a fundamental way to the survival of this pathogenic microorganism in humans.
Materials and Methods
Bacterial Strains, Plasmids, and Growth Conditions. We used the following unencapsulated laboratory S. pneumoniae strains: M24 (S3⁻; reference 19), M29 (S1⁻; reference 4), and M31 (ΔlytA; S2⁻; reference 20). The type 37 clinical isolates were purchased from the Statens Seruminstitut (strain 7077/39) or provided by A. Fenoll (Spanish Pneumococcal Reference Laboratory, Majadahonda, Spain; strains 1235/89 and 975/96), who also provided most of the other encapsulated pneumococci used in this work. The number after the slash indicates the year of isolation of the corresponding strain. When working with Escherichia coli, strains DH5α (21) and C600 (22) were employed. Growth and transformation of laboratory strains of S. pneumoniae and E. coli was performed as previously described (13). Clinical pneumococcal isolates were transformed after the procedure of Håvarstein et al. (23) using a competence-inducing peptide provided by D.A. Morrison (Department of Biological Sciences, University of Illinois at Chicago, IL). S. pneumoniae clones obtained upon transformation with derivatives of pLSE1 (tet ermC; reference 24) were scored on blood agar plates containing 0.7 μg of lincomycin (Ln) per milliliter. Plasmid pLSE4 is a promoter-probe vector able to replicate in S. pneumoniae and E. coli that contains a promoterless lytA gene (25). Plasmid pUCE191 has been described elsewhere (5).
DNA Techniques and Plasmid Construction. DNA manipulations and standard molecular biological methods were performed as described by Sambrook et al. (22). S. pneumoniae DNA digested with either SmaI, SacII, or ApaI was analyzed by pulse-field gel electrophoresis (PFGE) using a contour-clamped homogeneous electric field DRII apparatus (Bio-Rad Labs.) as previously described (26). Primer-extension mapping of the transcription initiation site was carried out as previously described (4). PCR amplifications were performed as previously described (11). Conditions for amplification were chosen according to the G plus C content of the corresponding oligonucleotides. Among the oligonucleotide primers mentioned in the text, the primer OL62 (5′-CGCTTCATTCTGTACGGTTGAATGCGG-3′) has been previously described (4). Lowercase letters indicate nucleotides introduced to construct appropriate restriction sites; these are underlined.
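Since amplification conditions were chosen according to the G+C content of each oligonucleotide, the quantities involved are easy to illustrate. The sketch below (a minimal example, not the authors' procedure) computes the G+C fraction of the OL62 primer quoted above and a Wallace-rule melting-temperature estimate, Tm = 2(A+T) + 4(G+C), a standard rough rule for short oligos; the paper does not state which formula, if any, was actually used.

```python
# Rough quality checks for a PCR primer, illustrating how amplification
# conditions can be matched to each oligonucleotide's G+C content.
# The Wallace rule (Tm = 2*(A+T) + 4*(G+C)) is only a coarse estimate
# for short primers; which formula the authors used is not stated.

OL62 = "CGCTTCATTCTGTACGGTTGAATGCGG"  # 5'->3' sequence quoted in the text

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in the sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    """Wallace-rule melting temperature (degrees C) for short oligos."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(f"OL62: {len(OL62)} nt, GC = {gc_content(OL62):.1%}, Tm ~ {wallace_tm(OL62)} C")
```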
Plasmid pDLP37 was constructed by cloning a 1.7-kb SphI-NheI DNA fragment of strain 1235/89 containing the tts gene into pUC19 previously digested with SphI and XbaI. Plasmid pDLP40 contains a 1.7-kb SphI-KpnI DNA fragment of pDLP37 embracing the tts gene, inserted into an EcoRI-deficient pUC18 previously treated with the same enzymes. The latter plasmid was constructed by digesting pUC18 with EcoRI, filling in with the Klenow (large) fragment of the E. coli DNA polymerase, and self-ligation. We used PCR to amplify the ermC gene from plasmid pLSE1 using oligonucleotide primers OL82 and OL83. This promoterless gene was digested with SmaI and cloned into EcoRI-digested pDLP40. Before ligation, the EcoRI site located in the tts gene had been filled in as described above. Plasmid pDLP41 was isolated among the erythromycin-resistant transformants of E. coli DH5α. Plasmid pDLP43, containing a promoterless tts gene placed downstream of the tet gene of the pLSE1 vector, was constructed as follows: DNA prepared from strain 1235/89 was PCR amplified using oligonucleotide primers D109 and D116. The amplified product was filled in, digested with ClaI, and ligated to pLSE1 previously treated with EcoRV and MspI.
NEBlot™ Phototope™ Kit (Millipore Corp.) was used to construct biotin-labeled probes and Phototope™ 6K Detection Kit (Millipore Corp.) was used for chemiluminescent detection. Southern blots, dot blots, and hybridizations were carried out according to the manufacturer's instructions.
Nucleotide Sequence and Data Analysis. DNA sequencing was carried out by using an Abi Prism 377™ DNA sequencer (Applied Biosystems, Inc.). DNA and protein sequences were analyzed with the Genetics Computer Group software package (version 9.0; reference 27) or using the programs indicated in the text that are available at the internet address specified below. Sequence similarity searches were performed using the EMBL/GenBank, SWISS-PROT, and PIR databases. Preliminary sequence data of the S . pneumoniae genome were obtained from The Institute for Genomic Research at http://www.tigr.org.
Miscellaneous Techniques. Pneumococcal transformants harboring pLSE4-derived plasmid were scored on Ln-containing plates using a filter technique to distinguish the LytA phenotype (28). Immunoagglutination using anti-R serum (29) or coagglutination assays with type antisera purchased from the Statens Seruminstitut were carried out as previously described (11). Typing by the Quellung technique was carried out by L. Vicioso (Spanish Pneumococcal Reference Laboratory, Majadahonda, Spain).
Nucleotide Sequence Accession Numbers. The sequence data reported here have been submitted to the EMBL/GenBank/DDBJ databases under accession numbers AJ131984 and AJ131985.
Results

Type 37 Pneumococcal Strains Possess a Cryptic cap33f Locus.
Long PCR using oligonucleotide primers D62 (dexB) and D5 (aliA) and DNA prepared from three different type 37 pneumococcal clinical isolates produced 20-kb DNA fragments that were apparently identical to each other (Fig. 1 A). The amplified DNA fragment obtained from strain 1235/89 was completely sequenced (20,133 bp) and compared with the sequences available in the databases. High similarity (>97% identity) was found throughout the entire sequence between the cap37 locus and the cap33f cluster recently described (reference 11; Fig. 1 B). Most interestingly, mutations interrupting the reading frames were found in cap37B, cap37E, cap37N, and cap37O, suggesting that none of these genes is required for type 37 capsule biosynthesis. These mutations were confirmed by repeated sequencing (at least three times) of different PCR-amplified products. The large number of genes found in the cap37 locus was unexpected, as the type 37 polysaccharide is, as reported above, very simple and, in all the cases documented so far in the literature, there was a direct relationship between the size of the cap cluster and the chemical and structural complexity of the corresponding capsular polysaccharide (12). It would be conceivable, however, that the observed inactivation of some of the genes of the locus might result in a polysaccharide simpler than that of type 33F. If this were the case, transformation of S. pneumoniae with the 20-kb PCR fragment containing the cap37 genes should have shifted the capsule type of the recipient strain to that of type 37. However, we never found type 37 transformants when using competent cells of strains M24 (S3⁻) or M29 (S1⁻) as recipient bacteria for the 20-kb type 37 DNA (data not shown). Moreover, when the cap locus from strains DN2 or DN5 (two independently isolated type 37 transformants of strain M24 obtained by using chromosomal DNA prepared from strain 1235/89) was amplified by PCR using oligonucleotides D62 and D5, the length as well as the restriction enzyme profile of the amplified PCR DNA fragments corresponded to that of the recipient S3⁻ strain (M24) and not to the donor DNA (Fig. 1 C). In addition, no amplification was obtained using DNA from DN2 or DN5 and any pair of internal oligonucleotide primers designed on the basis of the cap37 sequence (data not shown). Taken together, these results strongly suggested that additional genes located outside the cap37 locus were required for transformation to the type 37 phenotype (S37⁺).
A Single Gene (tts) Transforms S. pneumoniae to the S37⁺ Phenotype. To localize the gene(s) responsible for the synthesis of the type 37 capsule, DNA prepared from strain 1235/89 was digested with several restriction endonucleases, and the fragments were separated by electrophoresis on 0.7% low-melting-point agarose gels. DNA fragments of various sizes were purified and used to transform competent cells of M24 (S3⁻) to the type 37 capsule. S37⁺ transformants were observed using as donor material fragments of ∼7 kb when DNA from strain 1235/89 was digested with PstI. Afterwards, a ligation mixture containing 7-kb PstI DNA fragments from strain 1235/89 and PstI-digested pUCE191 was used to transform competent M24 cells. Several S37⁺, Ln-resistant transformants were isolated, and one of them (strain C2) was used for subsequent study. Transformation experiments using chromosomal DNA prepared from strain C2 demonstrated that the ermC marker was genetically linked to the gene(s) responsible for the synthesis of the type 37 polysaccharide. Afterwards, C2 DNA was digested with restriction endonucleases without target sequences in pUCE191 (indicated by X in Fig. 2), namely BglII, EcoRV, Eco47III, MunI, or SpeI, diluted and self-ligated. The ligation mixture was used for PCR amplification with the direct and reverse M13/pUC primers. Amplified DNA fragments were found exclusively with the EcoRV and MunI digestions (not shown). Determination of the nucleotide sequence beyond the PstI sites served to design a pair of oligonucleotide primers (D90 and D91) that were used for PCR amplification of DNA prepared from strain 1235/89. Those primers produced a fragment of ∼7 kb that was capable of transforming the S3⁻ strain M24 to the S37⁺ phenotype (not shown). In addition, identical fragments were produced when DNAs prepared from the type 37 strains 975/96 and 7077/39 were used as substrates for PCR amplification. These amplified DNA fragments were also able to transform the M24 strain to the type 37 capsule (not shown). The amplified DNA fragment obtained from strain 1235/89 was completely sequenced, and a schematic representation of the results is shown in Fig. 3 A. The nucleotide sequence of the PstI fragment (7,311 bp) was compared with a partial (and still preliminary) nucleotide sequence of the genome of a type 4 pneumococcal strain (see Materials and Methods). Surprisingly, from positions 1 to 1,479, the sequence matched part of contig sp_14 (Fig. 3 B), in particular that containing a gene (gpmA) putatively encoding a protein highly similar (64.3% identity and 76.6% similarity) to the phosphoglyceromutase (GpmA) of Haemophilus influenzae. However, from nucleotide 5,298 to the end of the PstI fragment, the sequence was virtually identical to part of contig sp_58 (that located immediately downstream of the TAA termination codon of the metE gene) and putatively codes for a protein that is 66% identical (80.7% similar) to the PyrDA dihydroorotate dehydrogenase of Lactococcus lactis, and for a partial ORF (orfY) of unknown function (Fig. 3 C). Upstream of the pyrDA gene, a 105-bp repeat element characteristic of S. pneumoniae (4) was found. There are no data indicating the distance between the two contigs, but it can be estimated to be >22 kb, that is, the smallest distance between gpmA and the right end of contig sp_14. The apparently anomalous structure of the PstI fragment will be discussed in detail below.
From nucleotide 3,834 to 5,297 of the PstI fragment obtained from strain 1235/89 DNA, a copy of the IS element IS1167 (30) was found (Fig. 3 A). The tnp1167 gene should encode a defective transposase because it contains a frameshift mutation. From nucleotide position 3,706 to 3,833, the sequence is identical to that found 3 bp downstream of the TAA termination codon of gpmA in contig sp_14, strongly suggesting that this region represents the integration site of the type 37-specific sequences.
The only gene in the whole 7-kb PstI fragment from strain 1235/89 that showed no similarity to any other present in the S. pneumoniae database was named tts. Upstream of the ATG initiation codon, a putative promoter (ttsp) was found (TTGATA-17 bp-TATAAT). An extended −10 promoter motif, TtTG, characteristic of the −16 region of S. pneumoniae (31), was also observed. On the other hand, another copy of the 105-bp repeat element characteristic of S. pneumoniae (reference 4; see above) was located further upstream. Both repeats are 71.7% identical and oppositely oriented. The tts gene putatively codes for a protein of 509 amino acid (aa) residues with a predicted Mr of 58,888. Six transmembrane regions could be anticipated for Tts using different prediction programs, suggesting that the protein is targeted to the membrane. The aa sequence positions for these predicted transmembrane helices are A (aa 11-33), B (aa 45-63), C (aa 347-369), D (aa 378-400), E (aa 407-429), and F (aa 483-505). The central part of the protein is more hydrophilic and is predicted to reside in the cytoplasm and contain the catalytic site(s). Two independent prediction methods (SignalP V1.1 and PSORT) were used to test whether Tts possesses a signal peptide, and both methods strongly suggested that this was indeed the case. The possible cleavage site was predicted to be located between residues 36 and 37 or 32 and 33, depending on the program used. The putative signal peptide coincides with transmembrane helix A. On the other hand, we have also determined the complete nucleotide sequence of the tts gene of the other two clinical type 37 isolates, strains 7077/39 and 975/96, and observed that the three tts genes were identical (not shown). As the type 37 clinical strains studied here were isolated in different geographic locations and one of them as early as 1939, this finding illustrates the noticeable genetic stability of the tts gene.
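Transmembrane predictions of the kind cited for Tts are typically based on sliding-window hydropathy analysis. The sketch below implements the classic Kyte-Doolittle scan; the 19-residue window and the +1.6 cutoff are conventional choices rather than values taken from the paper, and the peptide used is a hypothetical fragment, since the full 509-aa Tts sequence is not reproduced here.

```python
# Minimal Kyte-Doolittle sliding-window hydropathy scan, the kind of
# computation underlying transmembrane-helix predictions such as those
# cited for Tts. The 19-residue window and the +1.6 cutoff are
# conventional choices, not values from the paper, and `fragment` is a
# hypothetical peptide (the full Tts sequence is not reproduced here).

KD = {  # Kyte & Doolittle (1982) hydropathy scale
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def hydropathy_profile(seq, window=19):
    """Mean hydropathy of each window; high plateaus suggest TM helices."""
    scores = [KD[aa] for aa in seq.upper()]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

fragment = "MKLIVLALAVILFIGLVSAQTPEDKRLLIVGAVLLILFFVLLIIGW"  # hypothetical
for pos, score in enumerate(hydropathy_profile(fragment), start=1):
    flag = "  <-- possibly membrane-spanning" if score > 1.6 else ""
    print(f"window at residue {pos:3d}: {score:+.2f}{flag}")
```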
To ascertain that the tts gene is responsible for the synthesis of the type 37 capsule, insertion-inactivated mutants were constructed using pDLP41 to transform competent cells of the S37⁺ pneumococcal strain DN2. Plasmid pDLP41 contains the gene ermC inserted into the tts gene (see Materials and Methods). One of the Ln-resistant transformants was used for further study (strain DN21). The accuracy of the construction was checked by restriction analysis of the PCR-amplified products of DN21 and DN2 DNAs using oligonucleotide primers D90 and D91 (Fig. 4). Cells of strain DN21 were shown to be unencapsulated, as deduced from the failure of the type 37 antiserum to agglutinate them. Moreover, these transformants deposited at the bottom of the test tube when grown in liquid medium and agglutinated with anti-R serum (not shown). On the other hand, when competent DN21 cells were transformed with pDLP43, containing exclusively the tts gene cloned into pLSE1, S37⁺ transformants were isolated (not shown). All of these results indicated that Tts is the type 37-specific polysaccharide synthase.
Identification of the tts Promoter and the Transcription Start Point. To determine whether the proposed promoter sequence (see above) actually represents ttsp, a DNA fragment containing the putative promoter was amplified using oligonucleotide primers D101 and D112 (Fig. 3 A). After digestion with SphI and XbaI, the fragment (198 bp) was ligated to pLSE4 previously treated with the same enzymes and used to transform competent cells of the pneumococcal M31 strain. LytA⁺ cells, detected among the Ln-resistant M31 (ΔlytA) transformants, contained a recombinant plasmid designated pDLP36. Crude sonicated extracts of M31 cells harboring pDLP36 contained LytA activity (∼21 U/mg of protein; data not shown), which proved the presence of a functional promoter in the cloned fragment. To demonstrate that ttsp was actually located in this region, the transcription start point was mapped by primer extension of the oligonucleotide OL62. This analysis (Fig. 5) showed that the transcription of the tts gene initiates 9 nucleotides after the −10 consensus sequence.
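The promoter architecture reported for ttsp (a TTGATA box separated by 17 bp from a TATAAT box) lends itself to a simple pattern search. The sketch below is only an illustration of such a search; the `region` string is a hypothetical stand-in for the 198-bp cloned fragment, whose full sequence is not reproduced in the text.

```python
import re

# Sketch of a sigma-70-style promoter search based on the arrangement
# reported for ttsp: a TTGATA box, a 17-bp spacer, and a TATAAT box.
# `region` is a hypothetical stand-in for the 198-bp cloned fragment.

PROMOTER = re.compile(r"TTGATA(.{17})TATAAT")

region = "AGCTAGGCATTGATA" + "ACGTACGTACGTACGTA" + "TATAATGGCATCGTAGC"
m = PROMOTER.search(region)
if m:
    print(f"-35-like box at position {m.start() + 1}, "
          f"spacer = {len(m.group(1))} bp")
    # For ttsp, primer extension placed the transcription start point
    # 9 nucleotides after the -10 (TATAAT) consensus.
else:
    print("no TTGATA-17bp-TATAAT arrangement found")
```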
Tts Appears To Be a β-Glucosyltransferase. The deduced aa sequence of the tts gene was compared with the sequences available in the databases. Using COG (Clusters of Orthologous Groups) analysis (32), sequence similarities suggested that Tts might be a member of the group of glycosyltransferases involved in cell wall biogenesis, whereas BLASTP showed moderate similarity with cellulose synthases. In particular, Tts exhibits significant similarities (Fig. 6) in the regions recently shown to be highly conserved among plant as well as bacterial cellulose synthases and several other glucosyltransferases (33). These conserved motifs have previously been suggested to be critical for catalysis and/or binding of the substrate uridine diphosphoglucose (UDP-Glc; reference 34).
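The conserved aspartic acid residues and the QXXRW motif mentioned above (see also the Figure 6 legend) can be located in a candidate sequence with a simple motif scan, sketched below. The peptide is a hypothetical fragment, not the real Tts sequence, and a motif hit alone is of course not evidence of glycosyltransferase activity.

```python
import re

# Locating the QXXRW motif (Q, two arbitrary residues, R, W) in a
# candidate sequence. The peptide below is a hypothetical fragment,
# not the real Tts sequence.

QXXRW = re.compile(r"Q..RW")  # '.' matches any residue at the X positions

peptide = "LLDSDTVLRQDADVVQAGRWLSFFD"  # hypothetical
for m in QXXRW.finditer(peptide):
    print(f"QXXRW-type motif '{m.group()}' at residues "
          f"{m.start() + 1}-{m.end()}")
# Conserved aspartate positions would be checked separately against
# the alignment (Fig. 6), not by a simple pattern search.
```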
Genomic Reorganization Caused by Intertype Transformation in Type 37 Pneumococcal Strains. The tts gene from the type 37 clinical strains has been shown to reside in a 7-kb PstI fragment that, apparently, might be the result of a profound reorganization of the genome. This assumption was based on the finding that the genes flanking tts reside in two different contigs, namely sp_14 and sp_58, that are located far apart on the partially sequenced genome of a type 4 pneumococcal strain. This also appears to be the case for the laboratory strain M24, a late descendant of the classical R6 strain (19), as repeated attempts to amplify M24 DNA using oligonucleotides D90 and D91 and the long PCR technique were unsuccessful (data not shown). On the other hand, PCR amplification experiments using DNA prepared from either DN2 or DN5, two type 37 transformants of the M24 strain, and the same oligonucleotide primers only rendered a PCR product in the case of DN2 DNA. Interestingly, restriction enzyme analysis showed that the amplified DN2 DNA fragment was identical to that of the 7-kb PstI fragment of the parental clinical strain 1235/89 DNA (not shown).
PFGE is a powerful tool to distinguish among isolates of S. pneumoniae due to the great polymorphism exhibited by the DNAs of different pneumococcal strains (35). Unfortunately, this polymorphism precludes the use of DNA prepared from clinical isolates to directly locate any gene, because only the physical map of Avery's R6 strain (36) has been worked out (37, 38). As previously reported (26), two different DNA fragments were generated by digestion of M24 DNA with either ApaI or SacII with respect to those produced in R6 DNA, whereas both strains have identical SmaI profiles. Fig. 7 A shows a partial physical/genetic map of the M24 chromosome. When analyzed by PFGE, identical profiles were observed for M24 and DN5 DNAs digested with ApaI, SacII, or SmaI (Fig. 7 B). However, DN2 DNA showed altered bands with all three enzymes used, indicating that genomic reorganization did occur during transformation of the S3⁻ recipient strain M24 to the S37⁺ phenotype. It should be stressed that, for instance, the SacII fragment number 3 (∼62 kb) of M24 and DN5 DNAs is converted, in DN2 DNA, into a 290-kb fragment that superimposes on the original SacII fragment number 2 of M24 and DN5 DNA. This reorganization does not affect the cap3 recipient cluster as shown above and might involve those fragments where contigs sp_14 and sp_58 are located. To test this hypothesis, chromosomal DNAs prepared from M24, DN2, and DN5 were digested with ApaI, SacII, or SmaI, subjected to PFGE, blotted, and hybridized with different biotin-labeled probes (Table I). The probes used contained internal fragments of the genes tts, gpmA, psaA, or pyrDA (see gene locations in Fig. 3). First of all, we localized the genes gpmA (contig sp_14) and pyrDA (contig sp_58) in the S. pneumoniae M24 chromosome and observed that they map at very distant positions (Table I and Fig. 7 A). As expected, the location of gpmA matched that of the previously mapped pbp2B gene (36), which is located only 15 kb upstream of gpmA according to recent sequence data (Fig. 3 B). These results also showed that contigs sp_14 and sp_58 are located very far apart in the S. pneumoniae chromosome. In fact, these contigs are separated by at least 380 kb, the sum of the sizes of the intervening macrorestriction fragments (Fig. 7 A).

Figure 5 legend (continued). The final products were loaded on a 6% polyacrylamide 7 M urea sequencing gel, in parallel with a sequencing reaction using the same oligonucleotide primer (OL62) and pDLP36. The major extended product is indicated by an arrow, and the −10 consensus sequence of ttsp is also shown. Note that the indicated sequence corresponds to the coding strand.

Figure 6. Computer-generated alignment (PILEUP) of selected regions of the Tts synthase (SPNE_Tts) and several cellulose synthases and other glucosyltransferases. Stars indicate the conserved aspartic acid residues, and solid triangles indicate the QXXRW motif reported to be critical for UDP-Glc binding and/or catalysis (34). Residues in black boxes indicate aa residues identical in at least 7 out of the 13 proteins aligned.
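The ">=380 kb" lower bound quoted above is plain arithmetic: the sizes of the macrorestriction fragments lying between the two hybridizing fragments are summed. A minimal sketch, with hypothetical fragment sizes in place of the measured ones:

```python
# The ">=380 kb" figure is the sum of the macrorestriction fragments
# lying between the two hybridizing fragments on the PFGE map. The
# fragment sizes below are hypothetical placeholders, not the measured
# SmaI/SacII/ApaI values.

intervening_fragments_kb = [120, 95, 80, 85]  # hypothetical sizes
print(f"minimum distance between loci: {sum(intervening_fragments_kb)} kb")
```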
Different hybridization bands were observed when comparing DN2 and DN5 DNAs (Table I), in agreement with the different chromosomal location of the tts gene in both strains. Moreover, apart from the hybridization band of DN5 DNA with the type 37-specific tts probe, the hybridization patterns of M24 and DN5 DNAs were identical, strongly suggesting that a large chromosome reorganization had not taken place in DN5 as a consequence of transformation of M24 to the S37⁺ phenotype. In fact, combined PCR amplification experiments and sequence determination showed that, in DN5 DNA, the tts gene integrated between gpmA and orf1819 (Fig. 3 B), as 2,400 out of 2,412 bp of the intervening orf3 gene were lost (data not shown). In the type 37 DN2 transformant, however, we found that gpmA moved from its original position to that where pyrDA resides (Table I). Moreover, this reorganization also affected some genes located downstream of gpmA, as deduced from the finding that psaA, which is located ∼7 kb downstream of gpmA in the S. pneumoniae genome (Fig. 3 B), hybridizes with a novel SmaI fragment (number 7) in DN2 DNA (Table I and Fig. 7 B).
Table I (footnote). Numbers represent the restriction fragments separated by PFGE (Fig. 7 A) that hybridize with labeled probes containing the indicated genes. −, no hybridization signal. *Restriction fragments not present in M24 DNA (Fig. 7 B).

To investigate whether the IS element located downstream of tts might be involved in the reorganization of the genome, type 37 transformants of the M24 strain were obtained by using, as donor DNA, a 4.1-kb SacI-ClaI fragment containing the tts gene, the IS1167 element, and the last 140 nucleotides of gpmA (Fig. 3 A). Five independently isolated type 37 transformants were tested using a combination of PCR amplification and Southern blot analysis (not shown). All of them turned out to be identical and appeared to have arisen by homologous recombination between the 3′ end of gpmA and the 128-bp region located immediately downstream of the tts gene (represented by a star in Fig. 3, A and B) without any additional genome rearrangement. Moreover, all of the transformants had lost the IS1167 element. Although the number of transformants studied is limited, these results suggest that the sequences flanking the tts gene are more relevant for successful transformation than the IS element itself.

Construction of Binary Encapsulated Strains of S. pneumoniae. Apart from the natural type 37 strains, only cap3A unencapsulated pneumococcal mutants had been used in this study as recipients for intertype transformation experiments. Consequently, we were interested to know whether the tts gene could direct the biosynthesis of the type 37 capsule in pneumococcal isolates of different types. S. pneumoniae strains belonging to serotypes (or serogroups) 1, 2, 5, 6, 8, 9, 19, 33A, 33B, or 33F were incubated with DNA prepared from strain C2, and Ln-resistant transformants were scored on blood agar plates. Selected clones were then analyzed for capsulation using both the Quellung reaction and coagglutination assays. All of the clones tested showed two capsules, that of the recipient strain and the type 37 capsule encoded by the transforming donor DNA (not shown).
Discussion
It is noteworthy that the three clinical strains of S. pneumoniae studied here, one recovered in Denmark in 1939 shortly after the first isolation of a type 37 strain (39) and the other two in Spain in 1989 and 1996, respectively, contain in their chromosomes nearly identical and mutated cap33f loci placed between the dexB and aliA genes (Fig. 1). This cap33f locus appears to be silent in all type 37 strains, as measurable amounts of serogroup 33 polysaccharide were not found (data not shown). The finding that no S37⁺ transformants could be identified when the cap33f locus was PCR amplified and used as donor DNA to transform unencapsulated recipient cells suggested that the gene(s) responsible for the synthesis of the type 37 capsular polysaccharide might be located elsewhere in the genomes of type 37 strains.
In this paper, we show that a single gene, designated tts and located in a 7.3-kb PstI DNA fragment common to all of the clinical type 37 isolates (Fig. 3 A), is responsible for the synthesis of the type 37 capsular polysaccharide. The Tts protein coded by the tts gene appears to be an integral membrane protein having a potentially cleavable signal peptide. As the type 37 polysaccharide has two different β-glucosidic linkages, β-1,2 and β-1,3 (18), Tts should catalyze both kinds of linkages. There is increasing evidence showing that this property is not so unusual as previously envisaged. The type 3 pneumococcal Cap3B synthase (40) and the HasA hyaluronan synthase of Streptococcus pyogenes (41) provide examples of dual enzymatic activity. More recently, Griffiths et al. (42) have demonstrated that KfiC, an enzyme involved in the synthesis of the E. coli K5 capsule, is a bifunctional enzyme with both α- and β-glycosyltransferase activities responsible for the sequential addition of glucuronic acid and N-acetylglucosamine to the growing polysaccharide chain. Interestingly, it has been possible to produce a truncated protein lacking only one of the two transferase activities (42). If a similar situation could be demonstrated for the Tts synthase, it might be possible to construct tts mutants lacking the β-1,2-glucosyltransferase activity that would produce a callose-containing capsular polysaccharide (β-1,3-glucan). Nevertheless, it should be emphasized that this type of capsule has never been reported in S. pneumoniae.
The type 37 synthase shows sequence signatures known to be characteristic of bacterial and plant cellulose synthases and other β-glycosyltransferases (33; Fig. 6). Currently, it is not known whether genes other than tts and those common to all pneumococci might cooperate in the capsular synthetic process as reported, for example, for the Acetobacter xylinum cellulose synthase, the only well characterized cellulose synthase, which comprises at least one putatively regulatory subunit in addition to the catalytic subunit (34). Also, we lack sufficient biochemical information to speculate about whether the Tts synthase is responsible for direct polymerization of glucan from UDP-Glc, as proposed for A. xylinum, or whether it might catalyze the synthesis of a lipid-Glc precursor as suggested for the CelA protein of Agrobacterium tumefaciens (34).
Transformation of a laboratory strain (M24) with type 37 chromosomal DNA produced at least two categories of strains. In one of them, the DN2 strain underwent a noticeable genomic reorganization, as genes separated by at least 380 kb in the genome of the recipient strain (i.e., the genes gpmA and pyrDA) lie close together after transformation, as evidenced by PFGE experiments (Fig. 7 and Table I). This situation reconstructed that found in the clinical type 37 pneumococcal isolates. In the other class of transformants (strain DN5), the tts gene is integrated immediately downstream of gpmA without any major chromosomal rearrangement. In addition, judging from experiments using transforming DNA exclusively containing the tts gene and IS1167, the IS element appears to play a secondary role in the integration events. The observation that pneumococcal strains isolated almost 60 years apart at different geographic locations contain not only an identical tts gene inserted at the same site but also a cryptic cap33f locus, together with the finding of the potential capacity of tts to integrate and be expressed in all of the pneumococcal strains tested, strongly supports the hypothesis of the clonal origin of capsular genes in S. pneumoniae, as has already been proposed for the cap1 cluster involved in the synthesis of type 1 polysaccharide (4). In fact, in the two cases where complete sequence data of the cap genes of two different strains of the same serotype are available, types 3 (5, 6) and 23F (unpublished sequence available from EMBL/GenBank/DDBJ under accession number AF030373; reference 10), >95% identical nucleotides were found among the cap genes of different pneumococcal strains.
During the last few years, several researchers have reported that some clinically relevant (multiresistant) pneumococcal strains are essentially identical in overall genotype but differ in capsular type (15, 43-48). This finding has been interpreted as evidence that the new strains were the result of intertype transformation. Very recently, Coffey et al. (16) studied in detail eight type 19F variants that were otherwise identical to the major Spanish multiresistant 23F clone and confirmed that recombination at the cap locus had taken place on at least four occasions. In all of the cases reported so far, in vivo intertype transformation implies that the recipient cap locus is substituted by that of the donor strain, that is, the transformant gains new capsular genes but loses its own cap cluster. In the case reported here, however, the capsular tts gene of the donor strain does not replace the recipient cap33f cluster but integrates in a different, distant place and originates a genetically binary strain, a strain containing two capsular loci. Binary encapsulated strains, i.e., those synthesizing two chemically and immunologically distinguishable capsules, were constructed in the laboratory many years ago, and it was observed that one type of capsule predominates (for a comprehensive review see reference 14). Moreover, transformation experiments using DNA prepared from binary cells showed that the supernumerary capsular cluster was inserted in a region different from the usual capsular polysaccharide-determining one (49). Binary transformants appear to be stably maintained, except in some rare cases where unstable binary strains were obtained (50). In the latter case, linkage between the donor and recipient capsular genes could be demonstrated. More recently, binary strains were constructed by cloning the type 3 polysaccharide synthase gene (cap3B) into S. pneumoniae strains belonging to several types (40). In addition, genetically binary type 3 strains were prepared by transformation of unencapsulated cap3A mutants impaired in the synthesis of UDP-Glc dehydrogenase with the homologous cap1K gene from type 1 pneumococci (4). In this case, the introduction of the cap1K gene into the recipient chromosome was facilitated by the presence of a closely linked copy of IS1167. Nevertheless, with the only exception of Griffith (51), who reported a pneumococcal strain that agglutinated specifically with the sera of two different types, natural isolates of S. pneumoniae having two capsules have not been described so far. In addition, the possibility that Griffith's observation was caused by some kind of immunologic cross-reactivity between capsular polysaccharides cannot be ruled out (52, 53).
The type 37 pneumococci reported here are binary strains from the genetic viewpoint. This status might provide a potential advantage against the immunological host defenses. Although currently silent, the recipient cap37 locus might eventually recover its capacity to synthesize type 33F capsular polysaccharide; e.g., we can envisage that transformation events involving DNA fragments of the cap33f gene cluster would restore to the wild-type genotype those genes mutated in cap37. On the other hand, cryptic tts homologues might also be present in some clinical isolates of pneumococcus. Although preliminary searches for these putative mutants have been unsuccessful, these variants should be good candidates for the rapid acquisition of a type 37 capsule. Regardless of these possibilities, from the results presented here, de novo acquisition by S. pneumoniae of a tts gene via genetic transformation appears to be a rather likely event.
Temporal Variations of the Chemical Composition of Three Seaweeds in Two Tropical Coastal Environments
The seaweeds Chaetomorpha antennina, Gymnogongrus griffithsiae and Ulva fasciata were studied regarding tissue concentrations of total nitrogen, total phosphorus, total protein, hydrosoluble protein, total carbohydrate, chlorophyll a and total carotenoid throughout a 39-month survey in two coastal environments of Rio de Janeiro State, Brazil. One of the sites (Itapuca Stone) has high concentrations of dissolved nutrients and an intense long-term process of cultural eutrophication; the second site (Bananal Inlet) is thought to have lower concentrations of dissolved nutrients and no relevant anthropic impact. Seaweeds experienced changes in the concentrations of the substances in the thalli; however, they did not show any cyclic seasonal pattern, except for pigments, with lower values in summer at both sites. The differences found for each species in each sampling at the sites were small (e.g. U. fasciata, more total nitrogen at Itapuca Stone) or absent (e.g. C. antennina, no significant differences for hydrosoluble protein in the sites). Differences in the concentrations of dissolved nutrients in the sites did not generate contrasting chemical profiles in the seaweeds. There is no evidence of nitrogen- or phosphorus-limitation in any season. It is presumable that the concentrations of dissolved nutrients at the nutrient-poorer site are sufficient to generate high concentrations of the substances in the thalli of the species tested, similar to the concentrations measured in the eutrophic site. Experimental data are needed to elucidate the factors that promote the success of the species tested under contrasting nutrient availability and environmental disturbance.
Introduction
Growth of macrophytes and phytoplankton in tropical coastal waters is generally limited by nutrient availability [1]. Human use of coastal areas has greatly increased the inputs of nitrogen and phosphorus into many aquatic systems, with resultant impacts at the population and ecosystem level [2]. Increased abundance of nuisance macroalgae is among the direct consequences of nutrient loading [3].
Studies on the abundance of opportunistic seaweeds and measurements of dissolved nutrients are traditional approaches used to add information to evaluate the trophic state of a given ecosystem. However, other parameters may also be used to assess some ecological characteristics of coastal environments. For instance, monitoring the concentration of total N and P in macroalgal tissues may be a more useful indicator of enrichment or eutrophication potential [4], since total nutrient concentration in the algal tissue integrates the nutrient regime over time [5] [6].
In addition to measurements of tissue N and P, other chemical parameters can be useful in this field. Analyses of protein, carbohydrate and photosynthetic pigments can aggregate more information for the understanding of the behavior of algal species as responses to environmental conditions. Protein in the thalli is mainly influenced by nitrogen availability [7]. Both experimental and field studies have demonstrated that seaweeds tend to accumulate higher concentrations of protein and chlorophyll when dissolved nitrogen is available in high concentrations [8]. The values in the thalli tend to be relatively higher in specimens living in eutrophic environments or those that have been previously submitted to high concentrations of dissolved nutrients in a period before sampling [9].
High concentrations of carbohydrate in contrast with low concentrations of protein are frequently related to nitrogen deficiency in algae [10]. Under long-term short supply of nitrogen, an increase in total carbohydrate and a progressive decrease in the concentration of nitrogenous substances (protein, pigments, intracellular inorganic nitrogen, nucleic acids, etc.) occur over time; this is a universal behavior of seaweeds [7] [11] and microalgae [12]. Nitrogen-bearing substances may be partially consumed as alternative sources of nitrogen by algal species under nitrogen starvation [7] [12].
Data on pigment composition are also important to assess responses to environmental factors, such as temperature, salinity, dissolved nutrients and irradiation. The pigment content may increase in response to environmental factors such as high nutrient availability [13] or decrease as a consequence of excess solar radiation and exposure to UV radiation [14]. Damage caused by UV radiation may be especially relevant in tropical environments, where seaweeds are particularly exposed to high irradiation [14] [15].
Studies on tissue chemical composition of macroalgae are predominantly carried out in temperate environments [6] [13] [16]-[21]. By comparison, information on tissue chemical composition of algae from tropical and subtropical environments is relatively scarce [22]-[27], and more data are needed from the tropics.
In this study we report on the temporal variations of tissue N, P, N:P atomic ratio, protein, carbohydrate, and photosynthetic pigments (chlorophyll and carotenoids) of the green algae Chaetomorpha antennina and Ulva fasciata and the red alga Gymnogongrus griffithsiae. The three macroalgal species are common in two tropical sites of Rio de Janeiro State, Brazil, with different trophic states: Bananal Inlet (oligotrophic-mesotrophic) and Itapuca Stone (eutrophic-hypereutrophic). Comparisons were made between algal substances and the concentrations of dissolved nutrients in the systems throughout this 3-year assessment to evaluate the effects of excess nutrients on the chemical composition of the species studied.
Materials and Methods

Sampling Sites
Both sampling sites are located in Niterói municipality, State of Rio de Janeiro, Brazil (Figure 1). Bananal Inlet (23˚58'S, 43˚01'W) corresponds to the marine part of an environmental protected area (Serra da Tiririca State Park), with restricted access for recreational uses. The area is not inhabited, but human occupation can be seen close to the limits of the park (5 - 6 km away from the sampling site). The terrestrial part of the park is a mountain area covered by a tropical rain forest (Atlantic Forest). This site is considered protected from relevant human impacts. Macroalgal floristic studies are still scarce at this site, but preliminary results indicate the existence of 92 species in the intertidal zone (Moreira, unpublished data).
The second site is Itapuca Stone (23˚04'S, 43˚08'W), located in Guanabara Bay. The site is in the urban area of Niterói City, and it is located near the entrance of the Bay (Figure 1), which promotes a local dilution of the typically high levels of pollution of this coastal system and a faster water turnover [28]. Inner areas of Guanabara Bay show a low water exchange rate [29] due to geomorphological features and human occupation of coastal areas. The Bay comprises an area of 381 km² and an estimated 2 billion m³ of water. Its hydrographic basin (4000 km²) includes 35 rivers that contribute substantially to the freshwater input. Guanabara Bay is considered a eutrophic or hypereutrophic environment (depending on the specific part of the Bay), highly disturbed by anthropic impacts [30].
Considering the environmental characteristics described here, we hypothesized that the seaweeds of Itapuca Stone (Guanabara Bay) would present permanently high concentrations of tissue N and P; in addition, the seaweeds in Itapuca Stone would not show significant variations in their tissue substances throughout the year and no inter-annual changes in the chemical substances analyzed.On the other hand, temporal changes in algal tissue substances would be expected for the seaweeds of Bananal Inlet.
Sampling
Sampling began in December 2000 (end of the austral spring) and continued through February 2004 (austral summer). Samples were collected every 30 - 75 days, depending on the tidal regime and season. Samples were collected in the intertidal area only.
Whole thalli of adult plants were collected in early morning and washed in the field with seawater to remove epiphytes, sediment and detritus. At least 15 whole plants of each species were collected, independent of the size of each seaweed. All species were typically found at the same specific points in the site throughout the study (e.g. C. antennina was always sampled at the same rocks). The plants were placed in plastic bags, and kept on ice until return to the laboratory (less than one hour). In the laboratory, samples were gently brushed under running seawater, rinsed with distilled water, and dried at 60˚C for at least three days and until constant weight, to determine the percentage of moisture in the tissues. The dried material was ground into a powder and kept in desiccators containing silica-gel at room temperature until N and P tissue analyses.
Samples for pigment were analyzed immediately after the preparation of the algal material, on the same day as the field trip, using wet thalli. Samples for protein and carbohydrate were cleaned, weighed (wet weight) and stored at 4˚C until analyses, up to five days later. At the time of each collection of macroalgae, four 250-ml water samples (n = 4) for dissolved nutrient analysis were taken from 15 - 20 cm below the water surface, as well as measurements of local temperature at the same depth. The samples of water were filtered through cellulose membrane filters (Millipore® HAWP 0.45 µm pore) and kept at −20˚C until spectrophotometric determinations of dissolved nutrients. Each sample was measured at least three times to obtain accurate results, and the results shown in this study represent mean values for four independent samples collected in the field for each sampling.
Tissue Analyses
Total N and P were determined in algal tissue after peroxymonosulphuric acid digestion, using a Hach digestor (Digesdahl®, Hach Co.) [32]. Total N and P contents in the samples were determined spectrophotometrically after specific chemical reactions. For analytical details see Lourenço et al. [15]. For each species and sampling date, four independent measurements of tissue N and P (from different plants) were performed (n = 4).
The Lowry et al. method [33] was used to evaluate hydrosoluble protein in the samples, with bovine serum albumin as a protein standard. Spectrophotometric determinations were done at 750 nm. Results obtained for total nitrogen were used to calculate the total protein content, using the nitrogen-to-protein conversion factors proposed by Lourenço et al. [34]. Carbohydrate was extracted with 80% H₂SO₄, according to Myklestad & Haug [35], and determined spectrophotometrically at 485 nm by the phenol-sulphuric acid method [36], using glucose as a standard. Pigment extraction was performed in methanol, at 4˚C, for 20 h. Chlorophyll a and total carotenoid were determined spectrophotometrically as described by Lorenzen [37] and Strickland & Parsons [38], respectively.
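Two of the calculations described above are simple enough to sketch: protein estimated from total N via a nitrogen-to-protein conversion factor, and carbohydrate read off a glucose standard curve from the phenol-sulphuric acid assay. In the sketch below the conversion factor and all absorbance values are invented placeholders; Lourenço et al. [34] derive alga-specific factors (generally below the traditional 6.25), and the exact values used in the study are not reproduced here.

```python
import numpy as np

# Sketch of the two calculations. The nitrogen-to-protein factor is a
# placeholder, and all absorbance data are invented.

tissue_n_percent = 3.2      # total N, % of dry weight (example value)
n_to_protein = 4.8          # hypothetical conversion factor
print(f"total protein ~ {tissue_n_percent * n_to_protein:.1f}% d.w.")

# Phenol-sulphuric acid assay: fit a glucose standard curve (A485 vs.
# concentration) and read an unknown sample off the linear fit.
glucose_ug_ml = np.array([0, 20, 40, 60, 80, 100])     # standards
a485 = np.array([0.00, 0.11, 0.22, 0.34, 0.45, 0.55])  # hypothetical
slope, intercept = np.polyfit(glucose_ug_ml, a485, 1)
sample_a485 = 0.30
sample_conc = (sample_a485 - intercept) / slope
print(f"sample carbohydrate ~ {sample_conc:.0f} ug glucose equiv./ml")
```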
Dissolved Nutrients
For the quantification of nutrient ions in seawater, spectrophotometric determinations of nitrate and nitrite [39], ammonium/ammonia [40], urea and phosphate [41] were performed, following standard procedures.
Physical and Meteorological Parameters
Salinity was measured with a hand refractometer (Shibuya Optical, model S-10) using four samples (n = 4) collected in the field on each trip. Air and seawater temperatures were measured with a mercury-column thermometer (Incoterm Co., Brazil).
Meteorological data (average monthly air temperature and precipitation) were obtained from the Fluminense Federal University Meteorological Station, located in Niterói, beside Guanabara Bay.
Statistical Analysis
The results for each species separately and for total measurements of all species combined were analysed by single-factor analysis of variance (ANOVA) with significance level α = 0.05 [42], followed by Tukey's multiple comparison test. Suitable transformations of the data (e.g. log of the actual data) were made when necessary. Time was the only factor considered in the ANOVA.
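A minimal sketch of this analysis pipeline, assuming SciPy and statsmodels and using invented tissue-N values (n = 4 per date, matching the study design), might look as follows; a log transformation would be applied first where needed.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented tissue-N values for three sampling dates, n = 4 each,
# mirroring the study design (time as the only factor).
rng = np.random.default_rng(0)
dates = ["Dec-2000", "Mar-2001", "Jul-2001"]
values = {d: rng.normal(loc=m, scale=0.2, size=4)
          for d, m in zip(dates, [3.0, 3.4, 2.8])}

f_stat, p_value = stats.f_oneway(*values.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:  # alpha = 0.05, as in the study
    data = np.concatenate(list(values.values()))
    groups = np.repeat(dates, 4)
    print(pairwise_tukeyhsd(data, groups, alpha=0.05))
```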
Results
Table 1 shows small temperature variations throughout the study. Maximum temperatures tended to be achieved in December-February (austral summer). Maximum monthly average temperatures were obtained in December 2003 and February 2004. Similar trends were obtained for atmospheric precipitation, with higher values obtained in summer months, and a maximum record in December 2001.
Measurements of salinity were typically lower at Itapuca Stone, where they fluctuated between 29.5 and 34.9 psu throughout the study (Table 2). At Bananal Inlet, minor variations in salinity were recorded, with values fluctuating around 35 psu (except in January 2004, when 31.2 psu was recorded). Conversely, variations in water temperature were wider at Bananal Inlet, with a difference between maximum and minimum mean values of 9˚C (Table 2), ca. three times that recorded at Itapuca Stone (3˚C). Air temperatures during field trips were similar at both sampling sites; however, higher variations were recorded at Bananal Inlet.
In general, higher concentrations of all dissolved nutrients were found at Itapuca Stone, although in some observations the concentrations of nutrients were similar at both sites (Table 3). Typical concentrations of ammonium/ammonia were > 5 μM at Itapuca Stone and < 2 μM at Bananal Inlet, with significant differences between the sites (p < 0.0001). At Itapuca Stone, nitrite concentrations were typically ca. three times higher than those of Bananal Inlet (p < 0.0001), and a similar trend was recorded for nitrate. At Bananal Inlet maximum values for nitrate and nitrite were found in late summer/early autumn (Table 3). Urea tended to show higher concentrations in late spring and in summer, and lower values in winter, with higher concentrations at Itapuca Stone (p = 0.0231). Total nitrogen was influenced mainly by dissolved ammonium/ammonia and nitrate, the nitrogenous ions present in the highest concentrations at both sampling sites. Higher values for total nitrogen tended to be achieved in summer at Bananal Inlet (maximum of 12.4 μM, January 2004), and in winter at Itapuca Stone (maximum of 36.2 μM, July 2003). Typical concentrations of phosphate were ca. three times higher at Itapuca Stone than at Bananal Inlet (p < 0.0001); however, the N:P atomic ratio was similar for both sites (p = 0.38), with overall fluctuations around 15:1 in seawater.
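Because all concentrations are reported in µM, the atomic N:P ratio is a direct quotient of moles of N to moles of P, as in the minimal sketch below. Counting urea as two N atoms per molecule is an assumption made for this example only; the paper does not state how total dissolved N was tallied.

```python
# Atomic N:P ratio from dissolved nutrient concentrations. Because the
# measurements are already molar (uM), the ratio is a direct quotient
# of moles of N to moles of P. Treating urea as two N atoms per
# molecule is an assumption made for this example only.

ammonium, nitrite, nitrate, urea, phosphate = 5.2, 0.4, 2.1, 0.6, 0.55  # uM
total_n = ammonium + nitrite + nitrate + 2 * urea
print(f"total dissolved N = {total_n:.1f} uM, N:P = {total_n / phosphate:.1f}:1")
```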
Wide variations among the three species were found for total tissue nitrogen (Figure 2). G. griffithsiae and U. fasciata tended to show higher concentrations of tissue nitrogen, while C. antennina presented lower values. In many comparisons U. fasciata showed differences between the measurements obtained at the two sites, with higher values at Itapuca Stone. For the other species differences were small or not significant in monthly comparisons of the sites. C. antennina showed minor variations in tissue phosphorus throughout the study at both sites (Figure 3). A similar trend was obtained for G. griffithsiae, but in some comparisons higher values were recorded at Bananal Inlet. Among the three seaweeds, U. fasciata showed the widest variations in tissue phosphorus in this study, with a trend to show higher concentrations of tissue P at Itapuca Stone.
Variations in tissue N:P ratio were wider for G. griffithsiae, varying from 10:1 (Bananal Inlet, December 2000) to 34:1 (Itapuca Stone, January 2003). For most of the comparisons, values of tissue N:P ratio were not significantly different between the sites for the three species (Figure 4).
As total protein was calculated using nitrogen-to-protein conversion factors, the same general trends described for total nitrogen were found (Figure 5). Typical values for hydrosoluble protein were higher than 15% of d.w., with U. fasciata showing percentages higher than the other species in most observations (Figure 6). Changes in hydrosoluble protein followed the same general description presented for tissue N and total protein, with U. fasciata showing more protein at Itapuca Stone, and small or null differences between the sites for the other species.
Carbohydrate was the most abundant component for all species, with typical concentrations > 40% d.w. in almost all measurements (Figure 7). G. griffithsiae showed maximum concentrations of total carbohydrate, with more than 60% of d.w. in some observations. G. griffithsiae tended to show higher concentrations of total carbohydrate at Bananal Inlet throughout the study.
Chlorophyll a and total carotenoid showed wide variations in the measurements throughout the study (Figure 8 and Figure 9). C. antennina showed virtually the same concentrations of chlorophyll a and total carotenoid at both sites, but G. griffithsiae tended to present higher concentrations at Bananal Inlet and U. fasciata at Itapuca Stone. For all species lower values were measured after the summer, and higher values tended to be found in autumn and winter.
Discussion

Dissolved Nutrients
Results confirmed that concentrations of dissolved nutrients at Itapuca Stone were higher than at Bananal Inlet, but the differences between the sites were not intense. In some observations no statistical difference was detected between the sites, and in some monthly comparisons the absolute values measured at Bananal Inlet were only 30% - 50% lower than at Itapuca Stone. Some hypotheses can be considered to explain the small differences in dissolved nutrients between the sites. The Inlet is the marine part of the Serra da Tiririca State Park, with most of its area comprising a rain forest on mountains. The topographical characteristics of the area possibly favor the transport of nutrients from the forest soil to the Inlet, especially after rainfall. As a typical concentration of nitrogen in soil may be three orders of magnitude higher than that of seawater, the run-off of relatively small fractions of nutrients from the forest would promote a remarkable fertilization of the seawater in the Inlet. If this interpretation is correct, inputs of organic substances (e.g. humic acids) are probably also relevant in Bananal Inlet. In this scenario, the forest that surrounds the sampling site could act as an important factor for the input of nutrients into the site. The influence of run-off from an adjacent forest on algal communities has already been shown [43]. These authors demonstrated that the run-off from a forest on the east coast of South Korea promoted a remarkable increase in heavy metals, especially cadmium, detected in the algal flora, besides a relevant nutrient enrichment.
A second hypothesis refers to the effects of the water circulation in the region. Although the Inlet is an uninhabited area, it is close to urbanized districts of the Maricá, Niterói and Rio de Janeiro municipalities. The short distance to urban areas would favor the input of seawater with high concentrations of nutrients (and even pollutants) into Bananal Inlet. The entrance of Guanabara Bay is ca. 20 km from the Inlet, and the Bay itself is an important source of dissolved nutrients to adjacent areas [44]. These arguments are hypothetical, but there is some evidence to corroborate this interpretation. For instance, in some field trips it was possible to detect the presence of solid waste (plastic, paper, etc.), in moderate amounts, floating in the Inlet. The occurrence of these records had no apparent link with events such as heavy storms or windy conditions in the days preceding the field trip. Garbage in the area seems to result from peculiar patterns of circulation in the Inlet, since no local source of pollution exists at the site itself. If one admits the transport of solid garbage from adjacent areas to the Inlet, it is reasonable to assume that dissolved nutrients from surrounding eutrophic waters could reach the Inlet. Nevertheless, it is important to reinforce that in general the seawater in the Inlet is predominantly clean and transparent. Moreover, the Inlet has a remarkable wave action, a factor that contributes to a quick dilution of substances and transport of materials, establishing a presumably low residence time in the Inlet. A third hypothesis is the occurrence of upwelling events in coastal areas of Niterói municipality. These events frequently reach Bananal Inlet in summer, but rarely reach Itapuca Stone (located inside Guanabara Bay). For instance, in one of the field trips (January 9th, 2004) waters of 17˚C reached the Inlet (Table 2), a typical temperature of upwelling events in the region. This interpretation is reinforced by the detection of high concentrations of nitrate in that month in the Inlet (3.39 ± 0.83 µM), which were not statistically different from those detected at Itapuca Stone. The excess of nutrients in Guanabara Bay characterizes that environment as eutrophic [30], achieving hypereutrophy in some parts and generating relevant floristic changes. A small number of macroalgal species exists near the entrance of Guanabara Bay, where Itapuca Stone is located. According to Taouil & Yoneshigue [45], there are only 45 species in that area, while more than 70 species were recorded at the same site in the late 1960s. This number contrasts with the 92 species found by Moreira (unpublished data) in Bananal Inlet. The ongoing process of eutrophication has been promoting a loss of biodiversity in Guanabara Bay, changing the characteristics of local algal communities [45]. Opportunistic species, which tolerate high concentrations of pollutants (generally present in large volumes in environments disturbed by cultural eutrophication), tend to proliferate, occupying the space left by more sensitive species [46]. Although significant differences in the concentrations of dissolved nutrients were detected between the sites, the N:P ratio tended to be similar at the sampling sites throughout the study. An overall mean value of 14.9:1 was calculated for Itapuca Stone and 14.7:1 for Bananal Inlet. Compared to the classical studies [47] [48], which indicate a N:P ratio of 16:1 as an average value for the world's oceans, the current results are within fluctuations expected for field data. Despite the small number of samples analyzed, values around 15:1 would not indicate limitation of the algae by N or P. This trend contrasts with other Brazilian studies. For instance, Aidar et al. [49] obtained an average N:P ratio of 12:1 for the continental shelf of Ubatuba, São Paulo State, suggesting phytoplankton limitation by nitrogen. Valentin et al. [30] found wide variations in atomic ratios at different sampling sites in inner parts of Guanabara Bay, with a remarkable influence of the tidal regime. Low N:P ratios in inner parts of Guanabara Bay (<10:1) were interpreted as a result of excess phosphate from domestic effluents [30].
Measurements of N:P ratio are insufficient to determine the presence or absence of a given species in an environment under strong impact. It is widely known that each species has an optimal N:P ratio for its metabolic demands [7] [50], but competitive exclusion due to this factor alone is unlikely. The exclusion of a given species from an environment disturbed by anthropic action is more likely to be a consequence of the effects of pollutants, but normally it is very difficult to determine the limits for the action of a specific pollutant, since in general complex mixtures are discharged into the sea [51].
Total Tissue Nitrogen and Phosphorus in the Seaweeds
In this study, the red alga G. griffithsiae tended to show higher concentrations of tissue nitrogen and phosphorus than the green algae. This is in accordance with the studies of Diniz et al. [52] and Lourenço et al. [34], who characterized the chemical composition of seaweeds from Brazilian coastal environments. Rhodophytes tended to show more nitrogen-bearing pigments and higher concentrations of hydrosoluble protein. Higher concentrations of phosphorus in seaweeds would be related to the characteristics of fast-growing species, which produce more ATP [7] [52].
In physiological terms, seaweeds from tropical environments show a low demand for dissolved nutrients compared to seaweeds from temperate environments [53] [54]. In the tropics, plants are commonly saturated with nutrients even at low concentrations (e.g., 3.0 µM for N and 0.25 µM for P), which are sufficient to generate high growth rates and suitable concentrations of tissue nutrients. Compared to phytoplankton, seaweeds have a high demand for carbon, higher than their relative demand for nitrogen and phosphorus, a characteristic related to the life cycles, life span, growth and composition of the thalli [55]. If the availability of inorganic nutrients increases temporarily, there is a natural trend of fast uptake and assimilation of nutrients, resulting in higher concentrations of N and P in the thalli [56]. However, if an abundant supply is kept for a longer period, there is a trend of saturation of the thalli with nutrients, and no further increment in algal responses to nutrients is recorded [53]. Thus, an excess of nutrients in the water will not necessarily generate high concentrations of N and P in tissues, because even luxury consumption of nutrients (such as nitrogen) has a limit, without a progressively linear response to the stimulus after a given point. If high concentrations of nutrients persist, the algae (macro- and micro-algae) may either excrete inorganic nutrients or keep the synthesis of organic matter at stable levels, without increases in the concentrations of N- and P-bearing substances [7] [12] [57]. These are typical responses of algae in eutrophic environments.
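The saturating uptake response described above is commonly modeled in the algal physiology literature with the Michaelis-Menten form V = Vmax·S/(Ks + S). The study itself does not fit such a model; the sketch below, with made-up parameter values, only illustrates how uptake flattens once concentrations exceed a few times the half-saturation constant.

```python
# Minimal sketch of saturating nutrient uptake using the classic
# Michaelis-Menten form V = Vmax * S / (Ks + S). Parameter values are
# illustrative only, not fitted to data from this study.

def uptake_rate(S_uM, Vmax=2.0, Ks=1.5):
    """Uptake rate as a function of substrate concentration S (uM).
    Vmax: maximum uptake rate; Ks: half-saturation constant (uM)."""
    return Vmax * S_uM / (Ks + S_uM)

for S in (0.25, 3.0, 10.0, 50.0):
    print(f"S = {S:5.2f} uM -> V = {uptake_rate(S):.2f} (of Vmax = 2.0)")
# Beyond a few times Ks the rate flattens near Vmax, mirroring the
# saturation of the thalli described in the text.
```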
In Bananal Inlet, concentrations of dissolved nutrients are supposedly enough to sustain optimal growth of the species tested. The observations of the local species showed tissue-N concentrations never lower than 2% d.w., suggesting that growth conditions would be suitable throughout the year [58]. Thus, the seaweeds tend to show high tissue N and P concentrations, possibly close to the saturation level. In this context, measurements done with samples from Bananal Inlet tended to be predominantly high, similar to those of Itapuca Stone. Another factor contributing to diminish the differences in tissue N and P measured at the different sites is topography. Itapuca Stone is flat, with few natural shelters (e.g., crevices in rocks), and seaweeds are directly exposed to desiccation during low tides. This condition imposes strong stress on the species, which is expressed as damage to the thalli and loss of tissue nutrients [59] [60]. On many field trips in summer months, and also in short isolated periods of strong heat in any season, several individuals showed bleached tips, indicating loss of their constituents. This phenomenon was particularly common in G. griffithsiae, and especially easy to see due to the contrast between the dark red of healthy individuals and the pale color (white or yellowish) of damaged plants. In Bananal Inlet this phenomenon was less common, although on some occasions algae were found with bleaching, especially after periods of strong heat. There, U. fasciata was the species that most commonly showed damaged thalli, while G. griffithsiae has never been found with bleaching in Bananal Inlet. This trend is possibly a consequence of the specific occupation of space by the red alga in Bananal Inlet, always in places sheltered from sunshine, under shadows created by large rocks and crevices. These arguments are important to understand why concentrations of tissue N and P of G. griffithsiae were similar in both environments (Figure 2 and Figure 3) and how abiotic sources of stress may influence the nutrient composition of the algae. Despite the minor differences in tissue N and P for each species at both sites, most of the observations did not reveal cyclic patterns of variation in concentrations, except for lower values of chlorophyll a and total carotenoid in summer/early autumn, contrasting with typical results reported for temperate environments [5] [8]. Fluctuations found in tropical environments are associated with significant changes in concentrations of nutrients throughout the year (including possible inputs of nutrients into the system) or with the occurrence of a more intense environmental factor (e.g., temperature, upwelling), affecting algal responses during part of the annual cycle. In a related study, Lourenço et al.
[15] studied the seasonal variations of tissue N and P in eight macroalgal species of Araruama Lagoon, a hypersaline environment in Rio de Janeiro State. Remarkable seasonal variations in tissue nutrients were found for the seaweeds, with higher values in autumn and lower values in spring for most of the species. The authors also considered that the seaweeds are severely affected by high temperatures, at least during part of the spring and in the summer. The absence of patterns of seasonal variation in tissue N and P of the seaweeds in the present study suggests: (i) that the nutrient supply is virtually constant or undergoes only minor variations; and (ii) that other abiotic factors (e.g., temperature) play a secondary role in influencing nutrient accumulation by the seaweeds. The lack of seasonal variations in tissue N and P of 10 seaweeds (6 green and 4 red algae) was also confirmed [56] in a seven-year study (from 1997 to 2004) performed at Boa Viagem Beach, a site located in Guanabara Bay. Lourenço et al. [28] found N:P atomic ratios in the algal tissues typically higher than 20:1 and lower phosphorus concentrations in the water than at Itapuca Stone in the present study.
According to Björnsäter and Wheeler's classification [61] of macroalgal nutrient status based on the N:P ratio of tissues, an N:P ratio < 16 indicates N-limitation; an N:P ratio of 16-24 indicates N- and P-sufficiency (i.e., no limitation); and N:P > 24 indicates P-limitation. Applying this classification to our data, we could conclude that the macroalgae at the sampling sites are permanently N- and P-sufficient, with few exceptions. However, the N:P ratio must be evaluated with care, as it may obscure trends for the individual elements. For instance, the lowest values for phosphorus in the seaweeds were normally > 0.40% d.w. A tissue P content of 0.40% does not represent a low level of phosphorus, and it is actually higher than the values found for many other algae from tropical environments [53] [54]. In some cases, an observed high N:P ratio may be strongly affected by high concentrations of nitrogen and is not necessarily indicative of P limitation. Thus, the classification of Björnsäter & Wheeler [61] must be considered with caution, because the ranges may not be suitable for macroalgae from tropical environments such as Guanabara Bay and Bananal Inlet. Further investigations are needed to test the suitability of that classification for tropical environments, where seaweeds typically grow well with low concentrations of dissolved nutrients and normally have lower tissue N and P compared to species from temperate environments.
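Because the classification combines two % dry-weight measurements into one atomic ratio, a small helper makes the bookkeeping explicit. This is a sketch under our own assumptions: the thresholds follow the text above, the molar masses are standard values, and the example inputs are illustrative rather than data from this study.

```python
# Hedged sketch of the Björnsäter & Wheeler [61] tissue N:P classification.
# Tissue N and P are reported as % dry weight, so they are converted to an
# atomic ratio using the molar masses of N (14.007) and P (30.974).

N_MOLAR, P_MOLAR = 14.007, 30.974

def tissue_np_atomic(n_pct_dw, p_pct_dw):
    """Atomic N:P ratio from tissue contents expressed as % dry weight."""
    return (n_pct_dw / N_MOLAR) / (p_pct_dw / P_MOLAR)

def bw_status(np_ratio):
    if np_ratio < 16:
        return "N-limited"
    if np_ratio <= 24:
        return "N- and P-sufficient"
    return "P-limited"

# Example values (illustrative, not measurements from this study):
ratio = tissue_np_atomic(n_pct_dw=3.0, p_pct_dw=0.40)
print(f"N:P = {ratio:.1f} -> {bw_status(ratio)}")
```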
Protein, Carbohydrate and Photosynthetic Pigments
Following the same general trends described for total nitrogen, the red alga G. griffithsiae tended to show higher concentrations of hydrosoluble protein than the green algae. Our results also agree with those of Gressler et al. [62], who found that four red seaweeds from Brazil typically show hydrosoluble protein fluctuating from 4.6% to 18.3% of d.w. In our study, the hydrosoluble protein of G. griffithsiae fluctuated from 10% to 20% of d.w. in most of the observations.
Possibly most of the studies on the major chemical components of seaweeds have focused on the nutritional properties of the species, e.g., [63]- [69]. However, analyses of major chemical components are important tools for environmental issues. Previous studies confirm that tissue protein is positively correlated with dissolved nitrogen in the water [70] [71]. In the present study, the apparent permanent sufficiency of nutrients (especially nitrogen) would contribute to the high measurements of protein in the seaweeds at both sites. One can speculate that at Itapuca Stone the saturating levels of dissolved nutrients would keep protein at high concentrations. Although the concentration of nutrients is not as high at Bananal Inlet, it would be enough to generate a high accumulation of protein. These interpretations are supported by the mesocosm studies of [53] with the green alga Enteromorpha intestinalis (=Ulva intestinalis), in which the alga did not respond to enrichment with nutrients if tissue concentrations were saturated.
The accumulation of protein tends to promote a decrease in carbohydrate production. The assimilation of nitrogen (ammonia) into amino acids occurs via the GS/GOGAT (glutamine synthetase/glutamine:2-oxoglutarate aminotransferase) system, resulting in the production of glutamate. For the synthesis of glutamate, two molecules of 2-oxoglutarate are required, while for the synthesis of other amino acids carbon skeletons are supplied through the respiratory pathway. As a result, photosynthetic nitrogen assimilation stimulates the respiratory flux of carbon. In cells growing with high concentrations of nutrients, the levels of endogenous carbohydrate reserves drop, and the assimilation of nitrogen into amino acids depends on recent photosynthesis [10] [72].
These arguments support the occurrence of higher concentrations of carbohydrate in samples from Bananal Inlet, especially in G. griffithsiae and U. fasciata. Results for G. griffithsiae are similar to those of Perfeto [73], who found values predominantly >50% d.w. for the same species in a seasonal study in southern Brazil, under a subtropical climate. Pádua et al. [24] also reported similar results for total carbohydrate, with concentrations varying from 55.3% to 58.4% of d.w. for Ulva lactuca and U. fasciata from Paraná State, Brazil. Protein levels measured in those species by the same authors varied from 13.3% to 18.4% d.w. [24]; these values are slightly lower than the current results. Higher concentrations of carbohydrate than of protein were also found in 30 common seaweeds of tropical Australia [74], as well as in three common species of Abu Qir Bay, Egypt [75]. Although the amount of dissolved nutrients in Bananal Inlet is relatively high, it is lower than at Itapuca Stone. Considering the coupling between carbon and nitrogen metabolism, it is reasonable to expect a tendency toward more carbohydrate in Bananal Inlet, even with slight differences in some comparisons. As C. antennina exhibited the smallest differences in virtually all comparisons between the two sites, this alga probably has a naturally low demand for nutrients. Supposedly, the chemical composition of C. antennina was virtually unaffected by differences in the nutrient regimes of the sites. This trend has been documented for slow-growing tropical macroalgae, such as those of the genus Sargassum [54], and for green algae typical of warm waters, such as Halimeda [53]. Thus, independently of the specific environmental characteristics in which they occur, these species tend to exhibit only slight synthetic responses to available nutrients, keeping their chemical composition within narrow fluctuations.
The wide range of variation in the content of hydrosoluble protein in the green algae U. fasciata and C. antennina agrees with the variations in tissue nitrogen over time, with nitrogen accumulating as protein in some periods. The occurrence of very "flexible" protein contents in those seaweeds points to their capability to respond to rapid environmental changes. Fleurence [76] points out that protein contents in Ulva typically vary from 10% to 26% d.w.
The high concentrations of total carotenoid found in this study (normally higher than 50% of the chlorophyll content) point to the role of carotenoids as shields to protect the photosystems [77]. The presence of different quantities and kinds of pigments (chlorophyll, carotenoids, phycoerythrin) in G. griffithsiae results in a high capacity to absorb light across virtually the entire visible spectrum. The diverse pigments of the red alga may allow the species to occupy microhabitats on rocky shores not directly exposed to light, under the shadows of large rocks or in crevices. G. griffithsiae is found in these microhabitats at Bananal Inlet. In this context, the species could have competitive advantages by not exposing itself to high light intensities. Presumably, G. griffithsiae has an efficient apparatus for light absorption. The presence of accessory pigments could account for the lower concentrations of chlorophyll in G. griffithsiae compared to the other species.
Remarkable oscillations in pigment content were recorded in Ulva fasciata, and they seem to be related to the loss of pigments in certain periods of the year as a consequence of partial loss of thalli due to excessive desiccation. Intertidal seaweeds experience extreme heat conditions in tropical environments, which may affect their morphological features [78]. As a foliose alga that occupies the mid-littoral zone, U. fasciata is particularly exposed to high temperatures. This factor is apparently less important for C. antennina, which inhabits places under direct wave action, permanently in contact with moving seawater. Moreover, arguments related to the life cycle of the species (not assessed in this study) are also potentially relevant, especially for U. fasciata, which suffers a population decline in summer due to phenological processes in the region [51]. The apparent biomass fluctuations observed for U. fasciata (with lower biomass in summer) were similar in both environments, suggesting that abiotic factors such as light and temperature might be as important as dissolved nutrients in affecting the chemical composition of the species, as demonstrated for Gracilaria tikvahiae [59]. Gymnogongrus griffithsiae also presented significant changes in the concentrations of photosynthetic pigments (though smaller than those of U. fasciata) and loss of thalli in samples collected at Itapuca Stone after periods of strong heat. The exposure of the seaweeds to high irradiation in summer could account for the lower chlorophyll contents recorded for G. griffithsiae and U. fasciata due to loss of tissues. Aguilera et al. [14] recorded the same trend for Porphyra umbilicalis from the North Sea, with loss of chlorophyll after periods of intense heat.
Concluding Remarks
Changes in the concentrations of total protein, hydrosoluble protein, total carbohydrate, chlorophyll a, total carotenoid, tissue nitrogen and tissue phosphorus in Chaetomorpha antennina, Gymnogongrus griffithsiae and Ulva fasciata were predominantly small or absent at the two sampling sites. No clear cyclic variations over time were detected for the substances measured in the seaweeds, except for pigments, which showed declines at the end of the summer months. Dissolved nutrients are available at higher concentrations to seaweeds at Itapuca Stone, where they possibly reach permanently saturating levels for the seaweeds. Concentrations of dissolved nitrogen and phosphorus at Bananal Inlet seem to be always high enough to supply the metabolic demands of the seaweeds for the synthesis of organic substances and growth, with no evidence of nutrient limitation throughout the year.
Figure 2.
Figure 2. Temporal fluctuations in tissue nitrogen of Chaetomorpha antennina (A), Gymnogongrus griffithsiae (B), and Ulva fasciata (C) sampled at Itapuca Stone and Bananal Inlet from December 2000 to February 2004. Data are expressed as percentage of the dry weight (d.w.), and each point represents the mean of four replicates ± standard deviation (n = 4).
Figure 3.
Figure 3. Temporal fluctuations in tissue phosphorus of C. antennina (A), G. griffithsiae (B), and U. fasciata (C) sampled at Itapuca Stone and Bananal Inlet from December 2000 to February 2004. Data are expressed as percentage of the d.w., and each point represents the mean of four replicates ± SD (n = 4).
Figure 4.
Figure 4. Temporal fluctuations in the tissue N:P ratio of C. antennina (A), G. griffithsiae (B), and U. fasciata (C) sampled at Itapuca Stone and Bananal Inlet from December 2000 to February 2004. Each point represents the mean of four replicates ± SD (n = 4).
Figure 5.
Figure 5. Temporal fluctuations in total protein content of C. antennina (A), G. griffithsiae (B), and U. fasciata (C) sampled at Itapuca Stone and Bananal Inlet from December 2000 to February 2004. Data are expressed as percentage of the d.w., and each point represents the mean of four replicates ± SD (n = 4).
Figure 6.
Figure 6. Temporal fluctuations in hydrosoluble protein of C. antennina (A), G. griffithsiae (B), and U. fasciata (C) sampled at Itapuca Stone and Bananal Inlet from December 2000 to February 2004. Data are expressed as percentage of the d.w., and each point represents the mean of four replicates ± SD (n = 4).
Figure 7.
Figure 7. Temporal fluctuations in total carbohydrate of C. antennina (A), G. griffithsiae (B), and U. fasciata (C) sampled at Itapuca Stone and Bananal Inlet from December 2000 to February 2004. Data are expressed as percentage of the d.w., and each point represents the mean of four replicates ± SD (n = 4).
Figure 8.
Figure 8. Temporal fluctuations in chlorophyll a content of C. antennina (A), G. griffithsiae (B), and U. fasciata (C) sampled at Itapuca Stone and Bananal Inlet from December 2000 to February 2004. Data are expressed as percentage of the d.w., and each point represents the mean of four replicates ± SD (n = 4).
Figure 9.
Figure 9. Temporal fluctuations in total carotenoid content of C. antennina (A), G. griffithsiae (B), and U. fasciata (C) sampled at Itapuca Stone and Bananal Inlet from December 2000 to February 2004. Data are expressed as percentage of the d.w., and each point represents the mean of four replicates ± SD (n = 4).
Table 1 .
Atmospheric precipitation and air temperature collected daily at the Fluminense Federal University Meteorological Station throughout the period of this study.
Table 2 .
Average values of salinity and temperature measured at the sampling sites during part of the field trips. Results for salinity represent the mean values of four determinations ± the standard deviation (n = 4). Data for the first 18 months of this study are not presented.
Table 3 .
Some selected mean values for dissolved nutrients collected throughout the present study at the two sampling sites. The results are expressed as µM (except the N:P ratio) and represent the average of four replicates ± SD (n = 4). | 2017-11-11T12:10:45.744Z | 2014-02-19T00:00:00.000 | {
"year": 2014,
"sha1": "341a337dbbefe17ae349d17059fbc459bae2de8f",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=44576",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "341a337dbbefe17ae349d17059fbc459bae2de8f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
118598141 | pes2o/s2orc | v3-fos-license | QCD phase diagram from the lattice at strong coupling
The phase diagram of lattice QCD in the strong coupling limit can be measured in the full $\mu$-$T$ plane, also in the chiral limit. In particular, the phase diagram in the chiral limit features a tricritical point at some $(\mu_c,T_c)$. This point may be related to the critical end point expected in the QCD phase diagram. We discuss the gauge corrections to the phase diagram at strong coupling and compare our findings with various possible scenarios in continuum QCD. We comment on the possibility that the tricritical point at strong coupling is connected to the tricritical point in the continuum, massless QCD.
Motivation
The QCD phase diagram is conjectured to have a rich phase structure. At low temperatures, QCD has a vacuum and a nuclear matter phase; at high temperatures and/or densities, QCD matter develops a qualitatively different phase where quarks are liberated from confinement: the so-called quark gluon plasma (QGP). While there is strong evidence for a crossover transition from the hadronic phase to the QGP at zero baryon chemical potential µ B, there is no evidence for a true phase transition at higher densities. Lattice studies of QCD have aimed to extend the simulations to finite quark chemical potential µ = 1/3 µ B, but the available methods are limited to µ/T ≲ 1 due to the sign problem: Monte Carlo simulations sample a probability distribution and hence rely on the condition that the statistical weights are positive. In the conventional approach to lattice QCD based on the fermion determinant, the determinant becomes complex as soon as the chemical potential is non-zero. The sign problem (more precisely in this context: the complex phase problem) is severe, prohibiting direct simulations for µ > 0, which is also due to the fact that Monte Carlo is performed on the colored gauge fields.
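To see why a complex or fluctuating sign is so damaging, consider a deliberately simplified toy (not QCD): sample configurations with weight |w|, fold the sign into the observable, and watch the reweighted estimate degrade as the average sign approaches zero. Everything below is our own illustration, not part of the original paper.

```python
# Toy illustration (not QCD) of the sign problem: reweighting observables by
# the sign works in principle, but the relative error of the estimate grows
# like 1/<sign>, so it explodes as <sign> -> 0.

import math
import random

random.seed(1)

def estimate_average_sign(n_samples, p_minus):
    """Each configuration carries sign -1 with probability p_minus,
    so the true average sign is 1 - 2*p_minus."""
    total = sum(-1 if random.random() < p_minus else 1
                for _ in range(n_samples))
    mean = total / n_samples
    # Standard error of the mean for a +/-1 random variable:
    stderr = math.sqrt(max(1.0 - mean**2, 0.0) / n_samples)
    return mean, stderr

for p in (0.0, 0.4, 0.49):
    mean, err = estimate_average_sign(100_000, p)
    rel = err / abs(mean) if mean != 0 else float("inf")
    print(f"true <sign> = {1 - 2*p:+.2f}: estimate = {mean:+.4f} "
          f"+/- {err:.4f} (relative error {rel:.1%})")
# As <sign> -> 0 the relative error diverges, so exponentially many samples
# are needed for a fixed accuracy.
```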
However, there is a representation of lattice QCD which does not suffer severely from the sign problem: in this representation, the lattice degrees of freedom are color singlets. The complex phase problem is reduced to a mild sign problem induced by geometry-dependent signs of fermionic world lines. Such a "dual" representation of lattice QCD has been derived for staggered fermions in the strong coupling limit, that is, in the limit of infinite gauge coupling g → ∞ [7]. In this limit, only the fermionic action contributes to the path integral, whereas the action describing gluon propagation is neglected. QCD at strong coupling has been studied extensively for 30 years, both with mean field methods [1,2,3,4,5,6] and by Monte Carlo simulations [7,8,9,10,11,12]. Those studies have been limited to the strong coupling limit, which corresponds to rather coarse lattices. However, recently [13] we were able to include the leading order gauge corrections to the partition function. The effects of these gauge corrections on the phase diagram will be discussed below.
The chiral and nuclear transition in the strong coupling limit
The path integral of staggered fermions in the strong coupling limit can be rewritten exactly as the partition function of a monomer+dimer+flux system. The reformulation proceeds in two steps: first, the gauge links (gluons) are integrated out, which confines the quark fields ψ(x) into color singlets, the hadrons: these are the mesons M(x) = ψ̄(x)ψ(x) and the baryons B(x). In the second step, the quarks are also integrated out, which allows one to express the partition function in Eq. (2.1) via integer variables, subject to
Figure 1: The phase diagram in the strong coupling limit (left), as measured in a Monte Carlo simulation, compared to the standard expectation for the continuum QCD phase diagram (right). Both diagrams are for massless quarks.
the Grassmann constraint: at every site x not traversed by a baryon loop, the monomer number n x plus the dimers k b on the bonds b attached to x must add up to 3. This constraint restricts the number of admissible configurations {k b, n x, ℓ} in Eq. (2.1) such that the mesonic degrees of freedom always add up to 3, and baryons form self-avoiding loops ℓ not in contact with the mesons. The weight w(ℓ, µ) and sign σ(ℓ) = ±1 of an oriented baryonic loop depend on the loop geometry. The partition function Eq. (2.1) effectively describes only one quark flavor, which however corresponds to four flavors in the continuum (see Sec. 4). It is valid for any quark mass. We will, however, restrict ourselves here to the theoretically most interesting case of massless quarks, m q = 0. In fact, in this representation the chiral limit is very cheap to study via Monte Carlo, in contrast to conventional determinant-based lattice QCD, where the chiral limit is prohibitively expensive. For staggered fermions in the strong coupling limit, there is a remnant of the chiral symmetry. This symmetry is spontaneously broken at T = 0 and is restored at some critical temperature T c, with the chiral condensate ⟨ψ̄ψ⟩ being the order parameter of this transition. As shown in Fig. 1 (left), we find that this transition is of second order. This is analogous to the standard expectation in continuum QCD with N f = 2 massless quarks, where the transition is also believed to be of second order. Moreover, both for our numerical finding at strong coupling and for the expectation in the continuum, the transition turns into first order as the chemical potential is increased. Thus the first order line ends in a tricritical point, which is the massless analogue of the chiral critical endpoint sought in heavy-ion collisions.
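To make the site constraint above concrete, here is a minimal configuration check. The data layout (dictionaries keyed by sites and bonds) is an illustrative choice of ours, not the data structures of the actual worm-algorithm code.

```python
# Minimal sketch of the Grassmann (site) constraint in the monomer-dimer
# representation: at every site x not visited by a baryon loop, the monomer
# number n_x plus the dimers k_b on the bonds attached to x must equal N_c = 3.

NC = 3

def site_constraint_ok(site, monomers, dimers, baryon_sites, neighbors):
    """monomers: {site: n_x}; dimers: {frozenset({x, y}): k_b, 0 <= k_b <= 3};
    baryon_sites: set of sites on baryon loops; neighbors: {site: [sites]}."""
    if site in baryon_sites:
        return True  # baryonic sites carry no mesonic degrees of freedom
    attached = sum(dimers.get(frozenset({site, nb}), 0)
                   for nb in neighbors[site])
    return monomers.get(site, 0) + attached == NC

# Tiny two-site example: one monomer at each site plus a double dimer between.
neighbors = {0: [1], 1: [0]}
dimers = {frozenset({0, 1}): 2}
monomers = {0: 1, 1: 1}
print(all(site_constraint_ok(s, monomers, dimers, set(), neighbors)
          for s in (0, 1)))  # True: 1 monomer + 2 dimers = 3 at both sites
```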
In fact, at strong coupling, the zero temperature nuclear transition at µ B,c ≈ m B is intimately connected to the chiral transition, and they coincide as long as the transition is first order. The reason for this is the saturation on the lattice due to the Pauli principle: in the nuclear matter
Figure 2: On an excited plaquette, color singlets can also be composed of quark-quark-gluon or antiquark-gluon combinations. Whereas in the strong coupling limit baryons are pointlike, they become extended objects due to the gauge corrections.
phase at T = 0, the lattice is completely filled with baryons, leaving no space for a non-zero chiral condensate to form (in terms of the dual variables, there is no space for monomers on the lattice). This is certainly a lattice artifact which disappears in the continuum limit, where the nuclear phase behaves like a liquid rather than a crystal.
The ultimate question is whether the tricritical point at strong coupling is related to the hypothetical tricritical point in continuum, massless QCD. If we can establish such a connection numerically, this would be strong evidence for the existence of a chiral critical endpoint in the µ-T phase diagram of QCD. To answer this question, it is necessary to move away from the strong coupling limit and incorporate the gauge corrections, which will reduce the lattice spacing and eventually allow us to make contact with the continuum.
Gauge Corrections to the strong coupling phase diagram
Lattice QCD in the strong coupling limit is defined by the lattice coupling β = 6/g² → 0 as g → ∞. Going away from the strong coupling limit is realized by making use of strong coupling expansions in β. We have recently shown how to incorporate the leading order gauge corrections O(β) [13]. In a nutshell, the strategy is to compute link integrals at the boundary of "excited" plaquettes, which correspond to gluonic excitations. Introducing a variable q P ∈ {0, 1} to mark the "excited" plaquettes P, the O(β) partition function can be expressed in a similar fashion as Eq. (2.1), with modified weights ŵ given in Eq. (3.1) (for details see [13]). We can sample this partition function with the same algorithm (a variant of the worm algorithm) as for β = 0, adding a Metropolis accept/reject step to update the plaquette variables q P (a schematic sketch of this step follows the list below). These simulations have been carried out for N τ = 4 and various lattice volumes N σ = 4, 6, 8, 12, 16 to perform finite size scaling and to measure the phase boundary as a function of the chemical potential. In contrast to the strong coupling limit, where the color singlets are entirely composed of quarks and antiquarks, including the gauge corrections allows color singlets to be composed of quark-quark-gluon or antiquark-gluon color singlet states, as shown in Fig. 2. Two consequences follow:
1. Baryons are point-like in the strong coupling limit; the lattice spacing is too coarse to resolve the internal structure of the baryon. Including the gauge corrections, baryons become extended objects, spread over one lattice spacing.
2. The nuclear potential in the strong coupling limit is of entropic nature, where two static baryons interact merely by the modification of the pion bath. With the leading order gauge correction, pion exchange is possible as the Grassmann constraint is relaxed: on excited plaquettes, the degrees of freedom in Eq. (2.2) add up to 4 instead of 3.
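Here is the schematic Metropolis step mentioned above for the plaquette variables q P. The weight-ratio function is a placeholder: in the real algorithm it comes from the modified weights ŵ of the dual representation, which are not reproduced here, so the numbers below are purely illustrative.

```python
# Schematic Metropolis accept/reject step for a plaquette variable q_P in
# {0, 1}. The weight ratio is a placeholder for the dual-weight ratio
# w_hat(proposed) / w_hat(current) of the actual algorithm.

import random

def metropolis_plaquette_update(q, P, weight_ratio):
    """Propose flipping q[P] and accept with probability min(1, ratio)."""
    proposed = 1 - q[P]
    r = weight_ratio(P, proposed)  # placeholder for the dual-weight ratio
    if random.random() < min(1.0, r):
        q[P] = proposed
        return True
    return False

random.seed(0)
q = {P: 0 for P in range(10)}
# Dummy ratio mimicking small beta: exciting a plaquette is suppressed.
dummy_ratio = lambda P, new_q: 0.1 if new_q == 1 else 10.0
accepted = sum(metropolis_plaquette_update(q, P, dummy_ratio) for P in list(q))
print(f"accepted {accepted}/10 flips; excited plaquettes now: {sum(q.values())}")
```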
These features will have an impact on the phase boundary. In Fig. 3, the effect of the gauge corrections is shown. We find that the second order phase boundary is lowered, as expected, because the critical temperature in lattice units drops as the lattice spacing decreases with increasing β. However, we find the chiral tricritical point and the first order transition to be invariant under the O(β) corrections. We want to stress that there are actually two end points, which split due to the gauge corrections: the second order end point of the nuclear liquid-gas transition is traced by monitoring the nuclear density as an order parameter. We expect the nuclear and the chiral first order transitions to split, such that at T = 0 there are three different phases instead of two (as shown in Fig. 1, right). The nuclear phase is, in the continuum, distinct from the chirally restored phase. As a first evidence for this splitting, we find that the nuclear critical end point separates from the chiral tricritical point.
Relation between the strong coupling phase diagram and continuum QCD
In Fig. 4 we speculate how the separation of the first order transitions could be realized at larger values of β. Moreover, we can distinguish at least three scenarios (A, B, C) for how the chiral tricritical point depends on β. These scenarios start from the same phase diagram in the strong coupling limit, but have different continuum limits at β → ∞ (a → 0). In all three scenarios, a tricritical point exists at µ = 0, β > 0: it must exist because the finite-temperature µ = 0 transition, which is of second order for β = 0, is of first order for β = ∞, following the argument of [14], which applies to the continuum N f = 4 theory.
2. In scenario (B) the chiral transition weakens and hence turns second order, but strengthens again and turns first order at larger µ B.
3. In scenario (C) the chiral transition weakens and remains second order. In that case the tricritical line bends towards larger µ and eventually vanishes at some finite β. In order to discuss the relation between the phase diagram in the µ-T plane for N f = 4 massless quarks and the more physical scenario of N f = 2 + 1, with two massless up and down quarks and one physical strange quark, we show phase diagrams in the N f -µ plane. Interpolating between integer numbers of massless flavors N f and N f + 1 can be realized by decreasing the mass of an additional flavor from infinity to zero. In all scenarios it is assumed that for N f = 2 the chiral transition is second order, and that there is a tricritical strange quark mass m tric s separating it from the N f = 3 first order transition, as shown in the so-called Columbia plot, Fig. 5. Note that whether the N f = 2 transition is indeed second order, and thus whether m tric s exists, and also whether it is larger or smaller than the physical strange quark mass, is still under debate [15]. The standard scenario of QCD in the chiral limit, as shown in Fig. 1 (right), corresponds to scenario (B) in Fig. 4. However, the non-standard scenario (C) is supported by Monte Carlo simulations at imaginary chemical potential and analytic continuation [15,16]: these studies suggest (at least for small chemical potential) that the chiral transition weakens with chemical potential, causing the N f = 3 first order region in Fig. 5 to shrink with increasing µ B. This should also be the case for N f = 4.
A last comment on staggered fermions is in order: one of the lattice artifacts is due to the way this discretization solves the so-called fermion doubling problem. At strong coupling, there is effectively only one quark flavor, whereas in the continuum limit the same action describes 4 flavors due to the fermion doubling. Instead of the 15 Goldstone bosons that are present in the N f = 4 continuum theory, there is only one Goldstone boson at strong coupling, since the other 14 receive masses from lattice artifacts (called taste splitting). In the determinant-based approach, the problem is solved by "rooting": taking the root of the fermion determinant to reduce the number of flavors from 4 to 2 (and the number of Goldstone bosons from 15 to 3). This strategy is not available
Figure 5 (caption fragment): ... > m tric s, which implies that the chiral transition is second order for N f = 2. The arrow points towards the N f = 2 + 1 chiral light quark masses and physical strange quark mass, as denoted in the bottom row of Fig. 4, between N f = 2 and N f = 3.
in our dual-variable approach. Although the strong coupling limit has effectively only one flavor, the residual chiral symmetry is that of an N f = 4 continuum theory, with one true Goldstone boson, which persists even when the chiral anomaly U A (1) is present for β > 0. This is in contrast to a genuine N f = 1 theory in the continuum, which has no Goldstone bosons at all. The chiral anomaly breaks the chiral symmetry explicitly, driving the chiral transition into a crossover (corresponding to the lower right corner of the Columbia plot, Fig. 5). Hence the deconfinement transition at N f = 0 is most likely completely separate from the chiral transition for N f ≥ 2, as shown in all three scenarios of Fig. 4 (bottom).
Outlook for future investigations
There are various ways to discretize fermions on the lattice, with staggered fermions and Wilson fermions being the most widely used for thermodynamics studies. They describe the same physics only in the continuum limit. At finite lattice spacing, and in particular at strong coupling, the two discretizations are quite different. In particular, the spin and the kinetic term of the fermion action are treated very differently. A dimer+flux representation is also possible for Wilson fermions. So far, such a representation has only been determined for lattice QED [17,18], since the Grassmann integration is much more involved for N c > 1.
As a matter of principle, for both lattice discretizations the gauge action can be incorporated order by order in β. There are, however, technical difficulties that remain to be solved. A new strategy to study both lattice discretizations on a par is to expand both systematically in β and in the inverse quark mass by making use of a Hamiltonian formulation [19]. The partition function is then expressed through a Hamiltonian composed of operators, where the generalized quantum numbers Q i (spin, parity, flavor) are globally conserved, and nearest-neighbor interactions are characterized by the operators J+_Qi(x) J−_Qi(y), which raise the quantum number Q i at site x and lower it at a neighboring site y (see [19] for the case of N f = 1, 2 for staggered fermions). For both staggered fermions and Wilson fermions, the matrices J±_Qi contain vertex weights which are the crucial input for sampling the corresponding partition function. The plan for the future is to do so with a quantum Monte Carlo algorithm. Comparing both fermion discretizations order by order in the strong coupling expansion will help to discriminate lattice discretization errors from the genuine physics, in particular with respect to QCD at finite density. | 2015-03-27T16:45:56.000Z | 2015-03-27T00:00:00.000 | {
"year": 2015,
"sha1": "b3162f1a70be247ea79a63249d35d932160db000",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/217/073/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "eb91d2a714985e35cba1160b70a44f709f73a775",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
167165483 | pes2o/s2orc | v3-fos-license | E-Recruitment as a Strategy for Informal Networking: A Case Study of Mobilink
An e-recruitment strategy is necessary to attract potential employees and better equip the organization with a competent workforce. E-recruitment helps the organization build informal networks. These informal networks support the organization by clarifying employees' roles, which increases the organizational knowledge employees need to perform their tasks efficiently. It increases employee satisfaction, which in turn increases employees' long-run commitment to the organization. A case study of Mobilink, a reputed telecommunication company, was conducted to determine whether an e-recruitment strategy is beneficial for the organization. The survey method was employed for data gathering. The results strongly support the long-run benefits, for Mobilink, of adopting an e-recruitment strategy.
Introduction
Recruitment is the basis on which an organization creates and then continually recreates itself. It involves the organization's workforce, who are directly involved in the ecology of organizations. When individuals arrive, grow, develop and depart, this reflects the success or failure of the recruiter who recruited them. Recruitment comprises particular activities carried out by an organization with the main objective of meeting the organization's real objectives by finding a competent workforce (Breaugh & Starke, 2000).
E-recruitment is necessary because employers must have effective means of attracting appropriate employees. E-recruitment includes both corporate websites and commercial job boards. Under corporate websites, social networking includes Facebook, LinkedIn, and Twitter.
Acquiring a competent workforce is a challenge for an organization's recruiter, so the HR department of an organization is always receptive to innovations that can better meet the organization's workforce requirements. E-recruitment is one such innovation of the mid-1990s, used to equip an organization with a competent workforce able to adapt to a rapidly changing environment. Organizations adopt this strategy after channeling the innovation through the organization's social system to see how people react to it; if it helps to achieve better results, organizations will be more likely to opt for this strategy.
Web 1.0 is also a form of e-recruitment, which includes job boards, career websites and recruitment systems (Parry & Tyson, 2008), while Web 2.0 includes blogs and online informal and social networking. Web 1.0 is essential for recruitment transactions, but in a highly competitive environment Web 1.0 becomes insufficient, whereas Web 2.0 not only supports the good reputation of the recruiter and the organization but also helps to find competent individuals, yielding better results for the organization and enabling it to cope with a rapidly changing environment. It can also be used by the firm as a means of outsourcing the selection of applicants and of decentralizing recruitment.
E-recruitment is a strategy by which an organization, using corporate websites and commercial job boards, not only acquires the competent workforce needed to meet its objectives but also develops employees' confidence, which increases the organization's informal networking. This informal networking helps not only existing employees but also incoming employees understand what the organization expects of them in playing a competent role beneficial to the organization and to the future workforce. E-recruitment will thus not only support informal networking but also increase employees' knowledge of the organization. Organizational knowledge clarifies employees' roles, which helps the organization achieve the mutual development of its workforce, benefiting the gradual ecology of the organization in both the short and the long run. When the recruiter assigns tasks and duties to employees according to their competencies and qualifications, it increases employees' satisfaction. When employees are satisfied with the organization and with the tasks assigned to them, their commitment in terms of performance increases, which ultimately helps to retain them in the long run.
This study seeks to present e-recruitment as a strategy for Mobilink, a reputed telecommunication company that uses e-recruitment for the long-run attainment of informal networking. This informal networking ultimately helps applicants and employees in terms of role clarity, mutual development, job satisfaction, organizational knowledge, and firm commitment. At this level, the firm can be confident that its HR department has succeeded in retaining employees in the long run, which is the main purpose of every recruitment department (Smith, 1998). To test this claim, a survey was conducted, whose results are presented in the following sections, after first providing supporting evidence through the literature review. Smith (1988) emphasized that the basic purpose of recruitment is to retain employees rather than to dismiss them afterwards, since recruitment also involves cost for the organization; he studied students in institutions that want to retain employees after recruitment, and then sought ways in which these goals could be achieved.
Literature Review
In 1993, Krackhardt, D., and R. Hanson conducted their study. According to their network analysis, if the formal networks are the skeleton of an organization, the informal networks are the central system driving the collective action and processes of the whole business unit. These networks also help us understand the roles of employees in organizations. Lee, R. (1994), in his article, demonstrated that recruitment is the process of creating and continuously re-creating the organization. To make this process successful, the labor force must be skillful and adaptive enough to understand the core responsibilities assigned by the organization. Workforces also face pressures on their performance regarding security and flexibility. If the organization assures them that they are hired for the long term, they can perform their duties well; if the organization does not provide security to its employees, it must provide training for the security of their future in the market. When the organization appears to be on a track mismatched with its goals, all blame falls on the employees, yet the actual problem arose when those employees were empowered during the recruitment process. A recruiter will therefore always assess the core competencies of the candidate before hiring; otherwise the person will be rejected for limited competency. Organizational evolution is a natural phenomenon: people come to an organization and must be able to adapt to the environment, not only for their own survival but for the organization as well, because an organization evolves slowly as individuals arrive, grow, develop and depart. Recruitment is the process of dealing with challenges and diversity, and an organization that wants to be professionally successful must have a number of fair-minded people who think differently for the organization in a radical way. Four dilemmas, namely flexibility/security, control/empowerment, competence/learning, and comfort/challenge, were discussed in that paper. Recruitment is thus an essential process in developing the ecology of an organization in terms of its workforce.
Daniel M. Cable and Daniel B. Turban (2001), in their article, demonstrated that job seekers evaluate an organization by gathering knowledge about its recruiter. If applicants use this information, they can become useful assets for the organization, depending on the source and mode used to obtain the information; in this way they can develop the competencies needed for organizational goals. This knowledge also helps applicants decide whether to join the organization after recruitment: just as people's beliefs determine their behavior, knowledge about the recruiter and the organization determines an applicant's decision about whether to join.
Elizabeth Wolfe Morrison (New York University) conducted her study in December 2002. The structure of newcomers' friendship networks was related to their social integration and organizational commitment. By linking socialization outcomes to social network structure, the study sheds new light on the role of relationships in newcomer learning and incorporation.
Dahl, M., and Pedersen, C. conducted their study in 2002. The paper examines how theoretical contributions arguing that knowledge is diffused through informal contacts have recently been criticized by scholars who state that agents will not disclose firm-specific knowledge to outside agents because of loyalty to the firm. These scholars argue that employees will only exchange more general knowledge of low value, which will not have drawbacks for their firms. However, the paper shows that more detailed information also flows, even specific knowledge about new products, which is likely to be very firm-specific and which firms presumably want to protect from competitors. A large share of the engineers surveyed received knowledge from their informal contacts, which they judged to be of intermediate importance for their own work. This indicates that informal contacts are an important source of information for the engineers in this sample.
Marra, M. conducted an exploratory study in 2011. The study suggests that competences and capabilities are often accumulated in secondary nodes. Such peripheral nodes play an important role in fostering knowledge transfer within the organization and in creating important relations with external collaborators. For the same reason, a bottom-up knowledge governance emerged as a significant component in enhancing knowledge transfer between actors within the firm, and between the firm and its external partners, accelerating its ability to innovate and learn. The basis for a successful innovation process is interaction and continuous feedback among activities within the same firm and between the firm and its external sources of knowledge. These represent the opportunity to access knowledge resources unavailable internally. The development of firm-specific knowledge resources requires key employees; such key employees were identified in order to study their role in the firm.
The research study conducted by Christine Greenhow (2011) offered two themes based on a selective review of the research literature, as well as the author's surveys of young people's online social networking practices on Facebook and MySpace, two naturally occurring, youth-initiated sites, and in an online social networking application designed for environmental science education and community action. The author argues that understanding how social media, such as social network sites, currently support informal learning may improve one's ability to build effective social media-enabled environments for more formal learning objectives. Girard, A. and Fallery, B. (2011), in their studies, noted that e-recruitment at first works merely as a transaction for the organization but later results in informal social networks that expand the organization's competency in the form of competent employees with diversified cultures and experiences, able to cope with the challenges of rapidly changing environments. The researchers categorized the activities of e-recruitment and their sequential effects into two forms, named Web 1.0 and Web 2.0. Web 1.0 includes only job boards, corporate websites, and recruitment systems, while Web 2.0 results in online informal networking that is helpful and advantageous for the organization. Web 1.0 is essential for recruitment transactions, but in a highly competitive environment it becomes insufficient, while Web 2.0 is used to develop employer branding and reputation. Web 2.0 makes it possible to build new relationships with applicants, so this tool is also used for the decentralization of recruitment responsibilities or the development of outsourcing. This, in turn, increases the commitment of employees.
Conceptual Framework
This research explores the significance of e-recruitment as a strategy at Mobilink, a telecommunication corporation that widely adopts this strategy to increase its informal networking for the retention of employees in terms of role clarity, job satisfaction, organizational commitment, and organizational knowledge (Morrison, E., 2011).
Hypothesis 1: Informal networking is positively associated with an e-recruitment strategy.
E-recruitment as a strategy has the major objective of acquiring a competent workforce, but getting in touch with competent employees is a challenging task in this era of rapidly changing environments. Web 1.0 and Web 2.0 are the tools under an e-recruitment strategy for reaching competent individuals and building positive informal networking for the organization (Parry & Tyson; Leader, Hamilton, & Cowan, 2008). It helps the organization in terms of cost effectiveness, access to more job seekers, the ability to target the required employees, access to applicants with technical capabilities, quick response times, and ease of use (Galanaki, 2002; Zusman & Landia, 2002).
Hypothesis 2: Role clarity, organizational knowledge, job satisfaction, and organizational commitment are positively associated with informal networking.
All the dependent variables are correlated with each other; one component helps to achieve better results in the others (Dahl & Pedersen). Organizational knowledge is best achieved through informal networking, which in turn increases the satisfaction level of employees and thereby supports the retention of employees in the organization over the long term.
It is the benefits in terms of employee compensation, health benefits for working employees, stable organizational characteristics, a balanced work life, expected future career opportunities, a respected job that satisfies the employee's ego, the working place, post-retirement benefits and morale that attract a competent workforce to join the organization with greater commitment and loyalty. This attraction can easily be communicated to a wider audience through the tools of Web 1.0 and Web 2.0, which not only increases social networking but also increases the commitment of employees in terms of their retention (Smith, 1988).
Methodology
A case study method was adopted, and three categories of workforce were included in the survey sample: HR officers, employees and managers of Mobilink. The survey method was used to collect the data, which included semi-structured interviews with HR managers and questionnaires for managers and employees of the organization who had been hired through e-recruitment, examining how they helped the organization build informal networks in terms of upcoming employees and the existing workforce.
The survey included in-depth semi-structured interviews with HR officers of Mobilink to gain insight into the extent to which the organization uses e-recruitment as a strategy to enhance its competent workforce, by providing a bridge of attraction through social networking, which in a positive sense enhances employees' capabilities regarding role clarity and commitment through channeled information and, in the long run, supports the retention of its employees.
Descriptive techniques were used for data analysis, so that the results of the questionnaires and interviews could be analyzed and explained to show the impact of the e-recruitment strategy on the organization's informal networking and, ultimately, on long-run employee retention. Bar charts were used for graphical presentation.
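As an illustration of the descriptive workflow, the sketch below encodes five-point Likert responses and draws a frequency bar chart with pandas and matplotlib. The item wording and response values are hypothetical; the actual survey data are not reproduced here.

```python
# Illustrative sketch of the descriptive analysis: encoding five-point
# Likert responses and plotting a frequency bar chart.

import pandas as pd
import matplotlib.pyplot as plt

LIKERT = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

# Hypothetical responses for one questionnaire item:
responses = pd.Series(["Agree", "Strongly agree", "Agree", "Neutral",
                       "Strongly agree", "Agree", "Disagree"])

# Count responses and keep all five categories in scale order.
freq = responses.value_counts().reindex(LIKERT, fill_value=0)
freq.plot(kind="bar", rot=30,
          title="E-recruitment supports informal networking")
plt.ylabel("Number of respondents")
plt.tight_layout()
plt.savefig("likert_item.png")
```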
The questions included in the semi-structured interviews were as follows: Is the company using online recruitment or not? What is the reason behind the adoption of this innovation? What is their perception of its advantages and disadvantages?
The questionnaire items for the Likert-scale analysis were as follows.
Informal Networking
Support of e-recruitment for acquiring informal networking within the organization.
Informal networking is helpful for acquiring more effective expertise in the form of employees.
Positive response of employees towards e-recruitment because of increasing competition.
You increased the informal networks after joining the organization.
Organizational Knowledge
You got the job because of your potential to bring informal networks to the organization.
Informal networks provided by you influence employees' perception of commitment to the organization.
Role Clarity
Informal networks provided by you influence the role clarity of employees.
Informal networks help them to have complete knowledge of their duties and what they are expected to do for the organization.
Job Satisfaction
Employees coming through informal networking are satisfied at the time of hiring with all the working conditions of the organization.
Employees feel free to move around the office during working hours in a friendly environment.
This commitment was simply the result of strong networks.
Organizational Commitment
You provided some form of informal network to the organization after your hiring.
These informal networks helped you to acquire knowledge about competitors for your promotion or benefits.
These informal networks helped you to acquire knowledge about competitors for the advantage of the organization.
These informal networks increased the commitment of employees in the long run.
Results
Most of the employees' responses to the questionnaires were very supportive of the e-recruitment strategy. The questionnaires were filled in by managers and by the young employees working in the organization. The company uses corporate websites and commercial job boards to find capable and potential employees. Employees referred through the networks of current employees are preferred each time, because such employees are considered more valuable for the organization: they already know what the company expects of them and what tasks they will perform. In each case, the e-recruitment strategy therefore helps the HR department and the overall organization to hire people who already have knowledge of the organization. Informal networks have created a friendly environment for employees during working hours, which in turn increases employee satisfaction. During the interviews and while filling in the questionnaires, most employees were asked whether they were satisfied with Mobilink or not; the answer was yes, because the organization trusts them and they perform their assigned tasks in a friendly environment while providing potential networks to the organization. Thus, Mobilink is retaining employees in the long run by using an effective e-recruitment strategy that builds employee commitment. The following tables provide evidence supporting these results by encoding the frequencies of the Likert-scale responses. All the tables present six independent variables that show the strong impact of the e-recruitment strategy in the form of informal networking for the organization, role clarity, organizational knowledge, job satisfaction, mutual development and employee commitment, which benefits not only the employees hired through e-recruitment but also the organization in the long run in terms of their retention. The following bar charts present support for the e-recruitment strategy by showing how beneficial it is for the organization in terms of creating informal networks.
The employees of Mobilink strongly agreed that the e-recruitment strategy has increased and supported informal networks for the organization.
Employees who came into the organization through informal networks were more committed to the organization in terms of its policies and their implementation.
Employees coming through e-recruitment were more satisfied in terms of their role clarity, organizational knowledge and mutual development. Overall, the results of the study favor the adoption of an e-recruitment strategy for attracting more capable employees for the organization in this competitive world. | 2019-05-28T13:15:00.812Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "6e9daf7192d32c0c354549e5d9df81ba5f4dab75",
"oa_license": "CCBY",
"oa_url": "https://iiste.org/Journals/index.php/IAGS/article/download/15488/15896",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bf56e4462651bb506b0c162681b570b15830c3b7",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Business"
]
} |
256054321 | pes2o/s2orc | v3-fos-license | Heavy Ion-Responsive lncRNA EBLN3P Functions in the Radiosensitization of Non-Small Cell Lung Cancer Cells Mediated by TNPO1
Simple Summary Heavy-ion radiotherapy (HIRT) is associated with higher tumor cure rates compared with conventional radiotherapy (CRT). However, considering the high cost of HIRT, most tumor patients are still treated with CRT. The aim of this study was to elucidate the tumor inhibitory mechanism of HIRT by exploring gene expression signatures after heavy-ion exposure. We confirmed that the carbon ion-responsive long non-coding RNA endogenous bornavirus-like nucleoprotein 3, pseudogene (EBLN3P), is significantly decreased in carbon-ion irradiated non-small cell lung cancer (NSCLC) cells. The combination therapy of LNC EBLN3P-inhibition and X-ray irradiation can delay the progression of NSCLC both in vitro and in vivo, indicating the potential role of LNC EBLN3P as a target of radiosensitization in CRT. Abstract In recent decades, the rapid development of radiotherapy has dramatically increased the cure rate of malignant tumors. Heavy-ion radiotherapy, which is characterized by the “Bragg Peak” because of its excellent physical properties, induces extensive unrepairable DNA damage in tumor tissues, while normal tissues in the path of ion beams suffer less damage. However, there are few prognostic molecular biomarkers that can be used to assess the efficacy of heavy ion radiotherapy. In this study, we focus on non-small cell lung cancer (NSCLC) radiotherapy and use RNA sequencing and bioinformatic analysis to investigate the gene expression profiles of A549 cells exposed to X-ray or carbon ion irradiation to screen the key genes involved in the stronger tumor-killing effect induced by carbon ions. The potential ceRNA network was predicted and verified by polymerase chain amplification, western blotting analysis, colony formation assay, and apoptosis assay. The results of the experiments indicated that lncRNA EBLN3P plays a critical role in inhibiting carbon ion-induced cell proliferation and inducing apoptosis of NSCLC cells. These functions were achieved by the EBLN3P/miR-144-3p/TNPO1 (transportin-1) ceRNA network. In summary, the lncRNA EBLN3P functions as a ceRNA to mediate lung cancer inhibition induced by carbon ion irradiation by sponging miR-144-3p to regulate TNPO1 expression, indicating that EBLN3P may be a promising target for increasing the treatment efficacy of conventional radiotherapy for NSCLC.
Introduction
According to the most recent annual cancer statistics report, lung cancer is the most prevalent type of malignancy leading to cancer-related deaths worldwide, with an estimated annual death toll of 1.8 million [1]. Among lung cancer patients, approximately

Cell Culture and Irradiation

BEAS-2B (human lung bronchial epithelial cells), A549, H1299, HCC827, and Calu-1 (NSCLC cells) were purchased from the National Collection of Authenticated Cell Cultures (Shanghai, China). Dulbecco's modified Eagle medium (Thermo Fisher Scientific, Waltham, MA, USA) was used for culturing BEAS-2B cells, whereas Roswell Park Memorial Institute-1640 medium (Thermo Fisher Scientific) was used for culturing lung cancer cells. All media were supplemented with 1% penicillin-streptomycin and 10% fetal bovine serum (FBS). Cells were cultured in an incubator containing 5% CO2 at 37 °C. All cell lines were cultured, maintained, and used in the range of 10 to 20 passages. The RS 2000 X-ray Biological Irradiator (Rad Source Technologies, Suwanee, GA, USA) was used to produce X-rays (225 kVp, 1.12 Gy/min). The Heavy Ion Medical Accelerator in Chiba (HIMAC) at the National Institute of Radiological Science was used to produce carbon ion beams (290 MeV/u, 1.22 Gy/min). The linear energy transfer (LET) of C290 at the entrance of the plateau was 13.3 keV/µm, whereas the LET in the spread-out Bragg peak was 80 keV/µm.
qRT-PCR
TRIzol reagent (Thermo Fisher Scientific) was used to extract total RNA from cells and tissues. The PrimeScript RT kit (Takara Shuzo Co., Shiga, Japan) was used for the reverse transcription of mRNA. For the polymerase chain reaction, the AceQ qPCR SYBR Green Master Mix kit (low ROX premixed) (Vazyme Biotech Co., Ltd., Nanjing, China) was used. The ViiA 7 system (Thermo Fisher Scientific) was used for quantitative PCR and signal detection. The comparative C(t) method was employed for data analysis. Glyceraldehyde-3-phosphate dehydrogenase was used as the internal reference gene, and the sequences used for PCR are listed in Table 2. The primers for the reverse transcription and amplification of miR-144-3p were designed and synthesized by RiboBio (Guangzhou, China).
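To make the comparative C(t) analysis concrete, the sketch below computes a 2^(-ΔΔCt) fold change in Python; the Ct values are hypothetical, with GAPDH standing in for the internal reference gene named above.

# Comparative C(t) (2^-ddCt) fold change; all Ct values here are hypothetical.
def ddct_fold_change(ct_target_treated, ct_gapdh_treated,
                     ct_target_control, ct_gapdh_control):
    # Normalize the target gene to the internal reference within each sample,
    # then compare treated vs. control on the log2 scale.
    dct_treated = ct_target_treated - ct_gapdh_treated
    dct_control = ct_target_control - ct_gapdh_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Example: a target lncRNA after irradiation vs. an unirradiated control.
print(ddct_fold_change(24.1, 18.0, 22.6, 18.1))  # ~0.33, i.e., down-regulated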
Western Blotting
Radioimmunoprecipitation assay lysis buffer (Beyotime, Shanghai, China) was used to extract total protein from cells, and the protein concentration of each sample was determined using the BCA Protein Assay kit (Beyotime, Shanghai, China). Equivalent amounts of protein were separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis and subsequently transferred to a polyvinylidene fluoride membrane (Amersham, Arlington Heights, IL, USA). The chemiluminescence signals were developed using an Enhanced Chemiluminescence Detection kit (Millipore, Burlington, MA, USA) and acquired using a polychromatic fluorescence chemiluminescence imaging analysis system (Alpha Innotech, San Leandro, CA, USA). Antibodies against transportin-1 (catalog no. 38700; Cell Signaling Technology, Danvers, MA, USA) and glyceraldehyde-3-phosphate dehydrogenase (catalog no. 5174; Cell Signaling Technology) were used for western blotting.
Cell Viability
The Cell Counting Kit-8 (CCK-8) assay (Dojindo, Kumamoto, Japan) was used to assess cell viability. Cells were digested with trypsin and seeded in 96-well plates at a density of 2000 cells per well. The medium was replaced with fresh medium containing 10% CCK-8 reagent, and cells were cultured in an incubator at 37 °C for 2 h in the dark. Thereafter, the optical density (OD) value of each well was measured at 450 nm with a multifunctional microplate reader (BioTek Instruments, Winooski, VT, USA). The number of viable cells was taken to be proportional to the OD value.
Apoptosis
A549 cells were digested with trypsin, seeded in 6-well plates at a density of 1 × 10^5 per well, and collected at 24 h after transfection or irradiation. Cells were stained with propidium iodide and Annexin-V according to the instructions of the Annexin V-Alexa Fluor 647-Propidium Iodide Apoptosis Detection kit (FCMACS Biotech, Nanjing, China), and the apoptotic rate was determined by flow cytometry (BD Biosciences, Franklin Lakes, NJ, USA).
Colony Formation
Cells were seeded in T25 plastic flasks in triplicate before irradiation. Cells were exposed to a single dose of 0, 1, 2, 4, or 6 Gy of X-ray irradiation, and the X-ray beams entered the cells from the bottom of the flask. After irradiation, the cells were counted, plated into Φ60 mm dishes, and incubated for 14 days, after which they were fixed with 75% ethanol for 15 min at room temperature and stained with 0.1% crystal violet for 20 min. The survival fraction (SF) of cells at each irradiation dose was determined by the following formula:

SF = (no. colonies formed)/(no. cells seeded) × 100%. (1)
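A minimal numeric illustration of Equation (1) follows; the colony counts and seeding numbers are hypothetical, and no plating-efficiency correction beyond what the equation states is applied.

# Survival fraction per Equation (1); counts below are hypothetical.
def survival_fraction(colonies_formed, cells_seeded):
    return colonies_formed / cells_seeded * 100.0  # percent

for dose_gy, formed, seeded in [(0, 180, 200), (2, 95, 200), (4, 120, 1000)]:
    print(f"{dose_gy} Gy: SF = {survival_fraction(formed, seeded):.1f}%")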
Luciferase Assay
A549 cells were seeded in 6-well plates and co-transfected with the luciferase reporter plasmid LNC EBLN3P 3′UTR wild-type (WT) or 3′UTR mutant (MUT) and the miR-144-3p mimic or negative control (miR-NC). After 48 h of transfection, the cells were processed according to the instructions of the Dual-Luciferase Reporter Gene Assay kit (Beyotime, Shanghai, China), and Firefly and Renilla luciferase activities were measured with a multifunctional microplate reader (BioTek Instruments). All vectors were synthesized by RiboBio.
Animal Treatment
Male BALB/c nude mice at 6-8 weeks of age were purchased from SLACCAS Animal Laboratory (Shanghai, China) and housed under specific-pathogen-free conditions. A549 cells (5 × 10^6) were subcutaneously injected into the right flanks of the mice. After two weeks, when the tumor volume reached approximately 70 mm^3, the tumors were exposed to a single dose of 8-Gy X-ray irradiation using a cone-beam CT-guided precision irradiation system (X-RAD 225Cx; Precision X-Ray, North Branford, CT, USA). Thereafter, si-LNC EBLN3P or si-NC (RiboBio, Guangzhou, China) was injected into the solid tumors every 4 days at a dose of 1 nmol (dissolved in 25 µL sterile PBS) per tumor. The tumor size was measured with a caliper every 4 days for 16 days, and the volume was determined by the following formula: V = (a × b^2)/2, where a indicates the length and b indicates the width. After 16 days, the mice were sacrificed, and tumors were harvested and weighed. All animal experiments complied with the protocol approved by the Ethics Committee of Soochow University (see the Institutional Review Board Statement).
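The formula above is the standard caliper approximation for subcutaneous xenograft volume; the short sketch below applies it to hypothetical length and width measurements.

# Tumor volume V = (a * b^2) / 2; caliper measurements (mm) are hypothetical.
def tumor_volume_mm3(length_mm, width_mm):
    return length_mm * width_mm ** 2 / 2.0

print(tumor_volume_mm3(7.0, 4.5))  # 70.875 mm^3, near the ~70 mm^3 irradiation threshold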
Hematoxylin-Eosin Staining and Immunohistochemistry
Tissues were embedded in paraffin, and tissue blocks were sectioned after gradient dehydration. Sections were stained with hematoxylin-eosin according to the instructions of the Hematoxylin-Eosin Stain kit (Solarbio, Beijing, China). For immunohistochemistry, sections were incubated with TNPO1 (catalog no. ab10303; Abcam, Cambridge, UK) and BAX (catalog no. 5023; Cell Signaling Technology) antibodies overnight at 4 °C. The next day, the sections were washed with phosphate-buffered saline containing Tween-20 and incubated with a secondary antibody (catalog nos. PV6001, PV6002; ZSGB-Bio, Beijing, China) at 37 °C for 30 min. Sections were stained with 3,3′-diaminobenzidine, dehydrated, and sealed with neutral balsam. Images were acquired with an inverted microscope (Leica, Wetzlar, Germany), and the cells positively stained with TNPO1 and BAX were analyzed with IHC Profiler (an ImageJ plugin).
Statistical Analysis
GraphPad Prism 8.0 or SPSS 16.0 software packages were used for the statistical analysis of the data. There were at least three biological replicates in each experiment. Student's t-test was used for the comparison of means of two groups and ANOVA was used for the comparison of means of three or more groups. Data are presented as mean ± standard deviation (SD). Statistical differences were considered significant at p < 0.05.
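The comparisons described above map directly onto standard SciPy calls; the sketch below is illustrative only, with hypothetical replicate values (n = 3 per group).

# Two-group comparison by Student's t-test; three or more groups by one-way ANOVA.
import numpy as np
from scipy import stats

control   = np.array([1.00, 1.05, 0.97])   # hypothetical relative expression values
si_nc     = np.array([0.98, 1.02, 1.01])
si_ebln3p = np.array([0.55, 0.61, 0.50])

t_stat, p_two = stats.ttest_ind(control, si_ebln3p)          # two groups
f_stat, p_anova = stats.f_oneway(control, si_nc, si_ebln3p)  # three groups
print(p_two < 0.05, p_anova < 0.05)                          # significant at p < 0.05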
Differential Expression of lncRNAs after Carbon Ion and X-ray Irradiation
To identify ionizing radiation-responsive lncRNA signatures, we investigated the expression profile of lncRNAs in irradiated A549 cells. RNA was isolated from A549 cells 2 h after exposure to X-ray (2 Gy) or carbon ion (2 Gy) irradiation and subjected to RNA-seq. According to mRNA/lncRNA/miRNA interaction analysis, we constructed a ceRNA regulatory network reflecting the differential response to the two kinds of radiation, in which LNC EBLN3P was shown to play a significant role by interacting with miR-144-3p and TNPO1 (Figure 1A). The number of reported AGO-CLIP experiments also implied that TNPO1 was the most reliable target of miR-144-3p among the six differentially expressed candidate targets (Supplementary Figure S1). LNC EBLN3P expression was reduced by carbon ion irradiation and less markedly down-regulated by X-ray irradiation (Figure 1B). An analysis of data from gene expression profiling interactive analysis (GEPIA) revealed that LNC EBLN3P expression was slightly higher in lung cancer tissues; however, neither the difference in LNC EBLN3P expression between tumor and normal tissues nor the difference in survival between patients with high and low LNC EBLN3P expression was significant (Figure 1C,D), whereas lung cancer tissues showed increased TNPO1 expression compared with paired normal tissues (Figure 1F). Furthermore, in samples of lung cancer patients from GEPIA2 (http://gepia2.cancer-pku.cn/#survival/ (accessed on 26 November 2022)), we observed that patients with high TNPO1 expression had lower overall survival (OS) than those with low TNPO1 expression (Figure 1G). A co-expression analysis indicated a positive correlation between LNC EBLN3P and TNPO1 in both lung adenocarcinoma and lung squamous carcinoma (Figure 1E,H). These results show that LNC EBLN3P is down-regulated after irradiation, suggesting that radiation-responsive LNC EBLN3P may play a role in the radiosensitivity of lung cancer cells.
Irradiation Causes Expression Changes of LNC EBLN3P
To determine whether LNC EBLN3P is expressed differently in lung cancer cells and normal lung cells, we performed qRT-PCR experiments using normal BEAS-2B cells and four NSCLC cell lines (A549, H1299, HCC827, and Calu-1) for validation. As shown in Figure 2A, the basal expression level of LNC EBLN3P was higher in lung cancer cells than in BEAS-2B cells. We further examined TNPO1 expression, and the results showed that the TNPO1 protein level was also higher in A549 and H1299 cells than in BEAS-2B cells (Figure 2B, Figure S4).

Figure 2 (caption, in part): (E) Expression of LNC EBLN3P in A549 cells exposed to different doses of X-ray irradiation (samples were collected 24 h after irradiation). (F) TNPO1 protein levels determined by western blotting in A549 cells exposed to the same dose of X-rays and carbon ions. Data shown in A-E represent the mean ± SD (n = 3). * p < 0.05, ** p < 0.01, *** p < 0.001.
To explore whether the expression of LNC EBLN3P and TNPO1 responds to irradiation, we investigated the mRNA levels of LNC EBLN3P and TNPO1 in response to different types of radiation. Total RNA was isolated from A549 cells 6, 12, 18, or 24 h after exposure to X-ray or carbon ion irradiation, followed by qRT-PCR experiments. As shown in Figure 2C,D, the transcription levels of LNC EBLN3P and TNPO1 were both down-regulated at all time points, and the down-regulation in expression induced by carbon ion beams was more significant than that induced by X-rays. Next, we examined the expression of LNC EBLN3P in A549 cells exposed to different doses of X-ray irradiation. The results showed that LNC EBLN3P expression was down-regulated with increasing radiation dose (Figure 2E). Moreover, the reduction in the TNPO1 protein level by 2-Gy carbon ion irradiation was greater than that by 2-Gy X-ray irradiation (Figure 2F, Figure S4). These results provide further proof that radiation can decrease the expression of LNC EBLN3P and TNPO1 in lung cancer cells.
LNC EBLN3P Reduces the Viability of A549 Cells
To verify whether LNC EBLN3P could affect the viability of cells, we transfected A549 cells with pcDNA3.1-LNC EBLN3P or si-LNC EBLN3P to up-regulate or down-regulate the expression of LNC EBLN3P ( Figure 3A,C). Cell proliferation was enhanced at 48 h and significantly enhanced at 96 h by the up-regulation of LNC EBLN3P expression ( Figure 3B), whereas the reduction of cell viability induced by the down-regulation of LNC EBLN3P expression persisted from 24 h to 96 h ( Figure 3D). We further explored the viability and radiosensitivity of A549 cells after overexpression or knockdown of LNC EBLN3P. We found that the overexpression of LNC EBLN3P significantly decreased the apoptotic rate ( Figure 3E,F), whereas the knockdown of LNC EBLN3P significantly increased the apoptotic rate after 2-Gy X-ray irradiation ( Figure 3G,H). As for radiosensitivity, the overexpression of LNC EBLN3P increased colony formation, whereas the knockdown of LNC EBLN3P decreased colony formation after X-ray irradiation ( Figure 3I,J). Taken collectively, these results indicate that the down-regulation of LNC EBLN3P expression can decrease the viability and increase the radiosensitivity of A549 cells.
LNC EBLN3P Regulates the Expression of TNPO1
As an oncogene, TNPO1 has been reported to be associated with tumor growth, invasion, and metastasis [9-11]. To determine whether LNC EBLN3P can regulate TNPO1 expression in lung cancer cells, qRT-PCR and western blotting experiments were performed to examine TNPO1 mRNA and protein levels in LNC EBLN3P-overexpressing or LNC EBLN3P-silenced A549 cells. As shown in Figure 4A, the expression of TNPO1 in LNC EBLN3P-overexpressing A549 cells was more than 15-fold higher than in control cells, and the TNPO1 protein level also showed a significant increase (Figure 4C, Figure S5). By contrast, both TNPO1 mRNA and protein levels were lower in LNC EBLN3P-silenced A549 cells than in control cells (Figure 4B,D, Figure S5). Moreover, detection of cell proliferation, apoptosis, and survival showed that TNPO1 knockdown inhibited the proliferation of A549 cells and sensitized the cells to X-ray irradiation (Supplementary Figure S2). These results indicate that LNC EBLN3P positively regulates TNPO1 expression in NSCLC cells.
MiR-144-3p Mediates the Regulation of LNC EBLN3P on TNPO1
Based on bioinformatics analysis, we predicted the potential ceRNA network and observed that the regulation of LNC EBLN3P on TNPO1 is mediated by miR-144-3p (Figure 1A). To verify the interaction between miR-144-3p and LNC EBLN3P or TNPO1, we co-transfected A549 cells with luciferase reporter plasmids containing the LNC EBLN3P 3′UTR wild-type (WT) or 3′UTR mutant (MUT) and the miR-144-3p mimic or negative control (miR-NC) and measured the relative luciferase activity in cells. The results showed that the luciferase activity of the LNC EBLN3P WT and miR-144-3p co-transfected group significantly decreased, which confirmed the predicted binding site between miR-144-3p and LNC EBLN3P (Figure 5A). Similar results were obtained by the luciferase-reporter assay, which confirmed the predicted binding site between miR-144-3p and TNPO1 (Figure 5B). qRT-PCR detection suggested that miR-144-3p was upregulated by both X-ray and carbon-ion irradiation; however, the upregulation induced by carbon ion beams was more significant than that induced by X-rays (Figure 5C). Next, we co-transfected A549 cells with LNC EBLN3P siRNA (si-LNC EBLN3P) and a miR-144-3p inhibitor (miR-inhibitor) and then examined the effects on cell proliferation, apoptosis, and survival. Our results showed that LNC EBLN3P knockdown decreased TNPO1 mRNA expression and suppressed cell viability, while increasing cellular apoptosis and radiosensitivity. All of these effects could be rescued by knockdown of miR-144-3p using its inhibitor (Figure 5D-H). Moreover, the inhibition of LNC EBLN3P induced miR-144-3p upregulation, which was curtailed by miR-inhibitor transfection (Supplementary Figure S3). Taken collectively, these results indicate that the regulation of LNC EBLN3P on TNPO1 is mediated by miR-144-3p, and that the LNC EBLN3P/miR-144-3p/TNPO1 axis, which is inactivated after irradiation, plays a role in the death of NSCLC cells.
Inhibition of LNC EBLN3P Radiosensitizes NSCLC Cells In Vivo through the miR-144-3p/TNPO1 Axis
To explore the potential of LNC EBLN3P in the radiosensitization of photon radiotherapy-induced NSCLC inhibition, we irradiated the tumors with 8-Gy X-rays using a cone-beam CT-guided precision irradiation system. After two weeks, the mice were sacrificed, and the pathological changes and the expression of LNC EBLN3P and TNPO1 in lung tumor tissues were examined. Compared with X-ray irradiation alone, X-ray irradiation combined with LNC EBLN3P knockdown significantly suppressed tumor development (Figure 6A-C). After X-ray irradiation, the knockdown of LNC EBLN3P significantly decreased the protein expression of TNPO1 in lung cancer tissues compared with the control (Figure 6D,E). Furthermore, the results of BAX expression experiments and hematoxylin-eosin staining showed that knockdown of LNC EBLN3P induced necrosis and apoptosis in lung cancer tissues (Figure 6D,F). Next, total RNA was isolated from tissues for qRT-PCR, and the results indicated that the si-LNC EBLN3P treatment group exhibited much lower expression levels of LNC EBLN3P and TNPO1 than the si-NC group (Figure 6H,I). These findings indicate that LNC EBLN3P knockdown increased the radiosensitivity of NSCLC cells through the miR-144-3p/TNPO1 axis.
Discussion
Radiotherapy has a long history in the treatment of lung cancer, but conventional photon beams, such as γ-rays and X-rays, have many disadvantages, including low efficacy rates in radiation-resistant tumors, the tendency to induce varying degrees of radiation damage, and high tumor recurrence rates. The advent of carbon ion therapy has revolutionized radiotherapy. To date, several clinical studies have shown that carbon ion radiotherapy has excellent efficacy for different stages of lung cancer [12,13]. However, its mechanism of action in NSCLC is unclear, thereby presenting many obstacles to further improving its efficacy and reducing its side effects.
In this study, we found that LNC EBLN3P expression in A549 cells was down-regulated by carbon ion irradiation and less significantly down-regulated by X-ray irradiation, indicating that EBLN3P may play an important role in carbon ion-induced lung cancer cell death. Indeed, knockdown of EBLN3P inhibited cell proliferation and colony formation and promoted A549 cell apoptosis. Our data further revealed that LNC EBLN3P positively regulated the expression of TNPO1, an oncogene. The results of luciferase assays revealed an interaction between miR-144-3p and LNC EBLN3P or TNPO1, and further analysis demonstrated that TNPO1 can interact with miR-144-3p and that its expression is positively regulated by LNC EBLN3P. Down-regulation of LNC EBLN3P expression caused the upregulation of miR-144-3p expression, which in turn caused the level of TNPO1 to decrease, thereby inhibiting the viability and enhancing the radiosensitivity of lung cancer cells. Thus, our study demonstrates that carbon ion-responsive LNC EBLN3P promotes TNPO1 expression by sponging miR-144-3p, and that LNC EBLN3P/miR-144-3p/TNPO1 forms a ceRNA network.
Recent studies have demonstrated that lncRNAs are crucial regulators of tumor growth and invasion; LNC EBLN3P has been reported to function as an oncogene in osteosarcoma and lung adenocarcinoma, and the inhibition of EBLN3P may be exploited in the treatment of cancer [14,15]. LNC EBLN3P can also promote the recovery of impaired spiral ganglion neurons by regulating the miR-204-5p/TMPRSS3 axis [16]. Although there is no universally accepted set of specific lncRNA markers for diagnosing NSCLC, numerous studies have investigated aberrantly and differentially expressed lncRNAs [17,18], and multiple lncRNAs may be used in the future as diagnostic markers for the pathological staging of NSCLC. In terms of treatment, drug resistance in NSCLC patients is often the main reason for treatment failure. The lncRNAs NNT-AS1 and HOXA-AS3 have been reported to be associated with resistance to cisplatin, a standard adjuvant chemotherapy for advanced NSCLC [19,20]. In addition, the effect of lncRNAs on epidermal growth factor receptor-tyrosine kinase inhibitor (EGFR-TKI) resistance is also a widely investigated research area [21]. The roles of lncRNAs in the efficacy of radiotherapy in non-surgically treated patients with advanced NSCLC are also gradually attracting attention. Wang et al. reported that the expression of the lncRNA CCAT1, which is highly expressed in NSCLC cells, was down-regulated after irradiation, and that its knockdown inhibited the mitogen-activated protein kinase pathway, which increased the radiosensitivity of NSCLC cells [22], while Ma et al. demonstrated that the lncRNA LINC00460 can promote gefitinib resistance in NSCLC cells by sponging miR-769-5p to modulate EGFR [23]. The radiation resistance of lung cancer cells has long been a source of frustration for the field of radiotherapy. Unlike most studies that only investigate the role of X-rays in tumor cell sensitization, our study focused on the epigenetic difference in the radiation response between X-rays and heavy ions and examined differentially expressed lncRNAs. Through bioinformatics analyses and mechanistic studies, we identified LNC EBLN3P as a potential molecule functioning in the response of tumor cells to heavy ions through a ceRNA regulatory network.
A wealth of experimental evidence suggests that lncRNAs and miRNAs have binding sites through which they co-regulate the expression of target genes. Previous studies have attempted to elucidate the mechanistic pathways of lncRNAs in NSCLC from a ceRNA perspective. For example, Zhang et al. reported that MAGI1-IT1 could stimulate NSCLC cell proliferation and growth by up-regulating AKT1 as a ceRNA [24]. Wang et al. demonstrated that LINC01234 promotes the progression of NSCLC through the miR-433-3p/GRB2 axis [25]. However, until now, no ceRNA regulatory network had been identified as being involved in the radiobiological effects of heavy ions. We identify, for the first time, a ceRNA network functioning in the regulation of heavy ion-induced NSCLC cell killing and confirm the regulation of LNC EBLN3P on miR-144-3p, which can target and inhibit TNPO1 expression. The results revealed that LNC EBLN3P knockdown decreased cell proliferation, whereas the miR-144-3p antagonist increased cell proliferation. In other words, the miR-144-3p antagonist could rescue the knockdown effects of LNC EBLN3P, thereby allowing normal cell proliferation, which is indicative of their direct regulatory relationship. In addition, TNPO1 has been identified as an oncogene and reported to be associated with the development of esophageal cancer and cervical cancer [9,26], consistent with our finding that TNPO1 knockdown inhibited the proliferation of A549 cells. In our study, we reveal that the down-regulation of LNC EBLN3P induced by carbon ion irradiation can inhibit cell proliferation, induce apoptosis, and increase the radiosensitivity of NSCLC cells through the down-regulation of TNPO1, demonstrating that TNPO1 may be a key factor in the radiobiological effects of heavy ions.
Conclusions
In conclusion, we report that the LNC EBLN3P/miR-144-3p/TNPO1 axis contributes to carbon ion-induced apoptosis of NSCLC cells in vitro and in vivo, implying that this pathway may hold promise for improving the treatment efficiency of radiotherapy for NSCLC.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers15020511/s1, Figure S1: Number of the reported AGO-CLIP experiments predicting the interaction between miR-144-3p and its target genes. Figure S2: TNPO1 knockdown inhibited proliferation of A549 cells and sensitized the cells to X-ray irradiation. Figure S3: The relative expression of miR-144-3p in A549 cells co-transfected with si-LNC EBLN3P and miR-inhibitors. Figure S4: The whole blot image of the western blotting data of Figure 2. Figure S5: The whole blot image of the western blotting data of Figure 4.
Institutional Review Board Statement:
The animal study protocol was approved by the Ethics Committee of Soochow University (ECSU-2021000109).
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available in this article (and supplementary material). | 2023-01-22T05:13:33.464Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "86cf3778e48212e17bd07a1eeb802763544f982a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/15/2/511/pdf?version=1673625514",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "86cf3778e48212e17bd07a1eeb802763544f982a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15942316 | pes2o/s2orc | v3-fos-license | Significant and Systematic Expression Differentiation in Long-Lived Yeast Strains
Background Recent studies suggest that the regulation of longevity may be partially conserved in many eukaryotes ranging from yeast to mammals. The three yeast mutants sch9Δ, ras2Δ, and tor1Δ show chronological life spans extended by up to threefold. Our aim is to dissect the mechanisms that lead to the yeast life span extension. Methodology/Principal Findings We obtain gene expression profiles of sch9Δ, ras2Δ, tor1Δ as well as for a wild type at day 2.5 in SDC medium using Affymetrix Yeast2.0 arrays. To accurately estimate the expression differentiation between the wild type and the long-lived mutants, we use sub-array normalization followed by a variant of the median-polishing summarization. The results are validated by the probe sets of S. pombe on the same chips. To translate the differentiation into changes of biological activities, we make statistical inference by integrating the expression profiles with biological gene subsets defined by Gene Ontology, KEGG pathways, and cellular localization of proteins. Other than subset-versus-other comparisons, we also make local comparisons between two directly-related gene subsets such as cytosolic and mitochondrial ribosomes. Our consensus is obtained by cross-examination of these inferences. The significant and systematic differentiation in the three long-lived strains includes: lower transcriptional activities; down-regulation of TCA cycle and oxidative phosphorylation versus up-regulation of the KEGG pathway Glycolysis/Gluconeogenesis; the overall reduction of mitochondrial activities. We also report some different expression patterns such as reduction of the activities relating to mitosis in ras2Δ. Conclusions/Significance The modification of energy pathways and modification of compartment activities such as down-regulation of mitochondrial ribosome proteins versus up-regulation of cytosolic ribosome proteins are directly associated with the life span extension in yeast. The results provide a new and systematic S. cerevisiae version of the free radical theory from the perspective of functional genomics.
INTRODUCTION
Recent findings suggest that ageing, like many other biological processes, is subject to regulation by pathways that may have been partially conserved throughout evolution [1,2]. In fact, the downregulation of the glucose-sensing/insulin/IGF-1 pathway promotes life span extension in organisms ranging from yeast to mice. In S. cerevisiae two different paradigms are used to measure longevity: the replicative life span, which is defined as the total number of daughter cells generated by a mother cell; and the chronological life span, which is measured by monitoring the mean/maximum survival time of a population of non-dividing yeast. Here we focus on the chronological life span, which represents a simple but valuable system to study how post-mitotic cells age [3]. Mutations of a single gene within the principal yeast nutrient-sensing pathways can extend the chronological life span dramatically [4,5]. Among these pathways, the Sch9, Ras2/cAMP/PKA and TOR pathways are of most interest. Importantly, in higher eukaryotes, pathways that appear to share a common evolutionary origin with the Sch9 and TOR pathways are also implicated in life span regulation. In budding yeast, inactivation of sch9, a homolog of the mammalian serine/threonine protein kinase Akt, extends chronological life span nearly threefold [6]. Deletion of ras2 or down-regulation of cyr1 in the Ras2/cAMP/PKA pathway nearly doubles the chronological life span of yeast [6,7]. In a large-scale screen in yeast, the deletion of several genes encoding components of the nutrient-responsive TOR pathway was found to increase the chronological life span [8].
A number of theories have been proposed to explain the mechanism of ageing. Among them are the disposable soma theory of ageing first suggested by Weismann and later developed by Kirkwood et al. [9,10,11], the accumulated mutation theory first proposed by Medawar in 1952 [12], the antagonistic pleiotropy theory proposed by Williams in 1957 [13], the programmed and altruistic ageing theory [14]. The free radical theory of ageing first proposed by Harman in the 1950s [15] is particularly relevant to the research reported in this article. According to this theory, ageing is a consequence of free radical damage. Later Harman extended the idea to implicate mitochondrial production of ROS in the 1970s [16].
The partial conservation of the life-span regulatory pathways suggests that they may have evolved in ancestral unicellular organisms in order to overcome periods of starvation. Calorie restriction (CR), which resembles the starvation conditions used to assess chronological life span, causes longevity extension in all the ageing model systems. In this article we analyze the gene expression profiles of chronologically long-lived yeast strains. Under this ageing paradigm, haploid yeast are grown in synthetic complete medium (SDC) until nutrients are depleted. Once yeast stop dividing, they are kept in the depleted medium. The viability of the cultures is monitored over time by measuring the colony forming units (CFUs) [3]. Incubation in nutrient-depleted SDC mimics the conditions normally encountered by yeast in the natural environment where microorganisms survive for long periods of time under starvation.
The modification of the chronological life span is the end effect of genetic interventions, such as the knockout of sch9, and of environmental changes, such as calorie restriction. Extensive results have been obtained relating genotypes to the phenotype of life span. Our effort aims to understand the intermediate steps of the ageing mechanism.
The microarray technology allows us to measure the expression profiles of a living cell. We obtain the gene expression profiles of the long-lived mutants sch9D, ras2D, and tor1D together with a wild type at day 2.5 in SDC medium using Affymetrix Yeast2.0 arrays. How to accurately estimate the differentiation between the wild type and the long-lived mutants is a key problem in our functional genomic study of ageing. Usually the estimation consists of two steps: normalization and summarization. Normalization aims to remove any non-biological difference generated in the reaction and read-out process while keeping the real biological differentiation between a target and a reference sample. The invariant-set [17,18] and quantile [19,20] normalizations are two widely used methods in the literature. However, in our case, it is possible that the expression profiles of the long-lived mutants show substantial differentiation compared with that of the wild type. It is also possible that the differentiation is not symmetric. We therefore adopt the sub-array normalization, which is designed to preserve biological differentiation [21,23].
How to translate the differentiation into changes of biological activities is another key problem in the functional genomics of ageing. Relatively complete biological and genomic databases exist for S. cerevisiae, and they provide us with instruments for statistical inference. In this article, we infer significant modifications of biological activities by integrating the expression differentiation with three sources of biological knowledge: Gene Ontology (GO), KEGG pathways, and cellular localization of proteins. Our consensus inference is obtained by cross-examination of the inferences drawn from the three perspectives. Furthermore, to reduce the gap between statistical significance and biological significance, we compare the transcriptional activities of two ''directly-related gene subsets'' such as the first half and second half of an energetic pathway, or ribosome proteins in either mitochondria or cytosol. The idea of local inference follows the basic principles of statistical design proposed by Sir R. Fisher, while the idea of consensus inference is one statistical view of systems biology.
In this work, we study the yeast ageing mechanism from the perspective of functional genomics. The three mutants, sch9D, ras2D, tor1D share the same phenotype: longer chronological life span. The significant and systematic expression differentiation underlying the phenotype can shed light on the mechanism of ageing. We show that the goal is achievable and from the current data set we identify some common and characteristic changes of biological activities, which may directly lead to longevity.
Sample preparation and Affymetrix GeneChip arrays
We obtained the gene expression profiles of yeast strains including wild type, sch9D, ras2D, tor1D cells at day 2.5. Specifically, all strains used were obtained from frozen stocks. Each strain was inoculated in 1 mL SDC and grown overnight. Saturated overnight cultures were then diluted into 3 flasks each containing 50 mL of culture. All samples were incubated at 30 °C with shaking (2200 rpm) until day 2.5. Total RNA was isolated from day 2.5 post-diauxic yeast cultures (2.0 × 10^8 cells) according to the acid phenol protocol. Briefly, yeast were collected by centrifugation, washed once with cold water, and resuspended in 400 µl of 10 mM Tris pH 7.5, 10 mM EDTA, 0.5% SDS. After adding 400 µl of warm acid phenol, the cell suspension was incubated at 65 °C for 20 minutes with vortexing every 5 minutes, centrifuged, and the supernatant extracted twice with acid phenol and once with chloroform. Total RNA was recovered by precipitation with ethanol and cleaned up using the RNeasy kit (Qiagen). RNA (5 µg/sample) was sent to the UCLA DNA Array Core Facility. Total RNA from independent cultures of each strain was used as a template to synthesize complementary RNA (cRNA). The biotin-labeled cRNA was hybridized to the Affymetrix GeneChip® Yeast2.0 Array. In sum, three biological replicates were obtained for each of wild type, sch9D, ras2D, and tor1D.
In the SDC medium, a substantial proportion of yeast cells are still dividing before day 2. At older ages, such as day 3-5, most of the cells become hypo-metabolic, which is associated with a dramatic drop in transcription. We harvest mRNA at day 2.5 so that we can extract enough mRNA for the microarray experiment while avoiding the noise introduced by the transcriptional activities of dividing cells.
Normalization and summarization
After the imaging process, the expression of each sample is represented by a CEL file, which includes the fluorescence intensities of all probes. Denote the three arrays of wild type, sch9D, ras2D, and tor1D respectively by W1, W2, W3, S1, S2, S3, R1, R2, R3, T1, T2, T3. The conversion of probe intensities to expression values requires two statistical procedures: normalization and summarization. We applied the Sub-Sub normalization [21] to our data sets, aiming to give enough protection to possible differentiation between the mutants and the wild type. The normalization is carried out in a pairwise fashion. Namely, for each wild type sample, three replicates of a mutant are normalized with respect to this reference. Take sch9D for example: the normalized arrays with respect to W1, W2, W3 are respectively denoted by S1\W1, S2\W1, S3\W1, S1\W2, S2\W2, S3\W2, S1\W3, S2\W3, S3\W3.
Our summarization is a modified version of the median polishing method [22] in RMA (the Bioconductor affy package, http://www.bioconductor.org/). The median polishing summarization method is based on a two-factor model, which includes the sample effect and the probe-specific effect. Namely, the gene expression of each sample is estimated by adjusting for each individual probe effect. In our situation, we group the wild type and normalized mutant arrays by the reference, and then summarize each group. Take sch9D for example. We summarize the four arrays (the reference plus three normalized) W1, S1\W1, S2\W1, S3\W1 together. This leads to three estimates of the expression fold changes of the mutant versus the wild type. In total, we have nine estimates from the three wild type references, and we take their median as the final estimate. Due to the nature of normalization [23], only the portion of differentiation that is not confounded with the reference array is estimable. In the above scheme, arrays in a summarization group correspond to the same reference. Thus we expect that the difference between a (normalized) mutant and a wild type is, for the most part, real differentiation. The median is a robust estimate that is consistent with the median polishing method. Our treatment of the reference in normalization is somewhat different from existing methods, and correspondingly we use this group median polishing summarization to take into account the reference effect. We note that summarization is done at the probe set level. Roughly speaking, the modified median-polishing summarization aims to remove the reference-specific effect for each probe set as well as the probe-specific effect.
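To illustrate the idea, the sketch below runs a plain median polish on a simulated probes-by-samples matrix for one probe set (a reference plus three normalized mutant replicates) and reads the log2 fold changes off the fitted sample effects. It is a simplified stand-in for the modified procedure described above, not the actual affy implementation, and all intensities are simulated.

# Plain median polish for one probe set: fit probe and sample (chip) effects.
import numpy as np

def median_polish_sample_effects(log_pm, n_iter=10):
    # log_pm: probes x samples matrix of log2 intensities for one probe set.
    resid = log_pm.copy()
    col = np.zeros(resid.shape[1])
    for _ in range(n_iter):
        resid -= np.median(resid, axis=1, keepdims=True)  # sweep out probe effects
        c = np.median(resid, axis=0)
        col += c
        resid -= c                                        # sweep out sample effects
    return col  # per-array (chip) effects on the log2 scale

rng = np.random.default_rng(0)
probe_effects = rng.normal(10.0, 1.0, size=(11, 1))       # 11 probes in the set
chip_effects = np.array([0.0, -0.4, -0.5, -0.3])          # W1, S1\W1, S2\W1, S3\W1
data = probe_effects + chip_effects + rng.normal(0.0, 0.1, size=(11, 4))
eff = median_polish_sample_effects(data)
print(eff[1:] - eff[0])  # three sch9D-vs-W1 log2 fold changes; the full scheme
                         # pools nine such estimates and takes their median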
The Yeast2.0 Array contains probe sets for both S. cerevisiae and S. pombe. The observed fluorescence intensities of S. pombe probes are primarily due to cross hybridization, and we only use them in normalization. To some extent, they play the role of external controls. Most genes correspond to only one probe set in Yeast2.0 Array, and for genes with multiple probe sets we take the average of fold changes.
Wilcoxon scoring of gene subsets
Based on the expression fold changes of the three mutant strains with respect to the wild type, we make inference about the modifications of biological activities using gene subsets defined by Gene Ontology (GO), KEGG Pathways, and cellular organelle (GFP fusion localization). A common theme of these analyses is as follows. Suppose we have m gene subsets S_1, S_2, ..., S_m. From the log ratios of expression levels of a mutant with respect to the wild type, we want to identify those subsets whose expressions are significantly up-regulated or down-regulated. Denote the union of these gene subsets by G = S_1 ∪ S_2 ∪ ... ∪ S_m. Our strategy consists of two steps. In the first step, for each subset S_i, we compare its expressions against those in the complement of S_i in G, denoted by G - S_i. This is a typical two-sample problem in statistics. We use the Wilcoxon rank test to calculate a one-sided p-value for each comparison. In the second step, we rank these subsets according to their significances. These subsets could be a GO category, a metabolic pathway, or protein genes localized in an organelle. Other tests and methods such as GSEA [24] could be applied to our study. We report the results by the Wilcoxon scoring due to its well-established statistical properties such as robustness and reasonably good efficiency.
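For the subset-versus-complement comparison, a one-sided Wilcoxon rank-sum test can be run with SciPy's mannwhitneyu (an equivalent formulation of the Wilcoxon rank test); the fold changes and subset membership below are simulated.

# One-sided Wilcoxon rank-sum test of a gene subset S_i against its complement G - S_i.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
logratio = rng.normal(0.0, 0.3, 500)      # simulated log2 ratios, mutant vs. wild type
logratio[:40] -= 0.5                      # genes 0-39 form a down-regulated subset S_i
in_subset = np.zeros(500, dtype=bool)
in_subset[:40] = True

u_stat, p_down = stats.mannwhitneyu(logratio[in_subset], logratio[~in_subset],
                                    alternative="less")
print(p_down)                             # small p-value: S_i is shifted downward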
Multiple test correction
To correct for multiple testing, we adjust the p-values by the method introduced by Storey et al. [25,26]. In this method, the q-value is defined to evaluate the false discovery rate. The computation of q-values is implemented by the ''qvalue'' package provided in the R software (http://www.r-project.org/).
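The study uses the R ''qvalue'' package; as a rough illustration only, the sketch below computes Storey-style q-values with the null proportion pi0 estimated at a single tuning value (lambda = 0.5), which is far simpler than the package's smoothing approach.

# Minimal Storey-style q-values; a crude stand-in for the R "qvalue" package.
import numpy as np

def qvalues(p, lam=0.5):
    p = np.asarray(p, dtype=float)
    m = len(p)
    pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))   # estimated null proportion
    order = np.argsort(p)
    q = np.empty(m)
    cummin = 1.0
    for rank, idx in enumerate(order[::-1]):          # from largest p to smallest
        i = m - rank                                  # 1-based rank of this p-value
        cummin = min(cummin, pi0 * m * p[idx] / i)
        q[idx] = cummin
    return q

print(qvalues([0.001, 0.01, 0.03, 0.2, 0.8]))  # [0.002, 0.01, 0.02, 0.1, 0.32]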
Gene Ontology analysis
Gene ontology information is from ''ftp://genome-ftp.stanford.edu/pub/go/''. The Gene Ontology subsets are defined from three related yet different aspects: biological processes, molecular functions and cellular components. The data structure for gene ontology (GO) is a directed acyclic graph (DAG). Each node in the DAG is a set of genes with a specific annotation. The closer the nodes are to the terminal, the more detailed the annotations, and thereby the more informative they are. To avoid redundancy and overlapping between GO nodes and to facilitate our statistical analysis, we select from the DAG the nodes that are closest to the terminal and have at least 30 genes. This selection ends up with nodes of 44 cellular components, 53 molecular functions and 109 biological processes. The gene subsets defined by these nodes are referred to as terminal informative GO categories (TIGO). Then we apply the Wilcoxon scoring method and multiple test correction to these TIGO categories. By taking only the terminal informative TIGO categories rather than all the GO nodes, results are easier to interpret.
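One plausible reading of the TIGO selection rule, sketched below, keeps a GO node if it annotates at least 30 genes while none of its children does; the toy DAG and gene sets are made up.

# Select "terminal informative" GO nodes: >= 30 genes, no child that large.
MIN_GENES = 30

def tigo_nodes(children, genes):
    # children: node -> list of child nodes; genes: node -> set of annotated genes.
    selected = []
    for node, gene_set in genes.items():
        if len(gene_set) < MIN_GENES:
            continue
        if all(len(genes.get(c, ())) < MIN_GENES for c in children.get(node, [])):
            selected.append(node)
    return selected

children = {"GO:A": ["GO:B", "GO:C"], "GO:B": [], "GO:C": []}
genes = {"GO:A": set(range(80)), "GO:B": set(range(40)), "GO:C": set(range(40, 55))}
print(tigo_nodes(children, genes))  # ['GO:B']: GO:A has a large child, GO:C is too small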
KEGG Pathway analysis
The pathway information is from the KEGG database: http://www.genome.jp/kegg/. In total our study uses 103 S. cerevisiae pathways, most of which are well-established metabolic pathways. To expand our knowledge of ageing, we seek pathways that are significantly changed in the long-lived mutants. We regard each pathway as a subset of genes, and apply our statistical scoring and significance analysis to the 103 pathways to obtain a p-value and a q-value for each of them.
Cellular organelle analysis
The cellular localization data are from http://yeastgfp.ucsf.edu/. In this data set, 75% of proteins were classified into 22 distinct subcellular localization categories, including mitochondria, nucleus, nucleolus, vacuole, vacuole membrane, budding neck, etc. Much research indicates that the mitochondrion plays a central role in ageing. We also expected that the cellular organelle analysis would provide some information about the role of the different organelles in ageing. In this analysis, genes that function in the same cellular localization are regarded as one gene subset. The protein gene subsets from the yeast GFP fusion localization database are different from the cellular components in the GO categories.
Consensus and local Inference
After making inferences using each of the three biological instruments, Gene Ontology, KEGG Pathways, and cellular organelle, we can report a consensus by cross-examination. Another approach of ours is to compare two gene subsets within a natural ''biological block''. For one example, we consider the first half and second half of an energetic pathway. For another example, we consider ribosomes in the cytosol and ribosomes in mitochondria. Using this idea of local comparison, we sidestep the issue of multiple testing and thereby improve the statistical significance. The scheme of our inference is illustrated in Figure 1.
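Such a local comparison reduces to a direct two-sample Wilcoxon test between the two gene subsets themselves, rather than subset versus rest; the log-ratio vectors below are simulated stand-ins for, say, mitochondrial versus cytosolic ribosomal protein genes.

# Direct Wilcoxon comparison of two directly-related gene subsets (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
cyto_ribo = rng.normal(+0.2, 0.3, 60)   # simulated log2 ratios, cytosolic ribosome genes
mito_ribo = rng.normal(-0.4, 0.3, 35)   # simulated log2 ratios, mitochondrial ribosome genes
u_stat, p = stats.mannwhitneyu(mito_ribo, cyto_ribo, alternative="less")
print(p)  # small p supports mitochondrial down- versus cytosolic up-regulation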
Preprocessing of microarray data
We obtained the gene expression profiles of yeast strains including wild type, sch9D, ras2D, and tor1D cells at day 2.5 using the Affymetrix GeneChip® Yeast2.0 Array. In total, three biological replicates were generated for each strain. RNA of these replicates was obtained from independent populations which were grown in separate flasks under similar conditions. The expression fold changes of 5841 yeast genes were obtained for sch9D, ras2D, and tor1D with respect to the wild type by the Sub-Sub normalization [21] followed by the modified median-polishing summarization (Materials and methods) that aims to remove the reference-specific effect. We optimized the parameters in the Sub-Sub normalization by examining the results among replicates, and by checking probe sets of S. pombe on the Yeast2.0 Array. In this normalization, we divide each array into sub-arrays and normalize probe intensities within each sub-array by least trimmed squares to protect differentiation. The subarray size is selected to be 50 by 50; subarrays overlap by half the subarray size; and the trimming fraction of least trimmed squares is 0.45.
If the experimental conditions and mRNA amounts for the reference and target samples are similar, we argue in [23] that a simple linear function is a good approximation in normalization even though the relationship between dye concentration and fluorescent intensity is nonlinear. Namely, to normalize a target array with respect to a reference, we shift and scale the probe intensities by a + b*(target intensity) in such a way that the differences with the reference intensities are minimized. Specifically, the parameters a and b are estimated by least trimmed squares. In Figure 2, we show the estimates of the relative scale b in each subarray for sch9D versus wild type. Since we normalize each of the three target (mutant) arrays versus each of the three reference (wild type) arrays, a total of nine spatial patterns are shown in Figure 2, in which each row corresponds to a reference and each column corresponds to a target. The corresponding histograms of the scale parameters are shown in Figure S1. The adjustment of the spatial effect can help reduce the variation between replicates; see examples in [21,23]. In this case, for each gene the standard deviation of the nine (three targets versus three references, see Materials and methods) expression log-ratios of a mutant versus wild type is calculated. The medians of these standard deviations for sch9D, ras2D, tor1D are respectively 0.146, 0.125, 0.134.
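The least trimmed squares fit within each subarray can be approximated with a simple concentration iteration, as sketched below; the subarray data are simulated, and this is only an illustration of the idea, not the implementation used in [21].

# Approximate LTS fit of reference ~ a + b * target within one subarray.
import numpy as np

def lts_fit(target, reference, trim=0.45, n_iter=20):
    keep = np.arange(len(target))                 # start from all probes
    n_keep = int(np.ceil((1.0 - trim) * len(target)))
    a = b = 0.0
    for _ in range(n_iter):
        b, a = np.polyfit(target[keep], reference[keep], 1)
        resid2 = (reference - (a + b * target)) ** 2
        keep = np.argsort(resid2)[:n_keep]        # concentrate on best-fitting probes
    return a, b

rng = np.random.default_rng(2)
t = rng.normal(8.0, 1.0, 2500)                    # a 50x50 subarray of log intensities
r = 0.2 + 1.05 * t + rng.normal(0.0, 0.05, 2500)
r[:500] += rng.normal(1.5, 0.3, 500)              # differentiated probes to be trimmed
a, b = lts_fit(t, r)
normalized_target = a + b * t                     # comparable to the reference scale
print(round(a, 2), round(b, 2))                   # close to the true 0.2 and 1.05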
In Figure 3 we show the M-A plots of expression levels of the three mutants versus the wild type. In the M-A plots, the x and y coordinates of a dot respectively show the average and the difference of a gene's expression levels between a mutant and the wild type. The differentiation for probe sets of S. pombe is around zero as expected, especially at the left end. In fact, the medians of the average log ratios for S. pombe genes are 0.018, 0.047, 0.031 respectively for sch9D, ras2D, tor1D. Some probe sets correspond to homologs of S. pombe and S. cerevisiae, and are expected to be expressed in both. We note that the results are obtained in a blind fashion, for we do not separate S. pombe and S. cerevisiae probes in normalization. Therefore, the probe sets of S. pombe are used in both training and validation. Some differences among the three M-A plots are observed. This is not a surprise because the three genes sch9, ras2, tor1 do play some different roles according to what we know.
We check the M-A plot of raw data without any normalization. That is, for three replicates of a mutant and the wild type, we summarize their expression values by the median polishing method. Then we calculate the average and difference expression of two strains and show the result by M-A plots, see Figure S2. The differentiation for probe sets of S. pombe is mainly distributed along the horizontal direction. This suggests that we can shift the differentiation of S. cerevisiae genes by the median of the differentiation of S. pombe genes. We compare the expressions resulted from this simple median-shift normalization with expression from the above method, the correlation coefficients are respectively 0.990, 0.970, 0.988 for sch9D, ras2D, tor1D.
The similarity indicates that the quality of the microarrays is relatively good. Finally, the expression results were confirmed by quantitative RT-PCR for eleven genes and by northern blots for two genes.
We calculated the one-sided Wilcoxon rank test score for each TIGO subset versus the remaining genes, and ranked these subsets according to their corresponding q-values, separately for the up-regulated and down-regulated cases. The same computation was carried out for KEGG pathways. These results are rather lengthy, and we report the most significant parts later in an integrative way. The complete spreadsheets can be found in the supplementary materials, Text S1, Text S3, and Text S4. The result of the cellular organelle analysis (GFP fusion localization) is summarized in Table 1, and the details can be found in Text S2.
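The set-versus-rest test can be sketched as follows. We use the rank-sum (Mann-Whitney) form of the one-sided Wilcoxon test and, for brevity, the Benjamini-Hochberg adjustment as a simpler stand-in for the Storey q-value procedure [25,26] actually used in the paper; all names in this sketch are ours.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def subset_scores(expr, gene_sets, alternative="greater"):
    """One-sided Wilcoxon rank-sum test of each gene set vs. the rest.

    expr: dict mapping gene -> expression log-ratio.
    gene_sets: dict mapping subset name -> set of member genes.
    alternative='greater' tests up-regulation of the subset.
    """
    pvals = {}
    for name, genes in gene_sets.items():
        inside = np.array([v for g, v in expr.items() if g in genes])
        outside = np.array([v for g, v in expr.items() if g not in genes])
        if inside.size and outside.size:
            pvals[name] = mannwhitneyu(
                inside, outside, alternative=alternative
            ).pvalue
    return pvals

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment (a simpler stand-in for the
    Storey q-value method used in the paper)."""
    names = sorted(pvals, key=pvals.get)          # ascending p-values
    n, prev, adj = len(names), 1.0, {}
    for i, name in reversed(list(enumerate(names, start=1))):
        prev = min(prev, pvals[name] * n / i)     # enforce monotonicity
        adj[name] = prev
    return adj
```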
Lower transcriptional activities
Our analysis shows that the overall transcription activities of the three long-lived mutants are lower than those of the wild type. In fact, in the comparison of KEGG pathway activities, ''basal transcription factors'' and ''RNA polymerase'' are among the most negatively regulated pathways, which also include ''DNA polymerase'', ''Cell cycle'', and ''Proteasome''; see Table 2. The basal transcription factors form a complex that acts as a general transcription machine. One explanation is that the sch9Δ, ras2Δ, and tor1Δ mutants in the nutrient-depleted environment can live a more economical life, so that a lower basal transcription is sufficient to maintain survival. Consistent with the lower transcriptional level, the proteasome, the complex in charge of protein degradation, is negatively regulated in the long-lived mutants. Moreover, in Table 3 we list the expression activities of the relevant pre-transcription and post-transcription TIGO categories, which are all down-regulated with sufficient statistical evidence. From the perspective of protein localization, we see in Table 1 that the expression activities of the nucleus and nucleolus compartments are significantly lower in the mutants sch9Δ, ras2Δ, and tor1Δ compared to the wild type. We note that the reduction of transcriptional activities is most significant in ras2Δ, which is confirmed by the overall expression profiles shown in Figure 3. All this evidence leads to the consensus that the three long-lived mutants somehow manage to lower their transcription activities in the SDC medium.
Switch of energy pathway
The universal ''currency'' of chemical energy, ATP, in animal cells and most other non-photosynthetic cells is generated mainly by the aerobic oxidation process. In aerobic oxidation, glucose is metabolized to CO₂ and H₂O, and the released energy is converted to the chemical energy of phosphoanhydride bonds in ATP. The initial steps of the oxidation of glucose, referred to as glycolysis, convert glucose into pyruvate. The reactions of glycolysis occur in the cytosol in both eukaryotes and prokaryotes and do not require O₂. In contrast, the final steps of oxidation require O₂ and occur in mitochondria in eukaryotes. The synthesis of ATP in mitochondria is driven by the flow of electrons from the reduced coenzymes NADH and FADH₂ to O₂. This oxidative phosphorylation process depends on the generation of a proton-motive force across the inner membrane, with electron transport and proton pumping. Other than glycolysis, reactions in the mitochondria such as the citric acid cycle (TCA) also generate the reduced coenzymes NADH and FADH₂.
In our KEGG pathway analysis, the most up-regulated metabolic pathway common to all three mutants with respect to the wild type is Glycolysis/Gluconeogenesis. Since much is known about the aerobic oxidation process, we examined the details of the expression differentiation between the long-lived strains and the wild type. Specifically, we compare the expression activities of the initial steps and the final steps using the Wilcoxon rank test. At the bottom of Figure 4 we show the statistical results for the comparison of the KEGG pathways: Glycolysis/Gluconeogenesis versus TCA and oxidative phosphorylation. The analysis implies that in the long-lived mutants, the TCA cycle and oxidative phosphorylation are negatively regulated compared with Glycolysis/Gluconeogenesis. The comparison is shown by box plots at the top of Figure 4. Usually yeast becomes hypo-metabolic (respiration rates decrease) around days 3-5. One explanation for this observation is that the long-lived mutants become hypo-metabolic faster than the wild type.
Hxt2 and Hxt4 are both high-affinity glucose transporters whose expression is induced by low levels of glucose and repressed by high levels of glucose [27,28]. Our results indicate that each of the sch9Δ, ras2Δ, and tor1Δ deletions leads to significant up-regulation of Hxt2 (log-ratios 1.63, 0.83, 1.54) and Hxt4 (log-ratios 1.15, 1.66, 0.67), and thereby to a more efficient usage of glucose. Consistently, the TIGO category associated with monosaccharide catabolism is also positively regulated. Genes in this TIGO category participate in chemical reactions that lead to the breakdown of monosaccharides and polyhydric alcohols.
Significant changes of other related pathways are also observed. The fructose and mannose metabolism pathways are positively regulated in sch9Δ, ras2Δ, and tor1Δ, while the galactose metabolism and the starch and sucrose metabolism pathways are positively regulated in ras2Δ, although not as significantly as the Glycolysis/Gluconeogenesis pathway. This indicates that the up-regulation of Glycolysis/Gluconeogenesis is associated with modifications of other catabolic pathways in the mutant cells.
Change of compartment activity
In yeast, the reactions of the TCA cycle, electron transport, and oxidative phosphorylation occur inside mitochondria, whereas those of glycolysis occur in the cytosol. In aerobic conditions, oxidative phosphorylation efficiently generates ATP, but at the same time it produces reactive oxygen species (ROS) as byproducts, which are thought to be one of the causes of ageing. The change of energy pathways leads us to consider changes of compartment activities. From Table 1, the overall expression levels of mitochondria are significantly lower in sch9Δ, ras2Δ, and tor1Δ. In Table 4 we examine the expression differentiation for all five TIGO categories specifically associated with mitochondria: mitochondrial large ribosomal subunit, mitochondrial small ribosomal subunit, mitochondrial inner membrane, mitochondrion organization and biogenesis, and protein-mitochondrial targeting. Their expression activities are consistently down-regulated in all three mutants. Thus we hypothesize that the reduction of biological activities in mitochondria may lead to elongation of the chronological life span in S. cerevisiae.

Table 4. The transcription activities of the long-lived strains with respect to the wild-type yeast for five TIGO categories associated with mitochondria.

In contrast, our GO analysis shows that the TIGO categories cytosolic large ribosomal subunit (GO:0005842) and cytosolic small ribosomal subunit (GO:0005843) are positively regulated in all the long-lived mutants. Consistent with the GO analysis, the ribosome pathway excluding mitochondrial ribosomal subunits is positively regulated in the KEGG analysis. Furthermore, we directly examine the expression differentiation between cytosolic ribosomes and mitochondrial ribosomes by the Wilcoxon rank test, and all p-values are less than 10⁻¹⁰. The comparison is illustrated by box plots in Figure 5. As shown in Table 1, ER-located and vacuole-located proteins are positively affected. The GO category endoplasmic reticulum membrane (GO:0005789) is up-regulated too. The endoplasmic reticulum is part of the endomembrane system, which modifies proteins, makes macromolecules, and transfers substances throughout the cell. In budding yeast cells, vacuoles are the storage compartments for amino acids and the detoxification compartments. Under conditions of starvation, proteins are degraded in vacuoles, a process called autophagy. The up-regulation of vacuole-located proteins may imply that autophagy in the cells of these long-lived mutants is enhanced to maintain survival in low-nutrient conditions such as SDC medium.
Differences among mutants
Despite the common expression patterns in sch9Δ, ras2Δ, and tor1Δ, we do observe various differences. ras2Δ shows the lowest overall transcriptional activities. In addition, we found that the activities relating to mitosis in the mutant ras2Δ are all down-regulated; see Table 5. These include regulation of mitosis, mitotic sister chromatid segregation, and mitotic spindle organization and biogenesis. The activities in cellular components (GO categories) such as spindle pole body, spindle pole body and microtubule cycle, bud neck, and bud tip and incipient bud site provide additional evidence. Similar and independent results are found in the cellular location analysis (Table 1). Activities relating to DNA replication and DNA repair are down-regulated too, which is consistent with the lower mitotic activities. It is known that Ras proteins regulate cell growth in response to nutrient availability through protein kinase A (PKA) activity; see [29] for references. Ras proteins also have PKA-independent functions in mitosis and actin repolarization [30]. The expression profiles from our experiments provide systematic evidence for these findings. The down-regulation of the MAPK signaling pathway is significant in ras2Δ (q-value = 0.009), weaker in tor1Δ, and absent in sch9Δ. Some differences in metabolism are also observed. For example, inositol phosphate metabolism is down-regulated in ras2Δ and tor1Δ (q-values = 0.013, 0.039) but not in sch9Δ. The phosphatidylinositol signaling system is down-regulated in ras2Δ and tor1Δ (q-values = 0.036, 0.059) but not in sch9Δ. Although it is difficult to enumerate them all, the differences among the three long-lived mutants suggest, from another angle, that the common expression patterns reported above are strongly linked to longevity.
DISCUSSION
To understand the mechanisms of ageing, we identified the common and characteristic differentiation in the transcriptional profiles of the three long-lived strains sch9Δ, ras2Δ, and tor1Δ. The success of our effort hinges on the measurement accuracy of the mRNA expression levels. In the design of the Affymetrix GeneChip®, multiple (11-20) 25-mer probes are used for each ORF (open reading frame), and they serve as within-block statistical replicates. In addition, we do observe higher probe specificity and other improvements in the recent Yeast2.0 chips. It should be noted that cross-hybridization always exists, and the measured differentiation tends to be smaller than the real differentiation.
Our analysis is based on the sub-array normalization that aims to improve accuracy and preserve differentiation. A simple linear function is sufficient, if not perfect, for the purpose of normalizing our yeast microarrays. First, all the microarray experiments were conducted under the same conditions. Second, the estimated scale values are mostly in the range [0.7, 1.3]; see the histograms in Figure S1. It is argued in [23] that a simple linear function is sufficient in such a normalization case. Third, hybridization is a complicated process with various uncontrollable factors, so it is important to use a highly robust estimator of the linear function to eliminate unpredictable probe intensities; least trimmed squares is an appropriate choice due to its robustness in several senses. Fourth, the fair modification of expression profiles from normalization should vary from one situation to another; in our case, by comparing Figure 3 and Figure S2, we feel the fair modification should not be large. We made an effort to preprocess this yeast microarray data set and examined the validity of the presented results from the perspectives of the S. pombe probe sets, the non-normalized expression profiles, and other considerations and supporting examples reported in our previous work. It is our belief that the future of functional genomics and proteomics lies not only in scale but also in measurement accuracy. To be consistent with the sub-array normalization, we use the median-polishing summarization stratified by the reference selected from the raw wild-type arrays. Additional investigation of the reference effect is worthwhile in future research.
The common changes of biological activities in the differentiation are inferred by integrating the expression profiles with biological subsets defined by cellular organelles, metabolic pathways, biological processes, and molecular functions. We choose to use instruments from three sources: cellular localization of proteins, KEGG, and Gene Ontology. Gene Ontology compiles results from the literature along three dimensions, and some categories overlap with those from the other two sources. Similarly, we can make inferences about the transcriptional regulation in long-lived mutants using the expression profiles together with ChIP-chip information and binding motifs; those results are described in other reports. We use the q-value method developed by Storey et al. [25,26] to deal with the multiple testing issue. The definitions and algorithms of q-values were initially obtained under several assumptions, one of which is that the null distribution of the p-value is uniform on [0,1]. In Figure S3, we show the histograms of p-values from the KEGG pathway analysis. Moreover, the same genes can be shared by multiple subsets, and dependence among hypotheses exists. The sensitivity of q-values to these assumptions in our study is a subtle problem, and it is worth more investigation in our future work.
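A quick diagnostic for the uniformity assumption mentioned above is simply to histogram the p-values; under the global null they should be roughly flat on [0,1]. A minimal sketch (our own illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

def check_pvalue_uniformity(pvals, bins=20):
    """Histogram of p-values against the flat profile expected under the
    global null; an excess near 0 indicates true signal, while other
    shapes suggest the uniformity assumption may be violated."""
    pvals = np.asarray(pvals)
    plt.hist(pvals, bins=bins, range=(0.0, 1.0), edgecolor="black")
    plt.axhline(pvals.size / bins, ls="--", color="red", label="uniform level")
    plt.xlabel("p-value")
    plt.ylabel("count")
    plt.legend()
    plt.show()
```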
Other than subset-versus-all comparisons, we also make local comparisons between two directly related subsets, such as cytosolic and mitochondrial ribosomal proteins. If the ''directly related gene subsets'' are appropriately selected, in our opinion, the conclusions drawn from microarray studies can be greatly strengthened by this local inference approach. Our consensus inference is obtained by cross-examination of the inferences drawn from the different instruments.
Our results show that the mutants sch9Δ, ras2Δ, and tor1Δ, which share the same phenotype of a longer chronological life span, do share some common differentiation patterns. The commonality is particularly interesting in the presence of various differences; for example, the activities relating to mitosis in ras2Δ are significantly reduced. The significant and systematic expression differentiation underlying the phenotype is critical for understanding ageing in yeast. One such feature is lower pre- and post-transcriptional activities.
Another common characteristic of the long-lived strains is the down-regulation of the TCA cycle and oxidative phosphorylation. In contrast, the upstream part of this process, the Glycolysis/Gluconeogenesis pathway, is slightly or moderately up-regulated. The up-regulation of genes relating to Glycolysis/Gluconeogenesis implies that mutant cells consume the carbon sources in a different manner compared to the wild type. The adaptation may be achieved through a mechanism similar to that in CR. On the other hand, the down-regulation of genes relating to the TCA cycle and oxidative phosphorylation indicates that mutant cells switch to alternative energy pathways that likely depend on glycolysis. Rea et al. [31] proposed a metabolic model to describe the ''energy switch'' hypothesis for longevity mutants in C. elegans. They suggested that most, if not all, long-lived mutants in C. elegans utilize anaerobic mitochondrial fermentation, which does not involve the electron transport chain and generates fewer radical species. Our results indicate that the notion of ''energy switch'' may be relevant for explaining life span extension in S. cerevisiae. However, rather than anaerobic mitochondrial fermentation, in the yeast strains sch9Δ, ras2Δ, and tor1Δ the alternative energy pathway is likely to involve glycolysis and to occur in the cytosol or in organelles other than mitochondria.
Evidence from the analysis of cellular organelles, GO, and the comparison of cytosolic and mitochondrial ribosomes all indicates that the activities of mitochondria are significantly reduced in sch9Δ, ras2Δ, and tor1Δ. In contrast, the expression of cytosolic ribosomes is up-regulated. This change of compartment activities supports the ROS theory, which says that reactive oxygen species (ROS) damage macromolecules and thereby accelerate ageing. The majority of cellular ROS (approximately 90%) is generated in mitochondria as a byproduct of oxidative phosphorylation during respiration [32]. A number of mutations affecting respiration have been found to increase life span, and at least some may achieve this by decreasing ROS levels [34]. According to our analysis, many of the down-regulated genes encode mitochondrial proteins; equivalently, the expression levels of genes that encode proteins localized in mitochondria tend to be negatively regulated in the long-lived mutants. In particular, in the long-lived mutants the TCA cycle and oxidative phosphorylation, both of which occur in the mitochondria, are negatively affected. As a consequence, respiration is reduced and thereby less ROS is produced. Our observations and implications are consistent with results from a systematic RNA interference (RNAi) screen of 5,690 Caenorhabditis elegans genes for gene inactivations that increase lifespan [34]. They found that genes important for mitochondrial function stand out as a principal group of genes affecting C. elegans lifespan. Our results in yeast suggest that the reduction of mitochondrial activities closely relates to the extension of the yeast chronological life span.
"year": 2007,
"sha1": "80d14025f60bb92ef996d99ed76639d54ac07c35",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0001095&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "80d14025f60bb92ef996d99ed76639d54ac07c35",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Influence of Work From Home Policies and Performance Allowance on Employee Performance at The Directorate General of Administration of The Ministry of Home Affairs Indonesia
The purpose of this study was to determine how much influence the Work from Home policy and performance allowances jointly have on the performance of employees at the Directorate General of Regional Administration of the Ministry of Home Affairs. The research method used survey techniques with quantitative and correlational approaches and a simple random sampling technique. The sample size in this study was 73 respondents. Based on the results of the research, it can be concluded that: (1) the Work from Home policy variable has a positive, strong, and significant effect on employee performance; (2) the performance allowance variable has a positive, strong, and significant effect on employee performance; and (3) the Work from Home policy and performance allowance variables together have a positive, strong, and significant effect on employee performance.
Temporary Cessation of Office Activities in the Context of Preventing the Spread of Corona Virus Disease Outbreaks within the Ministry of Law and Human Rights.
As part of a government agency, the Directorate General of Regional Administration of the Ministry of Home Affairs has implemented Work from Home since the beginning of March 2020. Work activities in each work unit of the Directorate General of Regional Administration are carried out with each section and field reporting their respective daily activities, whether the staff work from home or serve in the office.

Working from home certainly carries the same obligations and responsibilities as working from the office. In practice, however, the implementation of Work from Home faces challenges and obstacles that are not easy, because not all areas of work can be done from home. Many factors can affect the implementation of Work from Home and thereby directly affect employee performance, such as the completeness of work applications, communication, lack of coordination, disruptions in the home environment, and so on. For this reason, a specific strategy is needed to anticipate and overcome these obstacles.

The Work from Home policy also indirectly affects the provision of performance allowances for employees. This is related to the incomplete employee attendance record during the pandemic. The resulting calculation of performance allowances no longer matches the salary provisions for the State Civil Apparatus, and deductions are made, which burdens the economic life of the employees and can affect the performance level of the employees concerned.

Among the obstacles that occurred during the Corona pandemic were low job performance of employees; less than optimal implementation of work activities under the Work from Home policy; low employee motivation, because the Work from Home policy is not accompanied by rewards for employees who achieve work targets; limited work facilities and infrastructure for employees; obstructed work coordination between units in the organization; difficulty in using work applications; and the emergence of work stress due to higher job demands.
II. LITERATURE REVIEW
A. Work From Home Policy
Policy, in the Big Indonesian Dictionary, is a series of concepts and principles that form the outline and basis of a plan for implementing a job to achieve goals or objectives. Etymologically, Dunn (2000: 51-52) explains that the term policy comes from Greek, Sanskrit, and Latin. In Greek, policy is called polis, which means ''city-state''; in Sanskrit it is called pur, which means ''city''; and in Latin it is called politia, which means ''state''.

Several scholars have explained various kinds of policies, including Friedrich in Indiahono (2009: 18), who states that: ''Policy is a direction of action proposed by a person, group or government in a certain environment that provides obstacles and opportunities, which the proposed policy uses and overcomes in order to achieve a goal, or the realization of a particular goal or purpose.'' According to Abidin (2004: 30-31), policies are generally divided into three levels: 1) general policies, namely policies that serve as guidelines for implementation, whether positive or negative, covering the entire region or agency concerned; 2) implementation policies, namely policies that elaborate general policies, for example, at the central level, government regulations on the implementation of a law; and 3) technical policies, namely operational policies that are under the implementation policies.

In general, policies are written rules that are formal organizational decisions binding on members associated with the organization, which can regulate behavior to create new values in society. In contrast to laws and regulations, policies are only a guide for action and are not coercive like the law. Although policies regulate what can and cannot be done, they are only adaptive and interpretive. Policies are generally problem-solving in nature and are expected to be general, but without eliminating the local characteristics of an organization or institution; in other words, policies must provide an opportunity to be interpreted according to existing conditions. Work from home is a term for working remotely, more precisely working from home, so workers do not need to come to the office to meet other workers face to face. Work from home is familiar to freelancers, who often call it remote working. Work from home and remote working are essentially no different; the difference lies only in the terminology and in each organization's rules. Some organizations apply regular working hours, from 8 am to 4 pm, while others allow free working hours as long as the work is done and communication always receives a fast response.

According to Crosbie & Moore (2004: 21), working from home means paid work done primarily from home (at least 20 hours per week). Working from home provides flexible time for workers and a balance of life for employees; on the other hand, it also provides benefits for the company.

Based on the theoretical description above, it can be concluded that the Work from Home policy is an effort or action to influence the system to achieve the desired goals through strategic, long-term, and comprehensive efforts and actions, with indicators: 1) low cost, 2) flexibility at work, 3) increased work productivity, 4) increased job satisfaction, 5) work-life balance, and 6) avoidance of disturbances in the work environment.
B. Performance Allowance
Performance allowances are given based on the performance achieved by an employee. Based on the attachment in the Regulation of the Minister for Administrative Reform and Bureaucratic Reform Number 63 of 2011 concerning Guidelines for Structuring the Performance Allowance System for Civil Servants, it is explained that performance allowances are allowances given to civil servants which are a function of the successful implementation of bureaucratic reform and are based on the performance of these civil servants, which is in line with the performance of the organization where the civil servant works.
Performance allowances or remuneration can provide additional income to each employee so that they can concentrate more on their work. The remuneration system for employees is part of the bureaucratic reforms implemented by the government. Some experts argue that the terms remuneration and compensation are the same; the only difference is the placement of these two words. In Indonesia, this term began to be commonly known by the general public when the bureaucratic reform program was introduced, one element of which was the implementation of remuneration. Its existence in an organization cannot be ignored, because it is directly related to the achievement of goals.

Employee benefits are payments and services that protect and supplement the base salary, and the organization pays all or part of these benefits. The main effect of this type of compensation is the long-term retention of employees in the organization. There is little or no evidence that the enormous variety of additional programs, often termed equipment allowances, motivates employees toward higher productivity.

According to Hasibuan (2012: 41), benefits and services are ''additional compensation, both financial and non-financial, which is given based on organizational policy to all employees in an effort to improve their welfare.'' According to Suharto (2003: 12), allowances are additional income outside of salary given as support or assistance. Suharto further states that allowances can be regarded as an implementation of social security.

Based on the opinions of the experts above, it can be synthesized that the performance allowance is an allowance given to employees as compensation for implementing the bureaucratic reform plan, based on the performance achieved by an employee, with indicators including: 1) salary, 2) incentives, 3) insurance, and 4) facilities.
C. Employee Performance
Etymologically, performance derives from the notion of work achievement. Mangkunegara (2005: 67) argues that the term performance comes from ''job performance'' or ''actual performance'', namely the quality and quantity of work achieved by an employee in carrying out his duties in accordance with the responsibilities assigned to him.

Furthermore, Mangkunegara (2005: 75) states that performance can be divided into two kinds, namely individual performance and organizational performance. Individual performance is the result of individual work, in terms of both quality and quantity, based on predetermined work standards, while organizational performance is a combination of individual performance and group performance.

Employee performance is the work that can be achieved by a person or group of people in an organization, in accordance with their respective authority and responsibility, in order to achieve the goals of the organization concerned legally, without violating the law, and in accordance with morals and ethics (Mathis and Jackson, 2009: 113). The same point is made by Ruky (2001: 12), who states that performance is a way of measuring the contribution of individuals or employees to the organization they work for. Bernardin and Russel (2000: 239) define performance as the record of the outcomes produced in a specific work function or activity during a certain period. According to Wood et al. (2001: 114), performance is a concise measurement of the quantity and quality of the contribution of tasks performed by individuals or groups for units or organizations.

Performance comprises the activities and results that can be achieved or continued by a person or group of people in carrying out their duties well, meaning that they achieve the goals or work standards that have been set beforehand or even exceed the standards set by the organization in a certain period (Handoko, 2000: 135).

Based on these several opinions about employee performance, it can be concluded that performance is the work achieved by an employee in carrying out his duties in accordance with the responsibilities assigned to him, with indicators including: 1) quantity of work, 2) quality of work, 3) work knowledge, 4) creativity, 5) cooperation, 6) interdependence, 7) initiative, and 8) self-quality.
III. RESEARCH METHODS
A. Research Design
In this study, a survey research method with a quantitative and correlational approach was used to see how much influence the independent variables have on the dependent variable. The research was conducted on a group at the Directorate General of Regional Administration of the Ministry of Home Affairs. Through this method, the authors hope to examine specific aspects of a social situation in depth, in this case the influence of the Work from Home policy and performance allowances on employee performance. The weakness of this research method is that, because it studies specific aspects, the possibility of achieving generalizations is minimal.

Meanwhile, to examine the influence between variables, the author uses a correlative, or associative, approach. Sugiyono (2014: 11) states that the associative method is a research method that seeks to find the influence of one variable on another variable. The causal and correlative approaches mean that this study is designed to relate different research variables, namely the independent variables to the dependent variable. This approach does not only describe each variable but also includes testing the effect of the independent variables on the dependent variable; furthermore, it shows how much influence the independent variables have on the dependent variable and the direction of the influence under study.
B. Population and Sample
According to Sugiyono (2014: 90), the population is generally meant as a generalization area consisting of objects or subjects that have certain qualities and characteristics. Furthermore, Creswell (2013: 151) states that ''a population is a group of individuals who have the same characteristic.'' The population in this study was all employees at the Directorate General of Regional Administration of the Ministry of Home Affairs, totaling 267 people. Sugiyono (2014: 91) states that the sample is a part of the number and characteristics of the population. If the population is large and it is impossible for the author to study everything in it due to limited funds, energy, and time, the author can use a sample taken from that population; what is learned from the sample will then apply to the population. For this reason, the sample taken from the population must be truly representative.

The sampling technique determines how the sample is drawn from the population, and various sampling techniques can be used. In this study, the number of samples was determined using a simple random sampling technique. This method is used when the members of the population are considered homogeneous, because a representative sample can then be taken randomly (Sugiyono, 2014: 101). The number of respondents sampled in this study was 73, regardless of strata.
C. Data Collection and Processing Techniques
Data collection techniques can use primary and secondary sources, which can be explained as follows. 1. Primary sources. Primary data were collected through a questionnaire, that is, a data collection technique carried out by giving a set of written questions or statements to the respondents to answer. In this questionnaire, the authors use a structured list of statements containing 12 statements for the Work from Home policy variable, 12 statements for the performance allowance variable, and 12 statements for the employee performance variable. 2. Secondary sources. Secondary data were obtained from organizational records and literature, as well as observations related to this research topic. The measurement used in this study is a Likert scale: the variable to be measured is translated into sub-variables and then into measurable components. Sugiyono (2014: 141) states that the data validity test in research is often emphasized only in the validity and reliability tests. In quantitative analysis, the main criteria for research data are that they be valid, reliable, and objective. Validity is the degree of accuracy between the data that occur in the object of research and the data reported by the researcher. In this study, the data analysis used regression analysis.
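The paper does not spell out its reliability computation. As one common choice for Likert-scale instruments, Cronbach's alpha can be computed as in the hedged sketch below; this is our illustration, not the authors' procedure.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, n_items) matrix of Likert
    scores: alpha = k/(k-1) * (1 - sum of item variances / variance of
    the total score). Values above ~0.7 are conventionally acceptable."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()    # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1.0 - item_var / total_var)
```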
IV. RESEARCH RESULT
1) The Effect of Work from Home Policy (X1) on Employee Performance (Y)
The calculated correlation between the Work from Home policy and the performance of employees at the Directorate General of Regional Administration of the Ministry of Home Affairs is 0.722. This shows that the Work from Home policy variable has a positive and strong relationship with employee performance.

Regarding the contribution of the Work from Home policy variable to employee performance, the Work from Home policy variable contributes 52.2% to employee performance, while the remaining 47.8% is influenced by other factors outside the research.

To determine the direction of the relationship between the Work from Home policy variable and the employee performance variable, whether positive or negative, and to predict the value of the employee performance variable if the value of the Work from Home policy variable increases or decreases, the regression equation is: Ŷ = a + bX1 = 13.813 + 0.718X1. These numbers can be interpreted as follows: a. The constant is 13.813; this means that if the Work from Home policy (X1) is 0, employee performance (Y) is positive, namely 13.813. b. The regression coefficient of the Work from Home policy variable (X1) is 0.718; this means that if the Work from Home policy (X1) increases by 1 unit, employee performance (Y) will increase by 0.718 units. The coefficient is positive, meaning that there is a unidirectional relationship between the Work from Home policy and employee performance: the more precisely the Work from Home policy is implemented, the more employee performance will increase.
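For readers who want to reproduce this kind of output, the sketch below fits the simple regression and shows how the reported ''contribution'' relates to the correlation (R² = r²); the data arrays are hypothetical, since the raw questionnaire data are not published.

```python
import numpy as np

def simple_regression(x, y):
    """OLS fit y = a + b*x, plus Pearson r and the contribution R^2 = r^2."""
    b, a = np.polyfit(x, y, 1)          # slope, intercept
    r = np.corrcoef(x, y)[0, 1]         # correlation coefficient
    return a, b, r, r ** 2

# With the paper's figures: r = 0.722 gives R^2 = 0.722**2 ~= 0.522,
# i.e. the reported ~52.2% contribution of the WFH policy variable.
```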
2) The Effect of Performance Allowances (X2) on Employee Performance (Y)
The calculated correlation between the performance allowance and the performance of employees at the Directorate General of Regional Administration of the Ministry of Home Affairs is 0.699. This shows that the performance allowance variable has a positive and strong relationship with employee performance.

Regarding the contribution of the performance allowance variable to employee performance, the performance allowance variable contributes 48.8% to employee performance, while the remaining 51.2% is influenced by other factors outside the study.

To determine the direction of the relationship between the performance allowance variable and the employee performance variable, whether positive or negative, and to predict the value of the employee performance variable if the value of the performance allowance variable increases or decreases, the regression equation is: Ŷ = a + bX2 = 13.983 + 0.703X2. These numbers can be interpreted as follows: a. The constant is 13.983; this means that if the performance allowance (X2) is 0, employee performance (Y) is positive, namely 13.983. b. The regression coefficient of the performance allowance variable (X2) is 0.703; this means that if the performance allowance (X2) increases by 1 unit, employee performance (Y) will increase by 0.703 units. The coefficient is positive, meaning that there is a unidirectional relationship between the performance allowance and employee performance: the higher the value of the performance allowance, the more employee performance will increase.
3) Effect of Work from Home Policy (X1) and Performance Allowances (X2) on Employee Performance (Y)
The calculated correlation between the Work from Home policy and the performance allowance taken together and the performance of employees at the Directorate General of Regional Administration of the Ministry of Home Affairs is 0.755. This shows that the Work from Home policy and performance allowance variables together have a positive and strong relationship with employee performance.

Regarding the joint contribution of the Work from Home policy and performance allowance variables to employee performance, the two variables together contribute 60.0% to employee performance, while the remaining 40.0% is influenced by other factors outside the research.

To determine the significance of the joint effect of the Work from Home policy and performance allowance variables on employee performance (F test), the constant a and the regression coefficients b1 and b2 give the equation: Ŷ = a + b1X1 + b2X2 = 7.479 + 0.456X1 + 0.386X2. This means that employee performance due to the Work from Home policy and performance allowances can be predicted through this regression equation. Based on the score data for the Work from Home policy and performance allowances, the highest score is 50 (5 x 10), where 5 is the highest score for each answer and 10 is the number of question items. These factors are important for showing employee professionalism at work. In addition, organizations can do several other things to support the improvement of employee performance: improving policies in the organization, implementing a good management system, increasing comfort regarding the physical conditions of a pleasant workplace, and establishing good cooperation between employees and between employees and superiors. All of these factors will significantly affect employee performance.
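A corresponding sketch for the joint model, again with hypothetical data: ordinary least squares recovers coefficients of the form reported above (a ≈ 7.479, b1 ≈ 0.456, b2 ≈ 0.386) and the 60.0% joint contribution as R².

```python
import numpy as np

def multiple_regression(x1, x2, y):
    """OLS fit y = a + b1*x1 + b2*x2 via least squares, plus R^2."""
    X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # (a, b1, b2)
    resid = y - X @ coef
    r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return coef, r2   # paper reports coef ~ (7.479, 0.456, 0.386), R^2 = 0.600
```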
"year": 2021,
"sha1": "e9a08e0d11a33f0bcd2c6e37cd093ba59213624f",
"oa_license": null,
"oa_url": "http://ijefm.co.in/v4i6/Doc/11.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c77aeb3285a090c8ad0bfed3b9716a3b353be172",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
Contactless method for online control tension of radio-reflective mesh surface of large-sized folding mirror antennas
A new contactless method for online control of the tension of the radio-reflective mesh surface of a large-sized folding mirror antenna is presented. The method is based on revealing the correlation between Moire patterns and the uniformity and tension of the net-shaped curtain. The advantages of the method are presented; its main advantage is that it makes it possible to install the net-shaped curtain on the folding skeleton of an antenna reflector in the online mode. The presented method is universal and can be used not only to manufacture antenna reflectors with the working surface in the form of a net-shaped curtain but also in any other structure in which the checked element is a net, regardless of the material of which it is made. A way to apply the method is presented.
Introduction
Work aimed at constructing large umbrella reflector antennas with a rigid frame skeleton, which can open automatically in orbit, has been performed for half a century in Russia and abroad [1-3]. The diameter of the reflector of such antennas can exceed 100 meters and is limited by the size of the payload area under the launch vehicle dome, where the antennas are placed in the folded state. Thus, the structure of a large-sized mirror antenna should be characterized by a high transformation factor (from 1/10 up to 1/50), and it should open reliably and secure the required rigidity of the frame and tension of the radio-reflecting material after the opening of the antenna reflector. Figure 1 depicts the general view of an umbrella offset antenna reflector with a diameter of 100 m (the transformation factor is a/b = k). It is one of the first projects for such an antenna type, developed in the United States by Langley Research Center (LRC) and Lockheed Missiles and Space Company (LMSC) [1]. In the open state, the antenna comprises spacecraft 1 with antenna exciters placed at the focus of the antenna reflector 3, once the frame supports 2 have been opened (Fig. 1a). After opening, the antenna reflector 3 represents a rigid frame structure consisting of tetrahedral cells 4 connected by hinges. The tetrahedral cells are made of three diagonal rods 6 and six folding rods 7 connected by central units 5 (Figs. 1b-1d). Figures 1b-1d depict the tetrahedral cell 4 in the folded, intermediate, and opened positions. The elastic metallic net-shaped curtain 8, connected to the folding rods 7 of the tetrahedral cells 4 from the operating side of the reflector's frame skeleton, is used as the radio-reflecting material. All tetrahedral cells 4 are opened synchronously by the elastic forces of spring mechanisms. When the skeleton has been opened and the folding rods 7 have been fixed, the elastic metallic net-shaped curtain 8 is tensed with the given force in the rigid frame skeleton of the antenna reflector. The mechanism for opening such structures (Fig. 1) can be manufactured on the basis of torsion springs placed directly in the hinge joints, on the basis of compression springs placed inside the folding rods, or as umbrella-type mechanisms acting on the diagonal rods [1-3].
If a highly rigid structure of the large mirror-antenna reflector is required, a bicurvature folding frame skeleton consisting of spring-loaded folding and solid rods connected flexibly is preferable. In this case, frame skeletons consisting of tetrahedral cells are characterized by the greatest specific rigidity (Fig. 1) [1]. The working surface of such reflectors is approximated by triangular flat cells (facets) over which a net-shaped stockinette material made of metallic mono- and complex fibers is stretched, as such materials mainly meet the given requirements [4,5]. We point out that it is very difficult to make a net-shaped curtain weight-free during the on-ground processing of an umbrella antenna reflector. That is why, in spite of the fact that there is no gravitational force in space conditions, during on-ground processing the net-shaped curtain should be tensed uniformly on the open reflector's skeleton with a force that keeps the sag caused by its own weight within the permissible values. If the tension of a radio-reflecting net-shaped curtain is not sufficient, the contact between the gold-plated or nickel-plated metal fibers becomes weaker and the radio-reflecting properties of the net-shaped curtain decrease. In addition, the net-shaped curtain can be heated by the sun during the operation of the antenna, which can weaken the curtain's tension, and as a result the facet's flatness can be spoiled by bulging. Folds can form on the reflecting surface of an open reflector after long-term storage, spoiling the geometrical accuracy of the facets' surface. In contrast, a strong and nonuniform tension of the net-shaped curtain on the skeleton causes an asymmetric damping impact on the opening reflector's skeleton, which decreases the opening reliability; it also becomes necessary to increase the rigidity of the springs in the mechanism for opening the rods of the folding frame skeleton or to introduce an additional mechanism for its forced opening [3,6-9]. The tension forces of metallic stockinette net-shaped curtains on the folding skeletons of modern large space antennas are obtained experimentally. Depending on the type of net-shaped curtain, the force ranges from 5 to 12 g/cm [4,7].
Thus, a contradictory problem appears. On the one hand, the net-shaped curtain should be tensed with sufficient force to secure the required radio-technical performance of the antenna. On the other hand, the net-shaped curtain's tension should be uniform, with the minimal permissible force. To solve this problem, it is necessary to develop efficient methods for checking the force and uniformity of the net-shaped curtain's tension in the reflector's folding skeleton [7].
2. Description of methods of installation of the mesh radio-reflective surface on the folding structure of large-sized mirror antennas

Currently, there are two main types of large-sized folding mirror antennas with a mesh radio-reflective surface: folding umbrella-type antennas and folding antennas with a truss structure made of tetrahedral cells (Fig. 1).
Installation of the mesh radio-reflective surface on a folding framework of the umbrella-type mirror antenna includes the following steps [10,11].
Step 1. Manufacturing a counterpart of the mesh radio-reflecting surface of the folding umbrella-type antenna.

Step 2. Cutting and attaching the mesh radio-reflective surface to the counterpart (Fig. 2a). At the same time, the tension of the mesh radio-reflective surface is controlled, for example, mechanically, by pressing an indenter into the controlled surface with a specified force and measuring the depth of the indentation [12,13].

Step 3. Transferring the mesh radio-reflective surface to the open folding framework of the umbrella-type mirror antenna by means of the counterpart, securing it (Fig. 2b), and removing the counterpart.
Installation of the mesh radio-reflective surface on the open truss structure of the folding mirror antenna made of tetrahedral cells is carried out as follows. The initial problem of setting a net-shaped curtain onto the facet of a tetrahedral cell is solved by the transfer method, which consists of the following technological operations. The tensed net-shaped curtain 2 is put onto the pins of the triangular technological frame (Fig. 3a), the sizes of whose sides are somewhat larger than those of the facet's sides. The distance between pins is 50 mm, and the force of the preliminary tension of the net-shaped curtain is 5 g/cm. After that, with the help of tensoframes 3, the tension forces of the net-shaped curtain 2 are measured and recorded at the reference points (Fig. 3b). The results are processed, and if a disallowed nonuniformity of the net-shaped curtain's tension, with forces differing from the required ones, is revealed, the tension of the net-shaped curtain is corrected: it is taken off the pins in the indicated places, elastically deformed, and put on again. In this case, the correcting displacement of any checked point of the net-shaped curtain initiates displacements of the neighboring points; as a result, the state of elastic forces obtained previously is violated, and so is the uniformity of the net-shaped curtain's tension over the examined segment of the surface of the triangular facet. Consequently, new iterations are needed to refine the positions of the same points. The high number of check points, ranging from several dozen to several hundred for one facet depending on the facet size, and the need for algorithmic data processing make it very laborious to check a net-shaped curtain's tension with the help of tensoframes, especially if the number of facets is high (Fig. 1). When the operation of tensing the net-shaped curtain onto the technological frame is finished, a non-stretch tape made of, for example, aramid fibers is sewn over the triangular perimeter of the tensed net according to the real sizes of the sides of a specific triangular facet of the skeleton's tetrahedral cell. The excess net-shaped curtain is cut off. The obtained triangular segment of the radio-reflecting surface is fixed via the tape to the folding rods 4 on the operating side of the reflector's skeleton. The folding rods 4 on the operating side and the folding rods 5 on the back side are spring-loaded for opening and hinged by rods 6, similar to the structure presented in Figure 1. The edges of neighboring facets are sewn together with metallic fiber. After each opening of an umbrella antenna during its trials, the net-shaped curtain's tension on the reflector's skeleton is again selectively checked. Considering that the total number of triangular facets made of metallic stockinette fabric can range from several hundred to several thousand, the process of manufacturing the radio-reflecting surface according to the presented technology is very labor- and time-consuming. In this way, the following complex problem was defined: to develop a method for checking a net-shaped curtain's tension online, which makes it possible to ensure a uniform tension of the net-shaped curtain with the given force and to fix it to the folding rods of the working surface of the skeleton's frame directly on the open reflector's skeleton [7].
The experimental methods for investigating tensions and deformations are well known; some of the main ones are tensometry, the polarization-optical method, the brittle coating method, the analogy method, and the separating nets method [7,14]. However, they are not able to solve the defined problem, because the contact methods directly affect the main parameters of the net-shaped curtain's tension and are very labor-consuming to implement on the bicurvature surfaces of large (at least 30 meters in diameter) folding structures. Therefore, contactless methods are of interest. A method for determining a net-shaped curtain's tension is presented in [15], based on processing, analyzing, and neurally classifying textural photographic images of the net-shaped curtain. The procedure for determining the net-shaped curtain's tension consists of three stages: calibrating the classifiers for the given type of net-shaped curtain; taking photographs of the net-shaped curtain; and calculating the tension forces and generating tension topograms from the photographs with the help of the developed software. In contrast to measurements with tensoframes (Fig. 3), this method is contactless and makes it possible to increase the measurement accuracy. However, just like the other methods, it is a discrete method, which does not allow a net-shaped curtain to be installed online, with uniform tension and the given stretching forces, directly onto the skeleton of an antenna reflector. Checking the uniformity of the net-shaped curtain's tension with the given forces online, during its installation onto the skeleton of an umbrella antenna reflector, makes it possible to significantly decrease the labor-intensiveness of the technological process and to increase the level of automation. In this way, contactless online methods for checking the tension of a net-shaped curtain are required. It is known that if at least two nets consisting of lines, points, or other geometrical elements are overlaid, Moire pictures consisting of alternating dark and light strips appear [14,16]. It is reasonable to use the Moire phenomenon for the following reasons [7]: it is not necessary to place any nets on the measured object, since it is a net itself; the Moire phenomenon visualizes deformations over the entire examined surface, so the pictures appearing when the reference net is placed on the measured object give a complete view of the uniformity of the tension in the net-shaped curtain and of the tension forces; the physical properties of the net's material do not influence the measurement result, since the Moire phenomenon has a geometrical character; and the reference net can be virtual.
Let us describe and give an example of the practical application of a contactless online method using the Moire phenomenon for checking a net-shaped curtain's tension during its assembly on the folding skeleton of an antenna reflector [7].

Preliminarily, we need to generate virtual images of the reference net of a specific type and the reference Moire pictures obtained when the deformed net is placed on the reference one. The size and appearance of the Moire pictures depend on the orientation angle of the second reference net and on the degree and direction of its deformation with respect to the first (basic) one.

For this purpose, the real net-shaped curtain is stretched uniformly on a technological skeleton with less than the required force, which is determined by one of the described methods, and the obtained image is captured with graphical software, for example, Windows 7 Paint or a later version. The same net is placed in transparent mode over the image of the obtained basic reference net. After this, the second net is rotated and deformed with respect to the first one. As a result, a set of graphical samples of reference Moire pictures is obtained for different angles of rotation (Fig. 4: 15° (a), 30° (b), 45° (c), 60° (d), 75° (e), 90° (f)) and degrees of deformation (Fig. 5). From Figure 4 it is seen that the sizes of the Moire pictures become maximal at a rotation angle of 90°.

Figure 5. Moire pictures formed by placing two identical reference nets on one another, with one of them stretched with respect to the other horizontally by 10% (a), 20% (b), 30% (c); vertically by 10% (d), 20% (e), 30% (f); and horizontally and vertically by 10% (g), 20% (h), 30% (i).

Figure 5 depicts the Moire pictures formed if we place two identical reference nets on each other with one of them stretched horizontally (a-c), vertically (d-f), and horizontally-vertically (g-i) by 10, 20, and 30%, respectively, with respect to the other one. From Figure 5 it is seen that the sizes of the Moire pictures decrease as the deformation of the stretched curtain increases. In this way, the tension force of a net-shaped curtain is related to the geometrical sizes of the Moire pictures and can be determined by measuring their linear sizes. The relationship between the tension force and the degree of deformation is determined experimentally for the chosen type of net-shaped curtain. As a result, a virtual set of graphical samples of reference Moire pictures is obtained, which can be used for different purposes.

The folding skeleton is similar to the structures presented in Figs. 1 and 3 and consists of the folding rods 2 (spring-loaded for opening) of the working surface and the folding rods 3 of the back surface, hinged to each other by the diagonal rods 4. The video camera 5 (its support arm is not shown) is placed in the bottom part of the skeleton when the net-shaped curtain 1 is installed; it is connected by a cable or a radio link to the recording units. Monitor 6, showing the reference Moire picture corresponding to the required tension uniformity and force for the specific type of net-shaped curtain, and monitor 7, showing the current online video image of net 1 on the skeleton, are placed near the operators. Monitors 6 and 7 are also connected by a cable or a radio link to the recording unit. On the working surface of each of the folding rods 2, support arms 8 with deflected ''ears'' 9 are set to fix the net-shaped curtain 1 to the skeleton. The net-shaped curtain is stretched taut and fixed to the skeleton as follows. The operators stretch a segment of the net-shaped curtain and place it on the ears 9 (Figs. 6a, 6b); the corresponding Moire picture then appears on monitor 7. After this, by repositioning the net-shaped curtain on the ears 9, the operators try to obtain on monitor 7 a Moire picture similar to the one on monitor 6, that is, with an acceptable level of uniformity and force of tension of the net-shaped curtain 1 on the skeleton.
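The generation of the reference Moire pictures described above is easy to reproduce numerically. The sketch below is our own illustration (the pitch, wire width, and deformation values are arbitrary): it renders two identical square nets, rotates and stretches one of them, and displays the superposition in which the Moire fringes appear.

```python
import numpy as np
import matplotlib.pyplot as plt

def square_net(shape, pitch, angle_deg=0.0, stretch=1.0, wire=1.5):
    """Binary image of a square net: wires every `pitch` pixels, rotated
    by angle_deg and stretched horizontally by the factor `stretch`."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    t = np.deg2rad(angle_deg)
    xr = (x * np.cos(t) + y * np.sin(t)) / stretch   # rotated, stretched axis
    yr = -x * np.sin(t) + y * np.cos(t)
    return ((xr % pitch) < wire) | ((yr % pitch) < wire)

reference = square_net((600, 600), pitch=10)
deformed = square_net((600, 600), pitch=10, angle_deg=3.0, stretch=1.10)

# Superpose the two nets: light passes only where neither net blocks it,
# and large-scale Moire fringes appear in the transmitted pattern.
plt.imshow(~(reference | deformed), cmap="gray")
plt.title("Moire pattern: reference net vs. rotated, 10%-stretched net")
plt.axis("off")
plt.show()
```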
The geometrical sizes of the obtained Moire pictures are measured directly on the screen of monitor 7; the measurement accuracy can be increased by enlarging a fragment of the examined part of the images on the monitors. The operation is then repeated for the neighboring segments (facets), and the installed segments of the net-shaped curtain are finally fixed to the support arms 8 by bending the ears 9 and pressing them into the respective hollows in the support arms 8. The excess of the net-shaped curtain is cut off or tightened. The segments of the net-shaped curtain are placed on the support arms with overlap, and as a result a reliable electric contact between them is achieved. Figure 7 depicts the fragment of the folding skeleton of an antenna reflector on which the presented procedure is used for placing a uniformly tensioned net-shaped curtain with the given force.
Conclusion
We have presented a method that makes it possible to minimize costs and check online the tension of a net-shaped curtain over the cell surface during its installation on the folding skeleton of an antenna reflector; to increase the measurement accuracy, since the Moire strips magnify the images and displacements [7,16]; to exclude the metrological processing of the results of the technical measurements; to reduce the errors of a device [17,18]; and to simplify the measurement system (with respect to the known methods), since it is not necessary to place any nets or devices directly on the object. The method is also characterized by a low sensitivity to temperature oscillations and environmental dust.
The developed method also makes it possible to increase the degree of automation of the tensioning of a net-shaped curtain on large folding skeletons of antenna reflectors. The developed method is universal and can be used not only to manufacture antenna reflectors with the working surface in the form of a net-shaped curtain but also in any other structures in which the checked element is a net, regardless of the material of which it is made. | 2020-12-24T09:12:34.370Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "20fb8a6dc7b9637cc13c175507bdc1edbcfd223e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1705/1/012005",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8b50e2c039b0fa6fe06e9ba9bc27693b32606b4b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
245054058 | pes2o/s2orc | v3-fos-license | Geometric-Manifold-Assisted Distributed Navigation Probabilistic Information Fusion Cooperative Positioning Algorithm
Positioning information is the cornerstone of a new generation of electronic information technology applications represented by the Internet of Things and the smart city. However, due to environmental electromagnetic interference, building shielding, and other factors, a positioning source can fail. Cooperative positioning technology can realize the sharing of positioning information and compensate for an invalid positioning source. When one node in the cooperative positioning network has an error, the positioning stability of all nodes in the whole cooperative network is significantly reduced, but probabilistic positioning information technology can effectively reduce the impact of sudden errors. Based on this idea, this paper proposes an information-geometry-assisted distributed algorithm for probabilistic cooperative fusion positioning (IG-CP) of navigation information. The position information of different types of navigation sources is utilized to establish a probability density model, which effectively reduces the influence of a single position error on the whole cooperative positioning network. Combined with the nonlinear fitting characteristics of the information geometric manifold, the ranging information between cooperative nodes is mapped onto and fused on the geometric manifold surface to achieve cooperative positioning, which can effectively improve the stability of the positioning results. The proposed algorithm is simulated and analyzed in terms of the node positioning error, ranging error, convergence speed, and distribution of the cooperative positioning network. The simulation results show that our proposed cooperative positioning algorithm can effectively improve positioning stability and displays better positioning performance.
Introduction
Cooperative positioning is the core foundation of 5G application technologies such as the Internet of Things and the smart city. Due to the inherent errors of satellite navigation, inertial navigation, and other navigation methods, some nodes in the cooperative network will exhibit sudden error changes [1-7]. It is therefore very important to study the stability of cooperative positioning accuracy. The cooperative positioning system has received much attention from the research community and has wide applications, such as regional unmanned driving and unmanned distribution networks in urban areas and forests [8-10]. Unmanned aerial vehicles (UAVs) need the support of high-stability positioning in the above scenarios; fortunately, the cooperative positioning system has many advantages in improving the stability of positioning accuracy.
Early cooperative positioning technology mainly utilized ranging and direction finding to obtain the positions of nodes. The first generation of cooperative positioning methods, however, suffered from slow convergence and high sensitivity to the topology of the cooperative positioning network.
In order to solve the above problems, a cooperative positioning fusion algorithm based on information geometry theory is proposed. Information geometry was first utilized in radar target detection [32-34]. All kinds of electromagnetic parameters are transformed into an information probability function, and an electromagnetic scene is constructed; when the parameters of the electromagnetic scene change, target detection can be realized. Because different types of electromagnetic parameters are transformed into a probability density function, multitype parameters can be fused. The cooperative positioning network environment is similar to the radar signal detection environment; thus, applying information geometry to cooperative positioning can greatly increase the stability of the positioning accuracy.
The positioning probability density function of a cooperative node is constructed using multiple groups of positioning information instead of a single group. Through the variance and mean value of the function, the positioning performance of the node can be clearly reflected, which is conducive to fusion processing and to improving the stability of positioning accuracy in a cooperative network. Based on this idea, this paper proposes an information-geometry-assisted distributed navigation information probabilistic cooperative fusion positioning (IG-CP) algorithm. The algorithm utilizes the positioning information of each cooperative node to establish a positioning error probability model, which is mapped to the geometric manifold and combined with the distance information between cooperative nodes to achieve fusion of the positioning information. The optimal fusion estimate of the position distribution probability of the cooperative nodes replaces the positioning result of the previous instant, and the process is solved iteratively. Combined with the nonlinear characteristics of the geometric manifold, the stability of cooperative node positioning can be effectively improved.
The rest of this paper is organized as follows: the cooperative positioning system model and the information geometry model are presented in Section 2. Based on this model, the IG-CP algorithm is explained in Section 3. To reduce the influence of the node positioning error and ranging error, the phase interference positioning theory is combined with information geometry to suppress the ranging error and node positioning error. The simulation results are given in Section 4, which mainly include the positioning error, ranging error, node distribution, and computational complexity. Finally, this paper concludes with a brief summary in Section 5.
System Model
Some formulas and symbols used in this paper are defined in Table 1.

Table 1. Definitions of symbols.
F(x) — sufficient statistics of the positioning information data
δ — distance measurement error
w — distance measurement error vector
σ² — variance
n — position error
r — distance measurement
f(r|d, σ) — probability density measurement function
l(·) — logarithmic likelihood function

In the cooperative positioning network, there are hundreds of thousands of cooperative nodes. The scale of the cooperative node network has become larger than before, and the system model is shown in Figure 1. In the cooperative positioning network, every node needs to be able to utilize the positioning and ranging information of the surrounding nodes to improve its positioning stability. In our proposed IG-CP algorithm, node D in the cooperative positioning network is randomly selected, and the positioning information of nodes A, B, and C and the ranging information between them are utilized to realize the cooperative positioning of node D, as shown in the dashed box of Figure 1.
Information Geometry Probability Model
In the cooperative positioning network, the positioning information of each cooperative node is transmitted to the other nodes in the network. The positioning accuracy of cooperative networks is mainly determined by the positioning accuracy and the ranging accuracy. The change in the positioning accuracy is a nonlinear variation with arbitrary jitter. Existing cooperative position fusion technology mainly adopts an attenuation coefficient to handle it, such that the longer the time, the smaller the attenuation coefficient. When the positioning result of a cooperative node changes, this can only slowly increase or decrease the influence of the cooperative node on the positioning accuracy of the whole cooperative positioning network. It is difficult to realize a rapid update of cooperative node positioning information, thus limiting the engineering application of cooperative positioning technology. The plane of the information geometry is itself a kind of manifold surface, which is more suitable and easier to implement for complete nonlinear estimation. The complete nonlinear estimation of the information geometry transformation model is shown in Figure 2.

Figure 2. Information geometry transformation diagram.
In the standard Euclidean space, the positioning estimate of the cooperative node D can be expressed by the likelihood function p(x|u), where x represents the positioning sampling data and u represents the position coordinate vector of the cooperative node D. The likelihood function p(x|u) of cooperative node D forms a statistical manifold S = {p(x|u)} in Euclidean geometry, and it can be parameterized by the natural parameter θ and the expectation parameter η. The likelihood function p(x|u) can be smoothly embedded into the Riemannian geometric manifold of the exponential distribution by the mapping u → θ(u); that is, in the natural parameter space {θ} ∈ A it becomes a curve, with parameter equation θ = θ(u). The coordinate estimation problem of cooperative node D can thus be solved via the curve θ = θ(u) in the natural parameter space A. The right graph in the lower half of Figure 2 represents the expectation parameter space η ∈ B. The expectation parameter space B and the natural parameter space A are dual. Dots represent the positioning information data after conversion into the expectation parameter space B, and the correspondence between the natural parameter space and the expectation parameter space can be established by the Legendre transformation. The nonlinear likelihood function p(x|u) is transformed into a standard exponential family distribution p(F(x)|θ) by parametric reconstruction, where F(x) represents the sufficient statistics of the positioning information data. The natural parameter value θ of the cooperative node coordinate estimate can be obtained by linear estimation of the sufficient statistics F(x); then, mapping from θ back to u yields the positioning result of the cooperative node D. Based on the theory of information geometry, the position estimate of a cooperative node is fitted to a point on the exponential geometric manifold; the nonlinear characteristics of the positioning information data are reflected in the geometric structure of the manifold, and the nonlinear solution of the position result can make full use of the geometrical characteristics of the manifold. On the other hand, differential geometry can be applied to solve manifold problems, and a geodesic iteration instead of a state update in the Kalman filter can yield better positioning results in the cooperative network fusion positioning system.
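A minimal sketch of this duality for the simplest case, a one-dimensional Gaussian (our own illustration; the paper works with the measurement model of Section 3): the natural parameters are θ = (µ/σ², −1/(2σ²)) with sufficient statistics F(x) = (x, x²), and the expectation parameters η = ∇ϕ(θ) = (µ, µ² + σ²) are recovered as the gradient of the potential function.

```python
import numpy as np

mu, sigma = 3.0, 0.5
theta = np.array([mu / sigma**2, -1.0 / (2 * sigma**2)])  # natural parameters

def phi(th):
    # log-partition (potential) function of the Gaussian exponential family
    return -th[0]**2 / (4 * th[1]) + 0.5 * np.log(np.pi / -th[1])

# expectation parameters eta = grad phi(theta), via central differences
eps = 1e-6
eta = np.array([
    (phi(theta + eps * np.eye(2)[i]) - phi(theta - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)
])
print(eta)                        # ~ [mu, mu^2 + sigma^2] = [3.0, 9.25]
print([mu, mu**2 + sigma**2])     # analytic expectation parameters
```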
Information-Geometry-Assisted Cooperative Positioning (IG-CP)
To calculate the position of node D, two groups of distance differences are adopted to construct the phase interference positioning model: two nodes among A, B, C, and D are selected as transmitters, and two sinusoidal signals with a small frequency difference are transmitted to form a differential-frequency interference signal [35]. The remaining two nodes act as receivers, and the distances between the four nodes can be calculated according to the phase differences of the received signals, which can be used to eliminate the positioning ambiguity of the nodes. For example, if nodes A and B are assumed to be transmitters and nodes C and D are assumed to be receivers, the corresponding phase interference positioning measurement can be expressed as in Equation (1), where X_A, X_B, X_C, and X_D represent the position coordinates of the four cooperative nodes. Here ‖X_D − X_A‖ represents the distance between A and D; thus, Equation (1) can be rewritten as Equation (2). When positioning node D in the dashed box of Figure 1, it is also possible to set nodes A and C as transmitters and nodes B and D as receivers, and the corresponding phase interference positioning measurement can be expressed as in Equation (3). The measurement vector k(u) of phase interference positioning is established from the two measurement sets of cooperative node D, as in Equation (4), where u = [u_x, u_y]^T represents the position coordinates of the cooperative node D, which are the unknown parameters to be estimated, and δ_ab and δ_ac represent the distance measurement errors. Therefore, the general phase interference positioning model of a cooperative node can be expressed as in Equation (5), where w represents the distance measurement error vector, x represents the measurement data, and σ² represents the variance of x. However, the position ambiguity error of a node is not considered in the above cooperative positioning model, which is too idealized.
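The displayed forms of Equations (1)-(4) are not legible in this copy, so the following sketch of a single double-difference observable is an assumption consistent with the description above, evaluated at the anchor coordinates used later in Section 4; the function name and exact sign convention are ours.

```python
import numpy as np

# Hedged sketch of one phase-interference observable: with A, B transmitting
# and C, D receiving, the measurement is assumed to be a difference of range
# differences (a "double difference"). This construction is our assumption,
# not a reproduction of the paper's Eqs. (1)-(4).
def range_diff_measurement(xa, xb, xc, xd):
    d = lambda p, q: np.linalg.norm(p - q)
    return (d(xd, xa) - d(xd, xb)) - (d(xc, xa) - d(xc, xb))

xa, xb, xc = np.array([200., 200.]), np.array([800., 100.]), np.array([500., 900.])
xd = np.array([500., 500.])
print(range_diff_measurement(xa, xb, xc, xd))  # one entry of k(u)
```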
To fit a real cooperative positioning network, each cooperative node must be completely independent, and it is assumed that the positioning error of each cooperative node is an independent zero-mean Gaussian error. Taking the cooperative nodes A, B, and C as examples, the position errors are denoted n_a, n_b, and n_c. After the node position error is introduced, Equation (5) can be expressed as Equation (6). In the process of cooperative network positioning, the errors along each axis of every cooperative node are completely independent, denoted n_{a,x} and n_{a,y}, with the subscript indicating the cooperative node. According to statistical theory, when the position of cooperative node A contains Gaussian noise, the distance measurement r between A and D follows a Rice distribution, where d = √((x_a − u_x)² + (y_a − u_y)²) is the true distance from cooperative node A to cooperative node D. Its probability density function can be expressed as

f(r|d, σ) = (r/σ²) exp(−(r² + d²)/(2σ²)) I₀(rd/σ²),

where r = √((x_a − u_x + n_{a,x})² + (y_a − u_y + n_{a,y})²), σ represents the standard deviation of the measurement noise, and I₀(z) represents the zero-order modified Bessel function of the first kind. Since the distance measurement between cooperative nodes is usually more than 100 times the standard deviation of the noise, the Bessel term can be approximated as I₀(z) ≈ e^z/√(2πz). Therefore, the probability density measurement function between cooperative nodes can be rewritten as

f(r|d, σ) ≈ √(r/(2πσ²d)) exp(−(r − d)²/(2σ²)).

Furthermore, the measured distance r between cooperative nodes can be approximated by a Gaussian distribution [36] with mean value µ = √(d² + σ²) and standard deviation σ. When the measurement distance between cooperative nodes obeys the Gaussian distribution and the noise is more than 100 times smaller than the measurement distance, a Taylor series expansion can be utilized to approximate the distance. Taking the measurement distance between A and D as an example, let η and ξ represent the distance components on the x-axis and y-axis, respectively. The measurement distance between A and D is then expressed as a function of the two variables η and ξ as r = f(η, ξ) = √(η² + ξ²). Since the position error is included in cooperative node A, the distance measurement can be rewritten as

r = f(η + n_{a,x}, ξ + n_{a,y}) = √((η + n_{a,x})² + (ξ + n_{a,y})²)    (16)

Expanding the Taylor series of the function f(η + n_{a,x}, ξ + n_{a,y}) at the point (η, ξ) yields

f(η + n_{a,x}, ξ + n_{a,y}) = f(η, ξ) + n_{a,x} ḟ_η(η, ξ) + n_{a,y} ḟ_ξ(η, ξ) + ⋯    (17)

Since the position error of cooperative node A is far smaller than the measurement distance, the influence of the higher-order terms in the Taylor expansion is very small, and the distance between cooperative nodes A and D can be approximated as r ≈ f(η, ξ) + n_{a,x} ḟ_η(η, ξ) + n_{a,y} ḟ_ξ(η, ξ). The position errors n_{a,x} and n_{a,y} follow Gaussian distributions with zero mean; thus, the sum of the two position error terms still obeys a Gaussian distribution, with mean µ_s and variance σ_s², where σ_a represents the standard deviation of the position error of cooperative node A.
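The Rice-to-Gaussian approximation above can be checked numerically; the sketch below (using SciPy's Rice distribution, with illustrative values of d and σ chosen by us) confirms that for d ≫ σ the two densities are nearly indistinguishable.

```python
import numpy as np
from scipy.stats import rice, norm

# Numerical check of the approximation in the text: for d >> sigma, the
# Rice-distributed range r is close to N(sqrt(d^2 + sigma^2), sigma^2).
d, sigma = 300.0, 1.0          # true distance and noise std (d >> sigma)
r = np.linspace(d - 5, d + 5, 1001)
pdf_rice = rice.pdf(r, b=d / sigma, scale=sigma)        # exact Rice density
pdf_gauss = norm.pdf(r, loc=np.sqrt(d**2 + sigma**2), scale=sigma)
print(np.max(np.abs(pdf_rice - pdf_gauss)))             # small for d >> sigma
```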
The distance between cooperative nodes A and D can then be approximately expressed by the Gaussian distribution in Equation (22). From Equation (22), it can be seen that the position error does not affect the distance measurement distribution between cooperative nodes. According to the approximate result of the Taylor-expanded distance measurement shown in Equation (17), d_AD, d_BD, d_BC, and d_AC denote the ideal measurement distances between the cooperative nodes, from which the distance measurements related to the positioning solution of cooperative node D follow. The phase interference positioning measurement with position error, k_{A,B,C,D}, can then be expressed accordingly. Because the position errors of the cooperative nodes along each axis are completely independent, the variance σ_{s1}² of the sum of the six position error terms in Equation (28) can be obtained; similarly, the variance σ_{s2}² of the other measurement, k_{A,C,B,D}, follows. When the cooperative nodes have positioning errors, the phase interference positioning distance measurements k_{A,B,C,D} and k_{A,C,B,D} therefore have approximately Gaussian distributions.
In a real situation where the cooperative node has a position error, the general phase interference positioning model of the cooperative node includes an additional error term n(u), which represents the equivalent distribution of the cooperative node position errors and is fitted in our model by a Gaussian distribution with mean 0 and variance Σ_s = diag(σ_{s1}², σ_{s2}²). w represents the measurement error of the phase interference positioning distance, and the distance measurements are completely independent in phase interference positioning. Here x represents the positioning sampling data, u represents the position coordinate vector of the cooperative node, Σ_w represents the variance of the noise, and Σ_s represents the variance of the position coordinate vector. The conditional probability distribution between the measurement value and the optimal estimate follows accordingly.
The natural gradient method on the statistical manifold is adopted to realize the optimal positioning coordinate estimation of the cooperative node D. According to the phase interference positioning measurement distribution given in Equation (33), the likelihood function of the measured value is the Gaussian probability density function of Equation (34), which can be arranged into the standard curved exponential distribution form of Equation (35). According to Equation (35), in information geometry it is necessary to establish a new parametric representation with the natural parameters on the geometric manifold, where the natural parameters of the cooperative positioning nodes are denoted (θ, Θ) and are related to the local parameter u through µ(u). To simplify the calculation, the sufficient statistics of the Gaussian distribution of the measured value are set as a linear model. On the geometric manifold, the potential function ϕ(θ, Θ) can be expressed in terms of the local parameters, where n represents the dimension of µ(u). The maximum-likelihood estimate û of the local parameter u is obtained by solving the maximum-likelihood equation ∂l(û)/∂u = 0, where l(û) represents the logarithmic likelihood function and η represents the expectation function. According to the properties of the curved exponential distribution, the expectation parameter η and the Fisher information matrix G(θ) of the natural parameter can be obtained from the derivatives of the potential function, η = ∂ϕ(θ)/∂θ and G(θ) = ∂²ϕ(θ)/∂θ∂θᵀ. With the Jacobian matrix ∂θ/∂u of the natural parameter θ with respect to the local parameter u of the cooperative node positioning coordinate, the Fisher information matrix of the local parameter is G(u) = (∂θ/∂u)ᵀ G(θ)(∂θ/∂u). On the geometric manifold of the natural parameters, maximum-likelihood parameter estimation of the curved exponential distribution family yields the position of the cooperative node D, and the estimate of the positioning coordinate u is updated as

u_{k+1} = u_k + λ G(u_k)⁻¹ ∂l(u_k)/∂u,

where λ represents the iterative step size. When the new local parameter u_{k+1} of the positioning coordinate is obtained, the Fisher information matrix G(u) of the information geometric plane is updated accordingly. We utilize the iterative calculation method proposed in reference [29]: when the difference e_{k+1} = ‖u_{k+1} − u_k‖ between two iterations is less than a certain threshold th, the iteration terminates. The estimate u_{k+1} is then taken as the actual coordinate of the cooperative node. The flow of our IG-CP algorithm is shown in Figure 3. The natural gradient utilizes the local curvature of the geometric manifold to modify the iterative direction of the standard gradient, which results in a faster convergence rate. In addition, the Fisher information matrix is updated at each iteration of the natural gradient estimation, which meets the real-time fitting requirements for the nonlinear positioning error and ranging error of the cooperative positioning system. It can effectively improve the accuracy and stability of cooperative positioning.
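A compact sketch of this iteration for a generic Gaussian measurement model x = k(u) + w, w ~ N(0, Σ_w), is given below; with this model the natural gradient step reduces to Fisher scoring with Fisher matrix G(u) = JᵀΣ_w⁻¹J. The function names, the toy trilateration example (using the anchor coordinates from Section 4), and the step-size choice are ours, not the authors' implementation.

```python
import numpy as np

# Natural-gradient (Fisher-scoring) iteration, a sketch under the stated
# Gaussian assumptions: u <- u + lam * G(u)^{-1} J^T Sw^{-1} (x - k(u)),
# stopping when the step size e_{k+1} falls below the threshold th.
def natural_gradient(k, jac, x, u0, Sigma_w, lam=1.0, th=1e-9, max_iter=50):
    u = u0.astype(float)
    Sw_inv = np.linalg.inv(Sigma_w)
    for _ in range(max_iter):
        J = jac(u)
        G = J.T @ Sw_inv @ J                     # Fisher information G(u)
        step = lam * np.linalg.solve(G, J.T @ Sw_inv @ (x - k(u)))
        u = u + step
        if np.linalg.norm(step) < th:            # e_{k+1} < th: terminate
            break
    return u

# toy range-only example with the three anchors from Section 4
anchors = np.array([[200., 200.], [800., 100.], [500., 900.]])
k = lambda u: np.linalg.norm(anchors - u, axis=1)
jac = lambda u: (u - anchors) / np.linalg.norm(anchors - u, axis=1)[:, None]
x = k(np.array([500., 500.]))                    # noise-free ranges
print(natural_gradient(k, jac, x, np.array([400., 400.]), np.eye(3)))
```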
Ideal Condition Simulation
The size of the cooperative positioning network is 1000 m × 1000 m. In the cooperative network, the coordinates of the known cooperative nodes are (200 m, 200 m), (800 m, 100 m), and (500 m, 900 m), and the variance of the positioning errors of nodes with known positions is 1 m. The true coordinates of the unknown position node are at (500 m, 500 m), and the variance of the positioning error of an unknown node is 5 m. The measurement distance between cooperative nodes is the real value, and the ranging error is 0 m. On the plane of the geometric manifold with natural parameters, the probability density distribution of locating nodes is fused, and the maximum estimation of the probability density distribution is considered as the positioning result of the unknown cooperative node. The simulation results are shown in Figure 4. From Figure 4, we can see that the ranging error is 0 under the ideal condition of cooperative positioning. After the iterative convergence is completed, the optimal position estimation value of the cooperative node with an unknown position is exactly the same as the real position value, both of which are (500 m, 500 m), and the distribution probability density of the position is the same as that of the cooperative node at the unknown position. This result shows that the IG-CP algorithm, which utilizes the information probability to achieve cooperative positioning, can reduce the positioning error of cooperative nodes.
Simulation under Different Ranging Errors
In the cooperative positioning network, the ranging error will have a great impact on the positioning accuracy of the cooperative node. Existing ranging technologies mainly include radio ranging, UWB ranging, laser ranging, radar ranging, and other methods, with accuracies ranging from the cm level to the 10 m level. Therefore, the variance values of the ranging error are 10 m, 5 m, and 1 m in the simulation, and the variance of the positioning error of the cooperative node is 1 m. The distribution of the cooperative positioning network is the same as that under the ideal condition, and the simulation results are shown in Figures 5-7. From Figure 5, we can see that when the variance of the ranging error is 10 m, the unknown positioning node's maximum probability density of the positioning error is only 0.7, far less than that of the other known positioning cooperative nodes; however, the optimal position coordinate estimation of the unknown cooperative node is the same as that in a real situation, and it is still (500 m, 500 m). The simulation results show that the proposed cooperative positioning algorithm based on information geometry can reduce the influence of the ranging error by fusing the information probability of the cooperative node in the geometric manifold. It can be seen from Figures 6 and 7 that as the variance of the ranging error decreases, the maximum value of the positioning error probability density of a cooperative node with an unknown position is close to the ideal situation, and the optimal value of the positioning coordinate is kept at (500 m, 500 m). Moreover, the distribution range of the positioning error probability function also approaches the ideal situation. This outcome shows that our IG-CP algorithm can effectively reduce the influence of ranging errors between cooperative nodes. When the ranging error between nodes becomes larger, it can also ensure the stability of the positioning accuracy of the whole cooperative network.
Simulation of Cooperative Positioning under Extreme Distribution
In the application of the cooperative positioning network, there can be an extreme distribution in which the other cooperative nodes all lie to one side of some edge nodes in a single direction, resulting in an accumulation of positioning errors in that direction.
Integrated Positioning Simulation under a Multinode Network
In the development of the cooperative positioning network, the number of nodes increases exponentially. Because of the cost and load, most of the nodes have only one or no navigation source; thus, it is necessary to improve the accuracy by cooperative positioning. In this section, the scope of the cooperative positioning network is also set to 1000 m × 1000 m, and the total number of cooperative nodes is 20. Among them, the variance of the positioning error of any five nodes is 1 m, and that of the other nodes is 5 m. The variance of the ranging error between cooperative nodes is 1 m. The simulation result is shown in Figure 9.
After multiple iterations, the optimal positioning estimates of all cooperative nodes are close to the real positions. According to the positioning error probability density distribution of cooperative nodes, the positioning accuracies of all cooperative nodes are basically the same, and the maximum probability density is close to 0.9. The positioning accuracy of all cooperative nodes is similar to that of the optimal cooperative nodes. It is proved that our proposed position error probability function fusion technology can quickly realize the positioning of the whole cooperative positioning network, eliminate the influence of the ranging error, and improve the positioning accuracy of the whole cooperative network.
Simulation Analysis of the Convergence Rate
In the cooperative positioning network, the convergence rate is an extremely important index, as it is a key factor for the application of the cooperative positioning network. In this simulation, the variance of the positioning error of five of the nodes is 1 m, that of the other nodes is 6 m, and the variance of the ranging error between cooperative nodes is 3 m, with T = 100 Monte Carlo runs. To compare with the performance of existing cooperative positioning algorithms, the IG-CP algorithm proposed in this paper is compared with the SDP algorithm proposed in [6], the Taylor-DP algorithm proposed in [8], and the FGCP algorithm proposed in [9]. The simulation results are shown in Figure 10. In Figure 10, the positioning errors of the four algorithms decrease and tend to converge as the number of iterations increases. The Taylor-DP algorithm requires 15 iterations to converge, with an MMSE of 1.7 m. The convergence rate of the FGCP algorithm is better than that of the Taylor-DP algorithm: the FGCP needs 12 iterations to converge, with an MMSE of approximately 1.3 m. The convergence rate of the SDP algorithm is lower than that of our IG-CP algorithm and better than that of the other two algorithms, requiring eight iterations to converge; its MMSE is close to that of the FGCP algorithm, at approximately 1.3 m. Our IG-CP algorithm has the fastest convergence, completing in approximately five iterations. The MMSE of IG-CP is close to 1 m, which is equivalent to that of the cooperative node with the highest positioning accuracy.
Real-Environment Test
The IG-CP algorithm was tested in a real environment by using sensor nodes to construct a cooperative positioning network. The DWM1000 module is adopted to construct the cooperative nodes, and the distance between the cooperative nodes is measured by the UWB communication of the DWM1000 [37]. The range of the DWM1000 module is 3 km, its measurement accuracy is 0.1 m, and the module itself is roughly the size of a coin; its appearance is shown in Figure 11. To realize the positioning of the cooperative nodes, the STM32 development board designed by our team is utilized in the cooperative positioning system, as shown in Figure 12. To show the positioning performance of a large-scale cooperative node network, the experimental area was a farm near our university. The area included a small village and farmland, as shown in Figure 13. Twenty cooperative nodes were randomly set in a range of 1000 m × 1000 m. The initial positioning error and ranging error depend on the node devices. The result is shown in Figure 14. As can be seen in Figure 14, when the iteration is complete, the maximum probability density is close to 0.85. The positioning accuracy of all cooperative nodes is similar to that of the most accurate cooperative nodes in the network, and the optimal positioning estimates of all cooperative nodes are close to the real positions. The experimental results agree with the simulation results, and the village buildings have little influence on the positioning accuracy. A few cooperative nodes at the edge of the cooperative positioning network show large fluctuations due to the accumulation of ranging errors in the same direction. However, most of the cooperative nodes can improve their positioning accuracy through the IG-CP algorithm proposed in this paper. In the real-environment test, the algorithm processing module is implemented on the STM32 development board and achieves a real-time response, which shows that the IG-CP algorithm has low computational complexity.
Conclusions
The existing distributed cooperative positioning methods generally suffer from high computational complexity and slow convergence; thus, it is very difficult to apply cooperative positioning technology in practice. In this paper, a probability density model of the positioning error information is established using the navigation information of each cooperative node in a distributed cooperative network; positioning information fusion is then carried out by combining the ranging information between cooperative nodes on the plane of the geometric manifold. A simulation analysis is carried out in terms of the ranging error, node distribution, convergence speed, and communication overhead. The simulation results show that IG-CP can reduce the influence of the ranging error on the cooperative positioning node when the magnitude of the ranging error is of the same order as the positioning error of the cooperative node. Regarding the extreme distribution of a cooperative positioning network, the fusion of the position information probability model can avoid the accumulation of single-direction positioning errors and improve the positioning accuracy of cooperative positioning nodes at the edge. In terms of iterative computation, the iteration speed of IG-CP is more than 30% higher than that of existing cooperative positioning algorithms, and the communication cost is lower than that of the other cooperative positioning algorithms. Our proposed IG-CP algorithm has lower computational complexity and higher cooperative positioning precision, breaking through the limitations of existing cooperative positioning technologies that consider only position information fusion. It has good application value for the next generation of information technology, such as integrated space-based and ground-based networks, smart cities, driverless transport, and material distribution. | 2021-12-12T16:46:03.351Z | 2021-12-08T00:00:00.000 | {
"year": 2021,
"sha1": "90860c73aa6eb3c20d6b7020eaa2968a75a92660",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/13/24/4987/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f65027d4bb32502e087912e6be62ea4fc574f291",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
1016591 | pes2o/s2orc | v3-fos-license | Streaming GPU Singular Value and Dynamic Mode Decompositions
This work develops a parallelized algorithm to compute the dynamic mode decomposition (DMD) on a graphics processing unit using the streaming method of snapshots singular value decomposition. This allows the algorithm to operate efficiently on streaming data by avoiding redundant inner products as new data becomes available. In addition, it is possible to leverage the native compressed format of many data streams, such as HD video and computational physics codes that are represented sparsely in the Fourier domain, to massively reduce data transfer from CPU to GPU and to enable sparse matrix multiplications. Taken together, these algorithms facilitate real-time streaming DMD on high-dimensional data streams. We demonstrate the proposed method on numerous high-dimensional data sets ranging from video background modeling to scientific computing applications, where DMD is becoming a mainstay algorithm. The computational framework is developed as an open-source library written in C++ with CUDA, and the algorithms may be generalized to include other DMD advances, such as compressed sensing DMD, multiresolution DMD, or DMD with control. Keywords: Singular value decomposition, dynamic mode decomposition, streaming computations, graphics processing unit, video background modeling, scientific computing.
Introduction
Dynamic mode decomposition (DMD) was first introduced by Schmid in the fluids community [33] as a data-driven method to decompose complex fluid systems into spatiotemporal coherent structures, where each mode is associated with a particular frequency and rate of growth or decay. DMD has since been rigorously connected to nonlinear dynamical systems via Koopman operator theory [29,36], which provides an alternative infinite-dimensional linear representation of nonlinear dynamical systems [18,22,23]. DMD may also be thought of as an algorithm [36], which yields a fundamental matrix decomposition, combining many beneficial features of principal components analysis (PCA) or proper orthogonal decomposition (POD) and the fast Fourier transform (FFT). As such, DMD has gained significant attention in a wide variety of fields [20], including fluid dynamics [32,24,8,27]; neuroscience [7]; robotics [2]; epidemiology [28]; and video processing [14]. Despite the growing success of DMD, the underlying algorithm is based on an expensive singular value decomposition (SVD) on high-dimensional data. Moreover, in many applications, such as video processing and high-performance computations of transient physical processes, a windowed DMD computation must be performed repeatedly for streaming data. The focus of this paper is to develop a new streaming DMD algorithm, designed to eliminate redundant computations when repeatedly performing the DMD on a sequence of data.
Many algorithms have been proposed to increase the speed of the SVD and DMD algorithms. Sayadi and Schmid [30] proposed using a parallel QR factorization as the basis for a parallel SVD on tall-skinny matrices, as are common in scientific computing and video processing. Hemati et al. [16] developed a batch-process and POD compressed version of the DMD, in order to accommodate large data streams. Brand [6] created the incremental SVD, a method for updating an SVD to adjust for new data. Brunton et al. [10] and Erichson et al. [13,12] used random compression in order to reduce the size of the matrix DMD is performed on. In Tu et al. [36] it was shown that the computational bottleneck in the DMD, when computing the singular value decomposition using the method of snapshots [34], is the calculation of the inner product matrix on high-dimensional data. They further note that when computing DMD on a sequential times-series, many redundant inner products may be avoided from one timestep to the next. Thus by copying these shared elements rather than recalculating them, a massive speed-up may be realized. The present work synthesizes and builds on many of these ideas, providing an accelerated DMD computation using a streaming method of snapshots SVD, parallelized on a GPU, and designed to work directly on natively compressed representations of the data, such as JPEG image streams.
One notable application of DMD is for the separation of background and foreground information from a video or sequence of images [14,12]. In video applications, foreground/background modeling is a computationally expensive task, which only becomes more challenging with increased resolution [3,5,4]. Candès et al. [11] framed the problem of background subtraction as a separation of the input matrix into its sparse (foreground) and low-rank (background) components, using robust principal component analysis (RPCA). However, RPCA is expensive, as it continues to iterate until convergence on a final result, performing a singular value decomposition on each iteration. In contrast, DMD requires only one SVD, making it more efficient than RPCA [14] for the same task. Although video processing is not the primary application of DMD, it provides a challenging and intuitive set of benchmark problems to test our methods.
Contributions
In this paper, we develop a streaming DMD, designed to reuse computations when processing sequential inputs. The core of this algorithm is the streaming SVD based on the method of snapshots, which we compare to a standard SVD algorithm, demonstrating considerable speed up with negligible loss in accuracy. We also demonstrate a new, efficient way to calculate DMD mode amplitudes on POD coefficients, as opposed to the traditional high-dimensional least-squares fit. Additionally, we implement both CPU and GPU versions of streaming DMD and show that these algorithms are well suited to parallel processing. We compare the GPU implementation of the streaming DMD against a non-streaming CPU implementation, with negligible difference in outcome. Further, we design this architecture to work with the native compressed format of many data streams, including Fourier compressed image streams and the output from computational codes in the Fourier domain, to reduce data transfer from CPU to GPU and leverage sparse matrix multiplications in the streaming SVD. Many of the innovations developed for streaming, GPU, compressed DMD are also equally valid for the SVD, and may have significant impact on scientific computing. The C++ package for the streaming DMD and SVD algorithms is available under an open-source license.
This paper is organized as follows: First, we review background material, including the method of snapshots SVD and the DMD in Sec. 2. We also discuss the motivation for graphics processing unit (GPU) acceleration for our algorithms. Next, in Sec. 3, we explain our core innovations, including the streaming SVD/DMD, fast computation of DMD mode amplitudes, GPU acceleration, and leveraging compressed data formats. In Sec. 4 we show the significant performance improvements made by our streaming algorithms, and analyze their error against the standard DMD algorithm. Lastly, in Sec. 5, we summarize our findings and conclude with a discussion on applications and future work.
Background
In order to develop our streaming DMD algorithm, we first provide an overview of the standard DMD, the method of snapshots SVD and general purpose GPU computing. The backbone of our streaming versions of the SVD and DMD is the method of snapshots SVD.
In all of the analysis that follows, we consider a matrix of data snapshots X ∈ R n×m , where n is the number of measurements and m is the number of temporal snapshots. For example, if the columns of X represent image frames in a movie, then n is the number of pixels per frame and m is the number of frames in the movie. Similarly, we may consider a time-series of an evolving spatial field from a numerical simulation of a partial differential equation.
Method of Snapshots Singular Value Decomposition (SVD)
The method of snapshots is an alternative way to calculate the singular value decomposition of a matrix X, developed for matrices where one dimension is much larger than the other. This method was originally developed for data from fluid dynamics, in which the target matrices are significantly taller than they are wide [34], i.e., n ≫ m. In these applications, it is observed that the nonzero eigenvalues of X*X are the same as those of XX*, although the first matrix is size m × m while the second matrix is size n × n. It is computationally more efficient to compute the eigendecomposition of the smaller matrix X*X and then use this information to reconstruct the left and right singular vectors of X. This allows for significant reductions in computation time, although with a potential reduction in accuracy. The method of snapshots is summarized as follows: 1. Multiply X by its transpose, in whichever order creates the smallest output. We assume X is a tall-skinny matrix (i.e., n ≫ m). Then find the eigendecomposition X*XV = VΛ, where Λ are the eigenvalues and V the eigenvectors of X*X. The non-negative square roots of Λ are the singular values Σ of the original matrix X.
2. The left singular vectors U are calculated as U = XVΣ⁻¹. This creates an "economy" SVD, where U ∈ R^{n×m} is the same dimension as X, and Σ ∈ R^{m×m} and V ∈ R^{m×m} are both small square matrices. Figure 1(a) shows the singular values calculated with both the standard SVD and the method of snapshots, performed on the Yale faces dataset [1]. The singular values from both methods agree closely; thus, it is only when extreme accuracy is needed that the standard method for calculating the SVD should be used. We believe most users of the SVD would benefit from using the method of snapshots, due to its significantly better performance. The method of snapshots is a standard technique in the fluid dynamics community due to the high aspect ratio of the data matrix. Figure 1(b) compares image reconstructions on the Yale faces dataset [1] between the standard SVD and the method of snapshots SVD, as well as the absolute difference between the two. This further demonstrates how close the method of snapshots is to the standard SVD, regardless of the number of eigenvalues and eigenvectors used to reconstruct the images. In turn, the speed-up provided by the method of snapshots SVD can be carried over to the DMD.
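The full procedure fits in a few lines of NumPy (a sketch of the textbook method, not the authors' C++/CUDA implementation):

```python
import numpy as np

# Method of snapshots "economy" SVD for a tall-skinny X (n >> m).
# Assumes X has full column rank, so no singular value is near zero.
def snapshot_svd(X):
    lam, V = np.linalg.eigh(X.T @ X)     # small m x m eigenproblem
    idx = np.argsort(lam)[::-1]          # sort eigenpairs descending
    lam, V = np.clip(lam[idx], 0, None), V[:, idx]
    S = np.sqrt(lam)                     # singular values Sigma
    U = (X @ V) / S                      # U = X V Sigma^{-1}
    return U, S, V

X = np.random.randn(10000, 50)
U, S, V = snapshot_svd(X)
print(np.allclose((U * S) @ V.T, X, atol=1e-6))  # reconstruct X
```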
Dynamic Mode Decomposition
The DMD arose out of the fluid dynamics community to analyze the spatio-temporal coherent structures arising from fluids data [33]. It quickly gained popularity as strong connections were made between DMD and Koopman spectral analysis [29,36,20], which provides an infinite-dimensional linear representation of nonlinear dynamical systems [18,22,23].
DMD finds the dominant eigenvalues and eigenvectors of a best-fit linear dynamical system modeling the transition of a state x_k to the next time-step x_{k+1}; nonlinear model reduction is also possible with similar data [9]. In particular, given a matrix X of snapshots and a matrix X′ consisting of the same snapshots one time-step in the future, the DMD algorithm obtains the eigendecomposition of the best-fit linear operator A = X′X†, where † denotes the Moore-Penrose pseudoinverse [36].
However, since the state dimension n may be quite large (on the order of a million for HD video, tens of millions for 4K video, and even larger for scientific computing applications), the matrix A is too large to directly analyze on simple computational architectures. Instead, it is possible to analyze a smaller matrix Ã obtained via projection onto the left singular vectors in U: Ã = U*AU = U*X′VΣ⁻¹. Much like the method of snapshots, the matrix Ã is size m × m, and it has the same eigenvalues as the high-dimensional matrix A, as shown in [36]. Taking the eigendecomposition ÃW = WΛ, it is then possible to obtain eigenvectors of the original high-dimensional matrix A via Φ = X′VΣ⁻¹W. The columns of Φ are called dynamic modes of X; they are spatio-temporal modes, each with a single temporal signature given by the corresponding eigenvalue λ in Λ.
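In NumPy, the exact DMD described above is a short routine (again a sketch, not the authors' library code):

```python
import numpy as np

# Exact DMD on snapshot pairs X, Xp (Xp = X'), with optional rank truncation.
def dmd(X, Xp, r=None):
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:
        U, S, Vh = U[:, :r], S[:r], Vh[:r]
    Atilde = U.conj().T @ Xp @ Vh.conj().T / S   # projected operator A~
    lam, W = np.linalg.eig(Atilde)               # DMD eigenvalues
    Phi = Xp @ Vh.conj().T / S @ W               # exact DMD modes
    return lam, Phi

# toy usage: a linear system x_{k+1} = A x_k recovers the eigenvalues of A
A = np.array([[0.9, 0.1], [0.0, 0.8]])
X = np.empty((2, 20)); X[:, 0] = [1.0, 1.0]
for k in range(19):
    X[:, k + 1] = A @ X[:, k]
lam, Phi = dmd(X[:, :-1], X[:, 1:], r=2)
print(np.sort(lam.real))                         # ~ [0.8, 0.9]
```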
The large number of independent inner products performed in calculating the SVD and DMD makes them a perfect fit for computation on a graphics processing unit (GPU), whose many cores can be leveraged.
DMD for Video Background Subtraction
Grosek and Kutz [14] show that the DMD can be effectively leveraged to decompose a video into foreground and background components. This provides a similar decomposition as robust principal component analysis (RPCA) [11], but at a fraction of the cost, as RPCA involves an iterative procedure requiring dozens of SVD computations. In this framework, the video X is decomposed into its constituent low-rank and sparse components, where the low-rank component contains a low-dimensional representation of the system under observation and the sparse component the outliers, noise, and/or corruption in the measured input. This is represented as X = L + S, where L is the low-rank component (background) and S is the sparse component (foreground). Because each DMD mode has a corresponding frequency given by the DMD eigenvalue λ, the discrete-time eigenvalues that are nearly equal to 1 correspond to modes that do not change from frame to frame, i.e., the background modes. Thus, DMD can be used to split the matrix X into two components, corresponding to slowly varying modes with eigenvalues λ_p ≈ 1, and those that have faster dynamics, where t = [1, 2, ⋯, m] is a vector of time indices. Refer to Erichson et al. for the state-of-the-art DMD implementation of background modeling [12].
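A sketch of this split, given the outputs of a DMD routine; the tolerance tol for deciding |λ| ≈ 1 is our own choice, and b denotes the mode amplitudes discussed in Sec. 3.3.

```python
import numpy as np

# Sketch of DMD background subtraction: modes with |lambda| ~ 1 reconstruct
# the slowly varying background L; the foreground is the residual S = X - L.
def dmd_background(X, Phi, lam, b, tol=1e-2):
    m = X.shape[1]
    t = np.arange(m)                              # time indices
    bg = np.abs(np.abs(lam) - 1) < tol            # background modes
    L = (Phi[:, bg] * b[bg]) @ np.power.outer(lam[bg], t)
    S = X - L.real                                # sparse foreground
    return L.real, S
```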
General Purpose GPU Computing
General purpose GPU (GPGPU) computing has proven effective for accelerating many linear algebra problems, as it is able to perform many operations in parallel. Creating an efficient algorithm for use on a modern GPU requires a very different approach than would be used on a central processing unit (CPU). A typical GPU is made up of a number of sub-processors, each able to run multiple threads concurrently. This design allows a GPU to achieve a much higher throughput than a CPU [25], if the algorithm is written with the GPU in mind. This Single-instruction, Multiple Data (SIMD) style of code works best when there is a large amount of input data needing to be independently processed. Matrix multiplication is a common example, however one could expect many large-scale math problems to suit the GPU architecture well. NVIDIA [25] notes that minimizing host (CPU) to device (GPU) memory transfers is key to maximizing performance. This lends itself naturally, then, to streaming algorithms, where only the updated data need be transferred on/off the device.
Proposed Streaming DMD Algorithm
In many applications, data is continually acquired from sensors in a streaming fashion; new data is appended as columns to the right of the matrix X, while old columns may be removed from the left of X if necessary. In streaming applications, such as online video processing or windowed DMD on transient simulations, the cost of repeated DMD and SVD calculations may be prohibitively expensive.
Here, we build a suite of complementary techniques to accelerate repeated SVD and DMD computations for streaming data. The core of the streaming DMD algorithm is the streaming method of snapshots SVD, whereby redundant inner product computations in X*X are reused from one timestep to the next, reducing the SVD computational complexity from O(nm²) to O(nm). The streaming SVD and DMD are discussed in Secs. 3.1 and 3.2, respectively. When it is necessary to compute the mode amplitudes in b, we introduce an efficient computation in Sec. 3.3. All of the above methods are readily parallelized, and we discuss GPGPU implementation in Sec. 3.4. Once GPU-parallelized algorithms have been implemented, data transfer from the CPU to the GPU becomes the main computational bottleneck. However, in many applications it is possible to leverage the native sparse representation of the data (e.g., image sequences are stored in compressed Fourier or wavelet representations) to significantly reduce data transfer and promote sparse matrix operations, further reducing the computational burden. This is discussed in Sec. 3.5.
Streaming Method of Snapshots SVD
In the streaming context, let X be the current data matrix and X′ be the next matrix in the sequence. The inner products in the overlapping (m−1) × (m−1) block of X*X may be reused in X′*X′; since X′*X′ is symmetric, only the last row and column need to be recalculated. Removing the redundant inner product calculations reduces the computational complexity from O(nm²) to O(nm). As this is the most time-consuming part of the method of snapshots [36], a large performance gain is realized. This streaming method of snapshots facilitates a streaming version of the DMD.
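The update is a block copy plus one new column of inner products; a sketch for real-valued data:

```python
import numpy as np

# Streaming update of the Gram matrix: when the oldest snapshot is dropped
# and a new column is appended, only the m inner products against the new
# column are computed (O(nm)); the shared (m-1) x (m-1) block is copied.
def update_gram(G, X_new):
    m = G.shape[0]
    G_new = np.empty_like(G)
    G_new[:m - 1, :m - 1] = G[1:, 1:]    # reuse overlapping inner products
    col = X_new.T @ X_new[:, -1]         # new column of inner products
    G_new[:, -1] = col
    G_new[-1, :] = col                   # symmetric Gram matrix
    return G_new

X = np.random.randn(100000, 60)
G = X.T @ X
X_new = np.hstack([X[:, 1:], np.random.randn(100000, 1)])
print(np.allclose(update_gram(G, X_new), X_new.T @ X_new))
```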
Streaming Dynamic Mode Decomposition
The streaming DMD relies on the streaming SVD in order to process data in sequence, but is also able to realize speed-ups from reusing intermediate steps from the SVD, and by only returning the last column of the sparse matrix S in the case of background subtraction. Figure 3 shows an outline of how the streaming DMD is set up in order to perform background subtraction. Background subtraction with the streaming DMD is performed by sliding the DMD forward by as many frames as the user wants to process in a given iteration. However, the same process would also be used if only the DMD modes and their frequencies are desired.
Computing Mode Amplitudes Efficiently
Computing the vector b of DMD mode amplitudes has been investigated in the past [17,20]. The simplest approach computes a best-fit b using the least-squares approximation b = Φ†x₁. Instead, we use the following formulation directly on POD coefficients using Eqs. (7) and (9): b = (WΛ)⁻¹α₁, where α₁ is the vector of POD coefficients for x₁. This is significantly more efficient than the high-dimensional least-squares algorithm. Additionally, only the row corresponding to the smallest absolute DMD eigenvalue need be calculated when streaming. The benefit of this faster calculation of the DMD mode amplitudes is even more pronounced on a GPU, requiring fewer synchronizations with the device and reducing the amount of data transfer.
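A sketch of both computations; the small r × r solve relies on the relation α₁ = WΛb, which follows from projecting x₁ = Φb onto U (since U*Φ = WΛ for exact DMD modes). The function names are ours.

```python
import numpy as np

# Expensive full-state least-squares fit of the amplitudes: x1 ~ Phi b.
def amplitudes_lstsq(Phi, x1):
    return np.linalg.lstsq(Phi, x1, rcond=None)[0]   # O(n r^2)

# Cheap r x r solve on POD coefficients alpha1 = U* x1, using alpha1 = W Lam b.
def amplitudes_pod(W, lam, alpha1):
    return np.linalg.solve(W * lam, alpha1)          # W * lam == W @ diag(lam)
```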
GPU Implementation
Algorithms 1, 2 and 3 show how the calculations for the SVD, DMD and background subtraction are performed. U is not explicitly calculated so as to reduce space and computational complexity of the DMD. While it is possible for this to cause issues with numerical accuracy, we found the results of these algorithms to be negligibly less accurate. Additionally, we used single-precision to further reduce memory usage and increase performance. Our code relies on OpenBLAS [38] for LAPACK and BLAS functions on the CPU, and MAGMA [35] for GPU LAPACK and BLAS. We also found that writing algorithms in MAGMA improved performance over those written in OpenCL.
Implementation on Sparse Data
After parallelization on the GPU, data transfer from the CPU to the GPU and back becomes a bottleneck. We may naively transfer data in the ambient signal space, such as pixel space for images or a spatial domain for high performance computations. However, in both cases, these signals are typically stored or computed in a transformed basis, such as Fourier or wavelets. Moreover, these transform bases allow the data to be massively compressed, often by orders of magnitude, which would lead to a significant savings in data transfer. Recent work combining compressed sensing and DMD [10] showed that both the SVD and DMD are invariant to unitary transformation, such as the fast Fourier transform (FFT). Thus, it is possible to directly transfer FFT compressed data to the GPU, perform DMD on the Fourier representation, and transfer the compressed DMD from the GPU back for storage or further processing. There is an added benefit that many of the core steps in the DMD algorithm will be performed on sparse data matrices, enabling further efficiency gains. This procedure is shown schematically in Fig. 4. This is not explicitly implemented in our code, but is included because of the potentially important role in reducing transfer from CPU to GPU in practical implementations. Note that compressed and randomized [15] architectures have recently been used to great advantage in scientific computing applications, for example in [31,21].
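The invariance is easy to verify numerically with a unitary (orthonormal) FFT:

```python
import numpy as np

# Singular values are invariant under unitary transforms: an orthonormal DFT
# of each snapshot (norm="ortho") leaves Sigma, and hence the projected DMD
# spectrum, unchanged -- so the decomposition can run on Fourier-compressed
# data transferred to the GPU.
X = np.random.randn(4096, 30)
Xf = np.fft.fft(X, axis=0, norm="ortho")     # unitary FFT of the columns
s1 = np.linalg.svd(X, compute_uv=False)
s2 = np.linalg.svd(Xf, compute_uv=False)
print(np.allclose(s1, s2))
```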
Results
We now present the performance and accuracy comparisons of our streaming SVD, DMD and background subtraction algorithms. The algorithms are demonstrated on high-resolution video data because they are publicly available and reproducible. However, the streaming DMD method is general to any high-dimensional data, such as data generated by high-performance computing, internet of things, LIDAR sensors, etc. We use the PEViD "Walking Day Indoor 4" video [19] to test performance and scaling, and to subjectively compare the results of background subtraction. The high resolution of this sequence allows us to explore the ability of our algorithm to scale with varying resolution. Additionally, we use the BMC "Video 003" to make a quantitative analysis of the accuracy of our streaming GPU DMD for background subtraction in terms of standard metrics in computer vision.
Performance Benchmarks
We benchmark the various algorithms on the PEViD "Walking Day Indoor 4" video [19], converted to greyscale and resized to common 16:9 resolutions. We choose not to include data transfers between CPU and GPU in our benchmarks, as they do not reflect the computational differences between CPU and GPU code; they are only representative of a hardware limitation of current computers. However, for practical implementations, we discuss the potential to significantly reduce data transfer using compressed data formats (as shown in Fig. 4). We measure the time taken to update the SVD, DMD or DMD background subtraction from steady state, where the system has already been initialized. For both implementations, the initial time would be equal to the elapsed time taken by their respective non-streaming versions to update. Figure 5 shows comparisons of the CPU, GPU, streaming CPU (SCPU) and streaming GPU (SGPU) implementations for the SVD, DMD and DMD background subtraction. Our first set of benchmarks holds the width constant at m = 90 frames (the number of columns in X) and varies the resolution from n = 640 × 360 to n = 2560 × 1440 (the number of rows in X). In our second benchmark set, the resolution is kept constant at n = 1920 × 1080 and the width is varied from m = 20 to m = 120 in steps of 20. This test shows streaming to significantly benefit the CPU implementation, putting it on par with the GPU. In real-world applications this is promising, as it could reduce the need for a dedicated GPU while still netting a large performance improvement. Further, the streaming GPU is significantly faster than the other three versions and has a much smaller slope, suggesting better scaling for even larger input dimensions (i.e., resolution n and number of frames m). This trend is maintained in the DMD and background subtraction benchmarks as well. Looking at the n = 2560 × 1440 resolution by m = 90 frames test on the SVD, the CPU implementation took approximately 1.15 seconds, while the streaming GPU took only 0.06 seconds, for a speed-up of nearly 20x. When the resolution is kept constant, we see that the scaling of both streaming algorithms is more favorable than that of the non-streaming algorithms. This is as expected, since the cost to update X * X is on the order of O(n). This shows that the streaming algorithm scales more favorably with regard to width than the standard SVD and DMD, as well as for the DMD for background subtraction. Similar to the constant-width tests above, in the constant-height test a speed-up of around 25x is realized for the SVD at an m = 120 frame width, with the CPU taking 0.77 seconds and the streaming GPU taking 0.03 seconds.

Table 2: Results created using the BMC Evaluation Wizard on the results of streaming GPU DMD for background subtraction on BMC "Video 003" [37]. A width of 60 frames was used for streaming. The foreground mask was generated by setting all values less than 0.2 to 0 and any greater to 1 after subtracting the low-rank matrix from the original input.
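The O(n) update cost discussed above is what a streaming method-of-snapshots implementation can exploit. The following NumPy sketch (illustrative only — not the paper's C++/CUDA code, and all names are ours) shows the bookkeeping: when the window slides by one frame, the small m × m Gram matrix is shifted and only one new row/column of inner products is computed, after which the SVD is recovered from an m × m eigenproblem.

import numpy as np

def shift_gram(K, X, x_new):
    # Slide the window: evict the oldest snapshot, append the newest, and
    # reuse the (m-1) x (m-1) block of old inner products in K = X^T X.
    m = K.shape[0]
    X_next = np.concatenate([X[:, 1:], x_new[:, None]], axis=1)
    K_next = np.empty_like(K)
    K_next[:m - 1, :m - 1] = K[1:, 1:]      # carried forward, no recompute
    cross = X_next.T @ x_new                # O(n m) work for the new column
    K_next[:, m - 1] = cross
    K_next[m - 1, :] = cross
    return K_next, X_next

def snapshots_svd(K, X):
    # Method of snapshots: eigendecompose the small m x m matrix K = X^T X
    # and lift its eigenvectors to the left singular vectors of X.
    lam, W = np.linalg.eigh(K)
    lam, W = lam[::-1], W[:, ::-1]          # descending order
    s = np.sqrt(np.clip(lam, 0.0, None))
    U = X @ (W / np.maximum(s, 1e-12))      # U = X W S^{-1}
    return U, s, W.T                        # X ~= U diag(s) W^T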
Error Analysis
It is important to verify that the significant speed-up of the streaming GPU implementations does not come with an unacceptable loss in accuracy. Table 1 shows comparisons of our streaming SVD and DMD output against Python implementations of the standard algorithms. In both cases the relative error is quite small, even for the largest input sizes. The DMD comparison was made on the product of the column of Φ corresponding to the smallest absolute value in Λ. The relative error is somewhat larger than that of the SVD, in part because of accumulated floating-point errors from the GPU (e.g., fused multiply-add instructions). However, in many applications of DMD, such as video background modeling, this constitutes an acceptable error for the considerable speed-up, as downstream processing algorithms do not require machine precision.
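For reference, a norm-based relative error of the kind presumably reported in Table 1 (an assumption about the exact metric used) can be computed as:

import numpy as np

def rel_error(a, b):
    # Relative error of a streaming result a against a reference result b.
    return np.linalg.norm(a - b) / np.linalg.norm(b)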
To show that the streaming GPU background subtraction is sufficiently accurate, we benchmark on the "Video 003" from the Background Modeling Challenge dataset [37]. The results are shown in Table 2, and are consistent with other versions of the DMD [12]. We also provide a subjective comparison of background subtraction between our streaming GPU and the standard algorithm in Figure 6. The bottom row of Figure 6 shows, from left to right, a close-up of the original frame, the CPU foreground, the streaming GPU foreground and the difference between the two foregrounds. The difference is small, and has little impact on the thresholded results.
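As a concrete illustration of the evaluated pipeline, the following batch (non-streaming) NumPy sketch performs exact-DMD background subtraction with the 0.2 threshold described in the Table 2 caption. It is a sketch under stated assumptions, not the paper's GPU implementation; all names are illustrative.

import numpy as np

def dmd_foreground(X, thresh=0.2):
    # X: n x m greyscale video, one flattened frame per column, values in [0, 1].
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    A_tilde = (U.conj().T @ X2 @ Vt.conj().T) / s        # projected operator
    lam, W = np.linalg.eig(A_tilde)
    Phi = ((X2 @ Vt.conj().T) / s) @ W                   # exact DMD modes
    bg = np.argmin(np.abs(lam - 1.0))                    # slowest mode: eigenvalue near 1
    b = np.linalg.lstsq(Phi, X[:, 0].astype(complex), rcond=None)[0]
    dynamics = b[bg] * lam[bg] ** np.arange(X.shape[1])  # ~constant since lam ~ 1
    low_rank = np.outer(Phi[:, bg], dynamics).real       # rank-1 background model
    return (np.abs(X - low_rank) >= thresh).astype(np.uint8)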
Discussion and Outlook
We have developed and analyzed streaming singular value and dynamic mode decomposition algorithms and their GPU implementations. In addition, we show performance benefits for streaming video background subtraction. In all cases, a large number of calculations can be carried forward from frame to frame by exploiting the structure of the method-of-snapshots SVD. This allows both the SVD and DMD to process large data streams in real time, whether for video or otherwise. We have evaluated the proposed algorithms on multiple datasets, demonstrating significantly improved computational performance for stream processing with negligible loss in accuracy. Our C++ and CUDA implementation will be made available under an open-source license.
The results of our performance comparison suggest that streaming algorithms are favorable regardless of whether a GPU is available on a target platform. Additionally, significant speed-ups are possible at smaller data sizes once faster transfers to and from a GPU are available. While not suitable for extreme-precision applications, we believe our streaming SVD and DMD algorithms provide a valuable improvement for many applications thanks to their computational performance. The small loss in accuracy was shown to be negligible for video background modeling applications.
There are a number of interesting future directions that may arise from this work. One could modify the streaming algorithms shown here to support dynamic updating with more than one column at a time; when processing falls behind the input rate, the number of new columns processed per update may be increased to catch up, and vice versa. This dynamic streaming update could help to recover from a build-up of columns waiting to be processed in a long-running instance of the streaming SVD or DMD. A streaming input build-up could also be used instead of waiting for enough initial inputs for the first SVD or DMD. This would instead pre-allocate the maximum matrix size, but start the algorithm with only 2 columns; until the matrix is filled, new columns would be appended without erasing the oldest (a sketch of this buffering scheme follows this paragraph). In using the streaming DMD for background subtraction, the algorithm could be modified to use some small subset of background DMD modes rather than just the single slowest-changing mode, as suggested in Erichson et al. [12]. This would incur a performance penalty, but could improve results as well. Lastly, our method could be joined with other modified DMD algorithms, such as compressed or multi-resolution DMD, in order to improve performance.
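A minimal sketch of the warm-start buffering idea just described, with illustrative names: the maximum width is pre-allocated, streaming begins with only a couple of columns, and eviction of the oldest column starts only once the buffer is full.

import numpy as np

class SnapshotBuffer:
    def __init__(self, n, m_max):
        self.X = np.zeros((n, m_max))
        self.count = 0                     # columns filled so far

    def push(self, x):
        if self.count < self.X.shape[1]:
            self.X[:, self.count] = x      # warm-up: append, evict nothing
            self.count += 1
        else:
            self.X[:, :-1] = self.X[:, 1:] # steady state: evict the oldest column
            self.X[:, -1] = x

    def window(self):
        return self.X[:, :self.count]      # snapshots available to the SVD/DMD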
The emergence of the big data era across the physical, biological, social and engineering sciences has severely challenged our ability to extract meaningful features from data in real time. Critical technologies such as LIDAR, 4K video streams, computer vision, high-fidelity numerical simulations, sensor networks, brain-machine interfaces, the internet of things, and augmented reality will all depend on scalable algorithms that can produce meaningful decompositions of data in real time. Failure to process data streams in real time results in a data mortgage [26], whereby the cost of collection and storage limits the resources available to analyze the data and extract features. We are already seeing this across the sciences, where massive data sets are collected and stored yet remain unmined for informative features and/or critical information for automated decision-making processes. The streaming technique presented here provides a mathematical architecture for real-time processing of data and extraction of features. The method is adaptive, efficient, parallelizable and scalable, potentially enabling a host of applications currently beyond the capabilities of standard techniques.
"year": 2016,
"sha1": "78b9b9ce3e3d448e14de5d2918e9cd65263432a4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4348eda5033c05c32543d63e7732413d97d4c8e9",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
The Impact of COVID-19 on the China Online Video Industry
This paper analyzes the impact of COVID-19 on the Chinese online video industry. The hypothesis is that, since people had more time to spend on leisure during the quarantine, the online video industry should have been positively affected. Using methods such as collecting data from consulting firms' research reports, analyzing news events, and summarizing evidence from securities firms' capital market research reports, I conclude that the pandemic did bring a short-term increase in people's attention to the online video industry and opened up new opportunities for creative business models, yet also posed potential threats should the virus strike again in the future.
Introduction
The sudden COVID-19 outbreak at the end of 2019 caught everyone by surprise, and many industries suffered severe challenges because of the pandemic. While there are numerous articles investigating the impact of the pandemic on the most seriously damaged industries, such as travel and restaurants, little attention has been paid to industries that might have been positively affected by the virus. Because residents were out of work during the quarantine and had more time to spend on leisure, the entertainment industry, especially the online video sector, should have been positively affected to some degree. This paper therefore examines the impact of COVID-19 on the Chinese online video industry in detail.
Industry history
The history of the Chinese online video industry can be traced back to 2005 and 2006, when Youku and Tudou were first established, respectively. As two of the earliest long video platforms in China, their main business model at the beginning was video uploading and sharing by individual users. These corporations encouraged people to record moments of their lives with mobile phones and provided a platform for them to publish their work. Similar to YouTube, these platforms mainly consisted of user generated content (UGC), which largely relied on sharing by individual users.
In 2008, as the anti-piracy policy was released, copyrights for movies and TV dramas became the main source of revenue, marking the second stage in the history of the online video industry. The major content of operation for video platforms shifted to professionally generated content (PGC), produced by professional video makers under copyright protection. Long video platforms invested heavily in ownership and broadcasting rights, promoting the supply of long videos and giving rise to more varied long video formats such as reality shows.
From 2016 to now, the industry has been in its third stage, characterized by more diversified video forms. With the progress of technology and the increasing number of Internet users, people turned to more convenient and timely forms of video sharing, which stimulated the rise of short video platforms and live-streaming platforms. The video industry came to include both user generated content and professionally generated content.
Industry size and sector breakdown
Data sourced from Statista, iResearch

The graph shows the total online entertainment market size and the market share held by each subsector. Long video, short video, and live broadcasting still hold over 80% of the online entertainment market. Over the past two years, long video and live broadcasting gradually lost market share to short video. In Q1 2020, the short video subsector held 43.6% of the market, replacing long video as the largest subsector in the online entertainment industry. With reduced video length and richer content, short video meets the preferences of modern audiences. Even during the epidemic, when people had long stretches of spare time to watch long videos, short video still dominated. A shift in consumer preference is evident even when the time constraint on long videos was the least of concerns during quarantine.
Data sourced from iResearch
Chinese online video industry revenue has increased exponentially over the last ten years, though at a decreasing rate in the most recent years. The lower growth rate after 2017 signaled that the online video market had entered a mature stage. Although the whole industry has less room to grow, the yearly increment in RMB is still noticeable.
Data sourced from iResearch
Advertising and membership services are the two most important sources of Chinese online video revenue. However, these two sources have switched positions in importance over the past two years. The decrease in the proportion of advertising revenue and the increase in that of membership revenue are due to the adoption of the payment business model and growing public payment awareness, which greatly influenced the income structure of the online video industry. During the quarantine, residents were more willing to pay for memberships to watch videos at home.
Policies
Overall, the government supports the video industry as long as the content is healthy, copyrighted, and regulated. Government action in 2010 against internet piracy and copyright infringement revealed the importance of copyrighted content, further promoting the fight over copyrights between platforms. In the same year, the Rules for Cooperation and Protection of Internet Film and Television Copyright were released and signed by multiple internet corporations and film production companies. The war on piracy has always been an important aspect of government policy in the video industry.
During the epidemic, due to the quarantine, public gatherings were strictly prohibited. Most offline entertainment venues were shut down, resulting in great losses for the entertainment industry, especially the movie industry. For long video production, some video shooting and reality show recording had to be stopped, which delayed online release dates to a certain degree. Facing these difficulties, many local governments published policies to alleviate the problems of the video industry, mainly by providing subsidies and relief. In Beijing, the government issued a policy granting a special subsidy to movie projects that were affected by the epidemic and increased the subsidies for movie theatres. In addition, the required audit time for online reality shows was shortened. In short, the subsidies during the epidemic showed that policy makers are determined to support and promote the industry in times of business downturn.
Supply chain
The supply chain of the entertainment industry includes content production, distribution channels, and users. For the long video industry, the first step in the supply chain is content production. For PGC, content production is often done in professional studios with directors, actors, and other crew members. The content production teams then provide the copyrighted video content to distribution platforms to sell their work. After the distribution channels have paid the content production teams for broadcasting rights, the platforms broadcast the videos to their users and charge a fee for watching.
Threat of new entrants
The industry faces a medium threat of new entrants. The barrier to entry for the long video platform industry is low because, technically, it is not that hard to build a new website, and it requires little accumulated experience to run a video website. Consumer behavior also shows that users do not have obvious brand loyalty to any particular platform. However, despite the low barrier to entry and low brand loyalty, it is still hard for new entrants to compete with the existing big platforms, since the leaders of the long video industry possess a large proportion of the resources and users. Overall, entering the industry is relatively easy, but it is basically impossible for new entrants to threaten the position of the existing major corporations.
However, during the epidemic, the quarantine prevented professional content producers from working in their studios. They were therefore left with no choice but to film at home and upload more short videos instead. Nevertheless, on the demand side, the increased user time during the epidemic provided incentives for new entrants.
Power of buyers
The long video platform industry has relatively high buyer power. Users can choose whether to buy VIP services and how much money to spend on them. Buyers tend to be less price-sensitive since the price of paid services, such as the ad-free version and premium content, is generally low. Besides, the differences in price and services between competitors are small, so buyers hold the major power of choice. Still, a single buyer's influence on the whole industry is low because there are numerous existing buyers.
During the past two years, the growth rate in the number of buyers decreased, meaning that the market has reached a relatively mature stage. Although the number of buyers increased at a decreasing rate during the quarantine, the average daily use time of each device showed a significant increase, indicating that buyers have the power to choose. The year-on-year growth rates for January, February, and March 2020 were 12.8%, 28.0%, and 24.8%, respectively.1 Since recreational videos are not strictly a daily necessity, consumers are relatively free to choose whether to spend time and money on them. The data during the pandemic showed how elastic the demand for video consumption is.
Data sourced from iResearch
Data sourced from iResearch
Substitute products
There are numerous substitute products for the long video platform, and the rise of various entertainment industries presents a considerable threat to the long video industry. As a form of online entertainment, the long video platform industry faces emerging online competitors: short video platforms, live-streaming, online games, online reading, etc. At the same time, some offline entertainment, such as movies, also affects the long video industry. Most of these substitutes, especially the online ones, are inexpensive, which makes it easy for buyers to switch to other forms of entertainment. Besides, each buyer is very likely to engage in multiple forms of entertainment, which further increases the threat of substitute products.
However, during the epidemic, substitute options decreased. Because of the strict quarantine and prevention measures, almost all offline activities were shut down. People had to stay at home, so they had a lot of free time to spend watching long videos.
Power of suppliers
In the long video platform industry, there are numerous suppliers providing videos for various platforms. The power of each individual supplier is low, since a large video platform usually has several video sources.
During the epidemic, the video shooting process was greatly limited by the isolation measures, which affected the supply of video to a certain extent. However, since it takes an enormous amount of time to produce a video, there is a time lag between when a film is produced and when it is shown to the public. Difficulties in shooting had a relatively large influence only on those films that were still in production. Since the suppliers might have had unaired videos, and the platforms could also rerun old videos, the pandemic had a limited influence on the whole video supply chain. In addition, some ongoing programs were able to continue shooting without an audience or offline gatherings during the epidemic, which guaranteed the supply of long videos.
Existing competitors
For the long video industry, although there are numerous competitors, diversity is low since most long video platforms provide similar services. Consequently, users' brand loyalty is low because different choices do not much affect the user experience. Consumers are likely to use several different platforms and select among them based on their current video content.
Industry concentration is medium and is characterized by five leading firms taking up most of the market share. According to Statista, the total industry market size in 2019 was RMB102b. Except for Tencent Video, whose total 2019 revenue is not disclosed, the total 2019 revenue of the other four major players in the industry was RMB74.8b. If we assume that Tencent Video is approximately the same size as iQIYI or Youku (iQIYI's 2019 revenue was RMB29b, as discussed below), the top five players' total revenue comes to roughly RMB100b, comparable with the total industry size that Statista reported. As a result, we can conclude that the industry's market share is mainly distributed among the top five major players.
Financials

Revenue
Data sourced from iQIYI 2019 annual report

According to the iQIYI 2019 annual report, total annual revenue continued to increase from RMB5b in 2015 to RMB29b in 2019, growing nearly sixfold in five years. Similar to the revenue trend of the entertainment industry as a whole, iQIYI's revenue showed an increasing pattern at a decreasing growth rate.
Revenue breakdown
Data sourced from iQIYI 2019 annual report

Revenue from membership services has been the main source of revenue for the past two years. Before 2018, revenue from advertisers was the main source. The change in revenue distribution is consistent with that of the entire industry.
Net loss
Data sourced from iQIYI 2019 annual report

The annual net loss of iQIYI is relatively high, especially since 2018. From 2017 to 2018, the annual net loss nearly tripled, and it broke through RMB10b in 2019.
Strengths
In terms of content, iQIYI is one of the first firms to produce highly popular, original trend-setting content. This content attracted many users and brought a strong social response. For example, iQIYI's self-produced TV show The Rap of China in 2017 generated over 3.0 billion video views.2 In addition to original content, iQIYI also cooperates with premium content suppliers such as the six big Hollywood studios, Netflix, etc. These high-quality production companies are important resources and attract more users with their exclusive content. Besides, iQIYI also maintains relationships with smaller long-tail content providers, which further enriches its video content.
In its operation, iQIYI was one of the first firms to adopt the membership payment business model, an industry-changing event for the long video platform industry. The early adoption of payment platforms developed consumers' payment awareness, making more users accept and pay for membership services. Since 2018, revenue derived from membership services has become the largest proportion of total revenue. Besides, the synergies with Baidu offer iQIYI substantial technical support, including AI technology and cloud services. This technical support helps iQIYI to better improve the user experience and find target audiences. For example, for The Rap of China, iQIYI used AI technology to select suitable celebrities and study users' preferences.
During the epidemic, iQIYI produced another highly popular original reality show, Youth with You. According to iQIYI, the premiere of this reality show broke 9,000 in "heat value", a weighted metric reflecting content popularity used specifically for iQIYI content, and peaked at 9,210, setting a new record for iQIYI original content. During the broadcast, there were 468 related Hot Searches on Weibo, and the hottest topic reached a peak of 17.37 million discussions.3 iQIYI became one of the top corporations in the long video industry thanks to its success in original content production, its advanced business model, and technical help from Baidu.
Weaknesses
The biggest weakness of iQIYI is its huge losses. Though iQIYI has an important position in the entertainment industry, its influence is confined to video broadcasting, with limited sources of revenue. Beyond its strength in the long video industry, its other fields, such as games, IP value-added services, e-commerce, live broadcasting, and celebrity brokerage, were neither profitable nor influential.
According to the iQIYI 2019 annual report, the company suffered continued losses, growing from RMB3b in 2017 to over RMB10b in 2019, which greatly exceeds the losses that Baidu can cover. It would become a financial problem if iQIYI continued to lose such a great amount of money, since Baidu itself also posted an annual net loss of RMB2b in 2019.4
Tencent Video
As a part of Tencent, Tencent Video is not an independent publicly listed company but a division 100% owned by Tencent.
Its key advantage is that it has strong resources and financial support from one of the biggest companies in China, Tencent. The comprehensive Tencent industrial chain involves almost every kind of internet product. It is very convenient for Tencent Video to interact with other types of platforms under Tencent, which can expand its influence and enlarge its accessibility. According to Tencent's 2019 annual report, although the revenue from Tencent Video alone is unknown, its net loss stood at RMB3b, significantly lower than iQIYI's. The losses from Tencent Video seem trivial compared to the financial situation of the whole Tencent group. In addition, Tencent Video also has the financial power to buy large numbers of copyrighted videos and produce original content.
However, Tencent Video also has some problems with its brand image and user experience. Recently, as Tencent tried to expand its entertainment supply chain by investing in its own videos and celebrities, the firm tried to find a magic formula for producing celebrities on an assembly line. This risky move heavily affected its brand image when one of its celebrities drew significant negative publicity.
2 Data sourced from iQIYI 2019 annual report
3 Data sourced from Funji.com
4 Data sourced from iQIYI 2019 annual report and Baidu 2019 annual report
Youku
As one of the earliest long video platforms, Youku acquired its earliest competitor, Tudou, in 2012. After being taken over by Alibaba Group in 2015, Youku was delisted in 2016.
Similar to Tencent Video, Youku receives strong financial backing from Alibaba. According to Alibaba's 2020 annual report, net income for the fiscal year ended March 31, 2020 was RMB140b, showing Alibaba's financial ability to cover Youku's losses. Besides, since Youku entered the industry very early, its existing user base is relatively large.
However, although Youku once held half of the market share and had its own strengths, it is gradually falling behind iQIYI and Tencent Video and losing its advantages. Unlike the huge Tencent industry chain, the support Youku can get from Alibaba is highly limited: since Alibaba specializes in e-commerce, there are limited resources beyond financial support that Alibaba can give Youku. Besides, when other video platforms started to buy large amounts of copyrighted content, Youku insisted on its UGC approach. Once Youku had been defeated by newer UGC platforms such as Bilibili, it had already fallen behind its competitors.
Financials

Revenue
Data sourced from MGTV 2019 annual report

According to Mango Excellent Media's 2019 annual report, total revenue maintained a steady increase from 2017 to 2019. While the growth rate in 2018 was in line with the industry average, the 2019 growth rate was double the industry average, signaling strong growth.
Revenue breakdown
Data sourced from MGTV 2019 annual report

The revenue sources were relatively evenly distributed in 2018. New media platform operation, represented by MGTV, contributed the most. Unlike other video platforms, Mango's sources of revenue are diversified, since its revenue from content production and media retail is significant.
Net profit
Data sourced from MGTV 2019 annual report

Although the net profit is insignificant compared to the large corporations that provide financial support, such as Tencent and Alibaba, Mango Excellent Media manages to remain profitable and saw slight growth in net profit from 2018 to 2019.
Strengths
Mango Excellent Media was founded by Golden Eagle Broadcasting System and Hunan TV. As the first and best-performing state-controlled video platform on the domestic A-share market, Mango Excellent Media stands out from the other video platforms in that it is supported by the local government.
With government support, MGTV realizes profits from high-quality original shows with low royalties. Its original and pioneering content over the past 10 years has attracted many users, especially young female users. Over 90% of the content on MGTV is original exclusive content produced by Golden Eagle Broadcasting System, sold to MGTV at a relatively low price.5 Since other video platforms must pay billions of RMB for video copyrights, this self-marketing business model saves Mango a large amount of money. Besides, MGTV is the only video platform that owns both IPTV and OTT licenses, meaning its content can be broadcast on various devices in various forms to reach different groups of people.
MGTV's accumulated operational experience over the years was finally recognized by the capital market during the epidemic, when one of the most popular TV shows was aired on MGTV, directly raising MGTV's stock price. While the epidemic brought no significant change in industry position to the other video platforms, MGTV jumped into the tier-1 video platform category, and numerous research firms started to cover MGTV.
Weaknesses
MGTV's integrated supply chain spanning content production, celebrity agency, marketing and broadcasting could also be a weakness. Owned by Mango Excellent Media, MGTV is the only long video platform that has the exclusive right to broadcast its self-made content. To build its own broadcasting platform, MGTV gives up billions in royalties every year. In addition, the whole platform's operation relies on the output of high-quality, self-produced content. If the self-made content ceases to appeal to the audience, MGTV cannot buy copyrighted videos to retain its users the way other platforms can.
Financials

Revenue
Data sourced from Bilibili 2019 annual report

According to Bilibili's 2019 annual report, net revenue increased at a high rate from 2017 to 2019. Even though industry growth started to decline in 2017, Bilibili's revenue continued to grow at a rapid pace. This is partially due to its distinct business model: Bilibili is not a typical long video platform, since UGC is its main source of video, so changes in the traditional long video industry do not have a strong impact on Bilibili. Unlike the other video platforms, Bilibili also saw a significant amount of revenue from games, which is more robust to video industry cycle changes.

5 Data sourced from Mango Excellent Media 2019 annual report
Revenue breakdown
Data sourced from Bilibili 2019 annual report

In 2017, most of Bilibili's revenue was from mobile games, and other revenue sources made negligible contributions to total revenue.
Net loss
Data sourced from Bilibili 2019 annual report

Bilibili continued to incur net losses over the past three years, growing each year. Since Bilibili insisted on providing its content for free, it lacked revenue from advertisements and VIP membership fees.
Strengths
The strengths of Bilibili lie in its originality and popularity among young people. Bilibili depends heavily on user generated content and creates a relaxed and innovative platform for individuals to share their work. It encourages users to upload videos, which eventually forms a large pool of content. On Bilibili, the user experience is strong, since individual users have a sense of participation in becoming part of the community. Bilibili was also the first firm to adopt the idea of "bullet comments", which allows the audience to share real-time comments across the top of the screen. Such interaction offers users a more open platform to discuss works and share their opinions, which furthers social engagement and community building. These features of Bilibili were highly attractive to young people at first; over time, as individual users actively contributed content in varied fields, people of all ages came to appreciate its rich content and its convenience as an ad-free platform.
When it was first listed, Bilibili depended too much on revenue from games. As it has developed its services in advertising, e-commerce, live broadcasting, and other areas, it is gradually becoming a more comprehensive business.
During the epidemic, another advantage of user generated content emerged. While the production of professionally generated content met difficulties during the quarantine, UGC was not hindered as much as PGC. Since individual users can shoot videos at home, users who did not need to work during the epidemic had more time in isolation to produce films at home. The outbreak thus had less impact on the supply of video content for Bilibili than on that of other long video platforms.
Weaknesses
Having developed from ACG (anime, comics and games) content, Bilibili has a large proportion of loyal users from this subculture. As it expands massively and attracts more users, it is hard for Bilibili to maintain its original cultural atmosphere, which will inevitably result in the loss of some ACG fans. Besides, since it is difficult to regulate user generated content, there are still numerous videos of unknown copyright status. These videos, if litigated by the copyright holders, will bring further losses for Bilibili.
Comparison of key statistics between the major players
This section compares several key figures across the leading firms to see how their industry positions have changed due to COVID-19.
Data sourced from SWS Research
The number of monthly active users (MAU) of iQIYI, Tencent, and Youku is much higher than that of the other competitors, indicating that these three major players own most of the market.
The MAU of iQIYI, Tencent, and Youku were very close in the first half of 2018, all showing slow but steady growth. In the middle of 2018, after a sudden increase in its MAU, Youku started to lose popularity and never again matched the other two platforms in MAU. During the quarantine, users did spend more time on these platforms, resulting in a temporary increase in MAU. However, after peaking in February 2020, the MAU of all four video platforms declined over the next two months. This pattern can be explained by the delayed effect of the shutdown of the long video supply chain during the epidemic, and the resumption of work afterwards.
As for MGTV, although it does not have as large an online user base as the others, it kept up steady growth, and its decline in MAU after the pandemic was significantly smaller than that of the other platforms, indicating that high-quality content is somewhat robust to general industry trends.
Data sourced from Western Securities
Among the four largest long video platforms, Tencent Video owns the highest number of self-made reality shows, revealing its considerable financial power. Although the number of self-made reality shows decreased for iQIYI and Tencent in the first half of 2020 compared with 2019, the numbers for Youku and MGTV increased, signaling that the epidemic did not have such a severe negative impact on the supply side of online video platforms.
The number of advertisements placed during the epidemic was indeed lower than in the previous year due to sluggish demand from advertisers. However, MGTV's advertising revenue doubled despite the epidemic, showing the importance of high-quality content.
Conclusion
Under regular epidemic prevention and control, the long video industry is facing new opportunities and new threats. While most other industries were severely affected, the long video industry was not as heavily affected in a negative way. Although it faced competition from short videos, a shortfall in advertising revenue, and a shortage of supply due to quarantine, the industry also found opportunities in increasing demand and new business models. Factors that contributed to new business models included the switch from PGC to UGC, people's willingness to pay membership fees, and integrated supply chains. Future research could explore further business models, such as releasing movies on video platforms instead of in offline cinemas.
"year": 2020,
"sha1": "c30786c3f48184d4351f47265cb4720fdfc38d74",
"oa_license": "CCBYNC",
"oa_url": "https://ojs.piscomed.com/index.php/L-E/article/download/1387/1265",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5c24bc77dc04def64ec4d4e19a2a4e243ad7970a",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
Dynamics of a polymer test chain in a glass forming matrix: The Hartree Approximation
In this paper the Martin-Siggia-Rose formalism is used to derive a generalized Rouse equation for a test chain in a matrix which can undergo the glass transition. It is shown that the surrounding matrix renormalizes the static properties of the test chain. Furthermore, the freezing of the different Rouse modes is investigated. This yields freezing temperatures which depend on the Rouse mode index.
I. INTRODUCTION
It is well known that for relatively short polymer chains the standard Rouse model can describe the dynamics of a melt reasonably well [1,2]. On the contrary, for chain lengths N exceeding a critical length, the entanglement length N_e, the behavior is usually described by the reptation model [1]. Here we restrict ourselves to chain lengths N < N_e, i.e., entangled polymer dynamics is beyond the scope of our consideration.
The reason why in a dense melt the Rouse model provides such a good dynamical description for short chains is connected with the screening of the long-range hydrodynamic as well as the excluded volume interactions. As a result, the fluctuations of the chain variables are Gaussian. But there are further essential questions: How do the bare monomeric friction coefficient ξ_0 and the entropic elastic modulus ε (which are simple input parameters of the standard Rouse model) change due to the interactions of the test chain with the surrounding matrix? Why does such a simple model work so well for describing short-chain melts? Obviously, the corresponding answers cannot be given by the Rouse model, which describes only the dynamics of connected Gaussian springs without further interactions.
On the other hand, at relatively low temperatures close to the glass transition of the surrounding matrix, the deviations from the standard Rouse behavior will definitely be more pronounced. For example, Monte Carlo (MC) studies of the bond fluctuation model at low temperatures (but still above the temperature region where the glass transition mode coupling theory [3] possibly applies) show that the Rouse modes remain well-defined eigenmodes of the polymer chains and that the chains retain their Gaussian properties [4]. Nevertheless, the relaxation of the Rouse modes displays stretched exponential behavior rather than a pure exponential. It could even be expected that at temperatures below the glass transition temperature of the matrix, T_G, the Rouse modes freeze out. In these temperature regimes the interactions between monomers take a significant role and determine the physical picture of the dynamics, as will be shown below.
The generalized Rouse equation (GRE), which can be used for the investigation of the problems mentioned above, has been derived using projection operator methods and mode coupling approximations (MCA) [5][6][7]. As a result of the projection operator formalism, the time evolution of the test chain is expressed in terms of a frequency matrix, which is local in time, and a memory function contribution due to the inter-chain forces exerted on the test chain segments. With the assumption that the frequency matrix term has the same form as in the standard Rouse model (linear elasticity with the entropic modulus ε = 3k_B T/l²), all influence of the matrix chains reduces to the memory function contribution [5][6][7].
The projection operator method appears to be exact but rather formal, and to derive explicit results further approximations have to be made, which often can hardly be controlled. Therefore it is instructive to use an alternative theoretical method to derive the GRE. Recently, a non-perturbative variational method which is equivalent to a self-consistent Hartree approximation was used for the investigation of the dynamics of manifolds [8] and of the sine-Gordon model [9] in random media.
As a starting point the authors employed the standard Martin-Siggia-Rose (MSR) functional integral technique [10,11]. Here we follow this approach to derive a GRE and study the dynamics of a test polymer chain in a glass forming matrix.
The paper is organized as follows. In section 2, we give a general MSR-functional integral formulation for a test chain in a polymer (or non-polymer) matrix. Under the assumption that the fluctuations of the test chain are Gaussian, the Hartree-type approximation is applied and a GRE is finally derived. The case when the fluctuation-dissipation theorem (FDT) and time homogeneity are violated is also briefly considered. In section 3, on the basis of the GRE, some static and dynamical properties of the test chain are discussed. In particular, the theory of the test chain ergodicity breaking (freezing) in a glassy matrix is formulated. Section 4 gives a summary and general discussion. The appendices are devoted to some technical details of the Hartree-type approximation.
A. MSR-functional integral approach
Let us consider a polymer test chain with configurations characterized by the vector function R(s, t), with s enumerating the segments of the chain, 0 ≤ s ≤ N, and time t. The test polymer chain moves in the melt of the other polymers (the matrix), whose positions in space are specified by the vector functions r^(p)(s, t), where the index p = 1, 2, ..., M enumerates the different chains of the matrix. The test chain is expected to have Gaussian statistics due to the screening of the self-interactions in a melt [1]. We consider the simultaneous dynamical evolution of the R(s, t) and r^(p)(s, t) variables, assuming that the interaction between matrix and test chain is weak.
The Langevin equations for the full set of variables {R(s, t), r^(1)(s, t), ..., r^(M)(s, t)} have the form of eqs. (1)–(3), where ξ_0 denotes the bare friction coefficient, ε = 3T/l² the bare elastic modulus with the length of a Kuhn segment denoted by l, V(···) and Ṽ(···) are the interaction energies of test chain–matrix and matrix–matrix respectively, and f_j(s, t), f̃_j(s, t) are the random forces with the corresponding correlator. After using the standard MSR-functional integral representation [10] for the system (1–3), the generating functional (GF) takes the form of eq. (4), where the dots represent source fields which will be specified later and Einstein's summation convention for repeated indices is used. In GF (4), the MSR-action of the free test chain is given by eq. (5). As we will see later, the explicit form of the full action of the medium, A_1[r^(p)(s, t), r̃^(p)(s, t)], plays no role. In principle it could have any form; in particular, for a polymer matrix, it takes the form of eq. (6). In order to obtain an equation of motion for the test chain one should first integrate over the matrix variables r^(p)(s, t). To this end it is reasonable to represent GF (4) as eq. (7), where the influence functional Ξ is given by eq. (8). In the spirit of the mode coupling approximation (MCA) [3,6], the force between the test chain and the matrix should be expressed as a bilinear product of the densities of the two subsystems. To ensure this, we expand the influence functional (8) with respect to the forces F_j = −∇_j V between the test chain and the matrix up to second order. This leads to eq. (9), where the matrix density and the response field density are introduced in eqs. (10) and (11), and ⟨···⟩_1 denotes cumulant averaging over the full MSR-action A_1[r, r̃] of the matrix. In eq. (9), the term (t′ ⇔ t) is the same as the previous one but with permuted time arguments. The terms which are linear with respect to F_j vanish because of the homogeneity of the system. In Appendix A we show that because of causality the correlator ⟨Π_l(r, t)Π_j(r′, t′)⟩_1 equals zero [10][11][12]. Taking this into account and performing the spatial Fourier transformation, the expression for GF (7) takes the form of eq. (12), where the correlation function and the response function of the matrix are naturally defined. Going beyond the LRT-approximation would bring in multi-point correlation and response functions.
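Consistent with the definitions just given, the Langevin equation (1) for the test chain presumably reads as follows (a hedged reconstruction; the matrix chains obey an analogous equation (2) involving Ṽ):

\xi_0 \frac{\partial R_j(s,t)}{\partial t} = \varepsilon \frac{\partial^2 R_j(s,t)}{\partial s^2} - \frac{\delta}{\delta R_j(s,t)} \sum_{p=1}^{M} \int_0^N ds'\, V\big(R(s,t) - r^{(p)}(s',t)\big) + f_j(s,t),

with the white-noise correlator (3) of the standard form

\langle f_j(s,t)\, f_k(s',t') \rangle = 2\, T\, \xi_0\, \delta_{jk}\, \delta(s-s')\, \delta(t-t').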
We should stress that, in contrast to the matrix with quenched disorder considered in [8,9], in our case the matrix has its own intrinsic dynamical evolution, which is considered as given. For example, for the glass forming matrix, which is our prime interest here, the correlation and response functions are assumed to be governed by the Götze mode-coupling equations [3].
B. The Hartree approximation
The Hartree approximation (which is actually equivalent to the Feynman variational principle) was recently used for the replica field theory of random manifolds [13] as well as for the dynamics of manifolds [8] and of the sine-Gordon model [9] in random media.
In the Hartree approximation the real MSR-action is replaced by a Gaussian action in such a way that all terms which include more than two fields R_j(s, t) or/and R̃_j(s, t) are written in all possible ways as products of pairs of R_j(s, t) or/and R̃_j(s, t), coupled to self-consistent averages of the remaining fields. As a result, the Hartree action is a Gaussian functional with coefficients which can be represented in terms of correlation and response functions. After these straightforward calculations (details can be found in Appendix B), the GF (12) takes the form of eq. (15), with the auxiliary quantities (16) and (17). In eqs. (16, 17) the response function (18), the density correlator (19), and the longitudinal part of the matrix response function are defined. The pointed brackets denote the self-consistent averaging with the Hartree-type GF (15).
Up to now we considered the general off-equilibrium dynamics with the only restriction of causality [10][11][12]. We now assume that, for very large times t and t′ such that the difference t − t′ is finite and (t − t′)/t → 0, time homogeneity and the fluctuation-dissipation theorem (FDT) hold. This implies eqs. (22) and (23), where β ≡ 1/T. Using this in eq. (15), and after integration by parts in the integrals over t′, the GF in the Hartree approximation takes the form of eq. (24), where the subscript "st" indicates the static correlation functions. This generating functional immediately leads to the generalized Rouse equation (GRE), eq. (25), with the memory function (26) and the effective elastic susceptibility (27). The correlation function of the random force F_j is given by eq. (28). As a result we have obtained basically the same GRE as in the papers [5][6][7], but with one additional elastic term. This term (see the second term in eq. (27)) is mainly inversely proportional to the temperature and is, in contrast to the first term, of an energetic nature. The two factors of kV(k) quantify the forces exerted by a pair of surrounding segments on the test chain segments s and s′, whereas the factors S_st(k) and F_st(k; s, s′) quantify the static correlations between the segments of the surrounding matrix and of the test chain, respectively. In [5][6][7] only the entropic elastic part was taken into account. The memory function (26) has the same form as in [5][6][7], and the relationship (28) is assured as soon as the FDT (22, 23) is fulfilled.
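The FDT relations (22) and (23) referred to above are standard in the MSR formalism and presumably read (a reconstruction from the stated definitions)

C(s,s'; t,t') = C(s,s'; t-t'), \qquad G(s,s'; t-t') = -\beta\, \theta(t-t')\, \frac{\partial}{\partial t} C(s,s'; t-t'),

with \beta \equiv 1/T.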
C. Generalized Rouse equations for the off-equilibrium dynamics

In this subsection we give GREs for the more general case when time homogeneity (stationarity) and the FDT do not hold [14].
Employing the standard route [8], one can derive two coupled equations of motion, (29) and (30), for the correlators C(s, s′; t, t′) and the response functions G(s, s′; t, t′), with the initial conditions (31) and (32). In the stationary case all correlators and response functions in eqs. (29–32) depend only on the difference of the time moments, t − t′. If we assume again that the FDT (22, 23) holds, then from eq. (30), after performing the integrations by parts (in the integrals over t″), one arrives at the GRE (33) for t > 0. Of course, eq. (33) could be obtained immediately from eq. (25) by multiplying both sides with R_j(s′, 0), averaging, and taking into account that because of causality ⟨F(s, t)R(s′, 0)⟩ = 0 at t > 0. We will use the GRE (33), where the functions Γ and Ω are given by eqs. (26, 27), in the next section for the investigation of the test chain ergodicity breaking (freezing).
III. STATIC AND DYNAMICAL PROPERTIES OF THE TEST CHAIN
The new features of the GRE (33) relative to the standard Rouse equation are that it contains an integral convolution with respect to the s-variable in the frictional term as well as in the elastic term, and that the frictional term is also non-local in time. All these features together change the static and dynamical behaviour of the Gaussian test chain in comparison with the ideal chain.
We should also stress that the GRE is substantially nonlinear, because the memory function (26) depends on the test chain correlator C(s, s′; t) in such a way that a positive feedback obviously exists. That is the reason why one can expect eq. (33) to show ergodicity breaking in the spirit of Götze's glass transition theory [3].
As usual it is convenient to introduce the standard Rouse mode variables [1], together with the inverse transformation (a reconstruction of the standard definitions follows this paragraph). In general one also needs a two-dimensional Rouse transformation, in which functions like Γ(s′, s″) are treated as N × N matrices. For example, the density correlator (19) should be considered as an exponential function of an N × N matrix Q(s′, s″) (eq. (36)), for which the series expansion (37) holds. We also assume that matrices in the Rouse mode representation are nearly diagonal,

Γ(p, q) = δ_{p,q} Γ(p)   (38)

Ω(p, q) = δ_{p,q} Ω(p)   (40)

for any p and q not equal to zero [1].
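The standard convention of [1], which is presumably what is meant here, defines the Rouse modes and their inverse as

X(p,t) = \frac{1}{N} \int_0^N ds\, \cos\!\left(\frac{\pi p s}{N}\right) R(s,t), \qquad R(s,t) = X(0,t) + 2 \sum_{p=1}^{\infty} X(p,t)\, \cos\!\left(\frac{\pi p s}{N}\right).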
Then, as a result of the Rouse mode transformation, the GRE for the Rouse mode time correlation function, C(p, t) ≡ ⟨X(p, t)X(p, 0)⟩, takes the form of eqs. (41) and (42) (for p ≠ 0). For p = 0 the GRE describes the dynamics of the centre of mass and takes the form of eq. (45), with the accompanying definitions. As a result, all Rouse mode variables relax independently. The conclusion that the Rouse modes are still "good eigenmodes" even in the melt is supported by Monte Carlo [4] and molecular dynamics [15] simulations.
For cases where the assumption of diagonality (38–40) cannot be justified, the Rouse modes do not decouple and one has to go back to eq. (33) and its Rouse mode representation. As we have already discussed in sec. II.B, the interaction with the surrounding segments renormalizes the elastic properties of the Rouse chain, so that the test chain's elastic susceptibility is given by eq. (43). The additional elastic term in the GRE leads to the renormalized static normal-mode correlator (49). Explicit evaluation of Ω(p) can be done if we use the standard Rouse expression for the static correlator F_st(k; p). The calculation then yields the two limiting cases (50a) and (50b), where we have chosen l⁻¹ as the cutoff parameter. It is evident from eqs. (50a, 50b) that at small p:

• the elastic modulus gains an energetic component which, in contrast to the entropic part ε, increases on cooling of the system;

• initially absolutely flexible chains acquire a stiffness because of terms of order p⁴ and higher.
At large p the elastic behaviour reduces to the standard Rouse one, as expected.
The result of a numerical calculation of the static correlator (49) is shown in Fig. 1.
The Fourier component of the potential is taken, as is customary, e.g., in the theory of neutron scattering [16], in the form of a pseudo-potential approximation, where γ and σ have the dimensions of a molecular energy and distance, respectively. The static structure factor S_st(k) is chosen in the form of the Percus-Yevick model of a simple liquid [17]. One can see that for N = 500 the small Rouse mode index limit (50a) sets in at p/N ≤ 3·10⁻³, whereas the opposite limit (50b) is fulfilled at p/N ≥ 10⁻¹. Because the correlator C_st(p) depends mainly on p/N, for relatively short test chains the high mode index limit (50b) is shifted into the window of the calculations (see Fig. 1 for N = 20).
At least qualitatively, this deviation from the standard Rouse behaviour has been seen by Kremer and Grest in their MD simulations (see Fig. 3 in [15]).
B. The test chain ergodicity breaking in a glass forming matrix
First we consider the case p ≠ 0. In the nonergodic state the Rouse mode correlation functions can be represented as in eqs. (53) and (54), where the non-ergodicity parameter g(p) is introduced and Ψ_reg(p, t → ∞) = 0.
For the correlation function of the glassy matrix we can use the standard result of the glass transition theory [3], eq. (55), where the proximity parameter Δ ≡ (T_G − T)/T_G is defined and T_G is the temperature of the matrix ergodicity breaking (the Götze temperature). In eq. (55), f_c(k) is the non-ergodicity parameter of the matrix, τ_Δ ∝ Δ^(−1/2a) is the characteristic time scale, a is the characteristic exponent, 0 < a < 1/2, and h(k) is an amplitude.
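For orientation, the standard long-time result of mode-coupling theory that eq. (55) refers to has, for T slightly below T_G and up to an amplitude of order unity, the form (a hedged reconstruction from the stated definitions)

\Phi(k, t \to \infty) \simeq f_c(k) + h(k)\, \sqrt{\Delta}, \qquad \Delta \equiv \frac{T_G - T}{T_G},

which is approached on the characteristic time scale \tau_\Delta \propto \Delta^{-1/(2a)}.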
In order to derive the equation for g(p), let us take the limit t → ∞ in eq. (41), keeping in mind the definitions (54) and (55). Very close to the test chain ergodicity breaking temperature T_c(p), g(p) goes to zero (an A-type transition [3]) and we can expand the exponential function in eq. (42) up to first order with respect to g(p). The solution of the resulting equation has the simple form (56), and the critical temperature T_c(p) is determined by eq. (57). The numerical solution of eq. (57) is given in Fig. 2. It is obvious that if the entropic part of Ω(p) dominates, the critical temperature is given by eq. (58). Fig. 2 indeed shows that this law (58) is well satisfied, due to the fact that the critical temperatures T_c(p) are quite high. But at low temperatures the energetic contribution in Ω(p) is enhanced, which leads to a deviation from this simple (N/p)² dependence.
Now we consider the case p = 0. The equation (45) for the velocity of the centre of mass leads to the equation (59) for the velocity correlator. Because of causality, the correlator on the r.h.s. of eq. (60) has the form (61), where the quantity entering it follows from eq. (45). Taking into account the definition of (f_c.m.)_i(t) and eq. (28), this yields the correlator (64). Because of the causality property (61), only the δ-functional term on the r.h.s. of eq. (28) contributes to the correlator (64). Therefore the resulting equation for the self-diffusion coefficient takes the form of eq. (66), which was obtained before in [5,6].
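In refs. [5,6] the self-diffusion coefficient takes an Einstein-like form with a memory-renormalized friction in the denominator; up to numerical prefactors, eq. (66) is presumably of the form

D = \frac{T}{N \left[\, \xi_0 + \int_0^\infty dt\; \bar{\Gamma}(t) \,\right]},

where \bar{\Gamma}(t) denotes the memory kernel (26) averaged along the chain contour.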
One can calculate the second term in the denominator of eq. (66) self-consistently.
Because the relevant times now satisfy t ≫ τ_Rouse, the approximation (67) can be used in eq. (45). Then the density correlator (47) is given by eq. (68). With the use of eqs. (68), (55) and (61), in the limit D → 0, eq. (66) becomes eq. (69), where the denominator is given by static properties only. Similar statements have been suggested already in [18,19]. The solution of eq. (69) has a simple closed form, from which the temperature of the ergodicity breaking (localization) for the mode p = 0 of the test chain follows. Fig. 3 shows the results of numerical calculations of T_c(p = 0) and T_c(p = 1) as functions of N. One can see that in the reasonable range of parameters T_c(p = 0) > T_c(p = 1). As a result, one can say that on cooling a test chain in a glassy matrix, the mode p = 0 is the first to freeze; on subsequent cooling the modes p = 1, 2, ..., N freeze successively. It is apparent that the system studied here is a nontrivial polymeric generalization of the model introduced by Sjögren [20]. This model was used for the investigation of the β-peak in the spectrum of glass forming systems [21].
IV. DISCUSSION
In this paper we have derived a GRE for a test polymer chain in a polymer (or non-polymer) matrix which has its own intrinsic dynamics, e.g., glassy dynamics [3]. We have used the MSR-functional integral technique, which can be considered an alternative to the projection operator formalism [6]. One of the difficulties of that formalism is the necessity of dealing with the projected dynamics, which is difficult to handle explicitly. By contrast, in the MSR technique the dynamics of the slow variables is well defined, and the several approximations one has to employ can be justified.
In the interaction of the test chain with the surrounding matrix only two-point correlation and response functions are involved. In terms of the MCA [6] this obviously corresponds to the projection of the generalized forces only onto the bilinear variables: the product of the test chain density and the matrix density.
To handle the action in the GF of the test chain, we used the Hartree-type approximation (i.e., equivalent to the Feynman variational principle) [8,9,13], which is reasonable when the fluctuations of the test chain are Gaussian. In the case of a polymer melt (high density) this is indeed the case, due to the screening of the excluded volume interactions [1].
The use of the Hartree-type approximation makes the problem analytically tractable and results in GREs both for the case when the FDT holds and for the case when it does not. In this paper we have restricted ourselves to the first case and have shown that the interaction with the matrix renormalizes not only the friction coefficient (which makes the chain non-Markovian) but also the elastic modulus (which changes the static correlator). The form of the static correlator for the Rouse mode variables is qualitatively supported by MD simulations [15].
As regards the dynamical behaviour, we have shown that the test chain in a glassy matrix (with matrix glass transition temperature T_G) undergoes an ergodicity breaking transition at a temperature T_c(p) ≤ T_G. The critical temperature T_c(p) can be parametrized by the Rouse mode index p and is a decreasing function of p.
We have considered only the A-type transition, which is assured by the bilinear term in the expansion of eq. (42). It seems reasonable that keeping the whole exponential function in eq. (42) might also lead to a B-type transition. The results would also change essentially if the off-diagonal elements in the matrix (36) cannot be neglected (see eq. (48)). In this case only one ideal transition temperature T_c would be possible. The general theory of an A-type transition was discussed in [23].
This picture of freezing should not be confused with a different one, the underlying glass transition itself (e.g., the glass transition of the matrix at T = T_G).
According to the present view of this phenomenon [3], the spontaneous arrest of the density fluctuations is driven by those at the microscopic length scale k_0, where k_0 is the wave vector corresponding to the main maximum of the structure factor. The freezing of these fluctuations then arrests the others through the mode coupling.
For the same reason, self-loops of response functions vanish [10, 11]. MRFs which consist only of r-variables also vanish.
In case (A1), all time arguments of the r-variables are equal to the corresponding time arguments of the r-variables, and as a result the MRF in eq. (A1) vanishes.
The angular brackets in eq. (B2) denote self-consistent averaging with the Gaussian Hartree action. Taking this into account and using the generalized Wick theorem [22], after straightforward algebra one arrives at the stated result, where the last equation follows from the fact that the response functions satisfy G(t, t′) ∝ Θ(t − t′) and G(t′, t) ∝ Θ(t′ − t).
The third term in the exponent of eq. (12) can be handled in the same way. The response function for the isotropic matrix takes a form in which P(k, t) is the longitudinal part of the matrix response function. The Hartree approximation of the third term in the exponent of (12) then follows; taking into account eq. (B1) together with eqs. (B2) and (B5) leads to the Hartree-type approximation (15). | 2014-10-01T00:00:00.000Z | 1997-07-21T00:00:00.000 | {
"year": 1997,
"sha1": "f0903839fd13565eb7f8cb91cc734f5c60749a3d",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/physics/9707017",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f0903839fd13565eb7f8cb91cc734f5c60749a3d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry",
"Materials Science"
]
} |
267852579 | pes2o/s2orc | v3-fos-license | Myopia progression following 0.01% atropine cessation in Australian children: Findings from the Western Australia – Atropine for the Treatment of Myopia (WA‐ATOM) study
A rebound in myopia progression following cessation of atropine eyedrops has been reported, yet there is limited data on the effects of stopping 0.01% atropine compared to placebo control. This study tested the hypothesis that there is minimal rebound myopia progression after cessation of 0.01% atropine eyedrops, compared to a placebo.
| INTRODUCTION
Myopia progression is generally fastest during childhood and stabilises around mid-adolescence, 1-3 although it continues to progress in about one-third of young adults, albeit at a slower rate. 4 Myopia control is thus a mid- to long-term commitment. However, it remains unclear for how long treatment should be undertaken. Low-concentration atropine eyedrops are a method of myopia control that has been shown not only to slow myopia progression, 5-9 but also to reduce myopia incidence. 10 However, as documented in the Atropine for the Treatment of Myopia (ATOM) trials in Singapore, 11 cessation of high-concentration (0.5% and 1.0%) atropine eyedrops after 2 years of treatment was followed by a 'rebound effect', where the rate of myopia progression was faster compared to the placebo group. In contrast, the ATOM study of 0.01% atropine eyedrops did not result in such rebound myopia progression after stopping treatment, at least relative to higher concentrations. 11 As later confirmed by the Low-concentration Atropine for Myopia Progression (LAMP) study in Hong Kong, 12 the amount of rebound myopia progression following treatment cessation is positively linked to the concentration of the atropine eyedrops used, with concentrations of 0.05% linked to greater rebound myopia progression compared to concentrations of 0.025% and 0.01%. The LAMP study further reported that, following 2 years of initial low-concentration atropine therapy, continued usage of 0.05% atropine eyedrops for at least 12 months provided a better myopia control outcome than complete cessation. 12 Based on the findings of these studies, it has been generally assumed that stopping long-term use of 0.01% atropine eyedrops leads to minimal, if any, rebound myopia progression. However, there is a lack of studies that have directly investigated the effects of stopping long-term 0.01% atropine eyedrops compared to a placebo. In a Japanese study, Hieda et al. 13 found there was no difference in the rate of myopia progression in the 12 months following cessation of eyedrops between children who used 0.01% atropine for 2 years compared to those who had used a placebo. However, that study is limited by the large number of participant withdrawals between the start and the end of the 12-month wash-out period (~68% lost to follow-up), which is likely to have impacted its statistical power. While the ATOM 11 and LAMP 12 studies concluded that rebound after stopping 0.01% was limited, neither study had a parallel placebo group to which the wash-out effects of 0.01% atropine could be compared.
Given that atropine drops are commonly prescribed for myopia control, especially in Australia, where a commercial version of the drug, Eikance 0.01% (Aspen Australia), has been approved for use since mid-2022 and many compounding pharmacies dispense 0.01% atropine eyedrops on prescription, the clinical course following cessation of long-term atropine eyedrops must be delineated.
We recently reported the results of the first 2 years (the treatment phase) of the Western Australia (WA)-ATOM study, 9 in which we found that 0.01% atropine eyedrops had a mild to moderate myopia control effect in Australian children after 18 months, compared to a placebo. However, this benefit waned to non-significant levels after 2 years of therapy. This study reports on the third year (the wash-out phase) of the WA-ATOM study, in which we tested the hypothesis that rebound myopia progression is minimal for at least 1 year after ceasing application of 0.01% atropine eyedrops compared to a placebo.
| METHODS
The WA-ATOM study 14 is a single-centre, randomised, double-masked, placebo-controlled trial in which 0.01% atropine eyedrops were tested for myopia control efficacy and safety in parallel against a placebo. Its full design and protocol have been detailed previously. 14 A total of 153 Australian children, 6-16 years of age, with documented myopia of ≤ −1.50 D and progression of ≥0.50 D/year were recruited. To enhance the recruitment rate, participants were randomised at a 2:1 ratio to receive either 0.01% atropine eyedrops or a placebo. The WA-ATOM study has three phases. The first 2 years of the study comprise the treatment phase, during which participants received the allocated eyedrops on a nightly basis. The following 12 months were the wash-out phase, during which eyedrop use ceased and wash-out effects were monitored. During these 3 years of the study, participants and investigators remained masked to the treatment allocation and participants were examined every 6 months.
At enrollment, following a full explanation of the nature of the study, participants and their parents or caregivers provided verbal and written consent, respectively. This trial was conducted in accordance with the Declaration of Helsinki and was approved by the University of Western Australia Human Research Ethics Committee. The use of the placebo and 0.01% atropine eyedrops was approved by the Therapeutic Goods Administration, Department of Health, Australia, and the trial was registered in the Australia and New Zealand Clinical Trials Registry (ACTRN12617000598381).
| Eye examination
Participants were examined at the start of the wash-out period (cessation of eyedrops; 24-month follow-up) and 6 and 12 months thereafter (30- and 36-month follow-ups, respectively). Distance and near visual acuities (logMAR charts) were measured monocularly with participants' habitual distance correction, followed by pinholes over their correction. The better VA of the two measurements was taken as the best-corrected visual acuity (BCVA). 8,14 Accommodative amplitude was measured using a Royal Air Force rule (Good-Lite, Elgin, Illinois), while anterior chamber depth (ACD) and axial length (AL) were measured using an IOLMaster V5 (Carl Zeiss Meditec AG, Jena, Germany). Pupillary measures, including mesopic pupil size and pupil light reactions, were obtained in a dark room using an NPi-200 digital pupillometer (NeurOptics Inc., Laguna Hills, California). Participants were instructed to fixate on a small red target at a distance of ~3 m for 5 s while a 50-milliwatt white light stimulus lasting 0.8 s triggered the pupil light response; the device automatically measured pupil size and the latency and velocity of constriction. The onset of constriction was defined as a decrease of 5% from the initial baseline pupil size (per confirmation from the manufacturer).
One to three drops of 1% cyclopentolate were then instilled in each eye, depending on the amount of cycloplegia achieved, and autorefraction and autokeratometry (Nidek ARK-510A, NIDEK Co. Ltd, Japan) were performed at least 20 min after the last drop, with cycloplegia being confirmed through assessment of the light pupil response. The cycloplegic refraction was then quantified as the spherical equivalent (SphE). Additionally, at the 24- and 36-month visits, Scheimpflug imaging (Oculus Pentacam, software version 6.08r27; Oculus Optikgeräte GmbH, Wetzlar, Germany) was conducted to measure the central corneal thickness and crystalline lens thickness after dilation.
| Statistical analysis
Analyses were conducted on an intention-to-treat basis. The main outcome measures during this wash-out phase were the changes in SphE and AL across the wash-out phase (24- to 36-month visits). Linear mixed-effects models were used to analyse the difference between groups. A random intercept term with nesting for eyes within individuals was included in the models to account for the repeated measurements of the two eyes and multiple visits. The baseline measurement of the ocular outcome was included in the models as a fixed-effect covariate, along with index age, given the significant difference in age between groups at baseline (Table 1). 8 We further explored treatment group interaction effects with age and ethnicity. All analyses were conducted in R version 4.1.1 (2021; The R Foundation for Statistical Computing, https://www.r-project.org/) and the level of significance was set at p < 0.05.
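The published analysis was run in R; the following Python sketch using statsmodels illustrates the same kind of linear mixed-effects model with a random intercept per participant. The column names and toy data are hypothetical, and the single random intercept is a simplification of the paper's eye-within-participant nesting:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data standing in for the study table; all column
# names and values here are illustrative, not the study's variables.
df = pd.DataFrame({
    "id": [1, 1, 2, 2, 3, 3],
    "group": ["atropine"] * 2 + ["placebo"] * 2 + ["atropine"] * 2,
    "age": [10, 10, 11, 11, 9, 9],
    "baseline_sphe": [-3.5, -3.5, -4.0, -4.0, -2.8, -2.8],
    "change_sphe": [-0.30, -0.45, -0.20, -0.35, -0.40, -0.55],
})

# Fixed effects: treatment group, baseline measurement, and age;
# a random intercept per participant approximates the repeated-measures
# nesting (two eyes, multiple visits) described in the paper.
model = smf.mixedlm("change_sphe ~ group + baseline_sphe + age",
                    data=df, groups=df["id"])
result = model.fit()
print(result.summary())
```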
| RESULTS
Of the 153 participants enrolled, 104 were randomised to receive 0.01% atropine eyedrops and 49 received a placebo. The demography and baseline ocular measures of each group are shown in Table 1. Compared to those in the atropine group, participants in the placebo group were on average 1 year older and had 0.1 mm deeper anterior chambers, 0.06 mm thinner crystalline lenses, and 0.015 ms slower constriction onset (p = 0.010 to 0.031; Table 1) at baseline. As reported previously, 9 during the 2-year treatment phase significantly more participants withdrew from the placebo group than the atropine group (p = 0.024 on Fisher exact test; Figure 1). During the wash-out phase, 2 participants withdrew from the study, both from the atropine group. The reasons cited for withdrawal are described in the footnotes of Figure 1. None of the participants were using any other form of myopia control, with the exception of progressive addition or bifocal lenses, while they were in the 3-year study.
| Changes after stopping treatment
At the start of the wash-out phase, the median SphE and AL were −4.00 D (interquartile range [IQR] = −5.25 to −3.00) and 25.1 mm (IQR = 24.8 to 25.7) in the placebo group, and −3.88 D (IQR = −4.75 to −3.22) and 25.0 mm (IQR = 24.6 to 25.5) in the atropine group, respectively. Neither of these measures differed significantly between groups at the start of the wash-out phase (p = 0.57 for SphE and p = 0.19 for AL).
Figure 2 and Table 2 show the changes in SphE and AL during the wash-out phase. Children who had been receiving 0.01% atropine exhibited faster myopia progression after stopping the eyedrops compared to those who had been on a placebo. This difference in the rate of myopia progression can be noted at both 6 and 12 months post-treatment but was only statistically significant at the latter visit. By the end of the 2-year treatment and 1-year wash-out periods, the cumulative myopia progression since baseline was similar between the two groups (Figure 2).
In both groups, the rate of SphE and AL change was significantly slower in the second half of the wash-out period (last 6 months) compared to the first half (p = 0.032 and p < 0.001, respectively). As also seen in Figure 2, the slowing appeared more pronounced in the placebo group than in the 0.01% atropine group, but this difference in the rate of slowing was not statistically significant (p-value for group-by-visit interaction effect = 0.91 for SphE and 0.13 for AL).
Within the atropine group alone, myopia progressed significantly faster in terms of both SphE and AL change in the 12 months following eyedrop cessation compared to the 12 months immediately before. In contrast, the progression rate in the placebo group was similar before and after eyedrop cessation (Table 3).
Additionally, a significant increase in ACD was noted in the atropine group compared to the placebo group at both 6 and 12 months after cessation of eyedrops (Table 2). Changes in pupillary measures did not differ significantly between the atropine and placebo groups after cessation of eyedrops.
In both groups, older age was associated with slower myopia progression, such that SphE and AL changes were slowed by 0.026 (95% CI = 0.001 to 0.051; p = 0.045) and 0.008 (95% CI = 0.001 to 0.017; p = 0.034) for each year older in age. There was a trend for less rebound axial elongation with older age, but this failed to reach statistical significance (p = 0.16). There were no interaction effects between treatment group and age on change in SphE (p = 0.74) or between group and ethnicity on either myopia progression measure (p ≥ 0.32) during the wash-out period.

FIGURE 1 Sample size and retention. a Reasons for withdrawal: 3 did not like diagnostic or study drops; 4 had difficulty adhering to treatment regimen; 3 had difficulty attending appointments and concern about receiving placebo; 2 were uncontactable; 1 relocated; 3 wanted to seek myopia treatment (atropine or orthokeratology) privately; 2 were uncontactable; 4 cited personal reasons/did not provide a reason. b Reasons for withdrawal: 1 sought atropine and orthokeratology eyedrops privately, 1 was uncontactable. c Some participants did not attend the 30-month visit due to restrictions or concerns about the COVID-19 pandemic but remained in the study.

FIGURE 2 Change in spherical equivalent (top row) and axial length (bottom row) since start of wash-out phase (left column) and since baseline (right column). Yellow highlighted region indicates the treatment phase, during which participants were using allocated eyedrops.

| DISCUSSION

As summarised by Brennan et al., 15 'rebound should be assumed unless proven otherwise'. There has been an implicit assumption that rebound myopia progression after stopping long-term use of 0.01% eyedrops is minimal, yet empirical evidence to support this assumption has been limited due to the lack of placebo-controlled trials.
Findings from the current analysis demonstrated that the rate of myopia progression increases after cessation of long-term 0.01% atropine eyedrops. Ultimately, after 2 years of eyedrop use and a year of wash-out, there was no difference in cumulative myopia progression between the 0.01% atropine and placebo groups.
A main strength of this study is the inclusion of a placebo-control group to which the effects of treatment wash-out could be compared, as opposed to the ATOM study in Singapore 11 and the LAMP study. 12 However, our study did not have a treatment group that continued 0.01% atropine treatment from 24 to 36 months, nor did we test other concentrations of low-concentration atropine eyedrops. A comparison between continuing and ceasing treatment has been done by the LAMP study, 12 which confirms a 'rebound' myopia progression effect after ceasing low-concentration atropine use, compared to continuing treatment. The same study, along with the Singapore ATOM study, also concluded that the amount of rebound is positively associated with the concentration. 11,12 Another potential limitation of our study is that we lacked the power to detect statistical differences in some outcomes between groups, for example, a differential rate of slowing in myopia progression across the wash-out period (i.e., the first 6 months vs. the last 6 months). Furthermore, we did not evaluate the effects of tapering atropine treatment. Several researchers have highlighted the importance of evaluating the effects of tapering treatment, 16-18 but studies on this remain scarce. A study by Polling et al. 19 in a clinical setting reported that, in children with a good myopia control response to 0.5% or 0.25% atropine eyedrops, reducing the concentration to 0.01% resulted in less rebound myopia, although there was no control group to which this could be compared. Nonetheless, their findings suggest that tapering could be an effective way of reducing rebound effects.
In contrast to the LAMP study, we did not find a significant effect of age on the amount of myopia progression after eyedrop cessation. This may be related to our smaller sample size and the older age of our participants relative to the LAMP study. Because of these limitations, it may not be appropriate to assume that age has no effect on rebound myopia progression. Further studies should explore this issue, which will inform the age at which myopia control can cease without leading to a rebound effect. Studies have reported that the mean age of myopia stabilisation occurs during the mid-teenage years, ranging from 14 to 17 years old, 1,2 although this varies widely between individuals and according to sex, 2 ethnicity, 1 severity of myopia, 1 and amount of near work. 20 The Correction of Myopia Evaluation Trial (COMET) reported that up to 23% and 10% of participants with myopia continued to have progression after 18 and 21 years of age, respectively. 1 Thus, the age of natural myopia stabilisation, ergo the age at which myopia control can stop, should be tailored to each individual, although this may be difficult to assess without stopping treatment.
However, the current findings, along with recent observations from the Atropine Treatment Long-term Assessment Study (ATLAS), 21 may support a more pessimistic conclusion: that the use of atropine eyedrops does not meaningfully alter final refractive error in the long term. The ATLAS is a long-term follow-up study of the ATOM 1 and 2 studies in Singapore. In the ATOM1 study, by the end of 2 years of treatment and 1 year of wash-out, the 1% atropine group had significantly less myopia progression in terms of SphE and AL compared to those who were on placebo eyedrops. However, when the participants were re-examined 20 years later as part of the ATLAS, there was no significant difference in either measure between the two groups. 21 Likewise, in the ATOM2 study, there was no significant difference in refractive error between groups who used different atropine concentrations (0.01%, 0.1%, and 0.5%) more than 10 years after the trial ended, 21 even though the majority of those participants underwent re-treatment with 0.01% atropine in the final 2 years of the 5-year trial. 22 This highlights the possibility that low-concentration atropine eyedrops only delay, and do not control, myopia progression. The LAMP2 study, 10 which evaluated the effects of atropine eyedrops on myopia incidence, similarly concluded that the lower myopia incidence in the treated group, relative to the placebo-control group, could simply reflect a delay in myopia onset rather than prevention. This hypothesis that treatment only delays and does not control myopia progression is further supported by the current findings. While myopia progression was slower in the second half of the 12-month wash-out period compared to the first half in both groups, there was a trend for a more pronounced slowing in the placebo group, albeit not statistically significant, possibly due to a lack of study power. Nonetheless, we may assume that the slowing of myopia progression in the placebo group is natural and age-related, given the lack of intervention in this group. The attenuated slowing in the atropine group may be due to a tendency for myopia progression to 'catch up' following cessation of treatment.

TABLE 3 Estimated marginal mean (and 95% confidence intervals) myopia progression in the 12 months immediately before and after cessation of eyedrops.
Given the relatively recent growth of myopia control in research and clinical practice, further long-term investigations are critical to test this hypothesis, as it will have direct and highly impactful clinical significance for myopia control. There has been limited data on the long-term benefits, or lack thereof, of atropine therapy and other forms of myopia control. The ATLAS study, 21 which now comprises participants in their third and fourth decades of life, failed to find any difference in eye and refractive error outcomes, including rates of myopia-related complications, between groups using varying concentrations of atropine eyedrops. However, the ATLAS participants had generally been using mid-to-high concentrations, apart from the 0.01% atropine and placebo groups. The current study, along with the LAMP study in Hong Kong, 12 provides some insight into the short- to mid-term effects of stopping low-concentration atropine eyedrops.
The current findings demonstrate rebound myopia progression after stopping 0.01% eyedrops, such that early cessation of 0.01% atropine eyedrops might negate the benefits of treatment in the preceding years. Future studies should explore the effect of tapering atropine dosage or frequency of instillation.
TABLE 1 Participant demography and ocular measures at baseline.
Note: Difference between groups analysed using a independent t-test for age, b chi-square test for categorical variables, and c linear mixed-effects model for ocular measures. Abbreviations: BCVA, best-corrected visual acuity; IQR, interquartile range; SD, standard deviation. *Significantly different between groups at p < 0.05. | 2024-02-25T06:17:15.389Z | 2024-02-23T00:00:00.000 | {
"year": 2024,
"sha1": "50f79fd60fa7ea6a16c1598c37a4bc204c2a34fc",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ceo.14368",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "6c4c88564678356fd1251ce98b75e20f8c9daab3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248965046 | pes2o/s2orc | v3-fos-license | Planning with Diffusion for Flexible Behavior Synthesis
Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility.
Introduction
Planning with a learned model is a conceptually simple framework for reinforcement learning and data-driven decision-making. Its appeal comes from employing learning techniques only where they are the most mature and effective: for the approximation of unknown environment dynamics in what amounts to a supervised learning problem. Afterwards, the learned model may be plugged into classical trajectory optimization routines (Tassa et al., 2012; Posa et al., 2014; Kelly, 2017), which are similarly well-understood in their original context. However, this combination rarely works as described. Because powerful trajectory optimizers exploit learned models, plans generated by this procedure often look more like adversarial examples than optimal trajectories (Talvitie, 2014; Ke et al., 2018). As a result, contemporary model-based reinforcement learning algorithms often inherit more from model-free methods, such as value functions and policy gradients (Wang et al., 2019), than from the trajectory optimization toolbox. Those methods that do rely on online planning tend to use simple gradient-free trajectory optimization routines like random shooting (Nagabandi et al., 2018) or the cross-entropy method (Botev et al., 2013; Chua et al., 2018) to avoid the aforementioned issues.
In this work, we propose an alternative approach to datadriven trajectory optimization. The core idea is to train a model that is directly amenable to trajectory optimization, in the sense that sampling from the model and planning with it become nearly identical. This goal requires a shift in how the model is designed. Because learned dynamics models are normally meant to be proxies for environment dynamics, improvements are often achieved by structuring the model according to the underlying causal process (Bapst et al., 2019). Instead, we consider how to design a model in line with the planning problem in which it will be used. For example, because the model will ultimately be used for planning, action distributions are just as important as state dynamics and long-horizon accuracy is more important than single-step error. On the other hand, the model should remain agnostic to reward function so that it may be used Code and visualizations of the learned denoising process are available at diffusion-planning.github.io. planning horizon denoising Diffuser local receptive field Figure 2. Diffuser samples plans by iteratively denoising twodimensional arrays consisting of a variable number of state-action pairs. A small receptive field constrains the model to only enforce local consistency during a single denoising step. By composing many denoising steps together, local consistency can drive global coherence of a sampled plan. An optional guide function J can be used to bias plans toward those optimizing a test-time objective or satisfying a set of constraints.
in multiple tasks, including those unseen during training. Finally, the model should be designed so that its plans, and not just its predictions, improve with experience and are resistant to the myopic failure modes of standard shooting-based planning algorithms.
We instantiate this idea as a trajectory-level diffusion probabilistic model (Sohl-Dickstein et al., 2015; Ho et al., 2020) called Diffuser, visualized in Figure 2. Whereas standard model-based planning techniques predict forward in time autoregressively, Diffuser predicts all timesteps of a plan simultaneously. The iterative sampling process of diffusion models leads to flexible conditioning, allowing for auxiliary guides to modify the sampling procedure to recover trajectories with high return or satisfying a set of constraints. This formulation of data-driven trajectory optimization has several appealing properties:

Long-horizon scalability. Diffuser is trained for the accuracy of its generated trajectories rather than its single-step error, so it does not suffer from the compounding rollout errors of single-step dynamics models and scales more gracefully with respect to long planning horizons.
Task compositionality. Reward functions provide auxiliary gradients to be used while sampling a plan, allowing for a straightforward way of composing multiple rewards simultaneously by adding together their gradients.
Temporal compositionality. Diffuser generates globally coherent trajectories by iteratively improving local consistency, allowing it to generalize to novel trajectories by stitching together in-distribution subsequences.
Effective non-greedy planning. By blurring the line between model and planner, the training procedure that improves the model's predictions also has the effect of improving its planning capabilities. This design yields a learned planner that can solve the types of long-horizon, sparse-reward problems that prove difficult for many conventional planning methods.
The core contribution of this work is a denoising diffusion model designed for trajectory data and an associated probabilistic framework for behavior synthesis. While unconventional compared to the types of models routinely used in deep model-based reinforcement learning, we demonstrate that Diffuser has a number of useful properties and is particularly effective in offline control settings that require long-horizon reasoning and test-time flexibility.
Background
Our approach to planning is a learning-based analogue of past work in behavioral synthesis using trajectory optimization (Witkin & Kass, 1988;Tassa et al., 2012). In this section, we provide a brief background on the problem setting considered by trajectory optimization and the class of generative models we employ for that problem.
Problem Setting
Consider a system governed by the discrete-time dynamics s_{t+1} = f(s_t, a_t) at state s_t given an action a_t. Trajectory optimization refers to finding a sequence of actions a*_{0:T} that maximizes (or minimizes) an objective J factorized over per-timestep rewards (or costs) r(s_t, a_t):

$$\mathbf{a}^*_{0:T} = \underset{\mathbf{a}_{0:T}}{\arg\max}\; \mathcal{J}(\mathbf{s}_0, \mathbf{a}_{0:T}) = \underset{\mathbf{a}_{0:T}}{\arg\max} \sum_{t=0}^{T} r(\mathbf{s}_t, \mathbf{a}_t)$$

where T is the planning horizon. We use the abbreviation τ = (s_0, a_0, s_1, a_1, . . . , s_T, a_T) to refer to a trajectory of interleaved states and actions and J(τ) to denote the objective value of that trajectory.
Diffusion Probabilistic Models
Diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020) pose the data-generating process as an iterative denoising procedure p_θ(τ^{i−1} | τ^i). This denoising is the reverse of a forward diffusion process q(τ^i | τ^{i−1}) that slowly corrupts the structure in data by adding noise. The data distribution induced by the model is given by:

$$p_\theta(\tau^0) = \int p(\tau^N) \prod_{i=1}^{N} p_\theta(\tau^{i-1} \mid \tau^i) \, d\tau^{1:N}$$

where p(τ^N) is a standard Gaussian prior and τ^0 denotes (noiseless) data. Parameters θ are optimized by minimizing a variational bound on the negative log likelihood of the reverse process: θ* = arg min_θ −E_{τ^0}[log p_θ(τ^0)]. The reverse process is often parameterized as Gaussian with fixed timestep-dependent covariances:

$$p_\theta(\tau^{i-1} \mid \tau^i) = \mathcal{N}\big(\tau^{i-1} \mid \mu_\theta(\tau^i, i), \Sigma^i\big).$$

The forward process q(τ^i | τ^{i−1}) is typically prespecified.
Notation. There are two "times" at play in this work: that of the diffusion process and that of the planning problem. We use superscripts (i when unspecified) to denote diffusion timestep and subscripts (t when unspecified) to denote planning timestep. For example, s 0 t refers to the t th state in a noiseless trajectory. When it is unambiguous from context, superscripts of noiseless quantities are omitted: τ = τ 0 . We overload notation slightly by referring to the t th state (or action) in a trajectory τ as τ st (or τ at ).
Planning with Diffusion
A major obstacle to using trajectory optimization techniques is that they require knowledge of the environment dynamics f . Most learning-based methods attempt to overcome this obstacle by training an approximate dynamics model and plugging it in to a conventional planning routine. However, learned models are often poorly suited to the types of planning algorithms designed with ground-truth models in mind, leading to planners that exploit learned models by finding adversarial examples.
We propose a tighter coupling between modeling and planning. Instead of using a learned model in the context of a classical planner, we subsume as much of the planning process as possible into the generative modeling framework, such that planning becomes nearly identical to sampling. We do this using a diffusion model of trajectories, p θ (τ ). The iterative denoising process of a diffusion model lends itself to flexible conditioning by way of sampling from perturbed distributions of the form: The function h(τ ) can contain information about prior evidence (such as an observation history), desired outcomes (such as a goal to reach), or general functions to optimize (such as rewards or costs). Performing inference in this perturbed distribution can be seen as a probabilistic analogue to the trajectory optimization problem posed in Section 2.1, as it requires finding trajectories that are both physically realistic under p θ (τ ) and high-reward (or constraint-satisfying) under h(τ ). Because the dynamics information is separated from the perturbation distribution h(τ ), a single diffusion model p θ (τ ) may be reused for multiple tasks in the same environment.
In this section, we describe Diffuser, a diffusion model designed for learned trajectory optimization. We then discuss two specific instantiations of planning with Diffuser, realized as reinforcement learning counterparts to classifierguided sampling and image inpainting.
A Generative Model for Trajectory Planning
Temporal ordering. Blurring the line between sampling from a trajectory model and planning with it yields an unusual constraint: we can no longer predict states autoregressively in temporal order. Consider the goal-conditioned inference p(s_1 | s_0, s_T); the next state s_1 depends on a future state as well as a prior one. This example is an instance of a more general principle: while dynamics prediction is causal, in the sense that the present is determined by the past, decision-making and control can be anti-causal, in the sense that decisions in the present are conditional on the future. 1 Because we cannot use a temporal autoregressive ordering, we design Diffuser to predict all timesteps of a plan concurrently.
Temporal locality. Despite not being autoregressive or Markovian, Diffuser features a relaxed form of temporal locality. In Figure 2, we depict a dependency graph for a diffusion model consisting of a single temporal convolution. The receptive field of a given prediction only consists of nearby timesteps, both in the past and the future. As a result, each step of the denoising process can only make predictions based on local consistency of the trajectory. By composing many of these denoising steps together, however, local consistency can drive global coherence.
Trajectory representation.
Diffuser is a model of trajectories designed for planning, meaning that the effectiveness of the controller derived from the model is just as important as the quality of the state predictions. As a result, states and actions in a trajectory are predicted jointly; for the purposes of prediction, the actions are simply additional dimensions of the state. Specifically, we represent inputs (and outputs) of Diffuser as a two-dimensional array:

$$\tau = \begin{bmatrix} \mathbf{s}_0 & \mathbf{s}_1 & \cdots & \mathbf{s}_T \\ \mathbf{a}_0 & \mathbf{a}_1 & \cdots & \mathbf{a}_T \end{bmatrix} \tag{2}$$

with one column per timestep of the planning horizon.
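A minimal NumPy sketch of this array layout (dimensions are illustrative): stacking states and actions along the feature axis gives one column per planning timestep, matching Equation 2.

```python
import numpy as np

# Hypothetical sizes: a horizon-T plan with 11-dimensional states
# and 3-dimensional actions.
T, state_dim, action_dim = 32, 11, 3

states = np.zeros((state_dim, T))    # one column per planning timestep
actions = np.zeros((action_dim, T))  # actions are extra feature rows

# Shape (state_dim + action_dim, T): the two-dimensional array of Eq. 2.
trajectory = np.concatenate([states, actions], axis=0)
```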
Architecture. We now have the ingredients needed to specify a Diffuser architecture: (1) an entire trajectory should be predicted non-autoregressively, (2) each step of the denoising process should be temporally local, and (3) the trajectory representation should allow for equivariance along one dimension (the planning horizon) but not the other (the state and action features). We satisfy these criteria with a model consisting of repeated (temporal) convolutional residual blocks. The overall architecture resembles the types of U-Nets that have found success in image-based diffusion models, but with two-dimensional spatial convolutions replaced by one-dimensional temporal convolutions (Figure A1). Because the model is fully convolutional, the horizon of the predictions is determined not by the model architecture, but by the input dimensionality; it can change dynamically during planning if desired.

Algorithm 1 Guided Diffusion Planning
1: Require: diffusion model µ_θ, guide J, scale α, covariances Σ^i
2: while not done do
3:   Observe state s; initialize plan τ^N ∼ N(0, I)
4:   for i = N, . . . , 1 do
5:     // parameters of reverse transition
6:     µ ← µ_θ(τ^i)
7:     // guide using gradients of return
8:     τ^{i−1} ∼ N(µ + αΣ∇J(µ), Σ^i)
9:     // constrain first state of plan
10:    τ^{i−1}_{s_0} ← s
11:  end for
12:  Execute first action of plan τ^0_{a_0}
13: end while
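As a rough illustration of the U-Net building block described above, here is a PyTorch sketch of one temporal residual block; the kernel size, normalization placement, and layer names are assumptions for illustration, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class TemporalResBlock(nn.Module):
    """One residual block: two 1D (temporal) convolutions, each with
    group norm and a Mish nonlinearity; the timestep embedding is
    added after the first convolution (details are illustrative)."""

    def __init__(self, in_ch, out_ch, embed_dim, groups=8):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size=5, padding=2)
        self.norm1 = nn.GroupNorm(groups, out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size=5, padding=2)
        self.norm2 = nn.GroupNorm(groups, out_ch)
        self.act = nn.Mish()
        self.time_mlp = nn.Linear(embed_dim, out_ch)
        self.skip = (nn.Conv1d(in_ch, out_ch, 1)
                     if in_ch != out_ch else nn.Identity())

    def forward(self, x, t_embed):
        # x: (batch, channels, horizon); t_embed: (batch, embed_dim)
        h = self.act(self.norm1(self.conv1(x)))
        h = h + self.time_mlp(t_embed).unsqueeze(-1)  # broadcast over horizon
        h = self.act(self.norm2(self.conv2(h)))
        return h + self.skip(x)
```

Because every layer is convolutional over the horizon axis, the same weights apply to plans of any length, which is what allows the horizon to change at planning time.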
Training.
We use Diffuser to parameterize a learned gradient ε_θ(τ^i, i) of the trajectory denoising process, from which the mean µ_θ can be solved in closed form (Ho et al., 2020). We use the simplified objective for training the model, given by:

$$\mathcal{L}(\theta) = \mathbb{E}_{i,\, \epsilon,\, \tau^0}\big[\, \|\epsilon - \epsilon_\theta(\tau^i, i)\|^2 \,\big]$$

in which i ∼ U{1, 2, . . . , N} is the diffusion timestep, ε ∼ N(0, I) is the noise target, and τ^i is the trajectory τ^0 corrupted with noise ε. Reverse process covariances Σ^i follow the cosine schedule of Nichol & Dhariwal (2021).
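A compact PyTorch sketch of this training objective follows; `eps_model` and the shape conventions are placeholders, and the noise schedule is assumed to be precomputed:

```python
import torch

def diffusion_loss(eps_model, tau_0, alphas_cumprod):
    # tau_0: clean trajectories, shape (B, feat_dim, horizon).
    # alphas_cumprod: precomputed cumulative noise-schedule products, (N,).
    B = tau_0.shape[0]
    N = alphas_cumprod.shape[0]
    i = torch.randint(0, N, (B,))                      # diffusion timestep
    eps = torch.randn_like(tau_0)                      # noise target
    a = alphas_cumprod[i].view(B, 1, 1)
    tau_i = a.sqrt() * tau_0 + (1 - a).sqrt() * eps    # corrupted trajectory
    return ((eps - eps_model(tau_i, i)) ** 2).mean()   # simplified objective
```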
Reinforcement Learning as Guided Sampling
In order to solve reinforcement learning problems with Diffuser, we must introduce a notion of reward. We appeal to the control-as-inference graphical model (Levine, 2018) to do so. Let O_t be a binary random variable denoting the optimality of timestep t of a trajectory, with p(O_t = 1) = exp(r(s_t, a_t)). We can sample from the set of optimal trajectories by setting h(τ) = p(O_{1:T} | τ) in Equation 1:

$$\tilde{p}_\theta(\tau) = p(\tau \mid O_{1:T} = 1) \propto p_\theta(\tau)\, p(O_{1:T} \mid \tau).$$

We have exchanged the reinforcement learning problem for one of conditional sampling. Thankfully, there has been much prior work on conditional sampling with diffusion models. While it is intractable to sample from this distribution exactly, when p(O_{1:T} | τ^i) is sufficiently smooth, the reverse diffusion process transitions can be approximated as Gaussian (Sohl-Dickstein et al., 2015):

$$p_\theta(\tau^{i-1} \mid \tau^i, O_{1:T}) \approx \mathcal{N}(\tau^{i-1}; \mu + \Sigma g, \Sigma) \tag{3}$$

where µ, Σ are the parameters of the original reverse process transition p_θ(τ^{i−1} | τ^i) and

$$g = \nabla_\tau \log p(O_{1:T} \mid \tau)\big|_{\tau = \mu} = \nabla \mathcal{J}(\mu).$$

This relation provides a straightforward translation between classifier-guided sampling, used to generate class-conditional images (Dhariwal & Nichol, 2021), and the reinforcement learning problem setting. We first train a diffusion model p_θ(τ) on the states and actions of all available trajectory data. We then train a separate model J_φ to predict the cumulative rewards of trajectory samples τ^i. The gradients of J_φ are used to guide the trajectory sampling procedure by modifying the means µ of the reverse process according to Equation 3. The first action of a sampled trajectory τ ∼ p(τ | O_{1:T} = 1) may be executed in the environment, after which the planning procedure begins again in a standard receding-horizon control loop. Pseudocode for the guided planning method is given in Algorithm 1.
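The guided reverse process of Equation 3 and Algorithm 1 can be sketched in a few lines of PyTorch; `mu_model`, `J`, and the scalar-variance parameterization Σ^i = σ_i² I are illustrative assumptions:

```python
import torch

@torch.no_grad()
def guided_plan(mu_model, sigmas, J, s0, shape, state_dim, alpha=0.1):
    # sigmas[i]: std of the reverse transition at step i (Sigma = sigma^2 I);
    # J: differentiable return model; s0: current state for inpainting.
    tau = torch.randn(shape)                           # tau^N ~ N(0, I)
    for i in reversed(range(len(sigmas))):
        mu = mu_model(tau, i)                          # reverse-process mean
        with torch.enable_grad():                      # gradient for the guide
            mu_g = mu.detach().requires_grad_(True)
            grad = torch.autograd.grad(J(mu_g).sum(), mu_g)[0]
        noise = torch.randn_like(tau) if i > 0 else torch.zeros_like(tau)
        tau = mu + alpha * sigmas[i] ** 2 * grad + sigmas[i] * noise  # Eq. 3
        tau[:, :state_dim, 0] = s0                     # constrain first state
    return tau
```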
Goal-Conditioned RL as Inpainting
Some planning problems are more naturally posed as constraint satisfaction than reward maximization. In these settings, the objective is to produce any feasible trajectory that satisfies a set of constraints, such as terminating at a goal location. Appealing to the two-dimensional array representation of trajectories described by Equation 2, this setting can be translated into an inpainting problem, in which state and action constraints act analogously to observed pixels in an image (Sohl-Dickstein et al., 2015). All unobserved locations in the array must be filled in by the diffusion model in a manner consistent with the observed constraints.
The perturbation function required for this task is a Dirac delta for observed values and constant elsewhere. Concretely, if c_t is a state constraint at timestep t, then

$$h(\tau) = \delta_{\mathbf{c}_t}(\mathbf{s}_t),$$

a Dirac delta centered at the constraint value. The definition for action constraints is identical. In practice, this may be implemented by sampling from the unperturbed reverse process and replacing the sampled values with the conditioning values c_t after all diffusion timesteps. Even reward maximization problems require conditioning-by-inpainting because all sampled trajectories should begin at the current state. This conditioning is described by line 10 in Algorithm 1.
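In code, the Dirac-delta conditioning amounts to overwriting the constrained entries after each denoising step; this small sketch (names illustrative) would be called inside the sampling loop above:

```python
def apply_constraints(tau, constraints, state_dim):
    # constraints: mapping {timestep t: state value c_t}. Overwriting the
    # sampled array entries realizes the delta perturbation in practice.
    for t, c_t in constraints.items():
        tau[:, :state_dim, t] = c_t
    return tau
```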
Properties of Diffusion Planners
We discuss a number of Diffuser's important properties, focusing on those that are are either distinct from standard dynamics models or unusual for non-autoregressive trajectory prediction.
Learned long-horizon planning. Single-step models are typically used as proxies for ground-truth environment dynamics f , and as such are not tied to any planning algorithm in particular. In contrast, the planning routine in Algorithm 1 is closely tied to the specific affordances of diffusion models. Because our planning method is nearly identical to sampling (with the only difference being guidance by a perturbation function h(τ )), Diffuser's effectiveness as a long-horizon predictor directly translates to effective long-horizon planning. We demonstrate the benefits of learned planning in a goal-reaching setting in Figure 3a, showing that Diffuser is able to generate feasible trajectories in the types of sparse reward settings where shooting-based approaches are known to struggle. We explore a more quantitative version of this problem setting in Section 5.1.
Temporal compositionality. Single-step models are often motivated using the Markov property, allowing them to compose in-distribution transitions to generalize to out-of-distribution trajectories. Because Diffuser generates globally coherent trajectories by iteratively improving local consistency (Section 3.1), it can also stitch together familiar subsequences in novel ways. In Figure 3b, we train Diffuser on trajectories that only travel in a straight line, and show that it can generalize to v-shaped trajectories by composing trajectories at their point of intersection.
Variable-length plans.
Because our model is fully convolutional in the horizon dimension of its prediction, its planning horizon is not specified by architectural choices. Instead, it is determined by the size of the input noise τ N ∼ N (0, I) that initializes the denoising process, allowing for variable-length plans (Figure 3c).
Task compositionality.
While Diffuser contains information about both environment dynamics and behaviors, it is independent of reward function. Because the model acts as a prior over possible futures, planning can be guided by comparatively lightweight perturbation functions h(τ) (or even combinations of multiple perturbations) corresponding to different rewards. We demonstrate this by planning for a new reward function unseen during training of the diffusion model (Figure 3d).

Table 1. (Long-horizon planning) The performance of Diffuser and prior model-free algorithms in the Maze2D environment, which tests long-horizon planning due to its sparse reward structure. The Multi2D setting refers to a multi-task variant with goal locations resampled at the beginning of every episode. Diffuser substantially outperforms prior approaches in both settings. Appendix A details the sources for the scores of the baseline algorithms.
Experimental Evaluation
The focus of our experiments is to evaluate Diffuser on the capabilities we would like from a data-driven planner.
In particular, we evaluate (1) the ability to plan over long horizons without manual reward shaping, (2) the ability to generalize to new configurations of goals unseen during training, and (3) the ability to recover an effective controller from heterogeneous data of varying quality. We conclude by studying practical runtime considerations of diffusion-based planning, including the most effective ways of speeding up the planning procedure while suffering minimally in terms of performance.
Long Horizon Multi-Task Planning
We evaluate long-horizon planning in the Maze2D environments (Fu et al., 2020), which require traversing to a goal location where a reward of 1 is given. No reward shaping is provided at any other location. Because it can take hundreds of steps to reach the goal location, even the best model-free algorithms struggle to adequately perform credit assignment and reliably reach the goal (Table 1).
We plan with Diffuser using the inpainting strategy to condition on a start and goal location. (The goal location is also available to the model-free methods; it is identifiable by being the only state in the dataset with non-zero reward.) We then use the sampled trajectory as an open-loop plan. Diffuser achieves scores over 100 in all maze sizes, indicating that it outperforms a reference expert policy. We visualize the reverse diffusion process generating Diffuser's plans in Figure 4.
While the training data in Maze2D is undirected (consisting of a controller navigating to and from randomly selected locations), the evaluation is single-task in that the goal is always the same. In order to test multi-task flexibility, we modify the environment to randomize the goal location at the beginning of each episode. This setting is denoted as Multi2D in Table 1. Diffuser is naturally a multi-task planner; we do not need to retrain the model from the single-task experiments and simply change the conditioning goal. As a result, Diffuser performs as well in the multi-task setting as in the single-task setting. In contrast, there is a substantial performance drop when the best model-free algorithm in the single-task setting (IQL; Kostrikov et al. 2022) is adapted to the multi-task setting. Details of our multi-task IQL with hindsight experience relabeling (Andrychowicz et al., 2017) are provided in Appendix A. MPPI uses the ground-truth dynamics; its poor performance compared to the learned planning algorithm of Diffuser highlights the difficulty posed by long-horizon planning even when there are no prediction inaccuracies.
Test-time Flexibility
In order to evaluate the ability to generalize to new test-time goals, we construct a suite of block stacking tasks with three settings: (1) Unconditional Stacking, for which the task is to build a block tower as tall as possible; (2) Conditional Stacking, for which the task is to construct a block tower with a specified order of blocks; and (3) Rearrangement, for which the task is to match a set of reference blocks' locations in a novel arrangement. We train all methods on 10000 trajectories from demonstrations generated by PDDLStream (Garrett et al., 2020); rewards are equal to one upon successful stack placements and zero otherwise. These block stacking tasks are challenging diagnostics of test-time flexibility; in the course of executing a partial stack for a randomized goal, a controller will venture into novel states not included in the training configurations.
We use one trained Diffuser for all block-stacking tasks, only modifying the perturbation function h(τ) between settings. In the Unconditional Stacking task, we directly sample from the unperturbed denoising process p_θ(τ) to emulate the PDDLStream controller. In the Conditional Stacking and Rearrangement tasks, we compose two perturbation functions h(τ) to bias the sampled trajectories: the first maximizes the likelihood of the trajectory's final state matching the goal configuration, and the second enforces a contact constraint between the end effector and a cube during stacking motions. (See Appendix B for details.) We compare with two prior model-free offline reinforcement learning algorithms: BCQ (Fujimoto et al., 2019) and CQL (Kumar et al., 2020), training standard variants for Unconditional Stacking and goal-conditioned variants for Conditional Stacking and Rearrangement. (Baseline details are provided in Appendix A.) Quantitative results are given in Table 3, in which a score of 100 corresponds to a perfect execution of the task. Diffuser substantially outperforms both prior methods, with the conditional settings requiring flexible behavior generation proving especially difficult for the model-free algorithms. A visual depiction of an execution by Diffuser is provided in Figure 5.
Figure 5. (Block stacking) A block stacking sequence executed by Diffuser. This task is best illustrated by videos viewable at diffusion-planning.github.io.
Offline Reinforcement Learning
Finally, we evaluate the capacity to recover an effective single-task controller from heterogeneous data of varying quality using the D4RL offline locomotion suite (Fu et al., 2020). We guide the trajectories generated by Diffuser toward high-reward regions using the sampling procedure described in Section 3.2 and condition the trajectories on the current state using the inpainting procedure described in Section 3.3. The reward predictor J φ is trained on the same trajectories as the diffusion model.
We compare to a variety of prior algorithms spanning other approaches to data-driven control, including the model-free reinforcement learning algorithms CQL (Kumar et al., 2020) and IQL (Kostrikov et al., 2022); return-conditioning approaches like Decision Transformer (DT; Chen et al. 2021b); and model-based reinforcement learning approaches including Trajectory Transformer (TT; Janner et al. 2021), MOPO (Yu et al., 2020), MOReL (Kidambi et al., 2020), and MBOP (Argenson & Dulac-Arnold, 2021). In the single-task setting, Diffuser performs comparably to prior algorithms: better than the model-based MOReL and MBOP and the return-conditioning DT, but worse than the best offline techniques designed specifically for single-task performance. We also investigated a variant using Diffuser as a dynamics model in conventional trajectory optimizers such as MPPI (Williams et al., 2015), but found that this combination performed no better than random, suggesting that the effectiveness of Diffuser stems from coupled modeling and planning, and not from improved open-loop predictive accuracy.

Table 2. (Offline reinforcement learning) The performance of Diffuser and a variety of prior algorithms on the D4RL locomotion benchmark (Fu et al., 2020). Results for Diffuser correspond to the mean and standard error over 150 planning seeds. We detail the sources for the performance of prior methods in Appendix A.3. Following Kostrikov et al. (2022), we emphasize in bold scores within 5 percent of the maximum per task (≥ 0.95 · max).
Warm-Starting Diffusion for Faster Planning
A limitation of Diffuser is that individual plans are slow to generate (due to iterative generation). Naïvely, as we execute plans open loop, a new plan must be regenerated at each step of execution. To improve execution speed of Diffuser, we may further reuse previously generated plans to warm-start generations of subsequent plans.
To warm-start planning, we may run a limited number of forward diffusion steps from a previously generated plan and then run a corresponding number of denoising steps from this partially noised trajectory to regenerate an updated plan. In Figure 7, we illustrate the trade-off between performance and runtime budget as we vary the underlying number of denoising steps used to regenerate each new plan from 2 to 100. We find that we may reduce the planning budget of our approach markedly with only a modest drop in performance.

Figure 7. Performance on the Medium-Expert task when varying the number of diffusion steps used to warm-start planning. Performance suffers only minimally even when using one-tenth the number of diffusion steps, as long as plans are initialized from the previous timestep's plan.
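A sketch of this warm-start procedure, under the same illustrative naming as above: partially re-noise the previous plan to step k, then denoise for only k steps instead of the full N.

```python
import torch

def warm_start_plan(prev_plan, mu_model, sigmas, alphas_cumprod, k):
    # Forward-diffuse the previous plan to step k in closed form...
    a = alphas_cumprod[k]
    tau = a.sqrt() * prev_plan + (1 - a).sqrt() * torch.randn_like(prev_plan)
    # ...then run only k reverse (denoising) steps.
    for i in reversed(range(k)):
        noise = torch.randn_like(tau) if i > 0 else torch.zeros_like(tau)
        tau = mu_model(tau, i) + sigmas[i] * noise
    return tau
```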
Related Work
Advances in deep generative modeling have recently made inroads into model-based reinforcement learning, with multiple lines of work exploring dynamics models parameterized as convolutional U-networks (Kaiser et al., 2020), stochastic recurrent networks (Ke et al., 2018; Hafner et al., 2021a; Ha & Schmidhuber, 2018), vector-quantized autoencoders (Hafner et al., 2021b; Ozair et al., 2021), neural ODEs (Du et al., 2020a), normalizing flows (Rhinehart et al., 2020; Janner et al., 2020), generative adversarial networks (Eysenbach et al., 2021), energy-based models (EBMs), graph neural networks (Sanchez-Gonzalez et al., 2018), neural radiance fields, and Transformers (Janner et al., 2021; Chen et al., 2021a). Further, Lambert et al. 2020 have studied non-autoregressive trajectory-level dynamics models for long-horizon prediction. These investigations generally assume an abstraction barrier between the model and the planner. Specifically, the role of learning is relegated to approximating environment dynamics; once learning is complete, the model may be inserted into any of a variety of planning (Botev et al., 2013; Williams et al., 2015) or policy optimization (Sutton, 1990; Wang et al., 2019) algorithms, because the form of the planner does not depend strongly on the form of the model. Our goal is to break this abstraction barrier by designing a model and planning algorithm that are trained alongside one another, resulting in a non-autoregressive trajectory-level model for which sampling and planning are nearly identical.
A number of parallel lines of work have studied how to break the abstraction barrier between model learning and planning in different ways. Approaches include training an autoregressive latent-space model for reward prediction (Tamar et al., 2016; Oh et al., 2017; Schrittwieser et al., 2019); weighting model training objectives by state values (Farahmand et al., 2017); and applying collocation techniques to learned single-step energies (Rybkin et al., 2021). In contrast, our method plans by modeling and generating all timesteps of a trajectory concurrently, instead of autoregressively, and conditioning the sampled trajectories with auxiliary guidance functions.
Diffusion models have emerged as a promising class of generative model that formulates the data-generating process as an iterative denoising procedure (Sohl-Dickstein et al., 2015; Ho et al., 2020). The denoising procedure can be seen as parameterizing the gradients of the data distribution (Song & Ermon, 2019), connecting diffusion models to score matching (Hyvärinen, 2005) and EBMs (LeCun et al., 2006; Nijkamp et al., 2019; Grathwohl et al., 2020). Iterative, gradient-based sampling lends itself towards flexible conditioning and compositionality (Du et al., 2020b), which we use to recover effective behaviors from heterogeneous datasets and plan for reward functions unseen during training. While diffusion models have been developed for the generation of images (Song et al., 2021), waveforms (Chen et al., 2021c), 3D shapes (Zhou et al., 2021), and text (Austin et al., 2021), to the best of our knowledge they have not previously been used in the context of reinforcement learning or decision-making.
Conclusion
We have presented Diffuser, a denoising diffusion model for trajectory data. Planning with Diffuser is almost identical to sampling from it, differing only in the addition of auxiliary perturbation functions that serve to guide samples. The learned diffusion-based planning procedure has a number of useful properties, including graceful handling of sparse rewards, the ability to plan for new rewards without retraining, and a temporal compositionality that allows it to produce out-of-distribution trajectories by stitching together in-distribution subsequences. Our results point to a new class of diffusion-based planning procedures for deep model-based reinforcement learning.
In this section, we provide details about baselines we ran ourselves. For scores of baselines previously evaluated on standardized tasks, we provide the source of the listed score.
A.1 Maze2D experiments
Single-task. The performance of CQL and IQL on the standard Maze2D environments is reported in the D4RL whitepaper (Fu et al., 2020) in Table 2.
We ran IQL using the official implementation from the authors: github.com/ikostrikov/implicit_q_learning.
We tuned over two hyperparameters:

Multi-task. We only evaluated IQL on the Multi2D environments because it is the strongest baseline in the single-task Maze2D environments by a sizeable margin. To adapt IQL to the multi-task setting, we modified the Q-functions, value function, and policy to be goal-conditioned.
To select goals during training, we employed a strategy based on hindsight experience replay, in which we sampled a goal from among those states encountered in the future of a trajectory. For a training backup (s_t, a_t, s_{t+1}), we sampled goals according to a geometric distribution over the future states of the trajectory, recalculated rewards based on the sampled goal, and conditioned all relevant models on the goal during updating. During testing, we conditioned the policy on the ground-truth goal.
We tuned over the same IQL parameters as in the single-task setting.
A.2 Block stacking experiments
Single-task. We ran CQL using the following implementation:

Figure A1. Diffuser has a U-Net architecture with residual blocks consisting of temporal convolutions, group normalization, and Mish nonlinearities.
Multi-task. To evaluate BCQ and CQL in the multitask setting, we modified the Q-functions, value function and policy to be goal-conditioned. We trained using goal relabeling as in the Multi2D environments. We tuned over the same hyperparameters described in the single-task block stacking experiments.
A.3 Offline Locomotion
The scores for BC, CQL, IQL, and AWAC are from Table 1 in Kostrikov et al. (2022). The scores for DT are from Table 2 in Chen et al. (2021b). The scores for TT are from Table 1 in Janner et al. (2021). The scores for MOReL are from Table 2 in Kidambi et al. (2020). The scores for MBOP are from Table 1 in Argenson & Dulac-Arnold (2021).
Appendix B Test-time Flexibility
To guide Diffuser to stack blocks in specified configurations, we used two separate perturbation functions h(τ) to specify a given stack of block A on top of block B, which we detail below.
Final State Matching. To enforce a final state consisting of block A on top of block B, we trained a perturbation function h_match(τ) as a per-timestep classifier determining whether a state s exhibits a stack of block A on top of block B. We trained the classifier on the same demonstration data as the diffusion model.
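A sketch of what such a per-timestep classifier could look like is shown below. The network shape and hidden sizes are assumptions; the paper specifies only that the classifier operates on individual states and is trained on the same demonstration data as the diffusion model.

```python
import torch.nn as nn

class StackClassifier(nn.Module):
    """Per-timestep classifier for h_match (architecture is an assumption).

    Given a single state, outputs a logit for "block A is stacked on block
    B"; gradients of its log-probability can perturb the denoising process
    toward plans whose final states exhibit the desired stack.
    """
    def __init__(self, state_dim, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, states):               # states: (batch, state_dim)
        return self.net(states).squeeze(-1)  # stack logits per state
```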
Contact Constraint
To guide the Kuka arm to stack block A on top of block B, we construct a perturbation function $h_{\text{contact}}(\tau) = \sum_{i=0}^{64} -\lVert \tau_{c_i} - 1 \rVert_2$, where $\tau_{c_i}$ corresponds to the underlying dimension in state $\tau_{s_i}$ that specifies the presence or absence of contact between the Kuka arm and block A. We apply the contact constraint between the Kuka arm and block A for the first 64 timesteps in a trajectory, corresponding to initial contact with block A in a plan.
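Under the formula reconstructed above, the contact perturbation is a one-liner. In the sketch below, `contact_dim` (the index of the contact indicator within the state vector) is a hypothetical name for the dimension the paper refers to.

```python
import torch

def h_contact(tau, contact_dim, horizon=64):
    """Contact-constraint perturbation over the first 64 timesteps of a plan.

    `tau` has shape (T, state_dim); the penalty -|tau[i, contact_dim] - 1|,
    summed over i = 0..64, is largest (zero) when contact with block A is
    maintained throughout the initial segment of the trajectory.
    """
    c = tau[: horizon + 1, contact_dim]
    return -(c - 1.0).abs().sum()
```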
Appendix C Implementation Details
In this section we describe the architecture and record hyperparameters.
1. The architecture of Diffuser (Figure A1) consists of a U-Net structure with 6 repeated residual blocks. Each block consists of two temporal convolutions, each followed by group norm (Wu & He, 2018), and a final Mish nonlinearity (Misra, 2019). Timestep embeddings are produced by a single fully-connected layer and added to the activations of the first temporal convolution within each block.
2. We train the model using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 4e−05 and batch size of 32. We train the models for 500k steps.
3. The return predictor J has the structure of the first half of the U-Net used for the diffusion model, with a final linear layer to produce a scalar output.
5. We found that we could reduce the planning horizon for many tasks, but that the guide scale would need to be lowered (e.g., to 0.001 for a horizon of 4 in the halfcheetah tasks) to accommodate. The configuration file in the open-source code demonstrates how to run with a modified scale and horizon.
6. We use N = 20 diffusion steps for locomotion tasks and N = 100 for block-stacking.
7. We use a guide scale of α = 0.1 for all tasks except hopper-medium-expert, in which we use a smaller scale of 0.0001.
8. We used a discount factor of 0.997 for the return predictor $J_\phi$, though we found that planning was fairly insensitive to changes in the discount factor above γ = 0.99.
9. We found that control performance was not substantially affected by the choice of predicting noise versus uncorrupted data τ⁰ with the diffusion model. | 2022-05-23T01:15:42.294Z | 2022-05-20T00:00:00.000 | {
"year": 2022,
"sha1": "94dd96bf94cc20cd06dcdc98c4f573cbb7f27b47",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3ebdd3db0dd91069fa0cd31cbf8308b60b1b565e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
54715034 | pes2o/s2orc | v3-fos-license |
POMOLOGICAL AND TECHNOLOGICAL CHARACTERISTICS OF COLLECTED SELECTIONS OF CHERRY PLUM PRUNUS CERASIFERA EHRH.
Miletić R., M. Žikić, N. Mitić, and R. Nikolić (2005): Pomological and technological characteristics of collected selections of cherry plum Prunus cerasifera Ehrh. – Genetika, Vol. 37, No. 1, 39-47. A plantation collection containing 32 genotypes selected from spontaneous populations of cherry plum Prunus cerasifera Ehrh. was set up in the region of the Eastern Serbian town of Svrljig. The fruit trees were budded onto Prunus cerasifera seedlings and planted at 5x4 m spacing on a mild slope of south-western aspect. This study shows the most important characteristics of the 19 selections in the collection, and the average results recorded in the 2000-2003 period. The most significant characteristics of the trees, their productivity, and fruit and stone characteristics are presented. The average coarseness of fruits, i.e. their length, width and thickness, measured 25.0x24.4x25.0 mm, while stone coarseness was 14.5x10.3x6.6 mm. The average fruit weight was 12.1 g (24.3-4.8 g), and stone weight 0.85 g (2.2-0.3 g). Depending on fruit and stone weight, the mesocarp content was 93% (96.3-90.3%). Taking into consideration the possibility of fruit exploitation for the production of biologically high-quality food, the mesocarp chemical composition was thoroughly examined. The fruits were found to have increased contents of total acids, achieving an average of 3.09% (3.44-2.60%), which was the initial objective of this selection. Total solids content was 13.5% (16.2-10.3%), total soluble solids 12.5% (14.5-9.5%) and total sugars 6.00% (11.45-3.14%). Considering these characteristics, the selections that were
INTRODUCTION
Cherry plum is a species with a number of desirable characteristics. It has a wide distribution range, pronounced vitality, stability of yield and moderate cultivation requirements. Its fruits are used for generative rootstock in nursery production. They also provide good nourishment when consumed fresh, and may be used for the production of brandies, fruit juices and other processed foodstuffs. Across many Mediterranean countries, i.e. in Southern and Central Europe and mid-Asia, cherry plum is widespread as a wild species, but it also has cultivated and highly profitable forms. Breeding programmes in countries of the former USSR have resulted in a number of large-fruit cherry plum cultivars and hybrids (JANES and PAE, 2002).
Despite its favourable characteristics, cherry plum has a second-rate status in Serbia compared with other continental fruit species. This situation has called for the selection, collection, and comparative research of cherry plum forms with favourable properties. GAVRILOVIĆ and STANČEVIĆ (1960) selected a cherry plum named "Kablarka". STANČEVIĆ et al. (1988) described 12 selected cherry plum ecotypes in Serbia, while MILUTINOVIĆ et al. (1997) made similar reports of 16 genotypes in the region of Mt. Avala. GEORGIJEV et al. (1985) and RISTEVSKI (2001) conducted comparative examinations of a number of introduced cherry plum cultivars and local selected types under identical conditions in Macedonia. PEJKIĆ et al. (1991) initiated a cherry plum selection programme in the livestock-raising parts of Serbia, and the work was later continued by MILETIĆ (1995). The goal was to study the cherry plum population and select forms suitable for the fruit-juice industry. As a result of these efforts, a collection stand was set up and used for further investigation of the pomological and technological characteristics under identical conditions, which is also the subject of this research.
MATERIALS AND METHODS
The collection stand was set up in the area of Svrljig in 1995, and it included 32 genotypes selected from a spontaneous cherry plum population in Eastern Serbia. The fruit trees were grafted onto cherry plum seedlings and planted at 5x4 m spacing on a slight slope of south-western aspect. The soil was a moderately fertile degraded eutric cambisol. Tillage and pruning were practiced to form an improved pyramidal crown. Each selection was represented by five to seven seedlings.
Fruits for the planned investigation were harvested at full maturity. Fruit and stone coarseness were measured with a high-precision calliper square, and weight on a Mettler technical balance. Total solids were determined by drying the fruits at 105 °C to constant weight, and total soluble solids by refractometry.
Total sugars were determined by Bertrand's method, total acids by neutralization with NaOH, and mesocarp pH with a pH meter. The article presents the most important pomological and technological characteristics of the 19 most outstanding selections. The data were obtained from three vegetation seasons and statistically processed by the analysis of variance and LSD test.
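To make the statistical pipeline concrete, the sketch below runs a one-way ANOVA followed by Fisher's LSD-style pairwise comparisons. The fruit-weight numbers are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical fruit-weight samples (g) for three selections.
groups = [np.array([18.0, 18.5, 18.1]),
          np.array([16.2, 16.5, 16.4]),
          np.array([12.6, 12.8, 12.7])]

f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Fisher's LSD: pairwise t-tests using the pooled within-group variance.
n = sum(len(g) for g in groups)
k = len(groups)
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
for i in range(k):
    for j in range(i + 1, k):
        se = np.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
        t = abs(groups[i].mean() - groups[j].mean()) / se
        p = 2 * stats.t.sf(t, df=n - k)
        print(f"selections {i + 1} vs {j + 1}: t = {t:.2f}, p = {p:.4f}")
```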
RESULTS
Fruit and stone coarseness of the chosen cherry plum selections are shown in Table 1. Fruit coarseness was medium or large, and the shape round. The average fruit coarseness (length, width, thickness) for all selections was 25.0x24.4x25.0 mm. The average fruit length was 32.0-20.4 mm, width 30.0-20.0 mm and thickness 31.6-20.1 mm. Selections 15, 1, 20 and 9 were found exceptional regarding fruit coarseness. The stones were elongated and flat-shaped, with an average coarseness of 14.5x10.3x6.6 mm. Stone length was 18.2-11.6 mm, width 17.0-8.1 mm and thickness 8.2-5.0 mm. Larger stones were found in selections 6, 7, as well as 12 and 9, while small stones, which is a desirable trait, were found in selections 11, 23, 13 and 22. The analysis of variance and LSD test confirmed highly significant differences between most of these selections. In other words, the investigated cherry plum selections expressed their individual characteristics when grown under identical conditions.
Fruit and stone weight and mesocarp content are among the foremost parameters in considerations of any fruit species, especially fruits such as cherry plum, which is used for industrial processing. According to data presented in Table 2, the average fruit weight of the selected cherry plums was 14.5 g. The highest weights were 18.2 and 16.4 g (measured in selections 6 and 12, respectively), and the lowest 13.0 and 12.7 g (measured in selections 22 and 23, respectively). Furthermore, stone weight was 0.85 g on average, ranging from 0.3 g (selections 23 and 19) to 2.2 g (selection 24). The proportion of mesocarp was highest in selections 8 (96.3%) and 23 (95.8%), and lowest in selections 7, 14 (90.8%) and 25 (90.3%). Characteristically, mesocarp content in all selections was above 90.0%, which is an especially advantageous trait. Nevertheless, the analysis of variance and LSD test detected highly significant differences between most selections. Light yellow-skinned selections dominate the population. The fruits of the other selections are light or dark red, or red. Selection 9 was found to have a distinctly dark blue colour, which is rare in the population.
As cherry plum fruits are primarily used as raw material for the foodstuffs industry, their chemical composition is an important indicator of the quality of the chosen selections (Table 3). The selections are characterized by a high content of solids, 13.5% on average, and total soluble solids as high as 12.5%. The highest solids content was recorded in selections 7 (16.2%) and 18 (15.9%), and the lowest in selection 13 (10.3%). The situation was similar as regards total soluble solids: the highest values of 15.0% and 14.5%, and the lowest of 9.5%, were found in the same selections, respectively. Total sugar content was highest in selections 13 (11.45%) and 24 (9.53%), and lowest in selection 15 (3.14%).
The selections were characterized by high total acid contents, 3.09% on average. Selection 7 had an outstanding percentage of total acids (3.44%), followed by selections 21 and 15 (3.23%). On the other hand, the lowest total acid content was found in selection 13 (2.60%). Consequently, fruit acidity was high, with a pH of 3.45 on average, ranging from 3.75 (selections 21 and 11) to 3.06 (selection 13). The total sugars/acids ratio was as low as 1.94 on average. It was highest in selection 13 (4.4), and lowest in selection 15 (0.94).
Nearly all selections had a low sugar/acid ratio, meaning that they are mostly sour in taste and primarily suitable for the production of fruit juices. The selections showed significant differences regarding their chemical composition, i.e. the contents of individual components. Depending on their intended uses (method of processing), some of them may be singled out for their favourable solids contents or total sugars, and especially for total acids. A confirmation of this comes from the statistically processed data: highly significant and significant differences were determined between most selections for all mentioned aspects of chemical composition.
DISCUSSION
Due to their unrepressed generative reproduction, the cherry plum population includes forms with different characteristics. Some of them are notable for their high-quality fruits. STANČEVIĆ et al. (1988) selected 12 types of cherry plum with large and very large fruits, weighing 17.2-6.4 g, with mesocarp content measuring 97.2-92.7%, total soluble solids 16.5-11.5%, and total acids 1.54-0.48%. MILUTINOVIĆ et al. (1997) found high variability in the cherry plum population. The fruit weight of 16 selected genotypes ranged 19.88-8.76 g, mesocarp content 96.11-92.2% and soluble solids content 17.22-12.87%.
Especially invaluable to our work were the data reported by GEORGIJEV et al. (1985) and RISTEVSKI (2001). The cherry plum cultivars introduced into Macedonia had very large fruits and high weight. They were found to have a favourable chemical composition, which makes them suitable for consumption as fresh fruits. Some of them were characterized by high total acid contents. The cultivar Iskušenija contained 3.20% total acids, which corresponds to the values found in the selections in this investigation, which aimed at selecting forms with increased solids and total acid contents. The selections that were isolated and collected expressed the same or even better characteristics in the collection stand than in their natural habitats.
An exceptional genetic variability is due to the centuries-long adaptation of the cherry plum population to the agro-ecological conditions existing in their local habitats. According to MIŠIĆ (1983), and NENADOVIĆ-MRATINIĆ and KOJIĆ (1988), the region of Serbia may in this respect be designated as a secondary centre of divergence of P. cerasifera Ehrh. All these results show that cherry plum deserves much closer attention and pomological treatment. It is especially important in arid regions with extensive agriculture, where the more noble fruit species and cultivars of continental fruits fail to provide stable and high-quality production. Cherry plum grows, yields and reaches its comparatively early maturation (in July) under extensive agricultural conditions, i.e. without any special cultivation or tillage. Organized stands and moderate agricultural practices secure high yields and desirable fruit quality, while production costs are lower. Besides, cherry plum requires no chemical protection from parasites and pests, so its final products can be declared as having biological high quality and safety, which is a global trend in food production today. Its wide uses (ČOLIĆ et al., 2001), the existing selections and favourable agro-ecological conditions provide a basis for an organized and intensive commercial production.
The properties of our selections showed considerable differences, and the results of statistical data processing confirmed them. It is therefore difficult to single out those with the most advantageous characteristics, as they have a number of top qualities. Depending on the objective and purpose of cultivation (type of processing), it is nevertheless possible to make the most appropriate choice. This especially refers to the mesocarp chemical content. Besides, data obtained from this stand indicate possibilities for plantation growing and fruit production. The chosen selections, by all their properties, deserve closer attention in terms of preservation of biodiversity, gene bank formation and commercial cultivation.
CONCLUSION
Based on our investigation of several cherry plum selections chosen from a collection stand, the following conclusions can be made: The fruits were found to be round-shaped, medium large or large, with an average coarseness of 25.0x24.4x25.0 mm, while the average stone coarseness was 14.5x10.3x6.6 mm. The average fruit weight was 14.5 g (18.2-12.7 g), stone weight 0.85 g (2.2-0.3 g), and mesocarp content 93.0% (96.3-90.3%). The fruits were mostly yellow in colour, but also red or red-yellow. Mesocarp had a good soluble solids content of 12.5% (15.0-9.50%), as well as 6.00% (11.45-3.14%) total sugars, and a high total acids content of 3.09% (3.44-2.60%). The fruits of the chosen selections were found to be appropriate for different types of processing. Based on their intended purposes, it would be possible to choose selections with the most adequate characteristics.
Table 1. Fruit and stone coarseness
Table 2. Fruit and stone mass | 2018-12-11T19:25:46.310Z | 2005-01-01T00:00:00.000 | {
"year": 2005,
"sha1": "e78f4188aab3fa9b723c484973b300f25c387cdf",
"oa_license": null,
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0534-00120501039M",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e78f4188aab3fa9b723c484973b300f25c387cdf",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
249723744 | pes2o/s2orc | v3-fos-license | Impaired Sensitivity to Thyroid Hormones Is Associated With Elevated Blood Glucose in Coronary Heart Disease
Context: Thyroid hormones influence glucose homeostasis through central and peripheral regulation. To date, the association between thyroid hormone sensitivity and elevated blood glucose (EBG) in patients with coronary heart disease (CHD) remains unknown. The purpose of this study was to investigate the association between thyroid hormone sensitivity and risk of EBG in patients with CHD, and to further explore this association in different sexes and ages. Methods: This large multicenter retrospective study included 30,244 patients with CHD (aged 30–80 years) between 1 January 2014 and 30 September 2020. Parameters representing central and peripheral sensitivity to thyroid hormones were calculated. Central sensitivity to thyroid hormones was assessed by calculating the Thyroid Feedback Quantile-based Index (TFQI), Parametric Thyroid Feedback Quantile-based Index (PTFQI), Thyroid-stimulating Hormone Index (TSHI), and Thyrotropin Thyroxine Resistance Index (TT4RI); peripheral sensitivity to thyroid hormones was evaluated using the ratio of free triiodothyronine (FT3) to free thyroxine (FT4). Taking normal glucose tolerance (NGT) as a reference, logistic regression was used to analyse the relationship between central and peripheral thyroid hormone sensitivity and EBG in patients with CHD. Results: Among the 30,244 participants, 15,493 (51.23%) had EBG. The risk of EBG was negatively correlated with TSHI (OR: 0.91; 95% CI: 0.91 to 0.92; P < 0.001), TT4RI (OR: 0.99; 95% CI: 0.99 to 0.99; P < 0.001), TFQI (OR: 0.82; 95% CI: 0.80 to 0.84; P < 0.001) and PTFQI (OR: 0.76; 95% CI: 0.74 to 0.78; P < 0.001). Compared to males and patients aged 60 years and below, the OR value for EBG was lower in females and in patients aged over 60 years. Conversely, EBG risk was positively associated with FT3/FT4 (OR: 1.08; 95% CI: 1.07 to 1.09; P < 0.001), and in the sex-categorized subgroups, males had higher OR values than females. Conclusions: This study showed that thyroid hormone sensitivity is significantly associated with EBG in patients with CHD. This association is stronger in females than in males, and stronger in those aged over 60 years than in patients aged 60 years and below.
INTRODUCTION
Cardiovascular diseases are the leading cause of death worldwide (1), seriously affecting the patient's quality of life and longevity (2,3). The vast majority of cardiac deaths are caused by coronary heart disease (CHD) secondary to coronary atherosclerosis (4). The relationship between CHD and thyroid hormones is very close, and abnormal thyroid hormones often accompany patients with CHD (5-9). In addition, patients with CHD and EBG are at a higher risk of nephropathy and other cardiovascular diseases. CHD complicated with diabetes is associated with decreased insulin sensitivity, impaired blood pressure regulation, damaged vascular endothelial cells and dysfunction of the fibrinolytic system (10,11). Lowering blood sugar is extremely important to improve prognosis and prevent the onset of cardiovascular diseases (12-14). Diabetes has been clinically defined as fasting or postprandial hyperglycaemia or abnormally increased glucose excursion in response to an established glucose load. However, this clinical definition defines a relatively late stage in the disease process. Indeed, a defect in glucose homeostasis can be detected long before diabetes occurs.
For early prevention and treatment of secondary elevated blood glucose (EBG) in patients with CHD, identifying the risk factors of prediabetes and diabetes is a critical step. Recent theories show that hypothyroidism and subclinical hypothyroidism are risk factors for diabetes (15). However, some researchers believe that elevated thyroid stimulating hormone (TSH) is a risk factor for diabetes, and an increase in free triiodothyronine (FT3) and free thyroxine (FT4) has a protective effect on the occurrence of diabetes (16). TI de Vries et al. reported that there was no significant relationship between plasma TSH levels in the normal range and the incidence of diabetes in patients at high cardiovascular risk (17). These contradictory results are common. Moreover, almost all previous analyses focused on the influence of TSH and FT4 levels on the risk of prediabetes or diabetes. No research has investigated the association between thyroid hormone sensitivity and EBG in patients with CHD. Furthermore, results of previous studies investigating EBG prevalence in different sexes and ages have been inconsistent (18,19). Due to these inconsistent results, the relationship among EBG, sex, and age remains controversial.
Abbreviations: CHD, coronary heart disease; CI, confidence interval; DBP, diastolic blood pressure; EBG, elevated blood glucose; FBG, fasting blood glucose; FT3, free triiodothyronine; FT4, free thyroxine; HbA1c, glycated haemoglobin; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; NGT, normal glucose tolerance; OR, odds ratio; SBP, systolic blood pressure; SD, standard deviation; TC, total cholesterol; TG, triglyceride; TH, thyroid hormone; TSH, thyroid stimulating hormone.
The thyroid hormones regulate glucose homeostasis by interacting with the entire central nervous system and surrounding target organs (20). Therefore, this large-scale, multicenter retrospective study aimed to investigate the association of central and peripheral sensitivity to thyroid hormones in patients with CHD and EBG. Further, we aimed to explore the differences of these associations in different sexes and ages to provide a basis for clinical adjustments of medication according to the situation of individual patients with CHD, and to improve patient conditions.
Patients
Participants in this study were 107,301 CHD inpatients of cardiology departments from 1 January 2014 to 30 September 2020 from six hospitals in Tianjin. Patients who were younger than 30 years or older than 80 years; had malignancy, infectious disease, or severe liver or kidney disease; or had incorrect or missing data on TSH, FT3, FT4, fasting blood glucose (FBG), or glycated haemoglobin (HbA1c) were excluded based on the study design. Ultimately, 30,244 participants were included in the study. A flowchart of the patient recruitment process is shown in Figure 1. This study was approved by the ethics committee of Tianjin University of Traditional Chinese Medicine (approval number TJUTCM-EC20190008) and registered with the Chinese Clinical Trial Registry on 14 July 2019 (registration number ChiCTR-1900024535) and ClinicalTrials.gov on 18 July 2019 (registration number NCT04026724).
Data Collection
Trained medical staff collected personal medical history records. These records included information such as age, sex, history of smoking and drinking, which were investigated using standard structured questionnaires (21). Systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured by experienced technicians using automatic blood pressure monitors. Fasting venous blood samples were collected from all participants in the morning. FBG, total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), triglycerides (TG), low-density lipoprotein cholesterol (LDL-C), and HbA1c were measured directly using an automatic haematology analyser. Quality control was conducted in the laboratory according to standard procedures.
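The sensitivity indices themselves are defined in the cited methodological literature rather than in this paper. The sketch below implements the commonly cited formulas; the TSHI constant and the PTFQI reference means/SDs are assumptions drawn from that literature and placeholder population values, and units are assumed to be pmol/L for FT4 and mIU/L for TSH.

```python
import numpy as np
from scipy.stats import norm, rankdata

def thyroid_sensitivity_indices(ft3, ft4, tsh):
    """Sketch of the thyroid hormone sensitivity indices referenced above.

    TSHI and TT4RI follow the commonly cited formulas; TFQI uses empirical
    CDFs, and PTFQI uses normal CDFs with placeholder reference parameters
    (the loc/scale values below are illustrative, not study-derived).
    """
    ft3, ft4, tsh = (np.asarray(x, float) for x in (ft3, ft4, tsh))
    tshi = np.log(tsh) + 0.1345 * ft4               # Jostel's TSH index
    tt4ri = ft4 * tsh                               # thyrotropin T4 resistance index
    ecdf = lambda x: rankdata(x) / (len(x) + 1)     # empirical CDF in (0, 1)
    tfqi = ecdf(ft4) - (1.0 - ecdf(tsh))
    ptfqi = (norm.cdf(ft4, loc=15.0, scale=2.5)
             - (1.0 - norm.cdf(np.log(tsh), loc=0.4, scale=0.7)))
    return {"TSHI": tshi, "TT4RI": tt4ri, "TFQI": tfqi,
            "PTFQI": ptfqi, "FT3/FT4": ft3 / ft4}
```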
Baseline Characteristics
The baseline characteristics of the 30,244 participants are shown in Table 1. The average age of the subjects was 64 years old, and the proportion of females (48.3%) was slightly lower than that of males (51.7%). Among the participants, 15,493 patients demonstrated EBG (51.2%). The subjects were divided into two groups according to glucose metabolism level. Compared with NGT participants, EBG participants were more likely to be older females with hypertension, dyslipidaemia, and a tendency to smoke and drink. Hypertension, dyslipidaemia, age, smoking, and alcohol consumption were positively associated with EBG, and participants with EBG tended to have higher levels of FT3/ FT4 and lower levels of TSHI, TT4RI, and PTFQI.
Relationship Between Thyroid Hormone Sensitivity and EBG
The association between thyroid hormone sensitivity and EBG was estimated using different logistic regression models (Table 2). After adjusting for sex and age, as continuous variables, TSHI, TT4RI, PTFQI, and TFQI levels were negatively associated, and FT3/FT4 levels were positively associated, with EBG risk. These associations remained significant after multivariate adjustment. In the adjusted model, the OR values of TSHI, TT4RI, and PTFQI were lowest at T3 when T1 was used as the reference. In addition, when used as a continuous variable, FT3/FT4 level was positively correlated with EBG. Considering the differences between prediabetes and diabetes in the presence of abnormal glucose metabolism, different stratified analyses were performed (Supplementary Tables S1-S3).
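As an illustration of the kind of adjusted logistic model described here (not the authors' code), the sketch below fits an EBG indicator against a sensitivity index on synthetic data; the covariate set is simplified and the data-generating process is invented for demonstration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "tfqi": rng.normal(0.0, 0.3, n),
    "age": rng.integers(30, 81, n),
    "sex": rng.choice(["male", "female"], n),
})
# Synthetic outcome: lower TFQI -> higher probability of EBG, as reported.
logit_p = -0.8 * df["tfqi"] + 0.02 * (df["age"] - 64)
df["ebg"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model = smf.logit("ebg ~ tfqi + age + C(sex)", data=df).fit(disp=False)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```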
Relationship Between Thyroid Hormone Sensitivity and EBG in Different Sexes
The relationship between thyroid hormone sensitivity and EBG in the different sexes is shown in Table 3. Subgroup analyses stratified by sex showed that thyroid hormone sensitivity was significantly associated with EBG in both sexes. After adjusting for age, as continuous variables, the OR values for TSHI, PTFQI, and TFQI with EBG were slightly higher in males than in females. These differences remained significant in the fully adjusted model (P < 0.001). Of all the indicators representing central thyroid hormone sensitivity, the PTFQI had the lowest OR value, which was lower in females (OR: 0.76; 95% CI: 0.73 to 0.79; P < 0.001) than in males (OR: 0.77; 95% CI: 0.74 to 0.80; P < 0.001).
Relationship Between Thyroid Hormone Sensitivity and EBG in Different Age Stratifications
The relationship between thyroid hormone sensitivity and EBG in different age stratifications is shown in Table 4. After adjusting for sex, as continuous variables, the OR values of TSHI, PTFQI, and TFQI in patients aged 60 years and below were slightly higher than those in patients aged over 60 years.
Relationship Between Thyroid Hormone Sensitivity and EBG in Different Sexes and Different Ages
Based on the individual sex and age stratification results, the relationship between thyroid hormone sensitivity and EBG in different sexes and ages was analysed. As shown in Table 5, in the unadjusted and adjusted models, all the central thyroid hormone sensitivity indices had lower OR values with EBG in females aged over 60 years, and the peripheral thyroid sensitivity index had a higher OR with EBG in females aged 60 years and below.
DISCUSSION
To our knowledge, this is the first study to evaluate and confirm the relationship between central and peripheral sensitivity to thyroid hormone indicators and EBG risk in a large sample of patients in China with CHD. Our study found that the central thyroid hormone sensitivity indices TSHI, TT4RI, TFQI, and PTFQI were negatively associated with EBG risk in patients with CHD. With a gradual increase in TSHI, TT4RI, and PTFQI, the OR value of EBG also gradually decreased. Peripheral thyroid hormone sensitivity index FT3/FT4 was positively associated with EBG. Finally, most associations were observed in different sexes and age groups when considered separately.
Previous studies have shown that almost two-thirds of patients with cardiovascular disease suffer from abnormal glucose metabolism (30). Given the various changes of thyroid hormones in patients with CHD (31), the effects of thyroid hormones in patients with CHD and EBG deserve attention. However, previous studies showed that higher TSH and lower FT4 (16,32,33), lower TSH (34), higher FT4 (35), and lower FT3 (16,34) levels were all associated with the risk of EBG. Therefore, TSH or thyroid hormone levels alone may not be sufficient to explain the relationship between the thyroid system and glycaemic disorders. Given these inconsistencies in previously proposed central thyroid hormone sensitivity indices (TSHI, TT4RI) and the peripheral thyroid hormone sensitivity index FT3/FT4 (28,36-38), in 2019, Laclaustra et al. proposed new resistance indices of central thyroid hormones: the TFQI, and the PTFQI, which approximates the TFQI (29). These new indices may have smaller deviations and will not produce extreme values in cases of thyroid dysfunction, which will help to better explain the different associations between the changes in thyroid hormones and diabetes (29).
Based on the new indices, recent studies have found that increases in TSHI, TT4RI, and PTFQI were associated with reduced prediabetes risk, and an increased FT3/FT4 ratio was associated with an increased risk of prediabetes (23). Unlike our study, the latter found no significant correlation after adjusting for multiple confounding factors. Moreover, our results also suggest that elevated TFQI is associated with a reduced risk of EBG. Laclaustra et al. reported that cross-sectional TSHI levels were not associated with diabetes (29), which is contrary to our results and conclusions. The reasons for these differences are not clear, but may be attributable to confounding factors, differences in study partitions, and sample sizes. Therefore, further studies are needed to validate and confirm these results. Several pathways may explain the observed association between the thyroid hormone central resistance index and EBG. Previous studies have shown that thyroid dysfunction can increase insulin resistance in muscle and adipose tissue and reduce glucose transport in muscle cells (39,40). FT3 may also affect glucose-stimulated insulin secretion and the expression of an important protein of lipid metabolism (41); lower FT3 and FT4 levels can promote higher insulin resistance in tissues (42). The change in serum TSH may directly affect metabolic parameters and stimulate leptin secretion (43,44). It is well known that hepatic glucose output is critical for maintaining fasting glucose homeostasis. Leptin has been shown to stimulate hepatic glucose production in vivo and in vitro (45). Moreover, the loss of leptin can lead to problems such as overeating, decreased energy expenditure, and severe obesity, which are important risk factors affecting glucose homeostasis (46). Therefore, the sensitivity of central thyroid hormones may change leptin secretion, affect insulin resistance, and lead to a change in glucose metabolism level. However, the exact regulatory mechanism underlying the relationship between central thyroid hormone sensitivity and leptin remains unclear.
Interestingly, our study also found that when the T1 group was used as the reference, the level of the peripheral thyroid hormone sensitivity index FT3/FT4 was negatively associated with EBG in the T2 group, but significantly positively associated with EBG when used as a continuous variable. Previously, many studies confirmed the positive effects of elevated FT3/FT4 on diabetes, gestational diabetes, obesity-related inflammatory markers, cardiovascular risk, and arterial stiffness markers (38,47-49). Regarding the negative correlation of FT3/FT4 with EBG in the T2 group, the promotion of peripheral deiodinase activity may increase the level of FT3/FT4 (38), whereas the inhibition of peripheral deiodinase activity may reduce the basal metabolic rate, which is closely related to the pathogenesis of diabetes (50,51). To address sex- and age-specific differences noted in previous studies (35,52,53), we analysed the relationship between thyroid hormone sensitivity and diabetes separately by sex and age. The results showed that TSHI, PTFQI, and TFQI of females aged > 60 years had lower OR values for EBG risk, while FT3/FT4 had higher OR values. Sex hormones (such as oestrogen and testosterone) can regulate thyroid function, and oestrogen levels affect the development of diabetes, especially after 60 years of age (54-56). The differences in sex hormones may partly explain the sex differences in the relationship between thyroid hormone sensitivity and EBG found in this study. However, since this study did not measure the levels of sex hormones, further research is needed to explore this concept.
LIMITATIONS
There are several limitations to the present study. First, although large scale, this was a cross-sectional study, and causality cannot be inferred. However, the study supports the important hypothesis that adding the examination of thyroid hormone sensitivity levels to the examination of pure thyroid hormone levels may be helpful in assessing the risk of EBG. Second, although we have adjusted for many potential confounding factors, we cannot rule out the possibility that EBG is affected by other lifestyle variables, including iodine supplementation, which is intrinsically related to thyroid hormone levels. Third, this study was conducted among individuals who were Chinese, and racial differences may exist. Fourth, this study did not have a track record of whether participants were undergoing diabetes or thyroid disease-specific treatment. Therefore, well-designed randomized controlled trials are needed to validate these results.
CONCLUSION
The decrease in central thyroid hormone indices represents an increase in central thyroid hormone sensitivity. This study showed that thyroid hormone sensitivity is significantly associated with EBG in patients with CHD. This association is higher in females than in males, and the association in those aged over 60 years is higher than that in patients aged 60 years and below. This study provides reliable evidence which will improve the prevention strategies and clinical treatment of EBG in patients with CHD.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ACKNOWLEDGMENTS
We thank all the participants in the study and the members of the survey teams, as well as the financial support. | 2022-06-17T15:05:47.252Z | 2022-06-15T00:00:00.000 | {
"year": 2022,
"sha1": "9042e37710b4c03ab4dfb07cab52eb2e37089986",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2022.895843/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "08e1abbd349f5a8e98709c8168ff433a68d75c5a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232339811 | pes2o/s2orc | v3-fos-license | Barriers to and Facilitators of User Engagement With Digital Mental Health Interventions: Systematic Review
Background: Digital mental health interventions (DMHIs), which deliver mental health support via technologies such as mobile apps, can increase access to mental health support, and many studies have demonstrated their effectiveness in improving symptoms. However, user engagement varies, with regard to a user’s uptake and sustained interactions with these interventions. Objective: This systematic review aims to identify common barriers and facilitators that influence user engagement with DMHIs. Methods: A systematic search was conducted in the SCOPUS, PubMed, PsycINFO, Web of Science, and Cochrane Library databases. Empirical studies that report qualitative and/or quantitative data were included. Results: A total of 208 articles met the inclusion criteria. The included articles used a variety of methodologies, including interviews, surveys, focus groups, workshops, field studies, and analysis of user reviews. Factors extracted for coding were related to the end user, the program or content offered by the intervention, and the technology and implementation environment. Common barriers included severe mental health issues that hampered engagement, technical issues, and a lack of personalization. Common facilitators were social connectedness facilitated by the intervention, increased insight into health, and a feeling of being in control of one’s own health. Conclusions: Although previous research suggests that DMHIs can be useful in supporting mental health, contextual factors are important determinants of whether users actually engage with these interventions. The factors identified in this review can provide guidance when evaluating DMHIs to help explain and understand user engagement and can inform the design and development of new digital interventions.
Introduction
Background Nearly 1 in 5 adults in the United States experience a mental illness at some moment in their life [1]. Yet, accessing treatment for mental health problems can be difficult. Common barriers to mental health care include stigma, lack of available and evidence-based services, and inability to afford services [2,3]. In addition, people not diagnosed with a mental illness can experience periods of poor mental health and may benefit from support, although they have not sought professional treatment with a mental health provider. For instance, 73% of people surveyed in the United States experience stress related to money, work, and family responsibilities at a level that affects their mental health [4]. The translation of psychosocial interventions into digital formats, deemed digital mental health interventions (DMHIs), has the potential to overcome some existing barriers to traditional care and increase access to mental health support and resources.
DMHIs can be delivered via smartphone apps, internet websites, wearable devices, virtual reality, or video games [5] and range from self-guided DMHIs to those integrated with human support or traditional therapy [6]. Although some DMHIs have been shown to be as effective as traditional mental health services (eg, psychotherapy and pharmacotherapy) in improving mental health conditions such as depression [7] and can lead to greater reductions in anxiety compared with usual care [8], engagement with these technologies remains an ongoing issue, varies from study to study, and is typically lower in real-world use than in research studies [9]. For example, a review in 2018 found that participant adherence to internet-delivered cognitive behavioral therapy (CBT) can range from 6% to 100% [10]. Similarly, systematic comparisons in 2018 and 2019 of self-help DMHIs found that real-world uptake varies widely [9,11], and acceptability can be lower than traditional treatment [12].
This paper aims to systematically review the literature on DMHIs to identify common barriers and facilitators that may influence user engagement with these interventions. There are different ways to define user engagement. For example, engagement can refer to the time a user spends on an intervention. However, the time spent on an intervention varies between different types of interventions, and little time spent using a DMHI is not necessarily a negative feature per se. To get a comprehensive understanding of people's use of DMHIs, we use a broader definition of user engagement. In this review, user engagement refers to a user's uptake and sustained interactions with a digital intervention, which includes interest in adopting an intervention (demonstrated by signing up for it), initial uptake (demonstrated by engaging with features of the digital intervention, at a minimum during a demonstration as part of a study), and continued use of the intervention.
Understanding User Engagement With DMHIs
A range of factors can influence engagement with DMHIs, such as the relevance of information to the user provided by a digital intervention [13], a lack of user motivation to persist with a self-guided intervention [14], and poor user experience with the technology [15]. Although previous studies have each reported on some factors that can influence engagement, given a particular technology or context, a review is lacking that brings all these findings together. It is important to investigate the multitude of factors to fully understand the reasons for high versus low engagement. Previous reviews have highlighted the variability in engagement and uptake, analyzing both DMHIs published in the academic literature [9-11] and publicly available mental health apps in app stores [16]. However, these analyses did not report on factors related to this variability in engagement. This review seeks to address this gap by identifying the most common overarching factors that affect engagement.
Although analyzing engagement metrics of commercial apps can be used to examine variability in engagement, user studies are valuable to understand the underlying reasons why people may engage with some interventions more than others. For the purpose of this review, we focus on reviewing the academic literature.
Researchers and developers of DMHIs can use this knowledge to inform evaluations of engagement and the development of new digital interventions. In addition, it may provide insights into what services and facilitating conditions need to surround DMHIs to promote technology-enabled services and may help mental health service providers in selecting suitable interventions for their clients.
This review focuses on common mental health issues, such as depression, anxiety, psychological well-being and distress, and stress. There may be different barriers or facilitators for user engagement with other specific, serious mental illness interventions (eg, psychosis intervention) that are beyond the scope of this paper.
Inclusion Criteria
The inclusion and exclusion criteria of articles for this review are presented in Textboxes 1 and 2, respectively.
Textbox 1. Inclusion criteria.
• Report on an intervention aimed to improve mental health, psychological well-being, anxiety, depression, stress, and/or mood
• Report on an intervention delivered in a digital format, such as a smartphone app or website
• Report on some aspects of user experience (eg, usability, user satisfaction, and user feedback)
• Report on factors that affected user experience
• Include participants aged ≥16 years (eg, child and adolescent samples were excluded)
• Report on an empirical study (eg, literature reviews that synthesized findings from other articles, columns, opinion pieces, comments or replies, and editorials were excluded)
• Be a peer-reviewed article (eg, dissertations were excluded)
• Be written in English
Textbox 2. Exclusion criteria.
• Report on interventions that have a mental health component but do not have mental health as a primary intervention target (eg, an app that is primarily focused on physical pain symptoms, with a mental health component)
• Report on interventions that only serve as an appointment booking system for in-person therapy
• Report on interventions that are used as a component during an in-person session but cannot be used remotely outside of these sessions
• Articles published before January 1, 2010
The first exclusion criterion was added to identify barriers and facilitators that would be applicable to DMHIs. For example, a study that tests an app primarily focused on physical pain symptoms, with a mental health component, may find physical pain issues as a barrier to engaging with the app. It may not be clear from the study whether this is a common barrier related to DMHIs or interventions addressing physical pain.
The second and third exclusion criteria were added, as these types of interventions were designed to be a part of in-person sessions. It may not be clear from these studies whether users would be willing or able to engage with DMHIs apart from existing and traditional in-person sessions.
Finally, digital health interventions evolve rapidly [17,18], and the review was focused on the current state of DMHIs. Therefore, to avoid discussing on interventions or technologies that are now potentially out of date, the review was limited to contemporary studies published within the last 10 years (January 2010 to December 2019), a time frame that has been applied previously for systematic reviews on digital health technologies for mental illness [18].
Search Strategy
A literature search was conducted in multiple databases, including SCOPUS, PubMed, PsycINFO, Web of Science, and the Cochrane Library. On the basis of the inclusion criteria, a search query was developed to include an article if its title or abstract contained at least one keyword related to mental health, at least one keyword related to digital interventions, and at least one keyword related to user experience (Textbox 3; PRE/5 means that keywords were separated by a maximum of 5 words; for example, online PRE/5 intervention means there were 5 or fewer words between online and intervention).
The search query was built on keywords used in previous reviews on the uptake of mental health technologies [11,19], and additional keywords were added for the specific focus of this review (ie, the third part of the query with keywords related to user experience). The search terms for each database are included in Multimedia Appendix 1. Searches were not limited to the study design.
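To illustrate the three-part structure of the query without reproducing the full keyword lists (which are given in Multimedia Appendix 1), a minimal sketch of assembling such a Boolean string is shown below; the keywords are examples only.

```python
# Illustrative keyword groups; the full lists are in Multimedia Appendix 1.
mental_health = ["mental health", "depression", "anxiety", "wellbeing"]
digital = ["app", "smartphone", "internet", "web-based"]
user_experience = ["user experience", "usability", "engagement"]

def or_group(terms):
    """Join one keyword group into a parenthesized OR clause."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# An article matches only if its title/abstract contains at least one
# keyword from each of the three groups.
query = " AND ".join(or_group(g) for g in
                     (mental_health, digital, user_experience))
print(query)
```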
Study Selection
The search results were uploaded to Rayyan [20], a web-based software program for facilitating systematic reviews. Titles and abstracts were screened against the inclusion criteria, and excluded articles were labeled with reasons for exclusion.
The first author reviewed all titles and abstracts. Explicit inclusion criteria were determined between the first 3 authors a priori article selection to reduce coder bias. The coder (JB) was a PhD researcher with years of research expertise in user experience and thematic analysis.
A total of 6146 papers were extracted for the review. After the removal of 77 duplicates, 6069 article titles and abstracts were screened by the first author and discussed with the second and third authors. Uncertainties about inclusion were resolved by discussion among the first 3 authors, and reasons for exclusion or inclusion of these articles were discussed.
Furthermore, 480 full-text articles were reviewed, of which 208 met the inclusion criteria. Figure 1 shows a flow diagram of the screening papers. The same inclusion and exclusion criteria were used for reviewing the titles and abstracts in the screening phase and reviewing the full-text articles in the eligibility phase.
Articles that were not available were either not available on the web or were behind a paid firewall. Article types that were out of scope did not report on an empirical study.
Although there is a risk of bias in studies, the review considered all studies that met the inclusion criteria and included a large variety of different study methodologies, including qualitative studies with no reported quantitative outcomes. The primary focus of this review was to establish themes across the literature rather than extract the outcomes of quantitative studies. Therefore, the risk of publication bias with significant results is small compared with a meta-analysis of outcomes [21].
Data Extraction
A data extraction template (Multimedia Appendix 2) was developed for this review and piloted on 5 full-text papers. The main data elements extracted included reported factors, barriers, and facilitators to use and usage, such as retention and/or completion rate of the research study. The data were used to address the objective of the review to identify barriers and facilitators that influence user engagement.
Other extracted data were intended to document study and intervention characteristics, such as the type of technology and whether the intervention was publicly available, the target population, and the length of time that participants were able to engage with the intervention during the study.
Quality Assessment
To account for the methodological variety of studies, the quality of reporting tool by Carroll et al [22] was used to assess quality. This tool has been used earlier in systematic reviews that include qualitative and quantitative studies [23]. Using this tool, articles were assessed on 4 criteria: (1) was the study design explained, (2) was the recruitment and selection of participants explained (eg, random sampling and convenience sampling), (3) were details of the data collection method provided (eg, topic guides for interviews, number of items in a survey, use of open or closed items), and (4) were details of the analysis method provided (ie, form of analysis rather than merely reporting data were analyzed). Following the tool's guidelines, studies were considered to be adequately reported if a "yes" was assigned to 2 or more of these criteria.
Analysis
An inductive thematic analysis [24] was used to identify common themes among these factors. This means that no preexisting coding scheme was used; rather, codes were created based on what emerged from the data.
We used a single coder approach, in which the first author iteratively identified codes from the data and refined themes throughout the analysis. Single coder approaches are methodologically sound when they include checks on validity and reliability [25]. For our analysis, validity and reliability were assessed by reviewing a selection of codes and their corresponding text with the second and third authors and by refining the codes. This process is common in qualitative research [26]. As we used emergent coding and there was no a priori codebook, a single coder approach also allowed for consistency of coding and interpretation of codes, and this approach has been used in systematic reviews [21].
The first author began the analysis by systematically reviewing each paper. For each paper, the following sections were analyzed: abstract, results or findings, and discussion. Individual codes were created each time a factor was described that affected engagement with DMHIs.
Factors were considered a barrier or facilitator if it was explicitly defined as a facilitator or barrier by the authors of the paper and/or the description in the paper pointed to it being a barrier or facilitator. For example, "participants reported they did not use mental health apps because they had privacy concerns on what would happen with their information." In this instance, privacy concerns are identified as barriers.
A spreadsheet was used to keep track of the emerging codes. Each spreadsheet row corresponds to a single paper. The row contains the raw text of the paper that includes the identified factors and the initial codes. These codes were iteratively reviewed and compared with the raw text they were extracted from. Codes that referred to similar concepts, such as the ability to personalize an intervention and customize an intervention, were grouped together and given more descriptive names. As an understanding of the data was developed, earlier data were revisited to refine and combine codes, revalidating the previously coded material. Finally, the final codes were grouped into broader themes (eg, the roles of age, gender, and employment status were grouped into the theme demographic variables).
Study Characteristics
As seen in Multimedia Appendix 2, the 208 articles included in this review reported on 2 types of user studies: (1) 69 studies were needs assessments that aimed to understand user needs and attitudes toward DMHIs without or before engaging with a specific intervention as part of a study and (2) 135 studies were evaluation studies that assessed users' experience with a specific intervention over the course of the study. In total, 4 articles included both needs assessment and evaluation. Overall, 35 articles explored general user attitudes about DMHIs without focusing on a specific technology, whereas 173 studies focused on a specific technology (Table 1). Although all studies involved interventions for mental health, some studies focused on a particular area: 45 studies focused on depression, 22 studies focused on stress, 9 studies focused on anxiety, 6 focused on eating disorders such as bulimia nervosa, 4 studies focused on mood, and 2 studies focused on loneliness. Measures related to user engagement included time spent using an intervention, number of log-ins, usability, acceptability, and feasibility. The usability and acceptability of the technology were assessed using qualitative methods and standard measures, such as the survey based on the Unified Theory of Acceptance and Use of Technology [27], the Mobile Application Rating Scale [28], and the System Usability and After-Scenario Questionnaire [29]. Feasibility was defined in these studies as either completion of a program offered through the intervention or retention rate, which is the number of people who completed the research study as a proportion of the people who started the study. In total, 42 studies employed qualitative interviews to understand people's user engagement.
The number of participants involved in these studies ranged from 6 to more than 2 million. In total, 6 studies conducted a secondary analysis of the usage data of an existing intervention or health database. For these 6 studies, the sample size was relatively large, ranging from 3158 to 2,171,325 users. Among the remaining 202 studies, the sample size ranged between 6 and 1558 users. For instance, 25% (52/208) of the studies had <18 participants, 49.5% (103/208) had <40 participants, and 75% (156/208) had <177 participants. The extent to which participants were exposed to an intervention ranged from a short demonstration before a focus group or survey to up to 1 year of usage.
Quality Assessment
All studies were assessed as adequately reported (Multimedia Appendix 3 [143,172]). Each study reported on the research question, study design, and method of data collection. Overall, 11 studies did not report the recruitment and/or selection process of study participants [30-40]. In addition, 11 studies did not specify the analysis method used to analyze the data [33,41-50]. One study reported on the analysis method for the quantitative data that were collected but not for the qualitative data [51].
Intervention Characteristics
Table 2 shows the types of technologies studied in the articles, and Table 3 shows the types of treatments and/or resources offered by the technology. Web- and smartphone-based interventions were the most common, reported in 38.5% (80/208) and 27.4% (57/208) of the papers, respectively. The most common type of treatment was internet-based CBT. Other treatments and features included acceptance and commitment therapy, psychotherapy, positive psychological interventions, meditation, peer support, resources, monitoring of symptoms, and journaling.
The target population included students, transitional age youth (aged 16-24 years), refugees, people who were homeless, veterans diagnosed with post-traumatic stress disorder, mothers with postpartum depression, patients being treated for a mental illness or another health concern, older adults, and caregivers and workers experiencing stress. Not all interventions specified the target population.
Constructs Associated With User Engagement
Textbox 4 shows the high-level constructs derived from the thematic analysis influencing user engagement with DMHIs, where the numbers in parentheses show the number of articles in which the constructs were identified. We caution that the most frequently occurring constructs are not necessarily the most important but rather indicate that more studies have reported on this topic. Table 4 summarizes the main findings associated with each construct. After several iterations of grouping and coding, 16 larger groups remained: demographic variables, personal traits, mental health status, beliefs, mental health and technology experience and skills, integration into life, type of content, perceived fit, perceived usefulness, level of guidance, social connectedness, impact of intervention, technology factors, privacy and confidentiality, social influence, and implementation. These themes fit into 3 categories: user-related factors, program-related factors, and factors related to the technology and implementation environment. The next section provides more detailed explanations. The full list of factors belonging to each construct is included in Multimedia Appendix 4.
Textbox 4. The constructs influencing user engagement, grouped as constructs related to the user, the program offered by the intervention, and the technology and (implementation) environment. The numbers in parentheses indicate the number of articles in which the constructs occurred.
User-related constructs
• Demographic variables (sociodemographic factors, such as age, gender, and education): overall, women were more likely to engage with DMHIs^a than men.
• Personal traits (factors related to personality traits, such as neuroticism and extraversion): the personality traits neuroticism, agreeableness, openness, and resistance to change were associated with higher engagement, whereas extraversion was associated with lower engagement.
• Mental health status (factors related to the current mental health status of the user, such as the type and severity of symptoms): severity of mental health symptoms increased interest in DMHIs, but symptoms related to depression, mood, and fatigue were a barrier to actual engagement.
• Beliefs (beliefs held by the user with regard to technology, mental health, and mental health services): people's positive beliefs about mental health help-seeking and technology facilitated engagement.
• Mental health and technology experience and skills (previous experience the user has had with technology, mental health technology, and mental health services, and skills related to their digital, mental health, or digital health literacy): digital health literacy and positive experiences with mental health services and technology were facilitators of engagement.
• Integration into life (the extent to which the user is able to find time and space to use the intervention and make the intervention part of their routine or life): engagement was facilitated if people were able to integrate DMHI use into their daily lives.
Program-related constructs
• Type of content (the type of content and features offered by the intervention): engagement was facilitated if content was credible and if activities offered by the DMHI were of an appropriate length (ie, not too short or too long).
• Perceived fit (factors related to how well the intervention is appropriate to the user's culture and values and is adaptable to the user's needs rather than a one-size-fits-all solution): engagement was facilitated if information offered by a DMHI was customizable and relevant to the user.
• Perceived usefulness (factors related to expected benefits of using the digital intervention over existing resources): participants were more likely to engage with DMHIs if they understood the data and knew how to use them.
• Level of guidance (the level of guidance offered by the intervention on how [eg, when, how often] to use it, for example, through notifications or a coach): guided interventions, whether through a human therapist or automated reminders to use a DMHI, had higher engagement than unguided interventions.
• Social connectedness (the extent to which the intervention connects or isolates the user with or from others): being able to connect with other people through a DMHI facilitated engagement.
• Impact of intervention (the impact that intervention usage had on the user, such as an improvement or exacerbation of mental health symptoms [as measured by a validated survey scale]): DMHI engagement was facilitated if participants experienced a positive impact as a result of using a DMHI, such as the improvement of symptoms.
Technology-and environment-related constructs
• Technology-related factors (factors related to the technology through which the intervention is offered, such as the resources and costs required to use it, usability, and technical issues experienced by the user): technical issues were a common barrier to engagement.
• Privacy and confidentiality (factors related to data security, storage, confidentiality, and privacy of the digital intervention): engagement was facilitated if participants had a sense that the digital platform was private and anonymous and that they could safely disclose information.
• Social influence (factors from the users' social environment, such as perceptions held by their peers, family, and health care provider, that influence their intention to use an intervention): participants were more likely to use DMHIs if people close to them, such as family and friends, thought they should use DMHIs.
• Implementation (factors related to the implementation of the intervention that affect use, such as the availability of user training, the phase of the user's mental health care-seeking process during which the intervention is introduced or accessed, and characteristics of the health care organization supporting the DMHI): DMHI engagement was facilitated if people were trained on how to use it.

^a DMHI: digital mental health intervention.
User-Related Constructs
User-related factors refer to factors related to the user, such as personal beliefs, skills, and experiences.
Demographic Variables
Some demographic variables were found to be associated with DMHI engagement. Studies that found an effect of gender showed that women were more likely to adopt and engage with interventions [44,52-68]. Overall, 8 studies found an effect of age: 2 studies found that people aged ≤50 years engaged more with interventions than older adults [66,67]. These 2 studies used relatively large samples (1139 and 2,171,325 users), and participants were exposed to the intervention for up to 1 year. A total of 6 studies found higher engagement among adults aged ≥30 years [54,57,64,65,69-71]. These studies had smaller samples (74 to 577 people), and participants engaged with the intervention for shorter periods (up to 12 weeks). Age was also found to influence interest and expectations: users' interest in using digital therapy interventions increased with age [72], and Krause et al [58] found that older people have higher expectations of interventions.
Chudy-Onwugaje et al [73] found that age has an interacting effect with people's depression symptoms. For people aged ≤40 years, adherence increased with depressive symptoms, but there was no association between depressive symptoms and adherence in people aged >40 years. Although the reasons for this interaction were unclear from the study, the authors of the article theorize that the effect of symptoms may interact with familiarity with technology, with younger people being more comfortable using technology.
Other demographic variables associated with user engagement were as follows: (1) employment status, with people who worked full time more likely to use the intervention than people who were retired [66] or unemployed [54,68]; (2) education, with participants with higher education reporting more acceptance of interventions than people with lower education (a high school diploma or lower) [74][75][76]; and (3) housing situation, with people who were experiencing homelessness responding less to messages sent by a phone intervention compared with individuals with stable housing [55].
Personal Traits
Certain personality traits were associated with willingness and interest in using DMHIs. People who scored high on the Big 5 personality traits of neuroticism and agreeableness were more interested in using smartphone apps to reduce stress [77]. In a different survey reported in the same article, neuroticism was strongly linked to self-reported stress, and the authors suggested that the cooperative nature of agreeable people made it easier for them to accept new technology.
In addition, extraversion predicted a lower likelihood of preferring web-based mental health services over in-person services [72]. People who scored high on extraversion preferred to meet and connect with a doctor in person. Other personality characteristics associated with user engagement were resistance to change and openness to experience [56]. Higher openness predicted higher engagement with mindfulness and relaxation interventions. Contrary to the authors' hypothesis that higher resistance to change would lead to resistance to adopting a new health behavior, higher resistance instead predicted higher adherence: once people started using the intervention, a higher resistance to change facilitated commitment to continue using it.
Mental Health Status
A total of 59 studies reported that people's mental health status plays a role in participants' interest in and use of a digital intervention. First, certain mental health symptoms appeared to inhibit people's motivation and/or ability to interact with an intervention. Depressive symptoms [78] and low mood [79], as measured by validated scales, have been reported as barriers for people to access and use web-based resources. Study participants reported that feeling tired also negatively affected their motivation and ability to use an intervention [44,80]. Second, the severity of these symptoms was related to engagement with digital interventions. In needs assessment studies, participants were more willing to use DMHIs if their symptoms were more severe [38,53,62,71,81,82]. However, evaluation studies have shown that more severe symptoms hamper actual engagement with digital interventions [51,56,[83][84][85][86][87][88][89][90][91][92][93][94][95][96][97][98][99][100][101]. Depending on the type and severity of a person's mental health symptoms, studies that involved health care providers supporting digital intervention use reported that there was sometimes a need for face-to-face contact, as issues could be difficult to address remotely via a digital platform [102][103][104].
Beliefs
Beliefs refer to preexisting beliefs the user has about mental health help-seeking [88], their need for help [51,[105][106][107], the acknowledgment of having mental health needs [88], and using technology for mental health treatment [38,93,108,109]. For example, preexisting beliefs of needing help for mental health needs and having a positive perception about mental health help-seeking facilitated participants' engagement with an intervention. However, even if people acknowledged a perceived need for help and were willing to seek help, engagement with a particular intervention was then affected by a person's preconceived belief about whether a digital intervention would be effective [79,104,[110][111][112]. In 2 studies, participants did not want to use a digital intervention because technology was seen as a stimulant and distracting [113,114].
Mental Health and Technology Experience and Skills
Mental health literacy refers to knowledge about mental health symptoms and appropriate treatment options [238]. Digital literacy refers to the skills required to use technology [239]. Digital health literacy refers to the ability to use technology to find and use health resources [240]. Participants' mental health literacy [123], digital literacy [88,103,104,122,124-127], and digital health literacy [103,127] influenced the extent to which they were able to adopt and engage with DMHIs: for each type of literacy, higher literacy was associated with higher engagement.
Integration Into Life
Users reported that their engagement was affected by the extent to which they were able to integrate an intervention into their daily lives. Barriers that limited use included that participants felt they lacked time [44,[128][129][130][131] or constantly forgot to use an intervention [93,95,129], participants felt the intervention took too much time to use [132][133][134], and participants experienced difficulties establishing a routine of use that worked for them [130,135].
Access to a private space to access mental health resources also affected the extent to which participants could integrate an intervention into their lives. In 3 studies, participants mentioned that as opposed to going to a health care provider office, it was challenging to find a private space at home or work to use an intervention, which formed a barrier to engaging with it [136][137][138].
Studies have also found difficulties among users in integrating the information and tips offered by the intervention into their lives. For example, Jonathan et al [139] evaluated a smartphone app for people with serious mental illness. Participants who spent most of their day indoors without leaving their house had a hard time trying to use the tips in actual real-life scenarios.
Summary of User-Related Constructs
In summary, user engagement with DMHIs is partly influenced by factors related to the users themselves. Demographic variables such as age, gender, employment, education, and housing situation can affect user engagement. The personality traits neuroticism, agreeableness, openness, and resistance to change facilitated engagement, whereas extraversion was a barrier.
If mental health symptoms were more severe, participants were more interested in using DMHIs, but symptoms related to depression, low mood, and tiredness prevented engagement.
People's beliefs about and past experiences with mental health services and technology were facilitators if these beliefs and experiences were positive, and they formed a barrier if these beliefs and experiences were negative. Participants' literacy in understanding mental health and using technology facilitated their ability to use DMHIs, and any further engagement depended on the extent to which people were able to integrate it into their daily lives.
Program-Related Constructs
The second group of constructs is related to the type of therapy or content offered through the DMHI.
Type of Content
Higher satisfaction with the type of content and features offered increased user engagement. Uncertainty about the credibility of the information, which related to the evidence base of the intervention and the source of information, was a barrier [74,88,127,[140][141][142][143]. Other factors related to the modality through which content was delivered, with some participants preferring to have audio or video options in addition to text-only information [144] and whether the content was considered by users to have a supportive, nonjudgmental tone [51,145,146].
Some interventions offered programs of a fixed length or time commitment, such as a CBT program consisting of 8 weekly sessions. The length of the program as well as the length of individual sessions played a role in participants' satisfaction and their motivation to continue with the program [88,138,147-151]. In 2 studies evaluating an 8-week self-guided CBT program, the length and pace of modules negatively affected user motivation [88], and participants reported a preference for more concise modules [148], although the articles did not attempt to identify an ideal module length. In other studies evaluating a CBT program that included in-person sessions with a therapist, some participants reported a preference for both longer individual therapy sessions (greater than the standard 50-70 minutes) [151] and a longer duration of treatment to maximize benefit [150].
Perceived Fit
Perceived fit refers to the extent to which users felt the intervention was appropriate and relevant to their culture and values and/or targeted to people similar to them, rather than a one-size-fits-all solution. This fit was, for example, facilitated by relevance of information to their current situation [14,46,150,152-157] and the ability to customize or personalize the intervention [30,46,83,84,122,134,135,138,158-165]. A further facilitating factor was whether users were able to identify with the people presented in the intervention [166], such as coaches, instructors, or examples of people with similar experiences. Factors that made the information relevant and linguistically accessible included culturally appropriate content [133,167,168], a reading level suitable to the user [168], and content presented with limited jargon or technical language [169].
Perceived Usefulness
Perceived usefulness refers to the user's experience with an intervention and their perception of whether the intervention would be useful to them. This perception was facilitated by whether users were able to understand the data presented to them [104,117,170], whether it was clear what action they should take [129,133,154,155,166], and whether the intervention provided a clear advantage over past or current care received [103,117,121,155,171]. Identified facilitators were easier access to services that users would otherwise not have access to [103,173] and the elimination of the need to travel a long distance to a health center [121].
Level of Guidance
The level of guidance refers to the extent to which users were guided to use an intervention, for example, through reminders or a web-based supporter holding them accountable to regularly engage with the content. A facilitating factor in using DMHIs was whether use of the intervention increased users' locus of control, meaning that users felt more ownership over their own health [14,84,95,124,174,175]. However, for interventions that were completely self-guided, participants experienced difficulty engaging and at times neglected to use the intervention [44,95]. Participants expressed a need for more structured use, for example, through app reminders or a human coach checking in on them on a regular basis [49,50,113,122,133,137,139,148,150,163,175-182]. In 6 studies, users stated that they would prefer an intervention to serve as a complement to existing in-person therapy rather than a replacement for it [30,122,134,139,166,183].
Social Connectedness
The effect that an intervention had on participants' sense of social connectedness was found to facilitate user engagement. For example, being able to connect to peers or have regular contact with a personal therapist through DMHIs facilitated engagement in 18 studies [14,32,33,104,114,122,125,133,156,184-194]. In 6 studies, a noted barrier among both users and service providers was a concern about social avoidance, that is, a concern that people might use self-guided interventions in lieu of coming into a clinic in person and engaging in therapy or group sessions [78,104,122,133,138,195]. For therapy interventions, where study participants were introduced to therapists who they did not know before using the intervention, the extent to which participants could connect emotionally with the therapist (therapeutic alliance) influenced engagement. Participants' ratings of the quality of the emotional connection were positively related to the number of log-ins, frequency of self-monitoring mood, and completion of therapy [196].
Impact of Intervention
Participants reported that the perceived changes they experienced in their mental health as a result of using an intervention affected their further engagement. Perceived symptom improvement facilitated further engagement [89,95,103,145,146,189,[197][198][199], whereas exacerbation of symptoms negatively impacted engagement [49,93,104,200].
Other negative impacts of the intervention were also observed as barriers to ongoing user engagement. For example, information shared within the digital intervention triggered difficult memories or emotions or exposed participants to negative comments by other users [150], and participants were uncomfortable with some exercises or information [80].
Summary of Program-Related Constructs
In summary, the content offered by a DMHI had to be credible and ideally offered in more than one modality. Participants engaged with DMHIs if they felt the intervention was a good fit, which could be facilitated if content was relevant, and the DMHI was customizable, culturally appropriate, and used a language that was understandable to the participant. Engagement was facilitated by participants' perception of whether a DMHI was useful, which included whether they were able to understand the data and how to use it, and whether a DMHI provided a clear advantage over resources they already had access to.
Guided DMHIs had higher engagement than unguided interventions, and participants liked being able to connect with other people, although some studies identified concerns that DMHIs could be used to avoid in-person contact. The negative and positive impacts of DMHI use could form barriers and facilitators, respectively, to further engagement.
Technology-and Environment-Related Constructs
The third group of constructs refers to factors related to the technology itself or the implementation of the technology.
Technology-Related Factors
Technology-related factors refer to factors related to the technology through which the intervention was offered. The primary barrier to engagement noted in 25 studies was users' experience of technical issues [44,50,80,92,100,103,118,129,138,155,172,179,185,195,205,208,[212][213][214][215][216][217][218][219][220], such as a mobile app crashing and shutting down unexpectedly; in 3 studies, participants did not have the resources required to use an intervention [171,221,222]. In 7 studies, participants expressed concerns over the eventual costs associated with using an intervention [85,93,104,123,165,223,224]. Costs could be related to the need for a smartphone, having internet access, or making purchases through the app. Usability issues formed a barrier to engaging with an intervention [46,50,78,84,[148][149][150]157,159,170,[224][225][226][227][228]. Examples of usability issues were difficulty finding information in an intervention [78], a time-consuming process to log in to an intervention [159], and difficulty navigating within an intervention [150,157].
In addition to technical issues that formed barriers to engagement, there were also factors related to technology that facilitated the use of mental health resources and support. Facilitating factors made possible by the technology used were the flexibility of being able to access resources at any location [47,127] at any time [41,93,97,124,129,134,167,[229][230][231][232][233] and having a temporal record of health data, such as symptoms, that users were able to track and access over time [176,180].
Privacy and Confidentiality
Privacy and confidentiality relate to how data were stored and shared and whether users felt safe and comfortable disclosing confidential information through an intervention. In 2 studies, participants were uncomfortable about their physical location being recorded [180,234], and in a study by Nicholas et al [234], participants were more comfortable with health information, such as sleep and mood, being recorded than with personal data, such as social activity and communication logs.
Accessing mental health resources via a digital platform raised concerns regarding privacy. Facilitators of user engagement and feeling safe to disclose information included assurance that the digital platform was private and participants' information could not be easily accessed by third parties [129,158,205,235].
Participants in 5 studies expressed that concerns about confidentiality formed a barrier to engagement [51,104,127,236,237]. A facilitator to create a safe environment was moderation of the intervention [140], which means that a person was monitoring and moderating the content shared by and between users within an intervention.
Anonymity was found to be both a facilitator and a barrier to engagement. Overall, 7 studies listed anonymity, meaning that users could share and receive information anonymously, as a facilitating factor to engage and encourage disclosure of information [41,88,129,137,141,148,232]. However, anonymity could also make it more difficult for participants to trust a coach who they did not know [85,127]. In these studies, participants interacted with the coach through text, and there was the option to disclose names, but neither side could see each other. Other study participants were concerned about whether an intervention was truly anonymous if it was used in a small setting, with a limited number of known users [137]. Anonymity was also more important for people who were older and who had previous experience with medical treatment [127].
Social Influence
Users' engagement was facilitated by whether the intervention was endorsed by other users [43] or peers [211,241], their friends and family [97], or their current health care provider [152,161,242]. However, if participants felt forced by others to use an intervention, it deterred them from using it [103]. If an intervention was used as part of ongoing in-person therapy, the way that therapists used or were willing to use an intervention influenced participants' engagement with an intervention [84,103,127,150,169,212,243]. The adoption of an intervention as part of therapy depended on the therapist's digital literacy skills [244,245], their past experience with mental health technology [120], and the ability to easily integrate its use into their practice as a provider [132,195,205,211,224,226,246].
Implementation
Although most studies in this review (93%, 194/208) primarily focused on factors related to the user and the intervention itself, 14 studies also described factors related to the implementation of the intervention. Examples included whether users received training on how to use the intervention [115,247] and whether it was introduced early or at a later stage in ongoing therapy. Participants in a study by Graham et al [68] used an intervention to support their mental health while in treatment for substance use. These participants found the intervention more useful at a later stage of treatment, as by then they were more familiar with their health and better able to make sense of the information provided by the intervention. Two other studies found that participants engaged more with an intervention if they were just starting treatment [104,178]. Two studies found that the way in which the intervention was labeled and introduced to users also mattered: the term mental health was disliked by participants [248], and participants reported that they would be more likely to use an app if it was framed as supporting well-being and mental fitness rather than mental health [37]. Other implementation factors were administrative barriers [42,118,129,211] and barriers related to the organization in which the intervention was or would be implemented [118,122,135]. Examples of administrative barriers were inadequate staffing and poor communication among staff members; an example of an organizational barrier was a lack of support for DMHIs among managers.
Summary of Technology-and Environment-Related Constructs
In summary, although DMHIs introduced technical and usability issues that could form a barrier for participants to engage, the digital format also provided flexibility to access resources anywhere at any time and to have a record of health data. It was important that information was private and that participants could safely disclose information anonymously, although complete anonymity also made it more difficult to trust other people on the platform. Negative and positive opinions held by other people about DMHIs could form a barrier and facilitator, respectively, to engagement, and if DMHIs were to be used as part of ongoing therapy, the therapists' past experience with DMHIs and the ability to integrate it into their practice played a role in user engagement. Finally, successful implementation facilitated user engagement. Providing training on how to use DMHIs and labeling an intervention for well-being or mental fitness (as opposed to mental health) can help users engage with DMHIs more. Participants may be more engaged with DMHIs if they are just starting treatment, but the identified benefit of introducing DMHIs at a later stage is that users may be more knowledgeable about their health and better able to make sense of their health information.
Principal Findings
This study aims to synthesize the literature on DMHIs and summarize the identified factors affecting user engagement with DMHIs. This review identifies 3 key areas that all contribute to DMHI engagement: (1) user characteristics, such as severe mental health symptoms, can form a barrier to engagement; (2) users' experience of the program or content, with participants more likely to engage if they perceive the program to be useful and a good fit to them; and (3) the technology and implementation environment, such as technical issues being a common barrier to engaging with DMHIs. Providing content that is relevant and customizable according to personal preferences and offering technical assistance and/or training are important to achieve engagement. However, although these considerations may increase interest and uptake of DMHIs, it is important to understand whether characteristics specific to the user, such as their symptoms, will affect motivation to engage with these interventions. We first discuss the 3 key areas in more detail in the following three subsections; compare our constructs with other models on user engagement; and then discuss implications for researchers, developers, and health service providers.
User Constructs
Individual differences among users can affect engagement, including demographic variables such as age and gender, personality traits, mental health status, beliefs about mental health and DMHIs, experience with technology and mental health, and people's ability to integrate DMHI use into their lives. Although the severity of symptoms may increase interest in engaging with health interventions [249], symptoms related to depression, mood, and tiredness were found to hamper actual engagement. This contrast may point to the unique implications that mental health symptoms can have on engagement with DMHIs.
The contrasting role of symptom severity between studies highlights the importance of ensuring that people who are more interested in DMHIs, and who may benefit most from their use, are not prevented by their symptoms from actually engaging with these interventions. The contrast also illustrates the importance of including users at various stages of the design process, as people may be interested in the concept of a DMHI but may not be able to actually engage with it because of the nature of their symptoms.
Although studies looking at DMHI usage over 1 year found that younger people were more engaged with DMHIs, shorter research studies (ie, up to 12 weeks) found that older people were more engaged. Potentially, older adults perform better on study adherence, and younger people continue to engage more with an intervention long term, although the different interventions and settings make it difficult to make a direct comparison between these studies.
Program Constructs
Engagement with DMHIs was facilitated if participants liked the type of content; they perceived a DMHI to be a good fit for them and perceived it to be useful; there was a level of guidance on how to use it, it facilitated social connectedness, and it had a positive impact, such as improvement of symptoms.
Level of Guidance
Guided interventions typically have higher engagement than unguided interventions. However, human guidance can be resource intensive, and it may not always be possible or feasible to provide the desired level of guidance. Although human support enhances engagement more than automated means such as email reminders [250], several studies included in our review found that such automated reminders not only facilitated engagement but were also experienced positively by users. Automated reminders to use an intervention may therefore be a low-cost alternative to human support. The benefits of automated reminders may depend on the type of support and the type of barriers they are designed to address. Short text-based reminders may be suitable for in-the-moment interventions [251] and may help address barriers such as forgetting to use an intervention. On the other hand, human support may be more suitable for addressing a lack of motivation and for facilitating social connectedness.
Furthermore, appropriate time commitments differ for self-guided exercises versus guided sessions: participants across studies preferred shorter self-guided modules but longer guided therapy sessions. Finally, personalization may also meet different preferences. People who find videos or text-based material time-consuming may be more engaged with shorter actionable exercises, whereas people with a preference for synchronous communication may engage more when they get dedicated time in one-on-one sessions. It would be worthwhile to further explore how engagement can be encouraged in self-guided interventions.
Social Connectedness
An important facilitator was whether a DMHI facilitated social connectedness and enabled the user to interact with other people. Previous work has shown that social support through social networks not only increases engagement but may also have a positive effect on depression symptoms [221,222]. However, in some studies in this review, mental health service users and providers were concerned that technology would facilitate social avoidance if people were to use a digital intervention in lieu of engaging in face-to-face individual or group therapy. It appears that it is important that an intervention allows users to connect with other people with whom they may have otherwise not connected, rather than replacing any existing face-to-face contact. For example, people can access a mental health app if they are not able to speak to someone in person about their concerns [175].
Technology and Environment Constructs
Offering mental health resources through technology involves both barriers and facilitators. Technical issues and concerns about privacy were common barriers, but technology also offered flexibility and could facilitate anonymity. Furthermore, the environmental context in which DMHIs are to be used is important to consider. Participants were more likely to use DMHIs if people close to them thought they should use them and if they received training on how to use them.
Anonymity
Anonymity was a prominent topic among studies, but engaging with an intervention anonymously was seen as both a barrier to and a facilitator of engagement, sometimes within the same study. This difference can be explained by factors related to the user, the implementation setting, and the type of intervention features that were anonymous, as outlined in the following paragraph.
First, a facilitating aspect of an anonymous intervention was that study participants found it less stigmatizing than seeing a therapist in person. Anonymity may be an important facilitator for people who have experienced stigma and embarrassment, which are known barriers to help-seeking for mental health concerns [2,3]. Similarly, prior work on mental health discourse on the web found that anonymity does not hinder the social support that people receive on their posts, which can facilitate open conversations, and that social media may be particularly useful for stigmatized conditions such as mental illness [252]. Second, the study setting matters. Interventions that are used in a relatively small setting may give a false sense of anonymity if it is possible for users to find out who else is using the intervention, for example, through content shared within the intervention or by seeing someone use it [108]; this is important to consider for intimate settings, such as schools, workplaces, or small communities. Third, on community forums, where users could share their experiences and comment on other users' posts, anonymity was generally seen as a facilitator to safely disclose information. In one-on-one sessions, however, where the user interacted with a coach or therapist but neither side could see the other, anonymity made it more difficult to establish a relationship and trust compared with a face-to-face session.
These differing perceptions shed light on an important trade-off: should an intervention strive to be anonymous to address stigma and potential embarrassment, or should it focus on allowing people to establish a trusted relationship with someone? This decision may depend on the objective of the intervention and whether anonymity is possible in the context in which it is to be used. Alternatively, a hybrid form or multiple options can be considered and offered. For example, forums with a larger number of users can be anonymous, whereas a private one-on-one session with a therapist can include telehealth options to allow for therapeutic alliance building between the user and therapist. The Supportive Accountability Model [250] also proposes that engagement is enhanced if human coaches are seen as trustworthy and that users may disclose more in computer-mediated than in face-to-face communication. Although Mohr et al [250] argue that providing additional information about individuals, such as photographs, may reduce these benefits of mediated communication, it may be important to establish initial trust with a coach or therapist. Additional research is needed to understand how best to support trust in DMHIs.
Privacy
A previous review of user engagement with mental health apps theorized that one reason for low engagement is that these apps do not consider user privacy [15]. In our review, privacy was discussed in terms of data storage and sharing but also with respect to the physical environment in which these interventions were to be used. Delivering mental health support through a digital platform was found to increase a sense of privacy in some studies but to decrease it in others. In the former studies, participants stated that they could access care more privately, without anyone knowing about it. In line with previous work [2], this again indicates that privacy can be important for people who experience stigma around, or reluctance toward, help-seeking. Although participants' living situation was not explicitly discussed in these studies, when compared with other studies, it is likely that participants were able to engage with these interventions in physically private settings. In other studies, where people did not have access to a private space, a lack of privacy was a barrier to engagement. For example, study participants evaluating an app that delivered remote web-based therapy felt that they could disclose more in a closed therapist office than through a web-based intervention at home, where other people in their household could see or disrupt them [136]. Study participants who used a mental health intervention in the workplace [137] said privacy was not possible, as colleagues could see what someone was doing at their desk and when they were interacting with the intervention.
These differing experiences highlight that technology can overcome existing privacy barriers of seeking mental health care but can also introduce other privacy issues, and users' situational context (ie, where they are physically accessing the digital intervention) should be taken into account.
Comparison With Other Models of Technology and Digital Health Intervention Engagement
Some of the themes identified in this review overlap with previous models conceptualizing engagement with digital health interventions, as well as general technology acceptance and health behavior, such as the Efficiency Model of Support [251], Technology Acceptance Model [253], and Health Belief Model [249].
For instance, the Efficiency Model of Support [251] states that human support increases engagement with digital health interventions when it addresses 1 of 5 failure points: usability, implementation, fit, engagement, and knowledge. These broadly map to our constructs of technology factors, integration into life, perceived fit, beliefs, and experience and skills. The implementation failure point in the efficiency model pertains to whether the user can apply knowledge gained from an intervention in their lives. Our review extends this concept, in that we found that an important issue is whether users can integrate the actual use of the intervention into their everyday routines.
Our findings are in line with the Technology Acceptance Model, which explains that users' decisions to accept and use a technology are influenced by perceived usefulness, ease of use, and the social influence of others. The Health Belief Model explains that the adoption of health interventions is influenced, among other things, by a person's belief in the severity of their illness or health symptoms and the perceived benefits of seeking treatment for these symptoms, which map onto our constructs of beliefs and impact of the intervention. Themes revealed in this review that have not been highlighted in these previous models are the level of guidance, integration into life, and social connectedness. This gap may be explained by the way in which mental health interventions were intended to be used. To be effective, most DMHIs were intended to be used regularly by users on their own. This characteristic introduces the challenge for people to integrate the intervention into their routine and to have the discipline to use it regularly; therefore, the level of guidance provided within the intervention may have a particularly salient effect on engagement. Social connectedness may be especially important for mental health interventions, as it can improve mood [254] and help combat depression [255].
Implications
In this review, we have synthesized the literature on DMHIs to identify common factors influencing user engagement. The implications of this synthesis are as follows.
• Researchers can use these factors to develop constructs that are important to measure when evaluating DMHIs. More concretely, it is important to capture user characteristics, users' experience of the program and content, and details regarding the implementation setting. These constructs may help explain why someone would use one DMHI over another and may help evaluate how engaging a DMHI will be.
• Developers can use these factors to facilitate engagement with DMHIs. Specifically, when developing a DMHI, it is important to understand the specific characteristics of the target audience, for example, if the severity of the audience's symptoms can form a barrier to engagement; to tailor the program to the audience, such as offering the option to customize content; and to address issues related to the technology and environment, for example, by mitigating technical issues and providing technical assistance.
• Mental health service providers, such as clinicians, can use this overview as guidance to select interventions that are appropriate for their clients or to help their clients select suitable interventions. For example, it is important to consider whether an intervention can be easily integrated into clients' lives and routines. In addition, Multimedia Appendix 2, which shows the full data set, can be filtered by study setting, target population, and symptoms to see which barriers and facilitators have been observed for similar settings and populations.
The themes highlighted in this review identify factors that can facilitate engagement and barriers that should be considered to facilitate the successful implementation of a digitally mediated mental health intervention.
Limitations
We did not limit this review to particular study designs. As such, this review takes a much broader look at what factors influence engagement with digital mental health technologies rather than focusing on a single research method or technology. However, because of the heterogeneity of the included studies, we were unable to conduct a meta-analysis. In addition, there was inconsistency across studies in measures used to assess user engagement, such as the number of log-ins to an intervention, the length of continuing to engage with it, the total time spent using an intervention, or a self-reported measure of engagement by participants. This inconsistency has been found to be an issue in previous reviews on the user engagement of DMHIs [11,256]. This review was limited to peer-reviewed empirical articles. Although the review included articles that evaluated people's experience with both research interventions and commercially available DMHIs, it is possible that some interventions may have been missed.
Finally, this review was conducted before the global COVID-19 pandemic. There may be unique factors that are pandemic related that make DMHI engagement more or less likely. For example, stay-at-home orders may exacerbate feelings of social isolation and make people more likely to engage with apps that increase social connectedness. On the other hand, it may also introduce additional barriers to finding a private space to use DMHIs if sheltering in place with others. The results presented in this review should be interpreted and used to understand DMHI engagement before and after the pandemic. A future review could be conducted solely during the pandemic period, and it could be compared with this review to understand DMHI use outside versus during a pandemic.
Conclusions
Previous studies have shown the potential of DMHIs to improve mental health. However, for these interventions to be clinically effective, they require engagement by users in real-world settings. Across the studies reviewed, we identified 16 common factors that affect user engagement. Further research on DMHIs can use these factors as guidelines when evaluating interventions with users, and future interventions can be developed with these factors in mind. By understanding the factors that affect engagement, targeted strategies can be developed to overcome addressable barriers and work toward the successful implementation of these interventions.
Acknowledgments
The authors thank Vicky Yu who helped with data entry for this manuscript.
Conflicts of Interest
SMS has received consulting payments from Otsuka Pharmaceuticals. All other authors declared no conflicts of interest.
Multimedia Appendix 4
Overview of barriers and facilitators for each theme.
"year": 2021,
"sha1": "893b0a390c82140e66abca65226ea876a93696cf",
"oa_license": "CCBY",
"oa_url": "https://www.jmir.org/2021/3/e24387/PDF",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c4c44674f0849892107f2391646cc4fa3431324b",
"s2fieldsofstudy": [
"Psychology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
New strong convergence method for the sum of two maximal monotone operators
This paper aims to obtain a strong convergence result for a Douglas–Rachford splitting method with inertial extrapolation step for finding a zero of the sum of two set-valued maximal monotone operators without any further assumption of uniform monotonicity on any of the involved maximal monotone operators. Furthermore, our proposed method is easy to implement and the inertial factor in our proposed method is a natural choice. Our method of proof is of independent interest. Finally, some numerical implementations are given to confirm the theoretical analysis.
Introduction
Let H be a real Hilbert space with scalar product ⟨·, ·⟩ and induced norm ‖·‖. An operator A : H → 2^H with domain D(A) is said to be monotone if

⟨x − y, u − v⟩ ≥ 0, ∀u ∈ Ax, v ∈ Ay.

A monotone operator A is maximal monotone if its graph is not properly contained in the graph of any other monotone operator.
Let us consider the inclusion problem of the form

0 ∈ (A + B)x, (1)

where A and B are set-valued maximal monotone operators in H. Throughout this paper, we assume that the set of solutions of (1), denoted by S, is nonempty. The proximal point algorithm (PPA) is the well-known method for solving inclusion problem (1) (see Lions and Mercier 1979; Martinet 1970; Moreau 1965; Rockafellar 1976). The PPA for solving (1) is expressed as

z_{k+1} = (I + λ(A + B))^{−1}(z_k), (2)

where λ > 0 is the proximal parameter. Now, implementing PPA (2) to solve (1) requires computing the resolvent operator of the sum A + B exactly. This is very difficult to implement and could be as hard as the original inclusion problem (1). This difficulty has led many authors to consider the operator splitting approach to solve (1). The aim of an operator splitting method is to circumvent the computation of J_{A+B} when implementing (2) and instead consider the computation of J_A and J_B separately (Eckstein and Bertsekas 1992; Glowinski and Le Tallec 1989; Lions and Mercier 1979).
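To make this concrete, here is a minimal numerical sketch (our illustration, not taken from any of the cited works) of the PPA (2) for the toy problem 0 ∈ ∂|x| + (x − b), in which, exceptionally, the resolvent of the sum A + B is available in closed form; all function names are ours. In general this closed form is unavailable, which is precisely what motivates splitting.

import numpy as np

def soft(v, tau):
    # soft-thresholding: the resolvent of tau * (subdifferential of |.|)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ppa(b, lam=1.0, iters=200):
    # PPA (2) for A = subdifferential of |.| and Bx = x - b:
    # z_{k+1} solves z + lam*(sign(z) + z - b) ∋ z_k, which here reduces to
    # z_{k+1} = soft(z_k + lam*b, lam) / (1 + lam).
    z = np.zeros_like(b)
    for _ in range(iters):
        z = soft(z + lam * b, lam) / (1.0 + lam)
    return z

print(ppa(np.array([3.0, -0.5])))  # zero of A + B is soft(b, 1) = [2., 0.]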
When both A and B are single-valued linear operators in (1), Douglas and Rachford (1956) proposed the following method for solving heat conduction problems:

(I + λA)u_{k+1/2} = (I − λB)u_k,
(I + λB)u_{k+1} = u_{k+1/2} + λBu_k. (3)

We can eliminate u_{k+1/2} in (3) above and obtain

(I + λB)u_{k+1} = (I + λA)^{−1}(I − λB)u_k + λBu_k. (4)

Define z_k := J_B^{−1}(u_k) ⇔ u_k = J_B(z_k), where J_A := (I + λA)^{−1} and J_B := (I + λB)^{−1} denote the resolvents of A and B. Then, (4) reduces to the following splitting method (known as the Douglas–Rachford splitting method)

z_{k+1} = J_A(2J_B − I)z_k + (I − J_B)z_k. (5)

Boţ et al. (2015) gave the following method for solving (1): choose z_0 = z_1 ∈ H and set

y_k = z_k + α_k(z_k − z_{k−1}),
z_{k+1} = y_k + λ_k[J_A(2J_B − I)y_k − J_B(y_k)], (6)

where {α_k} is a non-decreasing sequence with 0 ≤ α_k ≤ α < 1 for all k ≥ 1, and the relaxation parameters {λ_k} together with two further constants σ, δ > 0 satisfy conditions (a) and (b) stated in Boţ et al. (2015).

Motivations and contributions

Boţ et al. (2015) obtained a weak convergence analysis of algorithm (6) for finding common zeros of the sum of two maximal monotone operators and illustrated their results through some numerical experiments. The same conditions (a) and (b) have been used in recent works in Dong et al. (2018), Shehu (2018), and other associated papers. When α_k = 0, it was proved in Bauschke and Combettes (2011, Thm. 25.6(vii)) that {z_k} in (6) converges strongly to a solution of (1) if either A or B is uniformly monotone (A is uniformly monotone if ⟨x − y, u − v⟩ ≥ φ(‖x − y‖) for all u ∈ Ax and v ∈ Ay, where φ : [0, ∞) → [0, ∞) is increasing and vanishes only at zero) on every nonempty bounded subset of its domain. When λ_k ≡ 1 and B ≡ 0, (6) reduces to the inertial proximal point method proposed by Alvarez and Attouch (2001). In this case, Alvarez and Attouch (2001) assumed that the inertial factor α_k satisfies the condition 0 ≤ α_k ≤ α_{k+1} ≤ α < 1/3 in their convergence result. However, the assumptions on the inertial factor α_k imposed in (6) do not appear as simple as the condition 0 ≤ α_k ≤ α_{k+1} ≤ α < 1/3 assumed by Alvarez and Attouch (2001).
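For intuition only, the following sketch (ours; it does not implement conditions (a) and (b), and the constant inertial factor alpha is an illustrative choice) runs the Douglas–Rachford iteration (5) and the inertial variant (6) with λ_k ≡ 1 on the same toy problem 0 ∈ ∂|x| + (x − b) used above.

import numpy as np

def prox_l1(v, lam):
    # J_A for A = subdifferential of |.| (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_affine(v, lam, b):
    # J_B for Bx = x - b: solve x + lam*(x - b) = v
    return (v + lam * b) / (1.0 + lam)

def dr_step(z, lam, b):
    # one step of (5): J_A(2 J_B - I)z + (I - J_B)z
    jb = prox_affine(z, lam, b)
    return prox_l1(2.0 * jb - z, lam) + (z - jb)

def inertial_dr(b, lam=1.0, alpha=0.2, iters=200):
    # method (6) with relaxation lambda_k = 1 and constant inertia alpha:
    #   y_k = z_k + alpha*(z_k - z_{k-1});  z_{k+1} = dr_step(y_k)
    z_prev = np.zeros_like(b)
    z = np.zeros_like(b)  # z_0 = z_1
    for _ in range(iters):
        y = z + alpha * (z - z_prev)
        z_prev, z = z, dr_step(y, lam, b)
    return prox_affine(z, lam, b)  # J_B(z*) solves (1)

print(inertial_dr(np.array([3.0, -0.5])))  # expected: [2., 0.]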
Problems in infinite-dimensional spaces arise in many disciplines, such as economics, image recovery, electromagnetics, quantum physics, and control theory. For such problems, strong convergence of the sequence of iterates {z_k} of the proposed iterative procedure is often much more desirable than weak convergence. This is because strong convergence translates the physically tangible property that the energy ‖z_k − z‖ of the error between the iterate z_k and a solution z eventually becomes arbitrarily small. The importance of strong convergence is also underlined in the work of Güler (1991), where a convex function f is minimized through the proximal point algorithm. Güler (1991) showed that the rate of convergence of the value sequence {f(z_k)} is better when {z_k} converges strongly than when it converges weakly. For more details on the importance of strong convergence, please see Bauschke and Combettes (2001).
Strong convergence methods for solving problem (1) when B is a set-valued maximal monotone operator and A is a single-valued α-inverse strongly monotone operator (i.e., ⟨Ax − Ay, x − y⟩ ≥ α‖Ax − Ay‖², ∀x, y ∈ H) have been studied extensively in the literature (see, for example, Boikanyo 2016; Chang et al. 2019; Cholamjiak 2016; Cholamjiak et al. 2018; Dong et al. 2017; Gibali and Thong 2018; López et al. 2012; Riahi et al. 2018; Shehu 2016, 2019; Shehu and Cai 2018; Thong and Cholamjiak 2019; Wang and Wang 2018). However, there are still few strong convergence results concerning the more general case of problem (1) in which both A and B are set-valued maximal monotone operators. This is the gap that this paper aims to fill.
Our aim in this paper is to prove strong convergence of the inertial Douglas–Rachford splitting method under conditions different from the conditions (a) and (b) assumed in Boţ et al. (2015), and without assuming uniform monotonicity of either maximal monotone operator A or B. Furthermore, our assumptions on the inertial factor α_k in this paper are the same as the assumptions in the results of Alvarez and Attouch (2001) (which is a special case of our result). In summary:
• We prove strong convergence of the inertial Douglas–Rachford splitting method without using the conditions (a) and (b) assumed in Boţ et al. (2015). Our inertial conditions are the same as the ones assumed in Alvarez and Attouch (2001) for finding a zero of a set-valued maximal monotone operator using the inertial proximal method.
• We obtain strong convergence results without assuming that any of the involved maximal monotone operators is uniformly monotone on every nonempty bounded subset of its domain. Our strong convergence results are much more general than the current ones in Bauschke and Combettes (2011) and other associated works where strong convergence is obtained.
• Some numerical examples are given to confirm the importance of the presence of the inertial term in our method.
The paper is therefore organized as follows: We first recall some basic explanations of Douglas-Rachford splitting method and introduce our inertial Douglas-Rachford splitting method alongside some results in Sect. 2. The analysis of strong convergence of our proposed method is then investigated in Sect. 3. We give numerical implementations in Sect. 4 and conclude with some final remarks in Sect. 5.
Preliminaries
Let us first recall some basics that are required to derive and analyze the Douglas–Rachford splitting method; for the corresponding details, we refer to Eckstein and Bertsekas (1992), He and Yuan (2015), Svaiter (2011), and Zhang and Cheng (2013). Let λ > 0 be a fixed parameter, and let us denote by

J_A := (I + λA)^{−1} and J_B := (I + λB)^{−1}

the resolvents of A and B, respectively, which are known to be firmly nonexpansive (an operator T is firmly nonexpansive if ⟨x − y, Tx − Ty⟩ ≥ ‖Tx − Ty‖², ∀x, y ∈ H). Furthermore, let us write

R_A := 2J_A − I and R_B := 2J_B − I

for the corresponding reflections (also called Cayley operators), and note that the reflections are nonexpansive operators (T is nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ H). In Eckstein and Bertsekas (1992) and He and Yuan (2015), the maximal monotone operator S_{λ,A,B} is defined through its resolvent,

J_{S_{λ,A,B}} = J_A(2J_B − I) + (I − J_B).

It was shown in Eckstein and Bertsekas (1992) that the Douglas–Rachford splitting method (5) can be converted to

z_{k+1} = J_{S_{λ,A,B}}(z_k).

By Eckstein and Bertsekas (1992, Thm. 5), for any given zero z* of S_{λ,A,B}, J_B(z*) is a zero of A + B. Therefore, J_B(z*) is a solution of (1) whenever z* satisfies

0 ∈ S_{λ,A,B}(z*), or equivalently, z* = R_A∘R_B(z*). (7)

Consequently, the Douglas–Rachford splitting method (5) can be rewritten as

z_{k+1} = z_k − e(z_k, λ), (8)

where e(z_k, λ) := ½(z_k − R_A∘R_B(z_k)).
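As a consistency check (our addition, not part of the original text), the following one-line computation, using only the definitions R_A = 2J_A − I and R_B = 2J_B − I, verifies that the fixed-point form (8) coincides with the splitting method (5):

\[
z_k - e(z_k,\lambda)
= \tfrac{1}{2}\bigl(z_k + R_A(R_B z_k)\bigr)
= \tfrac{1}{2}\Bigl(z_k + 2J_A\bigl((2J_B - I)z_k\bigr) - (2J_B - I)z_k\Bigr)
= J_A\bigl((2J_B - I)z_k\bigr) + (I - J_B)z_k .
\]

In particular, z is a fixed point of (8) exactly when z = R_A∘R_B(z), which is the characterization used in (7).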
We next recall some properties of the projection. For any point u ∈ H, there exists a unique point P_C u ∈ C such that

‖u − P_C u‖ ≤ ‖u − y‖, ∀y ∈ C.

P_C is called the metric projection of H onto C. We know that P_C is a nonexpansive mapping of H onto C. It is also known that P_C satisfies

⟨x − y, P_C x − P_C y⟩ ≥ ‖P_C x − P_C y‖², ∀x, y ∈ H. (10)

In particular, we get from (10) that

‖P_C x − P_C y‖ ≤ ‖x − y‖, ∀x, y ∈ H. (11)

Furthermore, P_C x is characterized by the properties

P_C x ∈ C and ⟨x − P_C x, y − P_C x⟩ ≤ 0, ∀y ∈ C. (12)
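The next snippet (ours; the projection onto a Euclidean ball is chosen purely for illustration) numerically checks the firm nonexpansiveness (10) and the characterization (12) for a concrete metric projection.

import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, radius=1.0):
    # metric projection onto C = {y : ||y|| <= radius}
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

x, y = rng.normal(size=3), rng.normal(size=3)
px, py = proj_ball(x), proj_ball(y)

# (10): <x - y, P_C x - P_C y> >= ||P_C x - P_C y||^2
assert np.dot(x - y, px - py) >= np.dot(px - py, px - py) - 1e-12

# (12): <x - P_C x, c - P_C x> <= 0 for every c in C
for _ in range(100):
    c = proj_ball(rng.normal(size=3))
    assert np.dot(x - px, c - px) <= 1e-12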
This characterization implies that

‖x − P_C x‖² + ‖y − P_C x‖² ≤ ‖x − y‖², ∀x ∈ H, ∀y ∈ C. (13)
The following result was obtained in Shehu et al. (2020), but we give the proof here for the sake of completeness.
Lemma 2.1 Let S ⊆ H be a nonempty, closed, and convex subset of a real Hilbert space H. Let u ∈ H be arbitrarily given, z := P_S u, and

Ω := {x ∈ H : ⟨u − x, z − x⟩ ≤ 0}.

Then Ω ∩ S = {z}.

Proof By definition, it follows immediately that z ∈ Ω ∩ S. Conversely, take an arbitrary y ∈ Ω ∩ S. Then, in particular, we have y ∈ Ω, and it therefore follows that

‖y − z‖² ≤ ⟨u − z, y − z⟩. (14)

Using z = P_S u together with the characterization (12), we also have

⟨u − z, y − z⟩ ≤ 0, ∀y ∈ S.

In particular, since y ∈ S, we therefore have ⟨u − z, z − y⟩ ≥ 0. Hence (14) implies ‖y − z‖² ≤ 0, so that y = z. This completes the proof. ◻

Finally, we state some basic properties that will be used in our convergence theorems.

Lemma 2.2 For all x, y ∈ H, the following hold:
(a) 2⟨x, y⟩ = ‖x‖² + ‖y‖² − ‖x − y‖² = ‖x + y‖² − ‖x‖² − ‖y‖²;
(b) ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩.
Analysis of the convergence
For the rest of this paper, we assume that {α_k} is non-decreasing with 0 ≤ α_k ≤ α < 1/3 and that {γ_k} ⊂ (0, 1] is non-increasing with γ_k ≥ β for some β > 0.

Lemma 3.1 Let {z_k} be the sequence generated by (9). For any z satisfying (7), the estimate (20) below holds.

Proof By (9), we get (16). We know that e(y_k, λ) = ½(y_k − R_A∘R_B(y_k)), where λ > 0 is the proximal parameter, is firmly nonexpansive (see He and Yuan 2015, Lem. 2.2). Thus, (17) holds; in particular, we may take z = R_A∘R_B(z) there. Putting (17) into (16), we have (18). Recalling that γ_k e(y_k, λ) = y_k − z_{k+1}, we obtain (19). Using (19) in (18) and the condition that 0 < β ≤ γ_k ≤ 1, we arrive at (20). ◻

Lemma 3.2 Let {z_k} be the sequence generated by (9). For any z satisfying (7), the following estimate holds.

Proof From the definition of y_k, we obtain, using Lemma 2.2 (a), the expansion (21) and, similarly, with z replaced by z_{k+1} in the previous formula, the expansion (22). Substituting (21) and (22) into (15) and eliminating identical terms, we get a combined estimate. Therefore, we obtain a reformulation whose last identity exploits Lemma 2.2 (a) twice. Using the fact that {α_k} is non-decreasing and {γ_k} is non-increasing, we then obtain the desired inequality. ◻

Our first central result below shows that the sequence {z_k} generated by (9) is bounded.
Lemma 3.3
The sequence {z_k} generated by (9) is bounded.
Proof A simple re-ordering of (20) implies an identity in which the equality uses once again Lemma 2.2 (a). Hence, by cancellation, re-ordering, and neglecting a non-positive term on the right-hand side, we obtain (29). Then (29) consequently implies (30). Since {γ_k} is non-increasing in (0, 1), this implies a corresponding bound. It then follows from (28) and (30) that a recursive estimate holds. Since α_k ≤ α_{k+1}, the weights w_k := e^{t_k} with t_k := Σ_{i=1}^{k} α_i satisfy w_{k+1} = w_k e^{α_{k+1}}, and {γ_k} is non-increasing in (0, 1), we therefore get an inequality which can be rewritten (since {γ_k} is non-increasing in (0, 1)) in a more convenient form. Since the sequence {α_k} belongs to the interval [0, α], we obtain a uniform bound. Using lim_{k→∞} ε_k = 0 and α ∈ [0, 1/3), it follows that the right-hand side is eventually bounded from below by a positive number, i.e., there is a constant τ > 0 such that

1 − 3α_{k+1} + 2(e^{α_{k+1}} − 1) − ε_k ≥ τ

for all k ∈ ℕ sufficiently large, say, for all k ≥ k_0. Hence, for k ≥ k_0, dividing by w_{k+1} = e^{t_{k+1}} and omitting a non-positive term, we get a telescoping bound. Since α_k ∈ (0, 1) for all k ∈ ℕ, it is easy to see that α_k e^{t_{k+1}} ≤ e²(e^{t_k} − e^{t_{k−1}}) for all k ≥ 2, which, together with (32), e^{−t_{k+1}} ≤ 1, and the fact that {α_k} belongs to the interval [0, α] ⊂ [0, 1/3), yields a summable majorant. Using (33), α ∈ [0, 1), and the convergence of the geometric series, a simple calculation gives a uniform bound on ‖z_k − z‖. Using once again that α < 1, this shows that {z_k} is bounded. ◻

Next, we formulate a simple lemma that turns out to be useful for proving the strong convergence result.
Lemma 3.4 Let {z k } be the sequence generated by (9). Define for all k ∈ ℕ . Then u k ≥ 0 for all k ∈ ℕ.
Proof Since {θ_k} is non-decreasing with 0 ≤ θ_k < 1/3, and by Lemma 2.2 (a), we obtain u_k ≥ 0, and this completes the proof. ◻ Before we prove our main strong convergence result, we state another preliminary result which provides sufficient conditions for the strong convergence of the sequence {z_k} generated by our method (9). In our strong convergence result, we will then show that these sufficient conditions automatically hold.
Lemma 3.5 Let {z_k} be the sequence generated by (9). Assume that lim_{k→∞} ‖z_{k+1} − z_k‖ = 0 and that condition (34) below holds. Then the entire sequence {z_k} converges strongly to the solution z.
Proof By assumption, we have lim_{k→∞} ‖z_{k+1} − z_k‖ = 0 together with (34). We claim that this already implies lim_{k→∞} ‖z_k − z‖ = 0, from which the strong convergence of the entire sequence {z_k} to z follows immediately. Assume this limit does not hold. Then there is a subset K ⊆ ℕ and a constant ε > 0 such that
‖z_k − z‖ ≥ ε for all k ∈ K. (35)
Since lim_{k→∞} ‖z_{k+1} − z_k‖ = 0 by the assumption and 0 ≤ θ < 1, then (recall that if {a_k} and {b_k} are bounded sequences in ℝ and one of either {a_k} or {b_k} converges, then lim sup_{k→∞}(a_k + b_k) = lim sup_{k→∞} a_k + lim sup_{k→∞} b_k), using (34) and θ_k ≤ θ < 1, we get lim sup_{k∈K} ‖z_k − z‖ ≤ 0. Since lim inf_{k∈K} ‖z_k − z‖ ≥ 0 obviously holds, it follows that lim_{k∈K} ‖z_k − z‖ = 0. This implies ‖z_k − z‖ < ε for all k ∈ K sufficiently large, a contradiction to (35). This completes the proof. ◻ We are now ready to obtain strong convergence of the sequence {z_k} generated by (9) to an element of S.
Theorem 3.6 The sequence {z_k} generated by (9) converges strongly to z, where z = P_S z_0.
Proof Let u_k denote the nonnegative number defined in Lemma 3.4, and let us apply Lemma 3.2. We obtain from (20) the estimate (36). We now consider two cases. Case 1 Suppose {u_k} is eventually a monotonically decreasing sequence, i.e. for some k_0 ∈ ℕ large enough, we have u_{k+1} ≤ u_k for all k ≥ k_0. Then, since u_k is nonnegative for all k ∈ ℕ by Lemma 3.4, we obviously get that {u_k} is a convergent sequence. Consequently, it follows that lim_{k→∞} u_k = lim_{k→∞} u_{k+1}. Since {z_k} is bounded by Lemma 3.3, there exists M > 0 such that 2|⟨z_k − z, z_k − z_0⟩| ≤ M. Moreover, it follows that there exist N ∈ ℕ and ρ_1 > 0 such that 1 − 3θ_{k+1} − α_k ≥ ρ_1 for all k ≥ N. Therefore, for k ≥ N, we obtain from (36) the estimate (37). Hence, together with α_k → 0, the boundedness of {z_k}, and the convergence of {u_k}, we obtain from the definition of u_k that the limit ℓ := lim_{k→∞} ‖z_k − z‖² exists and is equal to lim_{k→∞} u_{k+1}. In particular, Lemma 3.4 therefore implies that ℓ ≥ 0. We will show that ℓ = 0 holds; then (37), together with the fact that θ_k ≤ θ < 1 for all k ∈ ℕ, yields the strong convergence of the sequence {z_k} to the solution z.
By contradiction, assume that ℓ > 0. Since {z_k} is bounded by Lemma 3.3, it is easy to see that we can choose a subsequence {z_{k_j}} which converges weakly to an element p ∈ H and which realizes the limit superior lim sup_{k→∞} ⟨z_0 − z, z_k − z⟩. We show that p ∈ S. Observe that the updating rule for y_k implies that y_{k_j} ⇀ p as well. Let Ty := ½ y + ½ R_A ∘ R_B(y), y ∈ H. Then it is clear that T is nonexpansive, and z ∈ F(T) := {x ∈ H : x = Tx} if and only if z = R_A ∘ R_B(z). Similarly, it is easy to see that e(y_k, γ) = ½(y_k − R_A ∘ R_B(y_k)) = y_k − Ty_k. Therefore, the Demiclosedness Principle applied to T implies that p ∈ F(T). Hence, p ∈ S. This implies that ⟨z_0 − z, p − z⟩ ≤ 0, where the inequality follows from the characterization (12) of a projection applied to z = P_S z_0 and p ∈ S. Since (37) yields (38), and since ℓ > 0 by assumption, we obtain a strict inequality for some sufficiently large k_1 ∈ ℕ. Using the identity ‖a + b‖² = ‖a‖² + 2⟨a, b⟩ + ‖b‖², we therefore get a corresponding estimate from (38). Using once again the assumption that ℓ > 0, this implies a further strict bound for some sufficiently large k_2 ∈ ℕ, k_2 ≥ k_1. From (36), we therefore obtain lim_{k→∞} ‖y_k − Ty_k‖ = lim_{k→∞} ‖e(y_k, γ)‖ = 0.
where the second inequality follows from Lemma 3.4. Since ℓ > 0, this gives the summability of the sequence {α_k}, a contradiction to our assumption. Hence we must have ℓ = 0, and this yields the strong convergence of the sequence {z_k} to z. Case 2 Assume that {u_k} is not eventually monotonically decreasing. Then let τ : ℕ → ℕ be the map defined for all k ≥ k_0 (for some k_0 ∈ ℕ large enough) by τ(k) := max{ j ∈ ℕ : j ≤ k, u_j ≤ u_{j+1} }. Clearly, {τ(k)} is a non-decreasing sequence such that τ(k) → ∞ for k → ∞ and u_{τ(k)} ≤ u_{τ(k)+1} for all k ≥ k_0. Hence, similar to the proof of Case 1, we therefore obtain from (36) a bound with some constant M > 0. Thus, the corresponding residual terms vanish in the limit. Using the same technique of proof as in Case 1, one can also derive the limits (41)-(44). Again observe that, for j ≥ 0, by (36) we have u_{j+1} < u_j when x_j ∉ Ω := {x ∈ H : ⟨x − z_0, x − z⟩ ≤ 0} (note that this Ω is the same set as in Lemma 2.1). Hence x_{τ(k)} ∈ Ω for all k ≥ k_0 since u_{τ(k)} ≤ u_{τ(k)+1}. Since {x_{τ(k)}} is bounded, we may choose a subsequence (which we again call {x_{τ(k)}}) which converges weakly to some x* ∈ H. As Ω is a closed and convex set, it is then weakly closed and so x* ∈ Ω. Using (43), one can see as in Case 1 that z_{τ(k)} ⇀ x* and x* ∈ S. Consequently, we have x* ∈ Ω ∩ S. In view of Lemma 2.1, however, the intersection Ω ∩ S contains z as its only element. We therefore get x* = z. Furthermore, we have ⟨x_{τ(k)} − z_0, x_{τ(k)} − z⟩ ≤ 0 since x_{τ(k)} ∈ Ω. Taking lim sup in this last inequality gives
Hence
We claim that this implies lim_{k→∞} u_{τ(k)+1} = 0. By definition, u_{τ(k)+1} is a sum of four terms. Adding and subtracting x_{τ(k)} inside the norm of the first term, and using (41) and (44), we see that the first term goes to zero. The second term converges to zero also in view of (44), taking into account the boundedness of {θ_k}. The third term vanishes in the limit because of (41), noting once again that {θ_k} is a bounded sequence. Finally, the last term goes to zero since {α_k} converges to zero and the sequence {z_k} is bounded by Lemma 3.3.
We next show that we actually have lim_{k→∞} u_k = 0. To this end, first observe that lim sup_{k→∞} u_k ≤ 0. On the other hand, Lemma 3.4 implies that lim inf_{k→∞} u_k ≥ 0. Together we obtain lim_{k→∞} u_k = 0.
Consequently, the boundedness of {z_k}, the assumptions on our iterative parameters, and (36) show that lim_{k→∞} ‖z_{k+1} − z_k‖ = 0. Hence the definition of u_k yields condition (34). Using our assumption, it is not difficult to see that this implies the strong convergence of the entire sequence {z_k} to the particular solution z. The statement therefore follows from Lemma 3.5. ◻ In the special case when B is a set-valued maximal monotone operator and A is a single-valued β-inverse strongly monotone operator in problem (1), the iterative procedure (9) reduces to the scheme (45): given z_0, z_1 ∈ H, iterate (45) with step size 0 < γ < 2β. Moreover, we obtain strong convergence for this special case of monotone inclusions; the proof can be obtained by following the line of arguments of the previous lemmas and Theorem 3.6.
Corollary 3.7 Suppose B is a set-valued maximal monotone operator and A is a single-valued β-inverse strongly monotone operator, and assume that the parameter conditions of Theorem 3.6 hold.
Then {z_k} converges strongly to z, where z = P_S z_0.
We next relate our results to some existing results from the literature.
Remark 3.8
(a) In the results of Thong and Vinh (see Thong and Vinh 2019, Thm. 3.5), strong convergence for monotone inclusions was obtained under some assumptions on the iterative sequence. The monotone inclusion studied in Thong and Vinh (2019) involves the sum of a set-valued maximal monotone operator and a single-valued inverse strongly monotone operator. In this paper, our method is designed such that no assumption is made on the iterative sequence, even for the more general problem considered here. (b) The algorithm (45) can be regarded as the inertial, strongly convergent version of some recent results in Attouch and Cabot (2019), Boţ and Csetnek (2016), Lorenz and Pock (2015) and Villa et al. (2013). ◊
We compared the algorithm (9) with Algorithm 3 of Thong and Vinh (2019) and the algorithm (26) of Shehu (2016). Fig. 2 shows that the performance of the algorithm (9) is better than that of the other two algorithms.
Example 4.3
Let us consider the well-known ℓ₁-regularized least squares problem, which consists of finding a sparse solution to an underdetermined linear system. Suppose that we solve the problem
min_{x ∈ ℝⁿ} ½‖Dx − b‖₂² + ρ‖x‖₁, (47)
where D ∈ ℝ^{m×n} and b ∈ ℝ^m. In this case, B(x) = Dᵀ(Dx − b) is the gradient of the smooth least squares term, while A = ∂(ρ‖·‖₁) is the subdifferential of the nonsmooth regularizer. We remark that there exists specialized software based on projected gradient methods for solving problem (47), for example SPGL1 (van den Berg and Friedlander 2007; Lorenz 2013) and FISTA (Beck and Teboulle 2009), but this is beyond the scope of this paper. Our interest here is to demonstrate the efficiency of our proposed method (9) on problem (47). We generate random problems using different choices of ρ for m = 100 and n = 1000. In Algorithm 3 of Thong and Vinh (2019) and the algorithm (26) of Shehu (2016), we chose λ_k ≡ 1 and γ = 1.9/max(eig(DᵀD)), and in the algorithm (9), we chose λ_k ≡ 0.5. In addition, we selected r_k = 0.2 in the algorithm (26) of Shehu (2016). Table 3 shows that the algorithm (9) performs best when θ_k = 0.33. The numerical results are described in Fig. 3, which illustrates that the performance of the algorithm (9) is better than that of the other two algorithms. (a) We point out that there are different strategies in the current literature to enforce strong convergence on proximal-like algorithms (in particular, DR splitting); see, e.g., Solodov and Svaiter (2000) and Hirstoaga (2006). In this regard, the results presented here offer one further such strategy.
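To make the experiment concrete, the following Python sketch implements a plain inertial-relaxed Douglas-Rachford iteration for problem (47). It is an illustration under stated assumptions, not the authors' exact method (9): the anchoring modifications that yield strong convergence in (9) are omitted, and all parameter names and values (rho, gamma, lam, theta) are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1: componentwise soft shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_dr_l1(D, b, rho=0.1, gamma=1.0, lam=0.5, theta=0.33, iters=500):
    # min_x 0.5*||D x - b||_2^2 + rho*||x||_1 via inertial-relaxed
    # Douglas-Rachford; the anchoring step of method (9) is omitted.
    n = D.shape[1]
    Minv = np.linalg.inv(np.eye(n) + gamma * (D.T @ D))  # factor once
    Dtb = D.T @ b

    def prox_f(v):
        # prox_{gamma f}(v) with f(x) = 0.5*||Dx - b||^2
        return Minv @ (v + gamma * Dtb)

    def prox_g(v):
        # prox_{gamma g}(v) with g(x) = rho*||x||_1
        return soft_threshold(v, gamma * rho)

    z_prev = np.zeros(n)
    z = np.zeros(n)
    for _ in range(iters):
        y = z + theta * (z - z_prev)       # inertial extrapolation
        p = prox_g(y)
        q = prox_f(2.0 * p - y)            # reflected-resolvent step
        z_prev, z = z, y + lam * (q - p)   # z_{k+1} = y_k - lam_k * e(y_k, gamma)
    return prox_g(z)                       # shadow point approximates a solution

# Random instance with m = 100, n = 1000 as in the experiment above.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 1000))
b = rng.standard_normal(100)
x_hat = inertial_dr_l1(D, b)
```

The inertial factor theta = 0.33 mirrors the best-performing choice reported in Table 3.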
Final remarks
In this paper we propose a Douglas-Rachford splitting method with inertial extrapolation step and give a strong convergence analysis of the method. The method is applicable to a general class of maximal monotone operators, and no uniform monotonicity of any of the involved maximal monotone operators is assumed. Furthermore, the analysis of the algorithm is carried out under the natural condition that the inertial factor θ_k is monotone non-decreasing and bounded away from 1/3. Some numerical illustrations are given to test the efficiency and implementation of the proposed scheme. The results obtained in this paper can serve as the strong convergence counterpart of already established weak convergence results for inertial Douglas-Rachford splitting methods (Bauschke and Combettes 2011; Beck and Teboulle 2009; Boţ et al. 2015; Lorenz and Pock 2015; Thong and Vinh 2019) in the literature.
Our future projects include the following: • to modify the proposed method (9) so that the bound on the inertial factor θ_k can exceed 1/3, possibly leading to faster convergence; and • to obtain the rate of convergence of method (9). As far as we know, this has not been obtained before in the literature. | 2020-08-06T09:08:45.982Z | 2020-07-31T00:00:00.000 | {
"year": 2020,
"sha1": "109330c7cbad56c2d9a7ac63ef626de8d5e9c728",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11081-020-09544-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "63aff04139c97c4fe76528b360711897ab6ba362",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
249062140 | pes2o/s2orc | v3-fos-license | Longer amplicons provide better sensitivity for electrochemical sensing of viral nucleic acid in water samples using PCB electrodes
The importance of monitoring environmental samples has gained a lot of prominence since the onset of the COVID-19 pandemic, and several surveillance efforts are underway using gold standard, albeit expensive qPCR-based techniques. Electrochemical DNA biosensors could offer a potential cost-effective solution suitable for monitoring of environmental water samples in lower middle income countries. In this work, we demonstrate electrochemical detection of amplicons as long as 503 bp obtained from Phi6 bacteriophage (a popular surrogate for SARS-CoV-2) isolated from spiked lake water samples, using ENIG finish PCB electrodes with no surface modification. The electrochemical sensor response is thoroughly characterised for two DNA fragments of different lengths (117 bp and 503 bp), and the impact of salt in PCR master mix on methylene blue (MB)-DNA interactions is studied. Our findings establish that the length of the DNA fragment significantly determines electrochemical sensitivity, and the ability to detect long amplicons without gel purification of PCR products demonstrated in this work bodes well for the realisation of fully-automated solutions for in situ measurement of viral load in water samples.
Platinum electrodes have also been used as in-situ electrodes in microfluidic PCR platforms designed to electrochemically detect amplicons during the reaction 8. All these studies require surface modification of the electrode, thereby implying increased production and operating costs due to specialised storage requirements for stability of these functionalised electrodes.
We recently demonstrated electrochemical sensing of SARS-CoV-2 amplicons with low-cost printed circuit board (PCB) electrodes, based on the change in DPV and cyclic voltammetry (CV) peak current due to adsorption of the MB-DNA complex on the unmodified electrode surface 11. We reported that the longer DNA fragment (N1-N2, 943 bp) formed using CDC-recommended N1 forward and N2 reverse primers exhibited better linearity in sensor response as compared to the shorter fragment (N1, 72 bp) formed using the N1 forward and N1 reverse primer set. These studies were reported using DNA dilutions prepared in nuclease-free water. The platform was also used for detection of SARS-CoV-2 amplicons in a simulated wastewater sample (obtained by spiking a total RNA sample with SARS-CoV-2 RNA). The amplification of longer fragments from such heterogeneous samples is difficult, since RNA is susceptible to shearing during isolation and downstream processing 12,13. Thus, the demonstration of electrochemical sensing of SARS-CoV-2 amplicons in wastewater was limited to the shorter 72 bp N1 fragment 11.
In this work, we study the feasibility of ENIG PCB based electrochemical sensing of bacteriophage Phi6 concentrated and isolated from lake water samples (illustrated in Fig. 1). Phi6 phage is comparable to SARS-CoV-2 in size (80-100 nm), and also possesses a lipid membrane and spike proteins. For these reasons, phage Phi6 is a popular surrogate for SARS-CoV-2 and other enveloped pathogenic RNA viruses 14,15. RNA isolated from the phage particles was used as template for cDNA synthesis, which in turn served as template for PCR to obtain two DNA fragments of lengths 117 and 503 base pairs. Given the challenges of amplifying the 943 bp N1-N2 fragment in our previous work, we have targeted fragments of intermediate length (117 bp and 503 bp) in this study, based on available primers. The electrochemical sensor response was systematically studied over a wide concentration range (10 pg/µl to 20 ng/µl) for both fragments in the presence of MB, and the impact of salt on the sensor response was characterised and cross-validated with spectrophotometry measurements. The key contributions of this work are summarized below.

Target amplification from environmental water sample spiked with bacteriophage Phi6. Water from a lake on the IIT Bombay campus (Powai lake, Powai, Mumbai) was used for spiking with phage particles. The lake water was filtered through a 5 µm membrane to remove suspended particles prior to spiking with Phi6 phage. 1 ml of a 10^6 PFU/ml Phi6 phage preparation was added to 100 ml of filtered lake water and gently homogenised for 10 min at 4 °C. A small aliquot of the sample was kept aside for measuring virus load by plaque assay. We tested two different methods for concentrating the spiked Phi6 virus particles: (1) the aluminium hydroxide adsorption-precipitation method 19, which has been validated for concentration of several enveloped RNA viruses from environmental samples, and (2) a polyethylene glycol (PEG) based virus concentration method adapted from Flood et al. 20. Since the recovery efficiency of the PEG-based method was found to be better than that of the aluminium hydroxide method, the PEG-based method was used for concentrating the Phi6 particles from the lake water sample.
The PEG method used is as follows: PEG 8000 and NaCl were added to the Phi6-spiked lake water sample to obtain a sample with 8% PEG 8000 and 0.2 M NaCl. The samples were incubated in a shaker at 4 °C for 4 h, followed by centrifugation at 4700 g for 45 min. The supernatant was discarded and the pellet was resuspended in 1 ml of the same supernatant. All spiking and virus concentration experiments were conducted in triplicate. After concentration, a small aliquot was kept aside for measuring the recovery efficiency by the plaque assay method. RNA was isolated as described earlier and eluted in 40 µl of the elution buffer provided with the kit. Since the RNA concentration would differ from sample to sample in the triplicate, 2 µl of RNA was used for cDNA synthesis for all three samples, irrespective of their concentrations. cDNA synthesis was carried out as described earlier. 1 µl cDNA was used as template for 20 µl PCRs for 35 cycles, to amplify the 117 bp and 503 bp fragments. These samples are denoted '1:1', i.e. no dilution. A no-template control (NTC) was set up as a negative control, while a positive control (PC) was set up using the cDNA synthesized from RNA isolated from purified phages as template. Quantitative PCR (qPCR) was performed using Brilliant III Ultra-Fast SYBR Green QPCR Master Mix (Agilent Technologies) in a Stratagene Mx3000P RT-PCR instrument. Reactions were set up as mentioned earlier, in triplicates. Cycle threshold (Ct) values were recorded for all samples. Additionally, diluted samples were prepared using 1 µl of a 1:100 dilution of cDNA in filtered lake water as template for 20 µl PCRs for 35 cycles. These samples are denoted '1:100'.
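As a minimal worked example (our addition), the reagent amounts implied by the 8% (w/v) PEG 8000 and 0.2 M NaCl targets can be computed as follows; the calculation ignores the small volume change caused by dissolving the solids.

```python
# Reagent amounts for PEG precipitation: target 8% (w/v) PEG 8000 and
# 0.2 M NaCl in a given sample volume (NaCl molar mass ~58.44 g/mol).
def peg_precipitation_amounts(volume_ml):
    peg_g = 0.08 * volume_ml                     # 8 g per 100 ml
    nacl_g = 0.2 * (volume_ml / 1000.0) * 58.44  # mol/L * L * g/mol
    return peg_g, nacl_g

peg, nacl = peg_precipitation_amounts(100.0)
print(f"Add {peg:.1f} g PEG 8000 and {nacl:.2f} g NaCl per 100 ml sample")
# -> Add 8.0 g PEG 8000 and 1.17 g NaCl per 100 ml sample
```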
Electrochemical biosensor. The PCB electrodes were manufactured using a commercially available, low-cost electroless nickel immersion gold (ENIG) process without additional gold electroplating. The ENIG PCB electrode specifications are detailed in our previous work 11. Conventional cleaning recipes for electrodes, such as using piranha solution or cyclic voltammetry with sulphuric acid, are not recommended for ENIG PCB electrodes, as they may cause stripping of the thin gold layer (thickness ≈ 100 nm) and expose the underlying copper layer that is susceptible to corrosion [21-25]. The electrodes were therefore cleaned using lint-free wipes dampened with IPA. The samples to be measured were incubated with 50 µM MB at 4 °C for 1 h to facilitate intercalation. In our previous work, we observed that the sensitivity and linearity of the sensor improved by increasing the MB concentration 11. Based on the optimisation reported in our earlier work, we used a 50 µM MB concentration for intercalation with DNA in this study. Electrochemical detection of double stranded DNA (ds-DNA) can be achieved using anionic or cationic intercalators. Although anionic intercalators detect DNA with better selectivity, they require overnight incubation, resulting in longer assay time. Cationic intercalators such as MB, on the other hand, require a shorter incubation time of approximately 1 h for electrochemical detection of ds-DNA 6. Each measurement involved dispensing 5 µl of the sample to be tested on the electrode, followed by cleaning with IPA-dampened wipes before performing the next measurement with another sample. Each sample was tested on 5 different electrodes, unless otherwise mentioned. DPV and CV measurements were performed using a PalmSens Sensit Smart potentiostat, and PSTrace software was used for potentiostat configuration and data acquisition, including peak current calculation. The settings used for DPV and CV measurements are provided in Fig. S1 in the supplementary information.

Figure 2 shows the results (peak current) obtained for DPV and CV measurements with gel purified PCR products. DPV measurements show higher sensitivity (change in current with increase in DNA concentration) as compared to CV measurements, since the background capacitive current in CV measurements hides the Faradaic current 26. The data for each box in the box plots contain measurements from 5 electrodes. The same set of electrodes was used for all measurements, to avoid measurement errors due to electrode-to-electrode variations. We observed an increasing trend in the peak current for DPV and CV measurements at lower concentrations of DNA, with the increase more pronounced for the longer (503 bp) DNA fragment as compared to the 117 bp fragment. This is consistent with the trend expected for adsorption on the electrode, reported in our previous work 11. Adsorption of the MB-DNA complex facilitates charge transfer at the electrode, thus contributing to the rise in peak current. Other studies have shown the influence of oligonucleotide size and sequence on MB-DNA intercalation [27-30]. The Guanine-Cytosine (GC) content of both amplicons (117 bp and 503 bp) is approximately 50%.
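Peak currents were obtained from the PSTrace software; purely as an illustration of the kind of post-processing involved (and not the software's actual algorithm), a generic baseline-corrected peak extraction from a DPV trace might look as follows:

```python
import numpy as np

def dpv_peak_current(potential, current):
    # Baseline-corrected peak current from a DPV trace: subtract a straight
    # line joining the endpoints, then take the largest excursion.
    # Assumes 'potential' is monotonically increasing.
    baseline = np.interp(potential,
                         [potential[0], potential[-1]],
                         [current[0], current[-1]])
    corrected = current - baseline
    idx = np.argmax(np.abs(corrected))
    return corrected[idx], potential[idx]   # peak current and its potential
```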
Presence of salt in sample reduces electrochemical sensitivity at lower DNA concentrations. Salt present in the PCR master mix can interfere with the electrostatic interactions between MB and DNA, and therefore DPV and CV measurements on 5 electrodes were performed separately by adding 2 mM MgCl₂ to the gel purified products with 50 µM MB, to study the impact of salt on the MB-DNA interaction. As seen in Fig. 3, we observe that the decreasing trend in peak current at higher DNA concentrations (> 2 ng/µl for 503 bp and > 10 ng/µl for 117 bp), in both DPV and CV measurements, is not significantly impacted by the addition of salt (see Fig. S2 in supplementary information for representative voltammograms). However, at lower DNA concentrations, the sensitivity is greatly reduced by the addition of salt, resulting in no significant change in current with DNA concentration. Other researchers have previously reported a similar negative impact of salt on MB-DNA interactions and intercalation 33,34. The Mg²⁺ cation binds to the negative phosphate backbone of DNA, thereby hindering the electrostatic interactions between MB and DNA 35. At higher DNA concentrations, steric inhibition of redox-active MB causes the reduction in peak current, and therefore electrostatic interactions do not significantly influence the sensor response. The key takeaway is that such biosensors are more suitable for detection of higher DNA concentrations (a few ng/µl or higher) for use-cases such as fully-automated processing of environmental water samples, where gel purification of the PCR products may not be feasible.
Longer DNA results in more pronounced change in optical absorption for higher concentrations. To further validate the results discussed above, we performed optical measurements using a UV/Vis spectrophotometer (Thermo Scientific Multiskan GO), with 50 µl of sample used for each measurement. The absorption signature decreases with increase in DNA concentration, as seen in the trend for the area under the absorption curve for the wavelength range 600 nm to 700 nm, shown in Fig. 4 (absorption spectra shown in Fig. S3 in supplementary information). There is no noticeable difference in the absorption for samples containing DNA compared to the sample containing only MB for DNA concentrations lower than 1 ng/µl (for both 503 bp and 117 bp fragments), indicating the absence of steric inhibition of redox-active MB. At higher DNA concentrations, we observed a gradual decrease in the absorption signal, and noticed that the extent of reduction in absorption is lesser in the presence of salt. These results are attributed to the molecular interactions with base stacks in the DNA hybrid, and to steric inhibition. Our results are in concurrence with reports on spectroscopic studies of MB-DNA intercalation in the literature, which have associated the hypochromism with a decrease in the energy level of the π-π* electron transition due to intercalation 36-38.

Longer DNA fragment from Phi6 bacteriophage isolated from lake water can be detected with high sensitivity using unmodified PCB electrode. We evaluated the utility of our sensor with water samples from Powai Lake spiked with Phi6 bacteriophage. The concentration of the RNA isolated from the phage-spiked water samples ranged from 15.8-19.4 µg/ml, while the RNA isolated from the purified phage suspension was estimated to be 1945 µg/ml, giving a recovery efficiency of approximately 1%. RNA was reverse transcribed to cDNA and used as template for both PCR and qPCR. The product sizes were confirmed by agarose gel electrophoresis (Fig. 5). Amplification of longer fragments from heterogeneous environmental samples is challenging, given the shortcomings due to low efficiency of virus concentration and RNA degradation. However, with our virus concentration and PCR amplification protocols, we were able to successfully amplify the 503 bp fragment for electrochemical sensing. Figure 6 shows the electrochemical sensor results obtained for the 503 bp fragment amplicons, both with undiluted cDNA as template (1:1) and hundred-fold diluted cDNA as template (1:100) for PCR, compared with NTC and PC (see Fig. S4 in supplementary information for representative voltammograms). Each box in the box plots in Fig. 6 comprises measurements of triplicate samples on 5 electrodes. The same electrodes were used for measuring all samples, to avoid errors due to electrode-to-electrode variation. DPV measurements show better resolution for distinguishing test and PC samples from NTC, as compared to CV measurements, due to the hiding of the Faradaic current by the background capacitive current in the latter, as discussed before. For the longer amplicon we observed that the negative control (NTC) resulted in higher CV and DPV peak current relative to the positive control, whereas the positive and undiluted test samples revealed similar peak heights for the DPV peak current. The mean and median values of the measurements for each undiluted (1:1) test sample and PC are clearly resolved from the sensor output for NTC samples, while the resolution for samples with 1:100 dilution is not as pronounced.
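A short sanity check (our addition) reproduces the ~1% recovery estimate from the quoted RNA concentrations, assuming equal elution volumes for both preparations:

```python
# Ratio of RNA concentration recovered from spiked lake water to that from
# the purified phage suspension (both eluted in 40 ul, so volumes cancel).
recovered = [15.8, 19.4]   # ug/ml, range across triplicates
purified = 1945.0          # ug/ml
for c in recovered:
    print(f"{100.0 * c / purified:.2f} %")   # ~0.81 % and ~1.00 %
```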
For the hundred-fold dilution of the cDNA, we did not observe any band during gel electrophoresis (lane not shown in Fig. 5), and the corresponding DPV and CV peak currents were similar to NTC, as expected. The results for the 117 bp fragment are presented in the supplementary information. The negative control prompts an electrochemical response from the PCB sensor due to free MB adsorbing on the electrode and the interaction of MB with single-stranded primer oligonucleotides. Thus, each time a sample is to be tested, one must run a negative control and compare the peak current for the test sample to that obtained for the negative control, in order to achieve a differential (relative) measurement 39,40 and classify the test sample as positive or negative.
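A minimal sketch of such a differential call is given below; the decision margin is an assumed illustration, not a threshold prescribed in this work:

```python
import numpy as np

def classify_sample(test_peaks, ntc_peaks, margin=1.0):
    # Differential (relative) call: compare median DPV peak currents of a
    # test sample and the no-template control measured on the same set of
    # electrodes; 'margin' (in multiples of the NTC spread) is an assumed
    # decision threshold.
    test_med = np.median(test_peaks)
    ntc_med = np.median(ntc_peaks)
    spread = np.std(ntc_peaks)
    return "positive" if test_med > ntc_med + margin * spread else "negative"
```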
Discussion.
Our findings illustrate different mechanisms that influence electrochemical sensor performance for different amplicon lengths at varying DNA concentrations, validated by optical measurements using a UV/Vis spectrophotometer. Our observations highlight the insight that longer DNA fragments up to ≈ 500 bp can be detected with higher sensitivity, and that the presence of salt in the sample does not compromise sensitivity at higher DNA concentrations (a few ng/µl and beyond). Additionally, we studied the impact of different types of samples, including gel purified amplicons with and without added salt, and spiked lake water samples, on DPV and CV measurements. We observed that DPV offers better resolution as compared to CV, since the background capacitive current also impacts the CV measurement, rendering it less sensitive. Amplification of longer fragments depends on the integrity of the viral genomic RNA. Several studies have shown that amplification of longer fragments is not always efficient due to degradation of RNA in the environment 11,41-44. We observed that the PEG-based virus concentration method works better than the aluminium hydroxide based virus concentration method to concentrate bacteriophage Phi6 spiked in lake water samples. Demonstration of the ability to detect a long DNA fragment overcomes the requirement of multiplex PCR for amplifying multiple shorter templates, and reduces the possibility of cross-specificity. Biological samples are scarce, and therefore designing a biosensor that requires minimal sample for testing is desirable. The ENIG PCB electrodes used in this study require only 5 µl of sample for testing to cover the active area of the electrode. Moreover, the same electrode can be reused after cleaning, before dispensing the next sample. The amplified sample does not require addition of any chemicals other than methylene blue, an inexpensive and commonly available chemical. Since the cost of manufacturing each electrode is approximately USD $0.55 (i.e. INR ₹40), this biosensor can be a cost-effective alternative to existing detection techniques. Table 2 shows a comparison of this work with other sensors for long DNA fragments in heterogeneous samples reported in the literature.

Table 1. Ct values of samples as measured by qPCR. ¹NTC has no template added in the reaction. ²Positive control has cDNA isolated from purified Phi6 bacteriophage particles as template.
The main limitation of this method is the possibility of non-specific amplification in heterogeneous samples such as wastewater and lake water, or with low-purity primers, given the reliance of the MB-based electrochemical detection scheme on PCR for specificity. To improve the sensitivity of the electrochemical detection method using unmodified ENIG PCB electrodes for DNA detection with unpurified PCR products, it is necessary to better understand the errors introduced by the unused dNTPs and primers, and to optimise the reaction conditions and testing protocols. Measurement of auxiliary physicochemical parameters such as pH, temperature and biological oxygen demand (BOD) of the water samples may also be necessary to improve the accuracy of measurements.
Conclusion
In summary, we have presented a low-cost electrochemical ENIG PCB sensor for the detection of viruses from environmental (lake water) samples. Unlike electrodes with immobilised oligonucleotides that need low-temperature storage to retain sensitivity, or custom fabricated substrates for DNA sensing 53,54, our technology utilises unmodified PCB electrodes with longer shelf life and no specific storage needs, and is therefore suitable for developing automated sample processing and measurement solutions for deployment in LMICs. The biosensor utilises an inexpensive DNA intercalating redox dye (MB) for rapid detection of the target amplicon. Since MB non-specifically binds to single and double stranded oligonucleotides, the non-specific amplification commonly found in environmental samples would reduce the specificity of such a sensing method. Thus, the specificity of this test depends on the optimisation of primers and PCR reaction conditions. Further, the CV and DPV peak currents obtained from the sample under test should be interpreted relative to the response obtained from the negative control (NTC) for every test. The electrochemical sensor design and approach presented in this work can be integrated with an auto-sampler for developing a fully automated and low-cost solution that can collect and analyse samples, and communicate the findings back to the lab wirelessly. | 2022-05-27T05:15:06.427Z | 2022-05-25T00:00:00.000 | {
"year": 2022,
"sha1": "1f392c19251bb653805cdcb221bb698d13449845",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-022-12818-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f392c19251bb653805cdcb221bb698d13449845",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234144405 | pes2o/s2orc | v3-fos-license | Strongyloides Presenting as an Ulcerated Gastrointestinal Mass and Pulmonary Nodules in a Chronic Alcoholic: A Case Report
Strongyloidiasis is a parasitic infection with varied clinical manifestations caused by the helminth Strongyloides stercoralis, with immunocompromised patients at higher risk for more severe disease. It is uncommonly seen in the United States, and a majority of patients are asymptomatic. In immigrant patients in an immunocompromised state who initially present with nonspecific clinical manifestations, the inclusion of parasitic infections in the differential is crucial to initiate appropriate treatment as early as possible. We report a case of Strongyloidiasis hyperinfection that unusually presented with a gastrointestinal ulcerated mass and pulmonary nodules.
Introduction
Strongyloidiasis is a soil-transmitted disease by the helminth Strongyloides stercoralis, or sometimes by S. fuelleborni fuelleborni and S. fuelleborni kelleyi [1]. Infections in humans manifest in a wide range of presentations, from asymptomatic to fatal, disseminated cases. Initial infection and transmission occur transcutaneously: eggs from contaminated soil hatch and mature into filariform larvae that penetrate human skin and migrate to the intestines, most classic migratory mechanism being via bloodstream and respiratory tracts [2]. However, S. stercoralis has the
survival advantage of causing asexual autoinfection within the human host intestine, allowing for chronic infection without the need for contact with additional infective larvae 3. Thus, patients can present with pulmonary, gastrointestinal or cutaneous symptoms depending on organ involvement, and these may persist for decades, especially in immunocompromised hosts that lack a robust immune system to prevent autoinfection 4.
The disease is currently estimated to have infected 30-100 million people worldwide [5]. The precise prevalence rate is unknown due to asymptomatic infections as well as low sensitivity of screening tests [6]. Infections occur in endemic or non-endemic regions, including pockets of United States that are concentrated with immigrant or veteran populations. High prevalence has been observed in Latin America and sub-Saharan Africa, with infection rates of up to 60% seen in such resource-poor countries [7]. Populations that are at higher risk of Strongyloidiasis include men, patients with HIV and HTLV-1 infections, as well as alcoholics [8,9].
Clinical manifestations of infected individuals vary widely, with acute infections within a week commonly presenting with cough, tracheal irritation, and gastrointestinal symptoms including watery diarrhea, nausea and vomiting, bleeding, and diffuse abdominal tenderness [1]. Some cases of cardiopulmonary symptoms such as chest pain, palpitations, and dyspnea have also been observed [10][11][12]. Hyperinfection of S. stercoralis and disseminated Strongyloidiasis differ in whether manifestations remain within the gastrointestinal and pulmonary systems or not, respectively, although both require intensive medical care and have high rates of mortality if untreated. Currently in medical literature, approximately 244 case reports of hyperinfection and disseminated Strongyloidiasis exist [13].
Among those cases, endoscopic findings revealed duodenal mucosal edema, erythema, ulcerations, and obstructions, among other abnormalities 14. No case of a duodenojejunal mass with pulmonary nodules has been reported to date, despite increasing numbers of unusual manifestations in S. stercoralis hyperinfection syndromes. Here, we report a 43 year-old male from Guatemala diagnosed with Strongyloidiasis hyperinfection who presented with a duodenojejunal mass and pulmonary micronodules on imaging.
Case Presentation
A 43 year-old man from Guatemala with a history of chronic alcohol use and gastritis presented with 2 days of melena. The patient had no constitutional symptoms. He had a similar episode of melena 6 years prior in Guatemala, where he underwent an endoscopy and was told he had gastritis and anemia. The patient denied any other medical history or medication use, except for an occasional over-the-counter pain medicine whose name he was unable to recall.
Prior to immigrating to the United States in 2019, he reported having worked on a banana farm for several years.
Upon arrival to the emergency department, the patient was hypotensive and tachycardic. His abdomen was soft but tender on examination.
Laboratory workup revealed blood loss anemia. CT of the chest, abdomen, and pelvis showed a 3 cm heterogeneously enhancing mass in the intestinal tract (Figure 1A) and small, scattered pulmonary nodules (Figure 1B). An EGD revealed an ulcerated, bleeding submucosal mass at the duodenojejunal junction (Figure 2). Histopathology of the biopsied mass was remarkable for parasitic Strongyloides worms (Figure 3). The patient was diagnosed with Strongyloidiasis hyperinfection syndrome and was started on an extended course of ivermectin 200 mcg/kg/day.
Discussion
Strongyloidiasis generally presents with gastrointestinal, pulmonary and cutaneous symptoms; the most common symptoms observed have been abdominal pain, diarrhea and weight loss 15. In regard to pulmonary involvement in hyperinfection syndrome or disseminated strongyloidiasis, observational studies have found that chest imaging frequently reveals fine miliary nodules or diffuse reticular infiltrates, especially in the initial phase of the infection 19,20. Meanwhile, patchy, diffuse lobar infiltrates are seen with infection progression, as well as alveolar or mucosal hemorrhage of the pulmonary lining 1. In this patient, the chest CT showed small, scattered pulmonary nodules, suggesting either an initial phase or an unusual presentation of a chronic, persistent infection. Another possible explanation is autoinfection by S. stercoralis in high-risk individuals: recurrent infections can result in cyclical findings of initial infection, depending on the time of diagnosis.
Our patient had an initial absolute eosinophil count of only 270 cells/mL (with greater than 500 generally indicating eosinophilia in adults). However, studies have found that the eosinophil count has low sensitivity, although it is commonly used as a screening method 21. Moreover, in comparison to stool and serological testing combined, the eosinophil count did not significantly improve diagnostic accuracy 22. Interestingly, serological IgG antibody
testing was also negative in our patient. This can be attributed to several factors: antibody levels decreasing with chronicity or immunocompromised status [6], or alcoholic patients producing different immunoglobulins in response to the helminth infection [23].
As mentioned above, severe cases occur more often in patients in immunocompromised states, such as those with HIV and HTLV-1 infections. We hypothesize that our patient's chronic alcohol use promoted a relatively immunocompromised state. A meta-analysis of prior case-control and cross-sectional studies found that alcoholics have a significantly higher rate of infection, with a pooled OR of 6.69 (95% CI 1.47-33.8) 8. Chronic alcohol use is associated with poor hygiene in some patients as well as malnutrition, which contribute to the breakdown of the immune defenses needed to eliminate the parasite. Several studies have shown that immune system impairment in alcoholics can occur at multiple levels: a reduced number of macrophages in the duodenal mucosa, loss of T-cell lymphocyte function and cytokine production, and, most importantly, decreased activity of the complement system, which has been proposed to be the first-line defense against the helminth 24. Another study suggests that chronic alcohol use can lead to alteration of corticosteroid metabolites that can increase the infectivity of the S.
stercoralis larvae and accelerate autoinfection [25]. We believe the patient's environmental exposure on a banana farm in Guatemala as well as long-term alcohol use led to chronic Strongyloidiasis.
Hyperinfection syndromes, as in our patient, are fatal without prompt treatment; the mortality rate has been calculated to be as high as 86% 15. Thus, it is highly recommended to initiate treatment early. Parasitological cure has been shown to be more likely with ivermectin than with albendazole (RR 1.79, 95% CI 1.55 to 2.08) 26. In immunocompetent patients, a single-dose or two-dose regimen of 200 mcg/kg per day is recommended; alternatively, immunocompromised patients will need a four-dose regimen of 200 mcg/kg daily for two days, repeated at two weeks, which is the time period of an autoinfection cycle. Our patient was started on the extended course of ivermectin, given his hyperinfection diagnosis and immunocompromised state due to chronic alcohol use.
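For illustration only, the dose arithmetic for this regimen is shown below for a hypothetical 70 kg patient; actual dosing is a clinical decision:

```python
# Ivermectin dose arithmetic for the four-dose regimen described above,
# assuming a hypothetical 70 kg body weight.
weight_kg = 70.0
dose_mcg_per_kg = 200.0
daily_dose_mg = weight_kg * dose_mcg_per_kg / 1000.0   # 14 mg/day
print(f"Daily dose: {daily_dose_mg:.0f} mg "
      f"(given on days 1-2 and repeated on days 15-16)")
```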
Conclusion
This case of Strongyloides hyperinfection syndrome in a patient with a history of chronic alcohol use manifested in an unusual pattern of a bleeding, ulcerated duodenojejunal mass and scattered pulmonary micronodules.
Presentation with either finding is rare, and this specific case highlights the wide range of clinical manifestations of Strongyloidiasis. It is also important to note that the Strongyloidiasis infection rate has been shown to be positively associated with chronic alcohol use, immunocompromised status, and immigration from endemic regions. With the rise in unusual presentations of these infections, it may be appropriate to consider parasitic involvement in patients with risk factors, especially those with findings of gastrointestinal and pulmonary abnormalities. | 2021-05-11T00:05:54.075Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ad6733ef96fbfdb8266e1fc0997e1213170a6ed9",
"oa_license": null,
"oa_url": "https://doi.org/10.26502/acmcr.96550353",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "dbf0a3951d4a230dc94bbb94f71645407221cf68",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245124069 | pes2o/s2orc | v3-fos-license | Leray-Hopf solutions to a viscoelastoplastic fluid model with nonsmooth stress-strain relation
We consider a fluid model including viscoelastic and viscoplastic effects. The state is given by the fluid velocity and an internal stress tensor that is transported along the flow with the Zaremba-Jaumann derivative. Moreover, the stress tensor obeys a nonlinear and nonsmooth dissipation law as well as stress diffusion. We prove the existence of global-in-time weak solutions satisfying an energy inequality under general Dirichlet conditions for the velocity field and Neumann conditions for the stress tensor.
Introduction
In this article we investigate the equations of motion that describe the flow of a viscoelastoplastic fluid with stress diffusion, modeled in the following way. On a time interval (0, T) and a bounded domain Ω ⊂ R³, we consider the system of equations (1.1) for the fluid velocity V and the pressure P, coupled to an internal stress tensor S which satisfies the additional evolution equation (1.1)₃ and is thus subject to a special transport encoded in ▽S along the velocity field V, a nonlinear dissipation law via ∂P(S), and an additional diffusion process. Here ρ, η₁, η₂, µ and γ denote positive constants, and D(V) := ½(∇V + ∇V^⊤) denotes the symmetric rate-of-strain tensor. Following [MDM02, GeY07, HGv17, PH*19], we choose S ∈ R^{3×3}_δ to be the deviatoric stress tensor (i.e. Tr S = 0), which corresponds to the incompressibility of the fluid encoded in (1.1)₂, such that the pressure P contains the full spherical part of the Cauchy stress tensor. For other modeling choices we refer to Section 3.4.
System (1.1) is complemented by boundary and initial conditions. The former are given by
V · n = 0, V − (V · n)n = g, n · ∇S = 0 on ∂Ω × (0, T), (1.2)
which means that there is no boundary flux, that the tangential part of the fluid velocity at the boundary coincides with some prescribed function g : ∂Ω × (0, T) → R³, and that S has vanishing normal derivative. The first two conditions can also be summarized as V = g on ∂Ω × (0, T) for some g with g · n = 0. Note that, from a physical point of view, only the case g · n = 0 seems to fit the Neumann boundary condition n · ∇S = 0. If we allowed for g · n ≠ 0, more complicated boundary conditions for S would be needed. The initial conditions are
V(·, 0) = V₀, S(·, 0) = S₀ in Ω. (1.3)
For a fluid velocity V, the material derivative is given by ∂_t + V · ∇, and as an objective derivative of a tensor S we use the Zaremba-Jaumann derivative
▽S := ∂_t S + (V · ∇)S − W(V)S + S W(V),
also called the co-rotational derivative, where W(V) := ½(∇V − ∇V^⊤). Note that this choice of the objective derivative is not canonical and there are different ways to define objective derivatives for tensors. However, the choice made here is commonly used in geodynamics (cf. [MDM02, GeY07, HGv17, PH*19]) and comes along with special features that are very useful for the mathematical analysis, see below.
The mathematical study of viscoelastic fluids with different choices of the objective derivatives (including the upper and lower convected Maxwell derivatives) started in the middle 1980s, see e.g. [JRS85,ReR86,RHN87,CoS91,Ren00]. Because of the strong nonlinearities arising from the objective derivatives, a first global existence result was only established years later in [LiM00] based on the Zaremba-Jaumann derivative and a linear dissipation law ∂P(S) = aS with a > 0. More recently, the more difficult case of a Maxwell fluid with µ = 0 (and without stress diffusion, i.e. γ = 0) has also been considered, see [CL * 19] and references therein.
For more general nonlinear situations there is a series of works involving implicitly defined stress-strain relations of the type G(S, D(V )) = 0, see [BG * 12] and the references in the recent survey [BMR20]. Viscoelastic fluids have a constitutive relation of rate-type, i.e. they involve suitable convective derivatives of the strain tensor D(V ) or of the stress tensor, as in our equation (1.1) 3 . The treatment of such nonlinearities is possible by using the recently introduced regularization of stress diffusion, i.e. γ > 0, as first illustrated in [BM * 18] for a simplified model replacing the tensor evolution by a scalar problem. We refer to [MP * 18] for a careful thermodynamical modeling of such viscoelastic fluids and to [BBM21], where a large data global existence result for weak solutions was obtained for a one-parameter family of convected tensor derivatives including the (simpler) case of the Zaremba-Jaumann rate.
Our work is in a similar spirit as the latter one, but it generalizes the conventional linear or quadratic stress-strain relation by allowing in (1.1)₃ for subdifferentials
∂P(S) = { A ∈ L²(Ω; R^{3×3}_δ) : P(S̃) ≥ P(S) + ∫_Ω A : (S̃ − S) dx for all S̃ ∈ L²(Ω; R^{3×3}_δ) }
of a general dissipation potential P : L²(Ω; R^{3×3}_δ) → [0, ∞] that is convex, lower semicontinuous and satisfies P(0) = 0. The space L²(Ω; R^{3×3}_δ) is a natural choice in virtue of the formal energy-dissipation balance (1.6) below. While smooth stress-strain relations of polynomial type seem to be sufficient for the modeling of polymeric fluids, applications in geodynamics also call for nonsmooth potentials such as (1.4), where the yield stress σ_yield > 0 determines the onset of plastic flow behavior, see [MDM02, GeY07] and Section 3.4. Observe that the potential defined in (1.4) is indeed convex, lower semicontinuous in L²(Ω; R^{3×3}_δ) and satisfies P(0) = 0. In the context of geodynamics, it is also crucial to allow for nontrivial boundary data g ≠ 0 in (1.2), because often the prescribed drifts of tectonic plates act as boundary data for the specific region of interest.
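Since the display (1.4) is not reproduced above, the following von Mises-type potential is a plausible reconstruction consistent with the surrounding description (an assumption on our part); its subdifferential is set-valued exactly at S = 0, which is what necessitates the inclusion in (1.1)₃:

```latex
\mathcal{P}(S) \;=\; \sigma_{\mathrm{yield}} \int_\Omega |S|\,\mathrm{d}x ,
\qquad
A \in \partial\mathcal{P}(S)
\;\Longleftrightarrow\;
A = \sigma_{\mathrm{yield}}\,\frac{S}{|S|} \ \text{ a.e. where } S \neq 0,
\quad |A| \le \sigma_{\mathrm{yield}} \ \text{ a.e. where } S = 0 .
```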
The basic features of the model include, of course, all the difficulties of the three-dimensional Navier-Stokes equations, such that we cannot expect better solutions than Leray-Hopf solutions for the velocity component V. For γ > 0, the equation (1.1)₃ for the stress tensor S is a (semilinear) parabolic equation with linear source term D(V), but, crucially, it is also coupled nonlinearly to V via the Zaremba-Jaumann derivative ▽S. In our analysis we essentially exploit the fact that for sufficiently smooth functions V and S satisfying n · V = 0 on ∂Ω we have the identity
∫_Ω ( (V · ∇)S − W(V)S + S W(V) ) : S dx = 0. (1.5)
Exploiting this identity and assuming for the moment that V = 0 on ∂Ω, one can show that smooth solutions satisfy the energy-dissipation balance (1.6) for all t ∈ (0, T). We clearly see how the quadratic energy consisting of the kinetic energy and an elastic energy associated with S can be changed by the external force F and is dissipated by three mechanisms: (i) a direct fluid viscosity given by µ > 0, (ii) a stress dissipation encoded in the dissipation potential P, and (iii) the stress diffusion associated with γ > 0.
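For the reader's convenience, the two cancellations behind (1.5) can be checked directly for symmetric S, antisymmetric W = W(V), div V = 0, and V · n = 0 on ∂Ω:

```latex
(SW - WS) : S
  \;=\; \operatorname{tr}(SWS) - \operatorname{tr}(WSS)
  \;=\; \operatorname{tr}(WS^2) - \operatorname{tr}(WS^2) \;=\; 0
  \quad (\text{using } A : B = \operatorname{tr}(AB^\top),\; S = S^\top),
\qquad
\int_\Omega (V\cdot\nabla)S : S \,\mathrm{d}x
  \;=\; \tfrac12 \int_\Omega V\cdot\nabla |S|^2 \,\mathrm{d}x
  \;=\; \tfrac12 \int_{\partial\Omega} (V\cdot n)\,|S|^2 \,\mathrm{d}\sigma
        - \tfrac12 \int_\Omega (\operatorname{div} V)\,|S|^2 \,\mathrm{d}x
  \;=\; 0 .
```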
To simplify the notation we will fix two constants and choose ρ = 1 and η₁ = η₂ = η subsequently. With this choice the quadratic energy is simply given as one half of the squared L² norm of (V, S). We should note that this choice of parameters rules out the (non-trivial) case η₁ ≠ 0 and η₂ = 0, in which the system is still fully coupled. In this case, we need to replace the weight η₁/(2η₂) in the energy density appearing in (1.6) by some positive constant, leading to an extra term of the form −η₁ ∫₀ᵗ ∫_Ω S : ∇V dx dτ on the right-hand side of (1.6). However, as will become clear from the proofs, our existence result can be extended to this case.
Brief description of the main results In this article, we perform a large-data globalin-time existence analysis for system (1.1)-(1.3) that is valid for the general class of convex potentials P specified above. See Theorem 3.4 for the main result. A key novelty with respect to existing literature is the ability to deal with P nonsmooth, in which case the differential inclusion (1.1) 3 cannot be replaced by an equation. Determining an appropriate notion of solution that allows for a reasonable existence theory is part of our results (see Sec. 3.1). The inhomogeneous time-dependent boundary data for the velocity field introduce additional technicalities. Here, we adapt a construction for the incompressible Navier-Stokes equations that allows us to deduce a global-in-time energy control provided that, in some averaged sense, the data decay to zero as t → ∞, see Corollary 3.5. The proof of the existence result for nonsmooth potentials relies on approximation by a suitable family of C 1,1 -smooth potentials with the property that S → ∂P(S) is monotone and globally Lipschitz continuous. As long as the potential P is smooth, somewhat better results can be obtained, see Theorem 3.7.
Outline In Section 2 we specify the notations used in the subsequent analysis. The statements of our main results including core definitions and hypotheses can be found in Section 3, which also provides some details on the strategy of the proofs and some further remarks on the modeling. The proof of Theorem 3.7 concerning smooth potentials is provided in Section 4. Section 5 contains the proof of our results concerning nonsmooth potentials, including the proof of our main existence result.
Notations
Here, we specify general notations, definitions and conventions required for the subsequent analysis.
General notations For two vectors a, b ∈ R³ we denote their inner product by a·b = a_j b_j and their tensor product by a ⊗ b with (a ⊗ b)_{jk} = a_j b_k. Here and in the following we use Einstein's summation convention and implicitly sum over repeated indices from 1 to 3. The inner product of two tensors A, B ∈ R^{3×3} is denoted by A : B = A_{jk} B_{jk}. Moreover, A^⊤ and Tr A denote the transpose and the trace of A. We further set a ⊗ b : A := a_j b_k A_{jk} for A ∈ R^{3×3}.

Usually, Ω ⊂ R³ is a bounded Lipschitz domain and T ∈ (0, ∞]. Points (x, t) in the space-time cylinder Ω × (0, T) consist of a spatial variable x ∈ Ω and a time variable t ∈ (0, T). For a sufficiently regular function u, we denote its partial derivatives in time and space by ∂_t u and ∂_j u, j = 1, 2, 3, respectively. The symbols ∇ and ∆ denote the (spatial) gradient and Laplace operator. If v is a vector-valued function, we let div v = ∂_j v_j denote its divergence and set v·∇u = v_j ∂_j u. The symmetric and antisymmetric parts of ∇v = (∂_k v_j) are given by
D(v) = ½(∇v + ∇v^⊤) and W(v) = ½(∇v − ∇v^⊤),
respectively. If S is a tensor-valued function, its divergence div S is given by (div S)_j = ∂_k S_{jk}. If T is another tensor-valued function, we define ∇S : ∇T := ∂_j S_{kl} ∂_j T_{kl}.

Function spaces Let k ∈ N₀ ∪ {∞} and A ∈ {Ω, Ω̄}. Then the class C^k(A) consists of all k-times continuously differentiable (real-valued) functions on A, and C^k_0(A) contains all compactly supported functions in C^k(A). By L^q(Ω) with q ∈ [1, ∞] we denote the classical Lebesgue spaces with corresponding norm ‖·‖_q, and H^k(Ω) with k ∈ ℕ denotes the L²-based Sobolev space of order k, equipped with the norm ‖·‖_{k,2}. Moreover, H¹₀(Ω) contains all elements of H¹(Ω) with vanishing boundary trace, and H^{1/2}(∂Ω) denotes the class of boundary traces of functions from H¹(Ω). By H^{−1}(Ω) and H^{−1/2}(∂Ω) we denote the dual spaces of H¹₀(Ω) and H^{1/2}(∂Ω), respectively, where we use the distributional duality pairing.
The norm of a Banach space X is denoted by ‖·‖_X, and the same symbol is used for the norms of X³ and X^{3×3}. When the dimension is clear from the context, we simply write X instead of X³ or X^{3×3}. Moreover, X′ denotes the dual space of X, and C^{1,1}(X) is the set of all continuously Fréchet differentiable functions X → R with globally Lipschitz continuous derivative.
For an interval I ⊂ R, the class C 0 (I; X) consists of all continuous X-valued functions, and C w (I; X) consists of all weakly continuous X-valued functions. The Bochner-Lebesgue spaces of X-valued functions are denoted by L q (I; X) for q ∈ [1, ∞], and L q loc (I; X) denotes the class of all functions that belong to L q (J; X) for all compact subintervals J ⊂ I. When I = (0, T ), we set C 0 (0, T ; X) = C 0 (I; X) and L q (0, T ; X) = L q (I; X). For functions A on Ω × I we use the shorthand A(t) := A( · , t).
We further need spaces of solenoidal vector fields and of symmetric deviatoric tensor fields. The corresponding classes of smooth functions on Ω are given by
C^∞_{0,σ}(Ω) := { v ∈ C^∞_0(Ω)³ : div v = 0 }, C^∞_δ(Ω̄) := { S ∈ C^∞(Ω̄)^{3×3} : S = S^⊤, Tr S = 0 }.
We further set the corresponding classes of space-time functions accordingly, and we set C^∞_δ(Ω × I) := C^∞_{0,δ}(Ω × I) if I is a compact interval. We define the associated L² spaces on Ω by
L²_σ(Ω) := { v ∈ L²(Ω)³ : div v = 0, v|_{∂Ω} · n = 0 }, L²_δ(Ω) := { S ∈ L²(Ω)^{3×3} : S = S^⊤, Tr S = 0 }.
Here the conditions div v = 0 and v|_{∂Ω} · n = 0 in the definition of L²_σ(Ω) have to be understood in a weak sense; see [Gal11, Theorem III.2.3] for example. We further introduce the corresponding Sobolev spaces. We can now define the solution spaces LH_T := L^∞(0, T; L²_σ(Ω)) ∩ L²(0, T; H¹(Ω)³) for the fluid velocity and X_T := L^∞(0, T; L²_δ(Ω)) ∩ L²(0, T; H¹(Ω)^{3×3}) for the stress tensor. Observe that LH_T is the classical Leray-Hopf class for weak solutions to the Navier-Stokes equations, and X_T is the analog for semilinear parabolic equations taking values in deviatoric tensor fields. Moreover, for the introduction of the variational formulation of (1.1) below, a corresponding space of smooth solenoidal and deviatoric functions will serve as the class of test functions.
Main results
The main contribution of our analysis to the large-data existence theory for rate-type viscoelastoplastic fluid models lies in its ability to deal with nonsmooth dissipation potentials P under the mild hypothesis that
P : L²_δ(Ω) → [0, ∞] is convex and lower semicontinuous with P(0) = 0. (3.1)
Throughout this paper, we let T ∈ (0, ∞] and impose the conditions (3.2) on the initial data and external forcing. In our main results we focus on boundary data g that satisfy the regularity assumptions (3.3) and g · n = 0 on ∂Ω × (0, T).
Generalized solution concept
For nonsmooth potentials P the subdifferential ∂P may be multi-valued, and hence, line (1.1) 3 cannot be replaced with an equality but rather has to be understood as a suitable inclusion. Our notion of solution for problem (1.1) 3 adapts the weak solution concept in [Rou13, Chapter 10] for semilinear parabolic equations with nonsmooth potentials, which involves an evolutionary variational inequality that avoids the multi-valued function ∂P(S) by exploiting the inequality To avoid issues when passing to the limit along approximate solutions, we follow [Rou13, Chapter 10] and drop the positive term 1 2 S (T ′ ) − S(T ′ ) 2 2 on the right-hand side in our ultimate variational inequality for (1.1) 3 (cf. eq. (3.7) below).
Definition 3.1 (Generalized solution). Let P satisfy (3.1). We call a pair (V, S) a generalized solution to (1.1)–(1.3) if for all T′ ∈ (0, T) the following holds: (V, S) has the regularity stated above, V attains the boundary values g, V satisfies the weak momentum equation (3.6), and S satisfies the evolutionary variational inequality (3.7) for all test fields S̃ ∈ Z_{T′}.

Observe that (3.6) is obtained by multiplying (1.1)₁ by the respective test functions and formally integrating by parts. In particular, (3.6) is in accordance with the notion of weak solutions for the classical Navier–Stokes problem, since we take divergence-free test functions and omit the pressure term. Moreover, it is easy to see that the terms in (3.7) are well-defined. For the integrals involving the nonlinear terms of the Zaremba–Jaumann rate, this follows from the Sobolev embedding H¹(Ω) ↪ L⁶(Ω), the interpolation

L^∞(0, T′; L²(Ω)) ∩ L²(0, T′; L⁶(Ω)) ↪ L^{10/3}(0, T′; L^{10/3}(Ω)),  (3.8)

and the generalized Hölder inequality with inverse exponents 3/10 + 1/2 + 1/5 = 1.

Remark 3.2. In the formulation (3.7) it is crucial that in the integral involving the convective part only the term V · ∇S : S̃ occurs, and not V · ∇S : (S̃ − S), since under the natural regularity hypotheses on S in Definition 3.1, integrability of the term V · ∇S : S is not ensured.
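For the reader's convenience, we sketch the standard computation behind (3.8); this derivation is ours and not quoted from the source. For a.a. t, interpolation of Lebesgue norms gives

‖u(t)‖_{L^{10/3}} ≤ ‖u(t)‖_{L²}^{2/5} ‖u(t)‖_{L⁶}^{3/5},  since 3/10 = (2/5)·(1/2) + (3/5)·(1/6).

Raising this to the power 10/3 and integrating in time, the factor ‖u(t)‖_{L²}^{4/3} is controlled by the L^∞(0, T′; L²(Ω)) bound, while ∫₀^{T′} ‖u(t)‖_{L⁶}² dt is finite for u ∈ L²(0, T′; L⁶(Ω)); hence u ∈ L^{10/3}(0, T′; L^{10/3}(Ω)).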
It is worth noting that, despite the absence of the term ½‖S̃(T′) − S(T′)‖₂² in ineq. (3.7), generalized solutions in the sense of Def. 3.1 obey the natural partial energy dissipation inequality for S.

Proposition 3.3 (Partial energy inequality). Any generalized solution (V, S) in the sense of Definition 3.1 satisfies the partial energy dissipation inequality (3.9) for almost all T′ ∈ (0, T).
Observe that this proposition applies to any solution conforming to Def. 3.1 and is independent of the approximation scheme chosen in our construction. Its proof will therefore be postponed to Section 5.4.
Main results
The main achievement of this article is the following result, which shows global-in-time existence of generalized solutions in the sense of Definition 3.1.
Theorem 3.4. Under the assumptions (3.1)–(3.3), there exists a generalized solution (V, S) to (1.1)–(1.3) in the sense of Definition 3.1. Moreover, there exists an extension w of the boundary data g such that v = V − w satisfies the energy-dissipation inequality (3.11) for a.a. t ∈ (0, T). The function w can be chosen divergence-free with the regularity and the bounds (3.12) and (3.13), valid for all T′ ∈ (0, T) and q ∈ [1, ∞] with a suitable constant C = C(q, Ω, µ) > 0.
Note that (3.11) directly follows from summing (3.10) and (3.9), where the latter holds by Proposition 3.3, and from using the identity S : ∇v = S : D(v), which is valid since S is symmetric and hence S : W(v) = 0. One readily verifies that all terms in (3.11) are well defined when w has the stated regularity. Moreover, for g = 0 we have w = 0, in which case the right-hand side of the energy-dissipation inequality (3.11) simplifies significantly. Another consequence is the following result, which gives a class of data such that the total energy stays bounded as t → ∞.
Corollary 3.5. In the situation of Theorem 3.4, let T = ∞ and assume, in addition, suitable time decay of the data. Then (V, S) ∈ LH_T × X_T for T = ∞, so that the total energy remains bounded as t → ∞.
The proof of both Theorem 3.4 and Corollary 3.5 will be given in Subsection 5.3.
Strategy of the proof and existence for smooth potentials
Moreau envelope

The proof of Theorem 3.4 is based on approximating the nonsmooth potential P by its Moreau envelope

P_ε(S) := inf_{S′ ∈ L²_δ(Ω)} ( P(S′) + 1/(2ε) ‖S − S′‖₂² ),  ε ∈ (0, 1].  (3.14)

This regularization preserves the basic properties (3.1) imposed on our potentials, that is, for each ε ∈ (0, 1] the regularized potential P_ε : L²_δ(Ω) → [0, ∞) is convex and lower semicontinuous with P_ε(0) = 0. Furthermore, each P_ε is Fréchet differentiable and its differential ∂P_ε is globally Lipschitz continuous with Lipschitz constant 1/ε (see [BaC17, Section 12.4] for example). Since S = 0 is a minimum of P_ε and P_ε(0) = 0, the Lipschitz continuity of ∂P_ε implies that ‖∂P_ε(S)‖₂ ≤ (1/ε)‖S‖₂ for all S ∈ L²_δ(Ω). A concrete pointwise illustration of this regularization is sketched in the subsection on properties of the Moreau envelope below.

Weak solutions for smooth potentials

In a first step of our analysis, we provide an existence result for system (1.1)–(1.3) for smooth potentials P ∈ C^{1,1}(L²_δ(Ω)), allowing us to use a standard concept of weak solutions.
Theorem 3.7. In addition to the hypotheses of Theorem 3.4, assume P ∈ C^{1,1}(L²_δ(Ω)). Then there exists a weak solution (V, S) to (1.1)–(1.3) in the sense of Definition 3.6. This solution is weakly continuous in L²(Ω), and, with w the same extension of g as in Theorem 3.4, for all t ∈ (0, T) it satisfies the partial energy-dissipation inequalities (3.10) and (3.17).

Observe that the stress tensor S obtained in Theorem 3.7 enjoys better regularity properties than that in Theorem 3.4. In particular, we note that the fact that (V, S) satisfies the tensor evolution problem in the weak sense implies an estimate on the time derivative ∂_t S in L^{8/7}(0, T′; (H¹(Ω))′) (cf. Remark 4.11) as well as the weak continuity of t ↦ S(t) in L²_δ(Ω). Hence, the partial energy-dissipation inequality (3.9), satisfied by any generalized solution due to Proposition 3.3, may in general be weaker than the corresponding inequality (3.17), to which the weak solution constructed in Theorem 3.7 conforms.

Our construction of solutions for smooth P is based on a Galerkin approximation that is manufactured in such a way that the global energy control in Corollary 3.5 holds true for data with suitable time decay. Exploiting the standard energy estimates and relying on compactness arguments of Aubin–Lions type for V and S allows us to pass to the weak limit even in the nonlinear terms V · ∇S and SW(V). Of course, in the limit the energy-dissipation balance (1.6) will only survive as an energy-dissipation inequality.
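For orientation, we recall a standard version of the compactness tool alluded to here; the statement is quoted in simplified form from the general literature (see e.g. [Rou13]) and not from this paper. If X ↪↪ B ↪ Y are Banach spaces with X compactly embedded into B, then every sequence (u_n) that is bounded in L^p(0, T′; X) and whose time derivatives (∂_t u_n) are bounded in L^r(0, T′; Y), with p ∈ [1, ∞) and r ≥ 1, is relatively compact in L^p(0, T′; B). In the present setting this is applied, for instance, with X = H¹(Ω), B = L²(Ω), and Y a suitable dual space, using the uniform bounds provided by Lemmas 4.8 and 4.10 below.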
When approaching the nonsmooth dissipation potentials P by their Moreau envelopes P_ε, we lose the compactness for S_ε because ∂P_ε(S_ε) cannot be controlled. Nevertheless, we are able to pass to the limit in the critical terms of the form S_ε∇V_ε by integration by parts and by relying on the boundary conditions, see Lemma 5.4.

Some remarks concerning the modeling

Our setting is motivated by models from the geodynamics of lithospheric plate motion. The aim of those works is to understand the deformations of tectonic plates, the formation and development of faults, and the simulation of aseismic slipping processes. The modeling often concerns a smaller fault area of weaker material in the domain Ω that is driven by the more rigid tectonic plates around the weaker material. This situation is modeled in our case by prescribing the velocity g(t, ·) along the boundary ∂Ω.
The important common feature of these geodynamical models is the plasticity threshold for the shear stress, which is defined in terms of the norm of the deviatoric stress tensor, namely |S| ≤ σ_yield, because of our assumption Tr(S) = 0. This condition is mathematically formulated via dissipation potentials P of the form P(S) = ∫_Ω P(S(x)) dx with an integrand P satisfying P(S) = ∞ for |S| > σ_yield.
Indeed, the evolution equation (1.1)₃ for S has to be invariant under time-dependent changes of the observer (cf. [Ant98]), which means that P has to satisfy the frame-indifference condition P(QSQ^⊤) = P(S) for all rotations Q ∈ SO(3). Thus, P can only depend on the three invariants of S ∈ R^{3×3}_sym, namely Tr S, Tr(S²) − (Tr S)², and det S. Since we have restricted our analysis to the case Tr S = 0 (implying that Tr(S²) − (Tr S)² = |S|²), a very typical choice is of the form P(S) = p(|S|), where p : R → [0, ∞] is a lower semicontinuous, convex function with p(σ) = 0 for σ ≤ 0. The choice p(σ) = (a/2)σ² for σ ∈ [0, σ_yield] and p(σ) = ∞ for σ > σ_yield gives the case in (1.4).
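As a worked example (our own computation using standard convex analysis, not a formula taken from the paper), the subdifferential of this integrand, P(S) = (a/2)|S|² for |S| ≤ σ_yield and P(S) = ∞ otherwise, can be written down explicitly:

∂P(S) = {aS}  if |S| < σ_yield,
∂P(S) = {aS + λ S/|S| : λ ≥ 0}  if |S| = σ_yield,
∂P(S) = ∅  if |S| > σ_yield.

At a boundary point of the yield ball, the subdifferential picks up the normal cone, i.e. the nonnegative multiples of the outer normal S/|S|; this is precisely the kind of set-valued stress–strain relation S ↦ ∂P(S) announced above.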
In fact, without changing much in our analysis, we could have allowed the internal stresses to have a spherical part, i.e. we may have considered Ŝ = S + pI ∈ R^{3×3}_sym with S ∈ R^{3×3}_δ and p ∈ R. The analysis remains unchanged as long as we keep the constraint div V = 0 in (1.1)₂. This would allow us to consider more general dissipation potentials coupling S and p through positive material parameters a, b, c. However, p takes the position of an evolving additional pressure that should be modeled together with the true pressure, taking compressibility of the fluid into account. Following [MDM02, GeY07, HGv17], the modeling assumption is then to identify p with the pressure P in (1.1)₁. Moreover, P is then treated as an independent variable solving an evolution equation like (1/K) D_t P + (1/ξ) P = −div V for positive constants K and ξ, which replaces (1.1)₂. The arising model is quite different from ours and will be studied in subsequent work.
Existence for smooth potentials
In this section we focus on the case of a smooth potential P ∈ C^{1,1}(L²_δ(Ω)) and show existence of a weak solution to system (1.1)–(1.3) as stated in Theorem 3.7. To this end, we decompose the velocity field V into a suitable extension w of the boundary data g and a remainder v with homogeneous boundary conditions such that (v, S) satisfies a perturbed system. While the existence of w will follow from well-known results, we show existence of (v, S) by a Galerkin method. In the end, (V, S) = (v + w, S) will be a weak solution with the properties asserted in Theorem 3.7.
Decomposition of the velocity field
To show existence of a weak solution to system (1.1)–(1.3), we decompose the velocity and pressure fields into two parts, V = v + w and P = p + q, where (w, q) is a solution to the Stokes initial-value problem (4.1) with boundary data g, forcing F, and initial value w₀, and where v satisfies the remaining perturbed system (4.2) with homogeneous boundary conditions. In this way, we have decomposed the question of existence of solutions to (1.1)–(1.3) into two separate problems.
This decomposition method is a common way to treat inhomogeneous boundary data g ≠ 0, and it was successfully used to show existence of weak solutions to the classical Navier–Stokes initial-value problem in different configurations, see [FGS06, FKS10, FKS11a, FKS11b] for example. Observe that this decomposition is by no means unique since the functions F and w₀ are not determined by the original problem (1.1)–(1.3). This freedom allows us to construct w as a divergence-free extension of the boundary data g satisfying (3.12) and (3.13) and to specify F, w₀ afterwards.
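To make the mechanism transparent, here is a schematic version of the resulting perturbed problem. For this sketch we assume that the momentum balance (1.1)₁ has the form ∂_t V + V · ∇V − µ∆V + ∇P = f + div S; this shorthand is our assumption for illustration, the precise form being fixed earlier in the paper. If (w, q) solves the Stokes problem ∂_t w − µ∆w + ∇q = F with div w = 0, w|_{∂Ω} = g and w(0) = w₀, then v = V − w and p = P − q formally satisfy

∂_t v + (v + w) · ∇(v + w) − µ∆v + ∇p = f − F + div S,  div v = 0,  v|_{∂Ω} = 0,  v(0) = V(0) − w₀,

which is the homogeneous-boundary-value problem referred to as (4.2).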
Weak solutions to the modified system
In this section we prescribe a class of vector fields w such that the modified system (4.2) has a solution, see Assumption 4.1 below. In particular, for the moment we do not assume that the function w in (4.2) is related to some given boundary function g.
Observe that in our main results in Theorem 3.4 and Theorem 3.7 we consider boundary data g satisfying (3.3). As we will see in Subsection 4.3, this suffices to construct an extension w of g satisfying Assumption 4.1(c). However, since there is not much difference in the proof, we shall establish solutions to (4.2) in all cases described in Assumption 4.1.
Remark 4.2. Note that condition (a) cannot directly be generalized to the case r = 3, s = ∞, which is treated in (b) and (c). Moreover, the smallness of the extension w required in (b) naturally transfers to a smallness assumption on the associated boundary data g. In contrast, although condition (c) is a direct consequence of (b), it does not require such a condition. As we shall see in Subsection 4.3, we can find an extension w satisfying (c) without imposing a smallness assumption on g.
We shall provide a proof of the following existence result.
Theorem 4.4. Let v₀, S₀ and f be as in (4.3), let P ∈ C^{1,1}(L²_δ(Ω)) satisfy (3.1), and let w be as in Assumption 4.1. Then there exists a weak solution (v, S) to (4.2) in the sense of Definition 4.3. Additionally, this solution is weakly continuous in L²(Ω) with (v(0), S(0)) = (v₀, S₀), and it satisfies the partial energy inequalities for all t ∈ [0, T). In particular, we conclude the total energy inequality for all t ∈ [0, T).
Approximate solutions
First, we construct a sequence of approximate solutions to (4.2). To this end, we introduce suitable basis functions.
With these bases at hand, we now construct a sequence of approximate solutions in the following way. For k ∈ N we call (v, S) an approximate solution (of order k) if there exist coefficient functions α_r, β_r ∈ C¹(0, T), r ∈ {1, …, k}, such that v and S are the corresponding linear combinations of the first k basis functions, and for all ℓ ∈ {1, …, k} the pair (v, S) satisfies the projected (Galerkin) equations in (0, T) together with the projected initial conditions. The existence of approximate solutions is guaranteed by the following result; a toy illustration of the construction follows Lemma 4.8 below.
Lemma 4.8. For all k ∈ N there exists an approximate solution (v, S) = (v_k, S_k), which satisfies the partial energy-dissipation equalities for all t ∈ (0, T). Moreover, for 0 < T′ < T there exists a constant M_{T′} > 0, only depending on the data and T′ but independent of k and P, which bounds the corresponding energy norms, so that the sequence (v_k, S_k)_{k∈N} is bounded in LH_{T′} × X_{T′}.
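To illustrate the generic Galerkin mechanism in a toy setting, consider the following minimal Python sketch; this is our illustration, not the scheme of the paper: a scalar heat equation with a smooth Lipschitz nonlinearity standing in for the regularized dissipation term, discretized in the sine eigenbasis on (0, 1), with all names chosen by us.

import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Toy Galerkin scheme for u_t = u_xx - dP(u) on (0, 1) with zero boundary values,
# using the orthonormal eigenbasis phi_j(x) = sqrt(2) sin(j pi x) of -d^2/dx^2.
k = 8
modes = np.arange(1, k + 1)
lam = (modes * np.pi) ** 2                                 # eigenvalues of -d^2/dx^2
x = np.linspace(0.0, 1.0, 201)
phi = np.sqrt(2.0) * np.sin(np.outer(modes, np.pi * x))    # shape (k, len(x))

def dP(u):
    # Illustrative smooth nonlinearity with globally Lipschitz derivative,
    # playing the role of the regularized dissipation term.
    return u / (1.0 + np.abs(u))

def rhs(t, alpha):
    u = alpha @ phi                                        # u = sum_j alpha_j phi_j
    proj = trapezoid(dP(u) * phi, x, axis=1)               # L^2 projection onto the basis
    return -lam * alpha - proj                             # projected coefficient evolution

alpha0 = np.zeros(k)
alpha0[0] = 1.0                                            # initial datum u(0) = phi_1
sol = solve_ivp(rhs, (0.0, 1.0), alpha0, rtol=1e-8)

Projecting the equation onto the span of the first k basis functions turns the PDE into an ODE system for the coefficients, whose local solvability (here delegated to solve_ivp) corresponds to the C¹ coefficient functions above; in the actual proof it is the k-uniform energy bounds of Lemma 4.8 that upgrade local to global existence.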
Lemma 4.10. For any 0 < T′ < T, the sequence (∂_t v_k, ∂_t S_k)_{k∈N} is bounded in the corresponding dual spaces by constants M′_{T′} and M_{T′,P}, where M′_{T′} is independent of P, but M_{T′,P} depends on P.

Proof. These bounds follow as for the classical Navier–Stokes equations, and we only give a brief sketch here, focusing on the estimate of (∂_t S_k); the bound for (∂_t v_k) follows similarly. Let S = S_k and recall the relevant interpolation inequality, valid for a.a. t ∈ (0, T). Since S is a finite linear combination of elements of the basis (ψ_ℓ) of H¹_δ(Ω), one deduces from (4.16) that ∂_t S(t) ∈ H¹(Ω)′ for a.a. t ∈ (0, T), with ‖∂_t S(t)‖_{H¹(Ω)′} controlled by the quantities ‖∇(v+w)‖₂, ‖S‖₂ and ‖∇S‖₂; here we used the estimate ‖∂P(S)‖₂ ≤ c₂‖S‖₂ for some P-dependent constant c₂ > 0, which follows from the Lipschitz continuity of ∂P and ∂P(0) = 0. Using Hölder's inequality and (4.19), one concludes the asserted uniform bound for (∂_t S_k).
Existence of weak solutions to (4.2)
Based on the previous preparations we establish the existence of weak solutions to (4.2) as stated in Theorem 4.4.
Existence of a suitable extension
Since we have shown existence of a weak solution to (4.2) in Theorem 4.4, it remains to address the existence of solutions w to the Stokes initial-value problem (4.1). As mentioned above, the forcing term F and the initial value w 0 in (4.1) are not prescribed by the original problem, whence we have some freedom in their choice. For example, we can simply consider data F = w 0 = 0 and use existing theory for the Stokes initial-value problem with inhomogeneous Dirichlet data (see [Gru01,FGH02,Ray07] for example) to obtain a suitable extension w satisfying Assumption 4.1 in case (a). Proceeding like this for the cases (b) and (c) would require smallness of g. In the following we focus on case (c), which is more general than (b), and show that smallness of g is not necessary if we exploit the freedom we have in the choice of F and w 0 . For this purpose, we use the following well-known lemma.
In order to ensure (4.5), it is sufficient to assume smallness of g · n in a suitable norm.
Lemma 4.13. In the situation of Lemma 4.12, the corresponding bound holds with a constant C₂ = C₂(Ω) > 0.

Proof. Starting from the defining identity, Hölder's inequality and Sobolev embeddings yield the intermediate estimate, and the statement then follows from a standard trace inequality.
We can further derive the following estimate, which will be needed for passing from smooth Moreau envelopes P ε to nonsmooth potentials P.
Lemma 4.14. For all 0 < T′ < T there exists a constant M_{T′} > 0, which is independent of P, such that the bound (4.45) holds.

Proof. The estimate follows from (4.44); in view of Remark 4.11 and the identity V = v + w, this shows (4.45).
Generalized solutions for nonsmooth potentials
The primary purpose of this section is to prove our main result, Theorem 3.4, on the existence of generalized solutions in the case of a nonsmooth potential P, see Section 5.3. Recall that throughout this manuscript P : L²_δ(Ω) → [0, ∞] is assumed to be convex and lower semicontinuous with P(0) = 0. As outlined in Section 3.3, we proceed by regularization, approximating P by its Moreau envelope (P_ε)_{ε∈(0,1]}, which allows us to take advantage of Theorem 3.7 providing existence for smooth potentials. The key to passing to the limit ε → 0 in the tensorial evolutionary variational inequality for the approximate solutions is the convergence result in Lemma 5.4.
Weak versus generalized solutions
Here, we provide a basic consistency analysis concerning our new notion of solution for rate-type viscoelastoplastic models involving a nonsmooth potential in the tensorial evolution. In particular, we will show that the weak solutions from Theorem 3.7, obtained in the case of a smooth potential, satisfy the variational inequality (for S) and hence are generalized solutions in the sense of Definition 3.1. This result will further serve as a basic ingredient in the proof of Theorem 3.4. We will also provide sufficient regularity conditions for (V, S) that allow us to conclude that generalized solutions are already weak solutions in the sense of Definition 3.6. For this purpose, we need an approximation property for the induced potential 𝒫 acting on Bochner functions,

𝒫(S̃) := ∫₀^{T′} P(S̃(t)) dt,  S̃ ∈ L²(0, T′; L²_δ(Ω)).  (5.1)

The approximation condition states:

∀ S̃ ∈ L²(0, T′; L²_δ(Ω)) ∃ (S_n)_{n∈N} ⊂ Z_{T′} :  S_n ⇀ S̃ in L²(0, T′; L²(Ω)) and 𝒫(S_n) → 𝒫(S̃).  (5.2)
This property is certainly satisfied if 𝒫(S̃) = ∫₀^{T′} ∫_Ω P(S̃(x, t)) dx dt for a continuous convex integrand P : R^{3×3}_δ → [0, ∞) with P(0) = 0 and a quadratic upper bound. In this case it suffices to choose an arbitrary sequence (S_n) converging to S̃ strongly in L²(0, T′; L²(Ω)), because 𝒫 is then norm continuous. If the integrand is the sum of such a function and the indicator function of a closed and convex subset A ⊂ R^{3×3}_δ, a suitable sequence (S_n) can be constructed by means of mollification, taking into account the convexity and closedness of the set A as well as the fact that P(0) = 0 (which implies 0 ∈ A); a possible construction is sketched below. In particular, (5.2) holds for the plasticity potential P defined in (1.4).

In the consistency proof one shows that S satisfies a weak equation (5.4) for all test fields Ψ ∈ C^∞_{0,δ}(Ω × [0, T)), where β ∈ L²(0, T; L²_δ(Ω)) with β(t) ∈ ∂P(S(t)) for a.a. t ∈ (0, T). Indeed, the variational inequality first yields a function β satisfying the integrated subdifferential inequality; but this is exactly the definition of β ∈ ∂𝒫(S), and the special definition of 𝒫 in terms of P (cf. (5.1)) implies β(t) ∈ ∂P(S(t)) a.e. on (0, T′). The definition of β in (5.6) then implies the desired weak equation (5.4).
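For the indicator case just mentioned, a possible construction is the following; this is our own sketch, since the paper only indicates the argument. Given S̃ with values in A, set S_n := ρ_{1/n} ∗ ((1 − 1/n) S̃), where ρ_{1/n} is a standard mollifier (in time, or in space-time, depending on the regularity required by Z_{T′}). Since A is convex, closed, and contains 0, the scaled function (1 − 1/n)S̃ takes values in the closed convex set (1 − 1/n)A ⊂ A, and mollification, being an average, keeps the values in this set. Then S_n → S̃ strongly in L²(0, T′; L²(Ω)), the indicator contributions vanish identically, and 𝒫(S_n) → 𝒫(S̃) follows from the continuity of the quadratic part.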
Properties of the Moreau envelope
Recall the definition of the Moreau envelope (P_ε)_{ε∈(0,1]} in (3.14). As an immediate consequence of (3.14), we find that P_ε(S) ≤ P(S) (simply choose S′ = S in the infimum), and hence

lim sup_{ε→0} ∫₀^T P_ε(S(t)) dt ≤ ∫₀^T P(S(t)) dt  for all S ∈ L²(0, T; L²_δ(Ω)).  (5.10)

The proof of Theorem 3.4 further makes use of the following version of the classical approximation properties of the Moreau envelope [BaC17, Rou13].
Proof. Suppose S_ε ⇀ S in L²(0, T; L²_δ(Ω)) and let δ > ε > 0. Then, by definition, P_ε ≥ P_δ, and hence lim inf_{ε→0} ∫₀^T P_ε(S_ε(t)) dt ≥ lim inf_{ε→0} F_δ(S_ε) ≥ F_δ(S). The second step follows from the fact that for δ > 0 the functional F_δ(S) := ∫₀^T P_δ(S(t)) dt is convex and continuous, and thus weakly lower semicontinuous. The convexity of F_δ is inherited from the convexity of P_δ, while continuity follows from standard theory on Nemytskii operators (see e.g. [Rou13, Theorem 1.43]) together with the growth condition 0 ≤ P_δ(S) ≤ ‖S‖²_{L²_δ(Ω)}/(2δ), which is a consequence of the definition of the Moreau envelope. Letting δ → 0 and using the monotone convergence P_δ ↗ P concludes the argument.
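As announced in Section 3.3, here is a concrete pointwise illustration: a minimal Python sketch (ours, not part of the paper's analysis) of the proximal map and Moreau envelope for a plasticity integrand of the type (1.4), P(S) = (a/2)|S|² for |S| ≤ σ_yield and +∞ otherwise; the parameter and function names are illustrative assumptions.

import numpy as np

def prox_and_envelope(S, eps, a=1.0, sigma_yield=1.0):
    # Pointwise proximal map and Moreau envelope of the integrand
    # P(S) = (a/2)|S|^2 for |S| <= sigma_yield and +infinity otherwise,
    # evaluated at a symmetric deviatoric 3x3 matrix S.
    norm = np.linalg.norm(S)                  # Frobenius norm |S|
    if norm == 0.0:
        return S.copy(), 0.0
    # Radial reduction: minimize (a/2) r^2 + (r - |S|)^2 / (2 eps) over r in
    # [0, sigma_yield]; the objective is convex, so the unconstrained minimizer
    # r = |S| / (1 + a eps) is simply clipped to the admissible interval.
    r = min(norm / (1.0 + a * eps), sigma_yield)
    prox = (r / norm) * S
    envelope = 0.5 * a * r**2 + (norm - r) ** 2 / (2.0 * eps)
    return prox, envelope

# Example: a stress state beyond the yield surface is pulled back towards it.
S = np.array([[0.0, 2.0, 0.0], [2.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
print(prox_and_envelope(S, eps=0.1))

By rotational symmetry the infimum in (3.14) reduces to a scalar problem in r = |S′|, and the gradient of the envelope is recovered as ∂P_ε(S) = (S − prox)/ε, in line with the bound ‖∂P_ε(S)‖₂ ≤ (1/ε)‖S‖₂ from Section 3.3.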
Proof of the main result (Theorem 3.4)
In this subsection, we will prove our main theorem. Let us start by highlighting the following elementary result, which presents the crucial idea for passing to the limit, along approximate solutions, in the tensorial evolutionary variational inequality and in particular in the nonlinear terms arising from the Zaremba–Jaumann derivative.
Proof. We first show that (5.14) holds for ψ ∈ C¹(Ω̄ × [0, T′]). In this case, integration by parts with respect to the spatial variable moves the derivative off V_ε and produces a boundary integral, in which we have already exploited the boundary condition V_ε = g on ∂Ω. Because of the continuity of the trace operator from H¹(Ω) to L²(∂Ω), we have S_ε ⇀ S in L²(0, T′; L²(∂Ω)) and can pass to the limit in the first term on the right-hand side. For the last term we use the weak convergence of S_ε to S in L²(0, T′; H¹(Ω)) as well as the strong convergence of V_ε to V in L²(Ω × (0, T′)). Undoing the spatial integration by parts, we obtain the desired result for ψ ∈ C¹(Ω̄ × [0, T′]).
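Schematically, the identity behind this argument is the elementary integration-by-parts formula (our reconstruction, stated for smooth scalar fields f, g, h and a test function ψ):

∫_Ω f (∂_k g) h ψ dx = ∫_{∂Ω} f g h ψ n_k dσ − ∫_Ω (∂_k f) g h ψ dx − ∫_Ω f g (∂_k h) ψ dx − ∫_Ω f g h ∂_k ψ dx.

Applied with f a component of S_ε, g a component of V_ε, and h a component of the smooth test field, the derivative is moved off V_ε: the second integral on the right pairs the weakly convergent ∇S_ε with the strongly convergent V_ε, the third and fourth integrals pass to the limit directly, and the boundary term converges by trace continuity together with V_ε = g on ∂Ω.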
We are now in a position to complete the proof of Theorem 3.4 and show existence of generalized solutions to (1.1).
The passage to the limit ε → 0 in the weak form (3.6) of the equation for the velocity field V_ε follows from standard arguments based on the above convergence properties. As a result, the limiting vector field V satisfies eq. (3.6). Moreover, the fact that V_ε|_{∂Ω×(0,T)} = g combined with the above convergence properties easily yields V|_{∂Ω×(0,T)} = g. Thus, it remains to show that S satisfies inequality (3.7) for all S̃ ∈ Z_{T′}.
By Lemma 5.1 (A), S_ε satisfies the variational inequality (5.16). We will deduce ineq. (3.7) by estimating the lim sup_{ε→0} of its left-hand side. First, the weak convergence S_ε ⇀ S in L²(0, T′; H¹(Ω))^{3×3} allows us to pass to the limit in the terms that are linear in S_ε and ∇S_ε. The term (S_εW(V_ε) − W(V_ε)S_ε) : S̃ + ηD(V_ε) : S_ε consists of a finite linear combination of terms handled in Lemma 5.4. This lemma can be applied thanks to the convergence properties of (V_ε, S_ε) and the interpolation (3.8) ensuring the boundedness of (S_ε) in L^{10/3}(Ω × (0, T′)). Hence we can pass to the limit in all remaining parts of the left-hand side.
The above observations allow us to bound the lim sup_{ε→0} of the left-hand side of (5.16) from above by the left-hand side of inequality (3.7). Hence (V, S) is a generalized solution to (1.1)–(1.3) in the sense of Definition 3.1.
Since (3.11) directly follows from (3.10) and (3.9), and the latter follows from Proposition 3.3, which is proved below, it remains to establish the partial energy-dissipation inequality (3.10). But this is a simple consequence of the previously established partial energy-dissipation inequality. In particular, we note that the boundary extension w constructed for Theorem 3.7 depends only on g and, hence, is independent of the regularization parameter ε. Thus, we can use the partial energy-dissipation inequality (3.10) for v ε = V ε − w and S ε . With the given weak and strong convergences, we can pass to the limit ε → 0 and obtain the corresponding inequality for the limits v = V −w and S.
Finally, we show that, under suitable decay assumptions on the data, the total energy of the solution remains bounded.